Outline for today's lecture. Informed Search. Informed Search II. Review: Properties of greedy best-first search. Review: Greedy best-first search.


Outline for today's lecture: Informed Search II. Optimal informed search: A* (AIMA 3.5.2). Creating good heuristic functions. Hill Climbing.

Review: Greedy best-first search. f(n): estimated total cost of the path through n to the goal. g(n): the cost to get to n; UCS uses f(n) = g(n). h(n): a heuristic function that estimates the remaining distance from n to the goal. Greedy best-first search idea: f(n) = h(n), i.e., expand the node that h(n) estimates to be closest to the goal. Here, h(n) = h_SLD(n) = straight-line distance from n to Bucharest.

Review: Properties of greedy best-first search. Complete? No: it can get stuck in loops, e.g., Iasi, Neamt, Iasi, Neamt, ... Time? O(b^m) worst case (like depth-first search), but a good heuristic can give a dramatic improvement in average cost. Space? O(b^m): the priority queue in the worst case keeps all (unexpanded) nodes in memory. Optimal? No.

Review: Optimal informed search: A*. The best-known informed search method. Key idea: avoid expanding paths that are already expensive, but expand the most promising first. Simple idea: f(n) = g(n) + h(n). Implementation: same as before, with the Frontier queue as a priority queue ordered by increasing f(n).

Review: Admissible heuristics. A heuristic h(n) is admissible if it never overestimates the cost to reach the goal, i.e., it is optimistic. Formally, for every node n: 1. h(n) ≤ h*(n), where h*(n) is the true cost from n; 2. h(n) ≥ 0, so h(G) = 0 for any goal G. Example: h_SLD(n) never overestimates the actual road distance. Theorem: if h(n) is admissible, A* using Tree Search is optimal.
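Since A* is just best-first search with f(n) = g(n) + h(n) and a priority-queue frontier, a minimal Python sketch of the idea might look like the following. The function names and the neighbors/h interfaces are illustrative, not from the slides; like the lecture's trace, this is tree-search A*, so nodes may be re-added to the frontier.

import heapq

def a_star(start, goal_test, neighbors, h):
    # Tree-search A*: the frontier is a priority queue ordered by f = g + h.
    # neighbors(n) yields (successor, step_cost) pairs; h(n) is the heuristic.
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if goal_test(node):          # stop only when a goal is *expanded*
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float('inf')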

We start at Arad (f = 366). We add the three nodes we found to the Frontier queue, sorted according to the g() + h() calculation: Sibiu 393 comes first.

We expand Sibiu; the frontier now begins Rimnicu Vilcea 413, Fagaras 415. When we expand Sibiu, we run into Arad again. Note that we've already expanded this node once, but we still add it to the Frontier queue again.

We expand Rimnicu Vilcea; the frontier is now Fagaras 415, Pitesti 417, Craiova 526, Sibiu 553, ...

When we expand Fagaras, we find Bucharest, but we're not done: the frontier is Pitesti 417, Bucharest 450, Craiova 526, Sibiu 553, Sibiu 591. The algorithm doesn't end until we expand the goal node; it has to be at the top of the Frontier queue.

When we expand Pitesti, the frontier becomes Bucharest 418, Bucharest 450, Craiova 526, Sibiu 553, Sibiu 591, Rimnicu Vilcea 607, Craiova 615. Note that we just found a better value for Bucharest! Now we expand this better value for Bucharest, since it's at the top of the queue. We're done, and we know the value found is optimal!
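As a sanity check on the f-values in this trace, here is a short script that recomputes them; the step costs and straight-line distances are the standard AIMA Romania-map figures that the slides assume.

h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

# g = accumulated step costs along each path expanded in the trace above.
paths = [
    ('Sibiu', 'Arad->Sibiu', 140),
    ('Rimnicu Vilcea', 'Arad->Sibiu->Rimnicu Vilcea', 140 + 80),
    ('Fagaras', 'Arad->Sibiu->Fagaras', 140 + 99),
    ('Pitesti', 'Arad->Sibiu->Rimnicu Vilcea->Pitesti', 140 + 80 + 97),
    ('Bucharest', 'Arad->Sibiu->Fagaras->Bucharest', 140 + 99 + 211),
    ('Bucharest', 'Arad->Sibiu->Rimnicu Vilcea->Pitesti->Bucharest', 140 + 80 + 97 + 101),
]
for city, route, g in paths:
    print(f"f({route}) = {g} + {h_sld[city]} = {g + h_sld[city]}")
# Prints 393, 413, 415, 417, 450, 418 -- the frontier values in the trace.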

Optimality of A* (intuitive). Lemma: A* expands nodes on the frontier in order of increasing f value. It gradually adds "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}. (After all, A* is just a variant of uniform-cost search.)

Optimality of A* using Tree Search (proof idea). Lemma: A* expands nodes on the frontier in order of increasing f value. Suppose some suboptimal goal G2 (i.e., a goal on a suboptimal path) has been generated and is in the frontier along with an optimal goal G. We must prove: f(G2) > f(G). (Why? Because if f(G2) > f(G), then G2 will never get to the front of the priority queue before G.) Proof: 1. g(G2) > g(G), since G2 is suboptimal. 2. f(G2) = g(G2), since f(G2) = g(G2) + h(G2) and h(G2) = 0, G2 being a goal. 3. f(G) = g(G), similarly. Hence f(G2) > f(G), from 1, 2, 3. We must also show that G is added to the frontier before G2 is expanded; see AIMA for the argument in the case of Graph Search.

A* search, evaluation. Completeness: YES, since bands of increasing f are added, as long as b is finite (guaranteeing that there aren't infinitely many nodes n with f(n) < f(G)). Time complexity: same as UCS in the worst case; the number of nodes expanded is still exponential in the length of the solution. Space complexity: same as UCS in the worst case; it keeps all generated nodes in memory, so exponential. Hence space is the major problem, not time. Optimality: YES, since A* cannot expand contour f_{i+1} until f_i is finished: it expands all nodes with f(n) < f(G), expands one node with f(n) = f(G), and expands no nodes with f(n) > f(G).

Consistency. A heuristic is consistent if h(n) ≤ c(n, a, n') + h(n'), where c(n, a, n') is the cost of getting from n to n' by an action a. Consistency enforces that h(n) is optimistic. Theorem: if h(n) is consistent, A* using Graph Search is optimal. See the book for details.

Outline for today's lecture: Informed Search. Optimal informed search: A*. Creating good heuristic functions (AIMA 3.6). Hill Climbing.

Heuristic functions. For the 8-puzzle, the average solution cost is about 22 steps (branching factor about 3), so exhaustive search to depth 22 visits about 3.1 × 10^10 states. A good heuristic function can greatly reduce the search.
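To make the consistency condition concrete, this small check (again using the AIMA Romania figures; the variable names are mine) verifies h(n) ≤ c(n, a, n') + h(n') edge by edge:

h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Bucharest': 0}
edges = [('Arad', 'Sibiu', 140), ('Sibiu', 'Fagaras', 99),
         ('Sibiu', 'Rimnicu Vilcea', 80), ('Rimnicu Vilcea', 'Pitesti', 97),
         ('Pitesti', 'Bucharest', 101), ('Fagaras', 'Bucharest', 211)]

for n, n2, cost in edges:
    # Straight-line distance obeys the triangle inequality, so neither fires.
    assert h_sld[n] <= cost + h_sld[n2], f"h is inconsistent on {n} -> {n2}"
    assert h_sld[n2] <= cost + h_sld[n]   # roads are two-way: check both directions
print("h_SLD is consistent on all listed edges")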

Example admissible heuristics. For the 8-puzzle: h_oop(n) = the number of out-of-place tiles; h_md(n) = the total Manhattan distance (i.e., the number of moves each tile is from its desired location). For the example state S: h_oop(S) = 8, h_md(S) = 3+1+2+2+2+3+3+2 = 18.

Relaxed problems. A problem with fewer restrictions on the actions than the original is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h_oop(n) gives the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h_md(n) gives the shortest solution.

Defining heuristics h(n): the cost of an exact solution to a relaxed problem (fewer restrictions on the operators). Constraint of the full problem: a tile can move from square A to square B if A is adjacent to B and B is blank. Constraints of relaxed problems: a tile can move from square A to square B if A is adjacent to B (gives h_md); a tile can move from square A to square B if B is blank; a tile can move from square A to square B (gives h_oop).

Dominance: a metric for better heuristics. If h_2(n) ≥ h_1(n) for all n (both admissible), then h_2 dominates h_1. So h_2 is still optimistic, but more accurate than h_1; h_2 is therefore better for search. Notice: h_md dominates h_oop. Typical search costs (average number of nodes expanded): at d = 12, Iterative Deepening Search = 3,644,035 nodes, A*(h_oop) = 227 nodes, A*(h_md) = 73 nodes; at d = 24, IDS = too many nodes, A*(h_oop) = 39,135 nodes, A*(h_md) = 1,641 nodes.

The best and worst admissible heuristics. h*(n), the (unachievable) oracle heuristic: h*(n) = the true cost from n to the goal. h_we're-here-already(n) = h_teleportation(n) = 0. Admissible: both, yes! h*(n) dominates all other admissible heuristics; h_teleportation(n) is dominated by all of them.

Iterative Deepening A* and beyond. Beyond our scope: Iterative Deepening A*; Recursive Best-First Search (which incorporates the A* idea, despite its name); Memory-Bounded A*; and Simplified Memory-Bounded A*, which R&N call the best algorithm to use in practice, but which is not described here at all. (If interested, follow the reference to the Russell article in the Wikipedia article on SMA*.) See AIMA 3.5.3 if you're interested in these topics.
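Both heuristics are easy to compute directly. A minimal sketch, assuming S is the start state of the AIMA 8-puzzle figure (the state that yields the slide's values of 8 and 18):

# h_oop (out-of-place tiles) and h_md (Manhattan distance) for the 8-puzzle.
# A state is a tuple of 9 entries read row by row, with 0 for the blank.
START = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # assumed example state S
GOAL  = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h_oop(state, goal=GOAL):
    # Number of out-of-place tiles; the blank is not counted.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_md(state, goal=GOAL):
    # Total Manhattan distance of each tile from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h_oop(START))  # 8
print(h_md(START))   # 18 = 3+1+2+2+2+3+3+2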

Outline for today's lecture: Informed Search. Optimal informed search: A*. Creating good heuristic functions. When informed search doesn't work: hill climbing (AIMA 4.1.1). A few slides adapted from CS 471 (UMBC) and Eric Eaton (in turn adapted from slides by Charles R. Dyer, University of Wisconsin-Madison).

Local search and optimization. Local search: use a single current state and move to neighboring states. Idea: start with an initial guess at a solution and incrementally improve it until it is one. Advantages: uses very little memory; often finds reasonable solutions in large or infinite state spaces; useful for pure optimization problems, where we want to find or approximate the best state according to some objective function; optimal if the space to be searched is convex.

Hill climbing on a surface of states. h(s): an estimate of the distance from a peak (smaller is better); or height defined by an evaluation function (greater is better).

Hill-climbing search. I. While there are uphill points: move in the direction of increasing evaluation function f. II. Let s_next = argmax_s f(s) over the successor states s of the current state n; if f(n) < f(s_next), move to s_next, otherwise halt at n. Properties: terminates when a peak is reached; does not look ahead beyond the immediate neighbors of the current state; chooses randomly among the set of best successors if there is more than one; doesn't backtrack, since it doesn't remember where it's been. A.k.a. greedy local search: "like climbing Everest in thick fog with amnesia". A minimal code sketch follows the n-queens example below.

Hill climbing example I (minimizing h). [Figure: a sequence of 8-puzzle boards from start to goal, with h_oop decreasing step by step to 0 as tiles are moved.]

Hill-climbing example: n-queens. The n-queens problem: put n queens on an n × n board with no two queens on the same row, column, or diagonal. A good heuristic: h = the number of pairs of queens that are attacking each other. [Figure: boards with h = 5, h = 3, h = 1, for illustration.]
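Here is a minimal sketch of the greedy loop just described, applied to n-queens; the state encoding (one queen per row, stored as a column index) and the helper names are my own, not from the slides.

import random

def attacking_pairs(cols):
    # h for n-queens: number of queen pairs sharing a column or a diagonal.
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def successors(cols):
    # All states reachable by moving one queen within its own row.
    return [cols[:i] + (c,) + cols[i + 1:]
            for i in range(len(cols)) for c in range(len(cols)) if c != cols[i]]

def hill_climb(state, f):
    # Greedy local search: move to the best successor until no uphill move remains.
    while True:
        succs = successors(state)
        best = max(f(s) for s in succs)
        if best <= f(state):                  # no uphill neighbor: halt at a peak
            return state
        state = random.choice([s for s in succs if f(s) == best])  # random tie-break

start = tuple(random.randrange(8) for _ in range(8))
result = hill_climb(start, lambda s: -attacking_pairs(s))   # maximize -h
print(result, attacking_pairs(result))  # often 0; sometimes stuck at a local optimum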

Hill-climbing example: 8-queens. Search space features. [Figure: a state with h = 17 and the h-value for each possible successor; and a local minimum of h in the 8-queens state space (h = 1).] h = the number of pairs of queens that are attacking each other.

Drawbacks of hill climbing. Local maxima: peaks that aren't the highest point in the space. Plateaus: the space has a broad flat region that gives the search algorithm no direction (random walk). Ridges: drop-offs to the sides, so steps to the north, east, south, and west may all go down, while a step to the northwest may go up.

Toy Example of a local "maximum". [Figure: a sequence of 8-puzzle boards from start to goal, with h-values illustrating a state from which no single move improves h.]

The Shape of an Easy Problem (Convex). Gradient ascent/descent. (Images from http://en.wikipedia.org/wiki/gradient_descent) Gradient descent procedure for finding argmin_x f(x): choose an initial x_0 randomly, then repeat x_{i+1} ← x_i − η f'(x_i) until the sequence x_0, x_1, ..., x_i, x_{i+1} converges. The step size η (eta) is small, perhaps 0.1 or 0.05.
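The update rule above is essentially one line of code. A minimal sketch; the example function, tolerance, and step size are mine:

def gradient_descent(f_prime, x0, eta=0.1, tol=1e-8, max_iters=10000):
    # Repeat x <- x - eta * f'(x) until the iterates stop changing.
    x = x0
    for _ in range(max_iters):
        x_next = x - eta * f_prime(x)
        if abs(x_next - x) < tol:       # sequence has (numerically) converged
            return x_next
        x = x_next
    return x

# Example: f(x) = (x - 3)**2, so f'(x) = 2*(x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))   # approximately 3.0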

Gradient methods vs. Newton's method. A reminder of Newton's method from calculus: x_{i+1} ← x_i − η f'(x_i) / f''(x_i). Newton's method uses 2nd-order information (the second derivative, i.e., curvature) to take a more direct route to the minimum. The second-order information is more expensive to compute, but the method converges more quickly. (This and the previous slide are from Eric Eaton.) [Figure: contour lines of a function, with gradient descent (green) and Newton's method (red); image from http://en.wikipedia.org/wiki/newton's_method_in_optimization]

The Shape of a Harder Problem. [Figure.]

The Shape of a Yet Harder Problem. [Figure.]

One Remedy to Drawbacks of Hill Climbing: Random Restart. In the end, some problem spaces are great for hill climbing and others are terrible.

Fantana muzicala Arad - Musical fountain Arad, Romania.
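For comparison with the gradient step, here is the damped Newton update the slide gives, sketched on the same toy function; the helper names are mine.

def newton_minimize(f_prime, f_double_prime, x0, eta=1.0, tol=1e-10, max_iters=100):
    # Newton's update x <- x - eta * f'(x) / f''(x) uses curvature, so it
    # typically reaches the minimum in far fewer steps than gradient descent.
    x = x0
    for _ in range(max_iters):
        x_next = x - eta * f_prime(x) / f_double_prime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = (x - 3)**2: f'(x) = 2*(x - 3), f''(x) = 2. Converges in one step.
print(newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0))   # 3.0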