CSE 473, Lecture 5: Heuristics (CSE AI Faculty)


Last Time: A* Search
- Use an evaluation function f(n) for node n: f(n) = g(n) + h(n), the estimated total cost of the path through n to the goal, where
  g(n) = cost so far to reach n
  h(n) = estimated cost from n to the goal
- Always choose the node from the frontier that has the lowest f value; the frontier is a priority queue.
- Example problem: search for the shortest path from start to goal.
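For reference, a minimal A* sketch in Python. It assumes a hypothetical `graph` mapping each node to a list of `(neighbor, step_cost)` pairs and a heuristic callable `h`; it is an illustrative graph-search variant that remembers the best g found so far per node, not the course's reference implementation.

```python
import heapq
import itertools

def a_star(start, goal, graph, h):
    """A* search: always expand the frontier node with the lowest f = g + h."""
    counter = itertools.count()                  # tie-breaker so the heap never compares nodes
    frontier = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, node, path)
    best_g = {start: 0}                          # cheapest cost-so-far found for each node
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                       # shortest path and its cost
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + h(neighbor), next(counter),
                                          new_g, neighbor, path + [neighbor]))
    return None, float("inf")                    # goal unreachable

# Made-up usage: returns (['A', 'C', 'G'], 5)
# a_star("A", "G",
#        {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)], "G": []},
#        {"A": 4, "B": 5, "C": 1, "G": 0}.get)
```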

Admissible Heuristics
- A heuristic h(n) is admissible if, for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
- An admissible heuristic never overestimates the cost to reach the goal.
- Is the Straight Line Distance heuristic h_SLD(n) admissible? Yes, it never overestimates the actual road distance.
- Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.
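On a small explicit graph, admissibility can be checked by brute force: compute the true cost-to-go h*(n) for every node with a Dijkstra search from the goal over reversed edges and compare h against it. A sketch under the same assumed graph format as the block above; it is not from the slides.

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Dijkstra over reversed edges: h*(n) = cheapest cost from n to the goal."""
    reverse = {n: [] for n in graph}
    for n, edges in graph.items():
        for n2, cost in edges:
            reverse.setdefault(n2, []).append((n, cost))
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                             # stale queue entry
        for prev, cost in reverse.get(node, []):
            if d + cost < dist.get(prev, float("inf")):
                dist[prev] = d + cost
                heapq.heappush(frontier, (d + cost, prev))
    return dist                                  # unreachable nodes are absent (h* is infinite)

def is_admissible(graph, h, goal):
    """True iff h(n) <= h*(n) for every node in the graph."""
    h_star = true_costs_to_goal(graph, goal)
    return all(h(n) <= h_star.get(n, float("inf")) for n in graph)
```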

Optimality of A* (proof)
- Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G.
- f(G) = g(G), since h(G) = 0
- f(G2) = g(G2), since h(G2) = 0
- g(G) < g(G2), since G2 is suboptimal
- Therefore f(G) < f(G2).

Optimality of A* (cont.)
- f(G) < f(G2), from the previous slide
- h(n) ≤ h*(n), since h is admissible
- f(n) = g(n) + h(n) ≤ g(n) + h*(n) = f(G), since n lies on an optimal path to G
- Hence f(n) ≤ f(G) < f(G2), so A* will select n, and never G2, for expansion.

Optimality of A* for Graph Search
- A heuristic h(n) is consistent if, for every node n and every successor n' generated by an action a,
  h(n) ≤ c(n, a, n') + h(n')
  (a general triangle inequality over n, n', and the goal G).
- Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal. (See the text for the proof.)
- Most admissible heuristics turn out to be consistent too; e.g., SLD is a consistent heuristic for the route problem (prove it!).

Okay, enough theory; time to wake up!
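Consistency can likewise be checked mechanically on an explicit graph: verify h(goal) = 0 and h(n) ≤ c(n, n') + h(n') for every edge. A minimal sketch; the toy graph and heuristic values are invented for illustration.

```python
def is_consistent(graph, h, goals):
    """True iff h(goal) == 0 for every goal and h(n) <= c(n, n') + h(n') for every edge."""
    if any(h(g) != 0 for g in goals):
        return False
    return all(h(n) <= cost + h(n2)
               for n, edges in graph.items()
               for n2, cost in edges)

# Invented example, in the spirit of a straight-line-distance heuristic on a toy road map:
graph = {"A": [("B", 5), ("C", 3)], "B": [("G", 4)], "C": [("G", 7)], "G": []}
h_values = {"A": 6, "B": 4, "C": 5, "G": 0}
print(is_consistent(graph, h_values.get, {"G"}))   # True for these values
```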

Properties of A*
- Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
- Time? Exponential in the worst case, but may be much faster in many cases.
- Space? Exponential: keeps all generated nodes in memory.
- Optimal? Yes.

A* vs. Uniform Cost Search
- Both are optimal, but they differ in search strategy and time/space complexity.
- A* uses f(n) = g(n) + h(n) to find the shortest path to a single goal.
- Uniform cost search uses f(n) = g(n) to find the shortest path to a single goal.

A* vs. Uniform Cost Search (cont.)
- A* expands mainly toward the goal, with the help of the heuristic function.
- Uniform-cost search expands uniformly in all directions.
- A* can be more efficient (i.e., expands fewer nodes) if the heuristic is good.
- (Figure: nodes expanded from Start to Goal by A* vs. by uniform cost (UC).)

Uniform Cost Pac-Man
- (Figure.)

A* Pac-Man with Manhattan distance heuristic
- (Figure.)

Let's explore heuristic functions
- For the 8-puzzle (reach the goal state in the smallest number of moves), what are some heuristic functions?
- h1(n) = ?
- h2(n) = ?

Example heuristic functions
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (number of squares each tile is from its desired location)
- (Figure: a start state S and the goal state G of the 8-puzzle.)
- For the start state S shown: h1(S) = 8 and h2(S) = 3+1+2+2+2+3+3+2 = 18.
- Are these admissible heuristics?
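Both heuristics are a few lines of Python. This sketch assumes a state encoded as a 9-tuple read row by row, with 0 for the blank and the goal being tiles 1-8 in order; that encoding is an assumption for illustration, not from the slides.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal_tile in zip(state, GOAL)
               if tile != 0 and tile != goal_tile)

def h2(state):
    """Total Manhattan distance of each tile from its goal square (blank excluded)."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = GOAL.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total
```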

Dominance
- If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
- h2 is better for search (why?): it is closer to the actual cost to the goal.
- Does one dominate the other for h1(n) = number of misplaced tiles and h2(n) = total Manhattan distance?

Dominance (cont.)
- For the 8-puzzle heuristics h1 and h2, typical search costs (average number of nodes expanded for solution depth d):

  d = 12:  IDS = 3,644,035 nodes;  A*(h1) = 227 nodes;  A*(h2) = 73 nodes
  d = 24:  IDS = too many nodes to fit in memory;  A*(h1) = 39,135 nodes;  A*(h2) = 1,641 nodes

For many problems, A* can still require too much memory.

Iterative-Deepening A* (IDA*)
- Requires less memory than A*.
- Like iterative-deepening search, but the depth bound is replaced by an f-limit:
  - Start with f-limit = h(start).
  - Prune any node with f(node) > f-limit.
  - The next f-limit is the minimum cost of any node pruned in the previous iteration.
- (Figure: example search tree with successive f-limits 15, 21, 32.)
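A compact IDA* sketch matching the slide's description: depth-first search pruned at an f-limit, with the limit raised to the smallest pruned f-value after each pass. The `successors(state)` generator of `(next_state, step_cost)` pairs and the list-based cycle check along the current path are assumptions for illustration.

```python
import math

def ida_star(start, is_goal, successors, h):
    """Iterative-Deepening A*: bounded DFS whose f-limit grows between iterations."""
    def dfs(state, g, f_limit, path):
        f = g + h(state)
        if f > f_limit:
            return None, f                       # pruned; f feeds the next limit
        if is_goal(state):
            return path, f
        next_limit = math.inf
        for nxt, cost in successors(state):
            if nxt in path:                      # avoid cycles along the current path
                continue
            found, value = dfs(nxt, g + cost, f_limit, path + [nxt])
            if found is not None:
                return found, value
            next_limit = min(next_limit, value)
        return None, next_limit

    f_limit = h(start)                           # start with limit = h(start)
    while f_limit < math.inf:
        solution, f_limit = dfs(start, 0, f_limit, [start])
        if solution is not None:
            return solution
    return None                                  # no solution exists
```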

"That's cool yo, but howdya derive 'em heuristic functions?"  "Just relax, bro!"

Relaxed Problems
- Derive an admissible heuristic from the exact cost of a solution to a relaxed version of the problem.
- For route planning, what is a relaxed problem? Relax the requirement that the car has to stay on the road: the Straight Line Distance becomes the optimal cost.
- Cost of the optimal solution to the relaxed problem ≤ cost of the optimal solution to the real problem.

Heuristics for the eight puzzle
- (Figure: a start board with tiles 7 2 3 / 5 1 6 / 8 4 and the blank, and the goal board 1 2 3 / 4 5 6 / 7 8 and the blank.)
- What can we relax?
- Original rule: a tile can move from location A to location B if A is horizontally or vertically adjacent to B and B is blank.
- Relaxed 1: a tile can move from any location A to any location B.
  Cost = h1 = number of misplaced tiles.
- Relaxed 2: a tile can move from location A to location B if A is horizontally or vertically adjacent to B.
  Cost = h2 = total Manhattan distance.

Need for Better Heuristics
- Performance of h2 (the Manhattan distance heuristic):
    8-puzzle   < 1 second
    15-puzzle  1 minute
    24-puzzle  65,000 years
- Can we do better?
  (Adapted from a Richard Korf presentation)

Creating New Heuristics
- Given admissible heuristics h1, h2, ..., hm, none of them dominating any other, how do we choose the best?
- Answer: no need to choose only one! Use h(n) = max{h1(n), h2(n), ..., hm(n)}.
- h is admissible (prove it!), and h dominates each individual hi (by construction).
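Taking the pointwise maximum of several admissible heuristics is a one-liner; a sketch (the individual heuristic functions, for example the h1 and h2 sketched earlier, are assumed to be defined elsewhere):

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics: their pointwise max is admissible and dominates each one."""
    return lambda state: max(h(state) for h in heuristics)

# Usage with the 8-puzzle heuristics sketched earlier (assumed in scope):
# h_combined = max_heuristic(h1, h2)
```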

Pattern Databases [Culberson & Schaeffer 1996]
- Idea: use the solution cost of a subproblem as a heuristic.
- For the 8-puzzle: pick any subset of tiles, e.g. 3 tiles.
- Precompute a table: compute the optimal cost of solving just these tiles. This is a lower bound on the actual cost with all tiles.
  - Search backwards from the goal and record the cost of each new pattern encountered.
  - State = positions of just these tiles and the blank.
- Admissible heuristic h_DB for a complete state = cost of the corresponding subproblem state in the database.
- (Figure: a 3-tile pattern state and its goal pattern, with * marking "don't care" tiles.)
  (Adapted from a Richard Korf presentation)

Combining Multiple Databases
- Repeat for another subset of tiles; precompute multiple tables.
- How to combine table values? Use the max trick!
- Example: optimal solutions to Rubik's cube were first found with IDA* using pattern-database heuristics.
  - Multiple databases were used (different subsets of cubies).
  - Most problems were solved optimally within a day; compare with 574,000 years for IDS.
  (Adapted from a Richard Korf presentation)
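One way to realize the precomputation is a backward breadth-first search from the projected goal over abstract states that record only the pattern tiles and the blank. The sketch below is a simplified illustration for the 8-puzzle with unit move costs, counting every move as a standard (non-additive) pattern database does; the `pattern` argument and state encoding are assumptions, not Culberson and Schaeffer's exact construction.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # same 9-tuple encoding as the earlier sketches; 0 is the blank

def project(state, pattern):
    """Abstract a full state: keep the pattern tiles and the blank, mark everything else as -1."""
    return tuple(t if t in pattern or t == 0 else -1 for t in state)

def build_pattern_db(pattern):
    """Backward BFS from the projected goal; maps each pattern state to its optimal solution cost."""
    start = project(GOAL, pattern)
    db = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        blank = state.index(0)
        row, col = divmod(blank, 3)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = row + dr, col + dc
            if 0 <= r < 3 and 0 <= c < 3:
                nxt = list(state)
                nxt[blank], nxt[r * 3 + c] = nxt[r * 3 + c], nxt[blank]   # slide a tile into the blank
                nxt = tuple(nxt)
                if nxt not in db:
                    db[nxt] = db[state] + 1      # unit cost per move
                    queue.append(nxt)
    return db

# Example: db = build_pattern_db({1, 2, 3})
# Lookup during search: h_db = lambda state: db[project(state, {1, 2, 3})]
```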

Drawbacks of Standard Pattern DBs
- Since we can only take the max, there are diminishing returns on additional databases.
- We would like to be able to add values, but without exceeding the actual solution cost (to stay admissible). How?
  (Adapted from a Richard Korf presentation)

Disjoint Pattern DBs
- Partition the tiles into disjoint sets and precompute a table for each set.
- Don't count moves of tiles not in the set. This makes the costs disjoint, so they can be added without overestimating!
- E.g. for the 15-puzzle, an 8-tile DB has 519 million entries and a 7-tile DB has 58 million.
- During search: look up the cost for each set in its DB and add the values to get the heuristic value.
- Manhattan distance is a special case of this idea in which each set is a single tile.
- (Figure: the 15-puzzle tiles partitioned into disjoint sets.)
  (Adapted from a Richard Korf presentation)

Performance of Disjoint PDBs
- 15-puzzle: about a 2,000-times speedup vs. Manhattan distance; IDA* with the two DBs solves the 15-puzzle optimally in 30 milliseconds.
- 24-puzzle: about a 12-million-times speedup vs. Manhattan distance; IDA* can solve random instances in 2 days.
  - Uses DBs for 4 disjoint sets (as shown in the figure); each DB has 128 million entries.
  - Without PDBs: 65,000 years.
  (Adapted from a Richard Korf presentation)

Next Time
- Local search
- Game search (and searching for games)
- To do: Project #1; read Sec. 4.1 and Chap. 5