Graphs vs trees up front; use grid too; discuss for BFS, DFS, IDS, UCS. Cut back on A* optimality detail; a bit more on importance of heuristics, performance data. Pattern DBs?

General Tree Search

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

Important ideas: the frontier; expansion (for each action, create the child node); the exploration strategy. Main question: which frontier nodes to explore?
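A minimal Python sketch of this loop, assuming a hypothetical problem object with initial_state, is_goal(state), and successors(state) yielding (action, next_state, step_cost) triples, and a frontier object with push/pop/is_empty. The same loop yields DFS, BFS, or UCS depending only on how the frontier orders nodes:

from collections import namedtuple

Node = namedtuple("Node", ["state", "parent", "action", "path_cost"])

def extract_solution(node):
    # trace back parent pointers, collecting actions
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def tree_search(problem, frontier):
    frontier.push(Node(problem.initial_state, None, None, 0))
    while not frontier.is_empty():
        node = frontier.pop()                      # exploration strategy lives here
        if problem.is_goal(node.state):
            return extract_solution(node)
        for action, next_state, step_cost in problem.successors(node.state):
            child = Node(next_state, node, action, node.path_cost + step_cost)
            frontier.push(child)                   # expansion: one child per action
    return None                                    # frontier empty: failure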

A note on implementation
Nodes have state, parent, action, path-cost. A child of node by action a has state = result(node.state, a) and path-cost = node.path-cost + step-cost(node.state, a, state). (Figure: an 8-puzzle node with PARENT, ACTION = Right, PATH-COST = 6.) Extract a solution by tracing back parent pointers, collecting actions.

Depth-First Search

Depth-First Search
Strategy: expand a deepest node first. Implementation: frontier is a LIFO stack. (Figure: the example state graph and the search tree DFS builds on it.)
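A standalone sketch of the LIFO idea in Python, on a hypothetical adjacency dict loosely modeled on the slides' example graph; keeping whole paths on the stack makes the cycle check (which anticipates the completeness discussion below) easy:

def dfs(graph, start, goal):
    # frontier is a LIFO stack of paths, so the deepest node is expanded first
    stack = [[start]]
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            if successor not in path:        # prevent cycles along the current path
                stack.append(path + [successor])
    return None

# hypothetical toy graph in the spirit of the slides' example:
example_graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
                 "b": ["a"], "c": ["a"], "h": ["p", "q"], "p": ["q"],
                 "r": ["f"], "f": ["c", "G"], "q": []}
# dfs(example_graph, "S", "G") returns some path from S to G, not necessarily the cheapest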

Search Algorithm Properties

Search Algorithm Properties
Complete: guaranteed to find a solution if one exists? Optimal: guaranteed to find the least-cost path? Time complexity? Space complexity?
Cartoon of search tree: b is the branching factor, m is the maximum depth, with solutions at various depths. The tree has m tiers of 1, b, b^2, ..., b^m nodes. Number of nodes in the entire tree: 1 + b + b^2 + ... + b^m = O(b^m).

Depth-First Search (DFS) Properties
What nodes does DFS expand? Some left prefix of the tree; it could process the whole tree! If m is finite, takes time O(b^m).
How much space does the frontier take? Only has siblings on the path to the root, so O(bm).
Is it complete? m could be infinite, so only if we prevent cycles (more later).
Is it optimal? No, it finds the "leftmost" solution, regardless of depth or cost.

Breadth-First Search

Breadth-First Search
Strategy: expand a shallowest node first. Implementation: frontier is a FIFO queue. (Figure: the example state graph and the search tiers BFS expands.)
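The same sketch with a FIFO frontier; the only change from the DFS sketch above is that a deque's popleft() replaces the stack's pop(), which is exactly the point of these slides:

from collections import deque

def bfs(graph, start, goal):
    # frontier is a FIFO queue of paths, so the shallowest node is expanded first
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                       # shallowest goal: fewest actions
        for successor in graph.get(state, []):
            if successor not in path:
                frontier.append(path + [successor])
    return None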

Breadth-First Search (BFS) Properties
What nodes does BFS expand? Processes all nodes above the shallowest solution. Let the depth of the shallowest solution be s; search takes time O(b^s).
How much space does the frontier take? Has roughly the last tier, so O(b^s).
Is it complete? s must be finite if a solution exists, so yes!
Is it optimal? Only if costs are all 1 (more on costs later).

Quiz: DFS vs BFS

Quiz: DFS vs BFS When will BFS outperform DFS? When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]

Video of Demo Maze Water DFS/BFS (part 1)

Video of Demo Maze Water DFS/BFS (part 2)

Iterative Deepening
Idea: get DFS's space advantage with BFS's time / shallow-solution advantages. Run a DFS with depth limit 1; if no solution, run a DFS with depth limit 2; if no solution, run a DFS with depth limit 3; and so on. Isn't that wastefully redundant? Generally most work happens in the lowest level searched, so it's not so bad!
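A compact sketch of iterative deepening, reusing the stack-of-paths pattern with a depth limit (same hypothetical adjacency-dict format as the earlier sketches; max_depth is just a safety cap for the example):

def depth_limited_dfs(graph, start, goal, limit):
    stack = [[start]]
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        if len(path) - 1 < limit:                 # only expand nodes shallower than the limit
            for successor in graph.get(state, []):
                if successor not in path:
                    stack.append(path + [successor])
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    # run depth-limited DFS with limits 1, 2, 3, ... until a solution appears
    for limit in range(1, max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit)
        if result is not None:
            return result
    return None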

Finding a least-cost path
(Figure: the example state graph, now with step costs on the arcs.) BFS finds the shortest path in terms of number of actions, but it does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

Uniform Cost Search

Uniform Cost Search
Strategy: expand a cheapest node first. Implementation: frontier is a priority queue (priority: cumulative cost). (Figure: the example state graph with step costs, and the cost contours UCS expands.)
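A minimal UCS sketch using Python's heapq as the priority queue, on a hypothetical adjacency dict whose entries carry step costs, e.g. graph["S"] = [("d", 3), ("e", 9), ("p", 1)]. The best-cost-so-far check anticipates the graph-search idea a few slides below; drop it for the pure tree-search version:

import heapq

def uniform_cost_search(graph, start, goal):
    # frontier entries are (cumulative cost, state, path); the cheapest is popped first
    frontier = [(0, start, [start])]
    best_cost = {start: 0}                       # cheapest cost found so far per state
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path                    # first goal popped is the cheapest
        for successor, step_cost in graph.get(state, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(successor, float("inf")):
                best_cost[successor] = new_cost
                heapq.heappush(frontier, (new_cost, successor, path + [successor]))
    return None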

Uniform Cost Search (UCS) Properties
What nodes does UCS expand? Processes all nodes with cost less than the cheapest solution! If that solution costs C* and arcs cost at least ε, then the effective depth is roughly C*/ε, so search takes time O(b^(C*/ε)) (exponential in effective depth).
How much space does the frontier take? Has roughly the last tier, so O(b^(C*/ε)).
Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
Is it optimal? Yes! (Proof next lecture via A*.)

Uniform Cost Issues
Remember: UCS explores increasing cost contours. The good: UCS is complete and optimal! The bad: it explores options in every direction and has no information about goal location. (Figure: cost contours c1 ≤ c2 ≤ c3 spreading from Start in all directions toward Goal.) We'll fix that soon!

Search Gone Wrong?

Tree Search vs Graph Search

Tree Search: Extra Work!
Failure to detect repeated states can cause exponentially more work. (Figure: a state graph with O(m) states whose search tree has O(2^m) nodes.)

Tree Search: Extra Work!
Failure to detect repeated states can cause exponentially more work. (Figure: a state graph with O(2m^2) states whose search tree has O(4^m) nodes; for m = 20 that is 800 states but 1,099,511,627,776 tree nodes.)

General Tree Search

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

General Graph Search

function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    initialize the explored set to be empty
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        add the node to the explored set
        expand the chosen node, adding the resulting nodes to the frontier,
            but only if the node is not already in the frontier or explored set
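A breadth-first instance of this skeleton in Python, with the explored set and the frontier-membership check made explicit (same hypothetical adjacency-dict format as the earlier sketches):

from collections import deque

def graph_search_bfs(graph, start, goal):
    frontier = deque([[start]])
    in_frontier = {start}                 # states currently on the frontier
    explored = set()                      # states already expanded
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        in_frontier.discard(state)
        if state == goal:
            return path
        explored.add(state)
        for successor in graph.get(state, []):
            # only add nodes not already in the frontier or explored set
            if successor not in explored and successor not in in_frontier:
                in_frontier.add(successor)
                frontier.append(path + [successor])
    return None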

Tree Search vs Graph Search
Graph search: avoids infinite loops (m is finite in a finite state space); eliminates exponentially many redundant paths; but requires memory proportional to its runtime!

CS 188: Artificial Intelligence Informed Search Instructor: Stuart Russell University of California, Berkeley

Search Heuristics
A heuristic is: a function that estimates how close a state is to a goal; designed for a particular search problem. Examples: Manhattan distance, Euclidean distance for pathing. (Figure: Pacman examples with heuristic values 10, 5, and 11.2.)
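The two example pathing heuristics as plain Python functions on (x, y) positions (a generic sketch, not tied to any particular maze representation):

import math

def manhattan(pos, goal):
    # grid distance when moves are restricted to the four axis directions
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def euclidean(pos, goal):
    # straight-line distance
    return math.hypot(pos[0] - goal[0], pos[1] - goal[1])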

Example: distance to Bucharest
(Figure: the Romania road map with step costs on the roads; h(x) is each city's straight-line distance to Bucharest.)

Example: Pancake Problem

Example: Pancake Problem Step Cost: Number of pancakes flipped

Example: Pancake Problem
(Figure: the state space graph with step costs on the edges.)

Example: Pancake Problem
Heuristic: the number of the largest pancake that is still out of place. (Figure: the state space graph annotated with h(x) values.)
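A sketch of that heuristic, assuming a hypothetical encoding where a stack is a tuple of pancake sizes listed from top to bottom and the goal is the fully sorted stack (1, 2, ..., n):

def largest_pancake_out_of_place(stack):
    # stack: pancake sizes from top to bottom, e.g. (3, 1, 4, 2); goal is (1, 2, ..., n)
    for size in range(len(stack), 0, -1):
        if stack[size - 1] != size:       # pancake `size` is not in its goal position
            return size
    return 0                              # already the goal state

# largest_pancake_out_of_place((3, 1, 4, 2)) == 4; for the goal (1, 2, 3, 4) it is 0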

Effect of heuristics
Guide search towards the goal instead of all over the place. (Figure: expansion regions between Start and Goal for uninformed vs informed search.)

Greedy Search

Greedy Search
Expand the node that seems closest (order frontier by h). What can possibly go wrong? On the Romania map (h values: Sibiu 253, Fagaras 176, Rimnicu Vilcea 193, Pitesti 100, Bucharest 0), greedy goes Sibiu - Fagaras - Bucharest = 99 + 211 = 310, even though Sibiu - Rimnicu Vilcea - Pitesti - Bucharest = 80 + 97 + 101 = 278 is cheaper.

Greedy Search
Strategy: expand a node that seems closest to a goal state, according to h. Problem 1: it chooses a node even if it's at the end of a very long and winding road. Problem 2: it takes h literally even if it's completely wrong.

Video of Demo Contours Greedy (Empty)

Video of Demo Contours Greedy (Pacman Small Maze)

A* Search

A* Search
(Figure: demo screenshots comparing UCS, Greedy, and A*.)

Combining UCS and Greedy
Uniform-cost orders by path cost, or backward cost g(n). Greedy orders by goal proximity, or forward cost h(n). A* Search orders by the sum: f(n) = g(n) + h(n). (Figure: a small example graph showing g, h, and f = g + h at each node. Example: Teg Grenager)
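A minimal A* sketch in Python (same hypothetical weighted adjacency-dict format as the UCS sketch; heuristic is any function from a state to a number). The cheaper-g check lets a state be re-opened if a better path to it turns up later, which is the "fancy check" mentioned on the graph-search slide further below:

import heapq

def a_star(graph, start, goal, heuristic):
    # frontier ordered by f = g + h
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}                          # cheapest g found so far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for successor, step_cost in graph.get(state, []):
            new_g = g + step_cost
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                new_f = new_g + heuristic(successor)
                heapq.heappush(frontier, (new_f, new_g, successor, path + [successor]))
    return None

# with heuristic = lambda s: 0 this behaves like UCS, as noted on the optimality slide below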

Is A* Optimal?
Example: S connects to A with cost 1 and directly to G with cost 5; A connects to G with cost 3; h(S) = 7, h(A) = 6, h(G) = 0. A* pops the direct goal (f = 5 + 0 = 5) before A (f = 1 + 6 = 7), returning the cost-5 path instead of the cost-4 path through A. What went wrong? Actual bad goal cost < estimated good goal cost. We need estimates to be less than actual costs!

Admissible Heuristics

Idea: Admissibility
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the frontier. Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.

Admissible Heuristics
A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal. (Figure: examples with heuristic values 15 and 4.) Coming up with admissible heuristics is most of what's involved in using A* in practice.

Optimality of A* Tree Search

Optimality of A* Tree Search
Assume: A is an optimal goal node; B is a suboptimal goal node; h is admissible. Claim: A will be chosen for expansion before B.

Optimality of A* Tree Search: Blocking
Proof: imagine B is on the frontier. Some ancestor n of A is on the frontier, too (maybe A itself!). Claim: n will be expanded before B. 1. f(n) is less than or equal to f(A) (definition of f-cost, admissibility of h, h = 0 at a goal).

Optimality of A* Tree Search: Blocking
Proof: imagine B is on the frontier. Some ancestor n of A is on the frontier, too (maybe A itself!). Claim: n will be expanded before B. 1. f(n) is less than or equal to f(A). 2. f(A) is less than f(B) (B is suboptimal, h = 0 at a goal).

Optimality of A* Tree Search: Blocking
Proof: imagine B is on the frontier. Some ancestor n of A is on the frontier, too (maybe A itself!). Claim: n will be expanded before B. 1. f(n) is less than or equal to f(A). 2. f(A) is less than f(B). 3. n expands before B. All ancestors of A expand before B, so A expands before B: A* tree search is optimal.
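Spelling out the two inequalities in the proof (a reconstruction from the admissibility definition; the original slides show them graphically):

\begin{align*}
f(n) &= g(n) + h(n) \le g(n) + h^*(n) \le g(A) = f(A) && \text{(admissibility; } n \text{ lies on an optimal path to } A,\ h(A) = 0\text{)} \\
f(A) &= g(A) < g(B) = f(B) && \text{(} B \text{ suboptimal; } h(B) = 0\text{)}
\end{align*}

Hence f(n) < f(B), so n comes off the frontier before B.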

Properties of A*

Properties of A*
(Figure: the expansion cones of Uniform-Cost vs A*, each a tree with branching factor b.)

UCS vs A* Contours
Uniform-cost expands equally in all directions. A* expands mainly toward the goal, but does hedge its bets to ensure optimality. (Figure: expansion contours between Start and Goal for each.)

Video of Demo Contours (Empty) -- UCS

Video of Demo Contours (Empty) -- Greedy

Video of Demo Contours (Empty) A*

Video of Demo Contours (Pacman Small Maze) A*

Comparison
(Figure: demo results for Greedy, Uniform Cost, and A* side by side.)

A* Applications
Video games; pathing / routing problems; resource planning problems; robot motion planning; language analysis; machine translation; speech recognition. [Demo: UCS / A* pacman tiny maze (L3D6, L3D7)] [Demo: guess algorithm Empty Shallow/Deep (L3D8)]

Video of Demo Empty Water Shallow/Deep Guess Algorithm

Creating Heuristics

Creating Admissible Heuristics
Most of the work in solving hard search problems optimally is in coming up with admissible heuristics. Often, admissible heuristics are solutions to relaxed problems, where new actions are available.

Example: 8 Puzzle
(Figure: a start state, the available actions, and the goal state.) What are the states? How many states? What are the actions? How many actions from the start state? What should the step costs be?

8 Puzzle I
Heuristic: number of tiles misplaced. Why is it admissible? h(start) = 8. This is a relaxed-problem heuristic. (Figure: start and goal states. Statistics from Andrew Moore.)

Average nodes expanded when the optimal path has...
         4 steps    8 steps    12 steps
UCS      112        6,300      3.6 x 10^6
TILES    13         39         227

8 Puzzle II
What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles? Heuristic: total Manhattan distance. Why is it admissible? h(start) = 3 + 1 + 2 + ... = 18. (Figure: start and goal states.)

Average nodes expanded when the optimal path has...
            4 steps    8 steps    12 steps
TILES       13         39         227
MANHATTAN   12         25         73
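Both 8-puzzle heuristics as Python functions, assuming a hypothetical encoding where a state is a 9-tuple read row by row with 0 for the blank:

def misplaced_tiles(state, goal):
    # count tiles (ignoring the blank) that are not where the goal puts them
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def total_manhattan(state, goal):
    # sum, over all tiles, of the grid distance from the current to the goal position
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = i // 3, i % 3
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total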

Combining heuristics
Dominance: h_a dominates h_c if h_a(n) ≥ h_c(n) for all n. Roughly speaking, larger is better as long as both are admissible. The zero heuristic is pretty bad (what does A* do with h = 0?). The exact heuristic is pretty good, but usually too expensive! What if we have two heuristics, neither of which dominates the other? Form a new heuristic by taking the max of both: h(n) = max(h_a(n), h_b(n)). The max of admissible heuristics is admissible and dominates both! Example: number of knight's moves to get from A to B: h1 = (Manhattan distance)/3 (rounded up to correct parity); h2 = (Euclidean distance)/√5 (rounded up to correct parity); h3 = (max x or y shift)/2 (rounded up to correct parity).
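The max combination as code, using the knight's-move example; (dx, dy) is the displacement from the current square to the target, and the parity rounding from the slide is omitted here, which keeps each bound admissible but slightly weaker:

import math

def knight_h1(dx, dy):
    return (abs(dx) + abs(dy)) / 3             # one knight move covers Manhattan distance 3

def knight_h2(dx, dy):
    return math.hypot(dx, dy) / math.sqrt(5)   # one move covers Euclidean distance sqrt(5)

def knight_h3(dx, dy):
    return max(abs(dx), abs(dy)) / 2           # one move shifts x or y by at most 2

def knight_h(dx, dy):
    # the max of admissible heuristics is admissible and dominates each of them
    return max(knight_h1(dx, dy), knight_h2(dx, dy), knight_h3(dx, dy))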

Optimality of A* Graph Search This part is a bit fiddly, sorry about that

A* Graph Search Gone Wrong?
State space graph: S (h=2) has arcs to A (h=4) with cost 1 and to B (h=1) with cost 1; A reaches C (h=1) with cost 1; B reaches C with cost 2; C reaches G (h=0) with cost 3. Search tree: S (0+2); A (1+4) and B (1+1); C (3+1) via B and C (2+1) via A; G (6+0) and G (5+0). B (f=2) is expanded before A (f=5), so C is first reached with the more expensive g = 3; a simple check against the explored set then blocks the cheaper C, and A* returns the cost-6 path instead of the optimal cost-5 one. A fancy check allows the new C if it is cheaper than the old one, but requires recalculating C's descendants.

Consistency of Heuristics
Main idea: estimated heuristic costs ≤ actual costs. Admissibility: heuristic cost ≤ actual cost to the goal, i.e. h(A) ≤ actual cost from A to G. Consistency: heuristic "arc cost" ≤ actual cost for each arc, i.e. h(A) - h(C) ≤ cost(A to C). Consequences of consistency: the f value along a path never decreases, because h(A) ≤ cost(A to C) + h(C); and A* graph search is optimal. (Figure: nodes A, C, G with h values and arc costs.)
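The "f never decreases" consequence, spelled out for one arc A → C under consistency (a reconstruction from the definitions on this slide):

\begin{align*}
f(C) &= g(C) + h(C) = g(A) + \mathrm{cost}(A, C) + h(C) \\
     &\ge g(A) + h(A) = f(A) && \text{since } h(A) - h(C) \le \mathrm{cost}(A, C).
\end{align*}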

Optimality of A* Graph Search
Sketch: consider what A* does with a consistent heuristic. Fact 1: in tree search, A* expands nodes in increasing total f value (f-contours). Fact 2: for every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally. Result: A* graph search is optimal. (Figure: nested f-contours f1 ≤ f2 ≤ f3.)

Optimality Tree search: A* is optimal if heuristic is admissible UCS is a special case (h = 0) Graph search: A* optimal if heuristic is consistent UCS optimal (h = 0 is consistent) Consistency implies admissibility In general, most natural admissible heuristics tend to be consistent, especially if from relaxed problems

A*: Summary

A*: Summary A* uses both backward costs and (estimates of) forward costs A* is optimal with admissible / consistent heuristics Heuristic design is key: often use relaxed problems