Solving problems by searching


Solving problems by searching CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2014 Soleymani Artificial Intelligence: A Modern Approach, Chapter 3

Outline Problem-solving agents Problem formulation and some examples of problems Search algorithms Uninformed: using only the problem definition Informed: using also problem-specific knowledge 2

Problem-Solving Agents Problem formulation: process of deciding what actions and states to consider States of the world Actions as transitions between states Goal formulation: process of deciding what the next goal to be sought will be The agent must find out how to act now and in the future to reach a goal state Search: process of looking for a solution (a sequence of actions that reaches the goal starting from the initial state) 3

Problem-Solving Agents A goal-based agent adopts a goal and aims at satisfying it (a simple version of an intelligent agent maximizing a performance measure) How does an intelligent system formulate its problem as a search problem? Goal formulation: specifying a goal (or a set of goals) that the agent must reach Problem formulation: abstraction (removing detail) while retaining validity and ensuring that the abstract actions are easy to perform 4

Example: Romania On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest. Initial state: currently in Arad (map of Romania) Formulate goal: be in Bucharest Formulate problem: states: various cities; actions: drive between cities Solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest 5

Example: Romania (Cont.) Assumptions about the environment: Known Observable The initial state can be specified exactly. Deterministic Each action applied to a state results in a specified state. Discrete Given the first three assumptions, by starting in an initial state and running a sequence of actions, it is certain where the agent will be. Perceptions after each action provide no new information: the agent can search with closed eyes (open-loop) 6

Problem-solving agents Formulate, Search, Execute 7

Problem types Deterministic and fully observable (single-state problem) The agent knows exactly its state even after a sequence of actions Solution is a sequence Non-observable or sensorless (conformant problem) The agent's percepts provide no information at all Solution is a sequence Nondeterministic and/or partially observable (contingency problem) Percepts provide new information about the current state Solution can be a contingency plan (a tree or strategy), not a sequence Often interleaves search and execution Unknown state space (exploration problem) 8

Belief State In partially observable & nondeterministic environments, a state is not necessarily mapped to a world configuration A state shows the agent's conception of the world state: the agent's current belief (given the sequence of actions and percepts up to that point) about the possible physical states it might be in. (Figure: world states and a sample belief state) 9

Example: vacuum world Single-state, start in {5} Solution? [Right, Suck] Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8} Solution? [Right, Suck, Left, Suck] Contingency Nondeterministic: Suck may dirty a clean carpet Partially observable: location, dirt only at the current location Percept: [L, Clean], i.e., start in {5} or {7} Solution? [Right, if dirt then Suck] or [Right, while dirt do Suck] 10

Single-state problem In this lecture, we focus on single-state problems Search for this type of problem is simpler, and it also provides strategies that can be the basis for search in more complex problems 11

Single-state problem formulation A problem is defined by five items: Initial state e.g., In(Arad) Actions: ACTIONS(s) shows the set of actions that can be executed in s, e.g., ACTIONS(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)} Transition model: RESULTS(s, a) shows the state that results from doing action a in state s, e.g., RESULTS(In(Arad), Go(Zerind)) = In(Zerind) Goal test: GOAL_TEST(s) shows whether a given state is a goal state; explicit, e.g., x = "at Bucharest", or abstract, e.g., Checkmate(x) Path cost (additive): assigns a numeric cost to each path, reflecting the agent's performance measure, e.g., sum of distances, number of actions executed, etc.; c(x, a, y) ≥ 0 is the step cost Solution: a sequence of actions leading from the initial state to a goal state; an optimal solution has the lowest path cost among all solutions. 16

State Space State space: set of all reachable states from initial state Initial state, actions, and transition model together define it It forms a directed graph Nodes: states Links: actions Constructing this graph on demand 17

Vacuum world state space graph 2 × 2 × 2 = 8 states States? dirt locations & robot location Actions? Left, Right, Suck Goal test? no dirt at any location Path cost? one per action 18

Example: 8-puzzle 9!/2 = 181,440 states States? locations of the eight tiles and the blank in 9 squares Actions? move the blank left, right, up, down (within the board) Goal test? e.g., the above goal state Path cost? one per move Note: finding an optimal solution for the n-puzzle family is NP-complete 19

Example: 8-queens problem 64 × 63 × ⋯ × 57 ≈ 1.8 × 10^14 states Initial state? no queens on the board States? any arrangement of 0 to 8 queens on the board is a state Actions? add a queen to the state (any empty square) Goal test? 8 queens are on the board, none attacked Path cost? of no interest (search cost vs. solution path cost) 20

Example: 8-queens problem (other formulation) 2,057 states Initial state? no queens on the board States? any arrangement of k queens, one per column in the leftmost k columns, with no queen attacking another Actions? add a queen to any square in the leftmost empty column such that it is not attacked by any other queen Goal test? 8 queens are on the board Path cost? of no interest 21

Example: Knuth problem Knuth Conjecture: Starting with 4, a sequence of factorial, square root, and floor operations will reach any desired positive integer. Example: ⌊√√√√√(4!)!⌋ = 5 States? positive numbers Initial state? 4 Actions? factorial (for integers only), square root, floor Goal test? state is the desired positive number Path cost? of no interest 23

Real-world problems Route finding Travelling salesman problem VLSI layout Robot navigation Automatic assembly sequencing 24

Example: Robot navigation (real-world) Infinite set of possible actions and states Techniques are required to make the search space finite. For the robot with arms and legs or wheels, the search space becomes many-dimensional. Dealing with errors in sensor readings and motor controls 25

Tree search algorithm Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states

function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier

Frontier: all leaf nodes available for expansion at any given point. Different data structures (e.g., FIFO, LIFO) for the frontier cause different orders of node expansion and thus produce different search algorithms. 27
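As a concrete illustration, the TREE-SEARCH pseudocode above can be sketched in Python. This is a minimal sketch, not part of the slides: the function name and the callback signatures are assumptions, and the FIFO frontier chosen here makes the expansion order breadth-first.

```python
from collections import deque

def tree_search(initial, actions, result, is_goal):
    """Generic TREE-SEARCH; a FIFO frontier gives breadth-first expansion."""
    frontier = deque([(initial, [])])      # (state, sequence of actions so far)
    while frontier:
        state, path = frontier.popleft()   # choose a leaf node, remove it
        if is_goal(state):
            return path                    # return the corresponding solution
        for a in actions(state):           # expand, adding successors to frontier
            frontier.append((result(state, a), path + [a]))
    return None                            # frontier exhausted: failure
```

On a small acyclic fragment of the Romania map (actions named after the destination city), the FIFO frontier returns the path with the fewest actions; a LIFO frontier in the same skeleton would yield depth-first search instead.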

Tree search example 28


Graph Search Redundant paths in tree search: more than one way to get from one state to another; may be due to a bad problem definition or the essence of the problem; can cause a tractable problem to become intractable

function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier only if not in the frontier or explored set

Explored set: remembers every expanded node. 31
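The explored-set bookkeeping can be sketched as follows; this is an illustrative BFS instance of GRAPH-SEARCH (function name and grid example assumed, not from the slides). Unlike tree search, it terminates on state spaces with cycles.

```python
from collections import deque

def graph_search(initial, successors, is_goal):
    """GRAPH-SEARCH with a FIFO frontier: each state is expanded at most once."""
    frontier = deque([(initial, [initial])])
    in_frontier, explored = {initial}, set()
    while frontier:
        state, path = frontier.popleft()
        in_frontier.discard(state)
        if state in explored:
            continue
        if is_goal(state):
            return path
        explored.add(state)                  # add the node to the explored set
        for s2 in successors(state):
            if s2 not in explored and s2 not in in_frontier:
                frontier.append((s2, path + [s2]))  # skip redundant paths
                in_frontier.add(s2)
    return None
```

On the rectangular-grid example mentioned above, where every cell can be reached by many redundant paths, the explored set keeps the number of expansions bounded by the number of cells.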

Graph search example: rectangular grid (figure shows the explored set and the frontier) 32

Search for 8-puzzle Problem Start Goal 33 Taken from: http://iis.kaist.ac.kr/es/

Implementation: states vs. nodes A state is a (representation of) a physical configuration A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth 34

Search strategies Search strategy: order of node expansion Strategies' performance evaluation: Completeness: Does it always find a solution when there is one? Time complexity: How many nodes are generated to find a solution? Space complexity: Maximum number of nodes in memory during search Optimality: Does it always find a solution with minimum path cost? Time and space complexity are expressed in terms of b (branching factor): maximum number of successors of any node d (depth): depth of the shallowest goal node m: maximum depth of any node in the search space (may be ∞) Time & space are described for tree search; for graph search, the analysis depends on redundant paths 35

Uninformed Search Algorithms 36

Uninformed (blind) search strategies No additional information beyond the problem definition Breadth-First Search (BFS) Uniform-Cost Search (UCS) Depth-First Search (DFS) Depth-Limited Search (DLS) Iterative Deepening Search (IDS) Bidirectional search 37

Breadth-first search Expand the shallowest unexpanded node Implementation: FIFO queue for the frontier 38


Properties of breadth-first search Complete? Yes (for finite b and d) Time b + b^2 + b^3 + ⋯ + b^d = O(b^d) total number of generated nodes (the goal test is applied to each node when it is generated) Space O(b^(d−1)) explored + O(b^d) frontier = O(b^d) (graph search; tree search does not save much space while it may cause a great excess of time) Optimal? Yes, if path cost is a non-decreasing function of d, e.g., all actions having the same cost 43

Properties of breadth-first search Space complexity is a bigger problem than time complexity, but time is also prohibitive: exponential-complexity search problems cannot be solved by uninformed methods (except the smallest instances) Assuming 1 million nodes/sec and 1 KB/node:

d    Time        Memory
10   3 hours     10 terabytes
12   13 days     1 petabyte
14   3.5 years   99 petabytes
16   350 years   10 exabytes
44

Uniform-Cost Search (UCS) Expand the node n (in the frontier) with the lowest path cost g(n) An extension of BFS that is appropriate for any step-cost function Implementation: priority queue (ordered by path cost) for the frontier Equivalent to breadth-first if all step costs are equal Two differences: The goal test is applied when a node is selected for expansion A test is added for when a better path is found to a node currently on the frontier e.g., from Sibiu to Bucharest: 80 + 97 + 101 < 99 + 211 45
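A minimal UCS sketch with a priority queue, using the Sibiu-to-Bucharest step costs quoted above (80 + 97 + 101 via Rimnicu and Pitesti beats 99 + 211 via Fagaras). The function name and edge-list format are illustrative assumptions; note the goal test happens when a node is popped, not when it is generated.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """UCS: pop the frontier node with lowest path cost g(n); goal test on pop."""
    frontier = [(0, start, [start])]            # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path                      # tested when selected for expansion
        for nxt, cost in edges.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # better path to nxt found
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None
```

Because the cheaper path to Bucharest (g = 278) is pushed before the 310-cost entry is ever popped, UCS returns the optimal route even though Fagaras is generated first.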

Properties of uniform-cost search Complete? Yes, if step costs ≥ ε > 0 (to avoid an infinite sequence of zero-cost actions) Time number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋)), where C* is the optimal solution cost; O(b^(d+1)) when all step costs are equal Space number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋)) Optimal? Yes, nodes are expanded in increasing order of g(n) Difficulty: many long paths with cost ≤ C* may exist 46

Uniform-cost search (proof of optimality) Lemma: If UCS selects a node n for expansion, the optimal path to that node has been found. Proof by contradiction: otherwise, another frontier node n′ must exist on the optimal path from the initial node to n (using the graph separation property); moreover, by the definition of path cost (due to non-negative step costs, paths never get cheaper as nodes are added), g(n′) ≤ g(n), and thus n′ would have been selected first. Hence nodes are expanded in order of their optimal path cost. 47

Depth First Search (DFS) Expand the deepest node in frontier Implementation: LIFO queue (i.e., put successors at front) for frontier 48

DFS Expand the deepest unexpanded node in frontier 49


Properties of DFS Complete? Tree-search version: not complete (repeated states & redundant paths) Graph-search version: fails in infinite state spaces (with an infinite non-goal path) but is complete in finite ones Time O(b^m): terrible if m is much larger than d; in the tree-search version, m can be much larger than the size of the state space Space O(bm), i.e., linear space complexity for tree search, so depth-first tree search is the basis of many AI areas; a recursive version called backtracking search can be implemented in O(m) space Optimal? No (figure: DFS, tree-search version) 61

Depth Limited Search Depth-first search with depth limit l (nodes at depth l have no successors) Solves the infinite-path problem In some problems (e.g., route finding), knowledge of the problem can be used to specify l Complete? Yes, if l ≥ d Time O(b^l) Space O(bl) Optimal? No 62

Iterative Deepening Search (IDS) Combines the benefits of DFS & BFS DFS: low memory requirement BFS: completeness & also optimality for special path-cost functions Not as wasteful as it may seem (most of the nodes are in the bottom level) 63

IDS: Example l =0 64

IDS: Example l =1 65

IDS: Example l =2 66

IDS: Example l = 3 67

Properties of iterative deepening search Complete? Yes (for finite b and d) Time d·b + (d−1)·b^2 + ⋯ + 2·b^(d−1) + 1·b^d = O(b^d) Space O(bd) Optimal? Yes, if path cost is a non-decreasing function of the node depth IDS is the preferred method when the search space is large and the depth of the solution is unknown 68
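IDS can be sketched as a loop over depth-limited DFS calls; the function names and the toy tree in the test are illustrative assumptions, not from the slides. Only the current path is kept in memory, which is the O(bd) (here even O(d)) space advantage over BFS.

```python
def depth_limited(state, successors, is_goal, limit, path=None):
    """Recursive depth-limited DFS; returns a path or None (cutoff/failure)."""
    path = (path or []) + [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None                       # depth limit reached: cut off
    for s2 in successors(state):
        found = depth_limited(s2, successors, is_goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """IDS: repeat depth-limited DFS with limits 0, 1, 2, ... until success."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, successors, is_goal, limit)
        if result:
            return result
    return None
```

Each iteration regenerates the shallower levels, but as the overhead computation on the next slide shows, those levels are a small fraction of the total work.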

Iterative deepening search Number of nodes generated to depth d: N_IDS = d·b + (d−1)·b^2 + ⋯ + 2·b^(d−1) + 1·b^d = O(b^d) For b = 10, d = 5: N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110 N_IDS = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450 Overhead of IDS = (123,450 − 111,110)/111,110 = 11% 69

Bidirectional search Simultaneous forward and backward search (hoping that they meet in the middle) Idea: b^(d/2) + b^(d/2) is much less than b^d The goal test is replaced by checking whether the frontiers of the two searches intersect The first solution found may not be optimal Implementation: hash table for the frontier in one of the two searches Space requirement: the most significant weakness Computing predecessors? May be difficult List of goals? use a new dummy goal Abstract goal (checkmate)?! 70

Summary of algorithms (tree search) a: complete if b is finite b: complete if step costs ≥ ε > 0 c: optimal if step costs are all equal d: if both directions use BFS Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms 71

Informed Search When exhaustive search is impractical, heuristic methods are used to speed up the process of finding a satisfactory solution. 72

Outline Best-first search Greedy best-first search A * search Finding heuristics 73

Best-first search Idea: use an evaluation function f(n) for each node and expand the most desirable unexpanded node More general than g(n) = cost so far to reach n The evaluation function estimates the cost obtainable through expanding a node (lower estimated cost means more desirable) Implementation: priority queue ordered by f (the search strategy is determined by the evaluation function) Special cases: Greedy best-first search A* search Uniform-cost search 74

Heuristic Function Incorporates problem-specific knowledge in search: information beyond the problem definition, in order to reach an optimal solution as rapidly as possible A heuristic function can be used as a component of f(n) h(n): estimated cost of the cheapest path from n to a goal Depends only on n (not on the path from the root to n) h(n) ≥ 0, and if n is a goal state then h(n) = 0 Examples of heuristics: a rule of thumb, an educated guess, an intuitive judgment 75

Greedy best-first search Evaluation function f(n) = h(n) e.g., h_SLD(n) = straight-line distance from n to Bucharest Greedy best-first search expands the node that appears to be closest to the goal 76
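A minimal sketch of greedy best-first search on the Romania example, assuming the standard straight-line-distance values to Bucharest from the AIMA map (Arad 366, Sibiu 253, Fagaras 176, Rimnicu Vilcea 193, Pitesti 100, Timisoara 329, Zerind 374, Bucharest 0); the function name is an illustrative assumption.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: expand the frontier node with smallest h(n)."""
    frontier = [(h(start), start, [start])]   # priority queue ordered by h only
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for s2 in neighbors(state):
            if s2 not in explored:
                heapq.heappush(frontier, (h(s2), s2, path + [s2]))
    return None
```

From Arad this reproduces the slides' example run: Arad, Sibiu, Fagaras, Bucharest, which is not the cheapest route, since h ignores the step costs already paid.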

Romania with step costs in km 77

Greedy best-first search example 78


Properties of greedy best-first search Complete? No; it can get stuck in infinite loops, e.g., going from Iasi to Fagaras: Iasi → Neamt → Iasi → Neamt → ⋯ Like DFS, only the graph-search version is complete in finite spaces Time O(b^m), but a good heuristic can give dramatic improvement Space O(b^m): keeps all nodes in memory Optimal? No 82

A* search Idea: minimize the total estimated solution cost Evaluation function f(n) = g(n) + h(n) g(n) = cost so far to reach n h(n) = estimated cost of the cheapest path from n to the goal So f(n) = estimated total cost of the cheapest path through n to the goal (figure: start — actual cost g(n) → n — estimated cost h(n) → goal) 83

A* search Combines advantages of uniform-cost and greedy searches A* can be complete and optimal when h(n) has some properties 84
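The combination f(n) = g(n) + h(n) can be sketched by extending the UCS skeleton with a heuristic term; this is an illustrative sketch (function name and data layout assumed), using the standard AIMA Romania step costs and straight-line distances in the test.

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: expand the frontier node with smallest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path                       # goal test on expansion, as in UCS
        for nxt, cost in edges.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

With h = 0 this degenerates to uniform-cost search; with an admissible h it keeps UCS's optimality while expanding far fewer nodes, because nodes with f above the optimal cost are never popped.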

A * search: example 85


Conditions for optimality of A* Admissibility: h(n) is a lower bound on the cost to reach the goal; condition for optimality of the TREE-SEARCH version of A* Consistency (monotonicity): h(n) ≤ c(n, a, n′) + h(n′); condition for optimality of the GRAPH-SEARCH version of A* 91

Admissible heuristics An admissible heuristic h(n) never overestimates the cost to reach the goal (it is optimistic): h(n) is a lower bound on the path cost from n to the goal ∀n, h(n) ≤ h*(n), where h*(n) is the real cost to reach the goal state from n Example: h_SLD(n) ≤ the actual road distance 92

Consistent heuristics Triangle inequality: h(n) ≤ c(n, a, n′) + h(n′) for every node n and every successor n′ generated by any action a c(n, a, n′): cost of generating n′ by applying action a to n (figure: triangle with vertices n, n′, and goal G) 93

Consistency vs. admissibility Consistency ⇒ Admissibility: all consistent heuristic functions are admissible (and, in practice, most admissible heuristics are also consistent) Proof sketch, for any path n1 → n2 → ⋯ → nk → G with step costs c(ni, ai, ni+1): h(n1) ≤ c(n1, a1, n2) + h(n2) ≤ c(n1, a1, n2) + c(n2, a2, n3) + h(n3) ≤ ⋯ ≤ Σi c(ni, ai, ni+1) + h(G), and h(G) = 0, so h(n1) ≤ cost of (every) path from n1 to the goal; in particular, h(n1) ≤ cost of the optimal path from n1 to the goal 94

Admissible but not consistent: example Suppose g(n) = 5, h(n) = 9, so f(n) = 14, and a successor n′ with c(n, a, n′) = 1, g(n′) = 6, h(n′) = 6, so f(n′) = 12 Then h(n) > c(n, a, n′) + h(n′) (since 9 > 1 + 6): h is not consistent, and f decreases along the path (14 → 12) For a merely admissible heuristic, f may decrease along a path Is there any way to make h consistent? Use the update h(n′) ← max(h(n′), h(n) − c(n, a, n′)) 95

Optimality of A* (admissible heuristics) Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal Assumptions: G2 is a suboptimal goal on the frontier; n is an unexpanded node on the frontier that lies on a shortest path to an optimal goal G I. h(G2) = 0 ⇒ f(G2) = g(G2) II. h(G) = 0 ⇒ f(G) = g(G) III. G2 is suboptimal ⇒ g(G2) > g(G) IV. I, II, III ⇒ f(G2) > f(G) V. h is admissible ⇒ h(n) ≤ h*(n) ⇒ f(n) = g(n) + h(n) ≤ g(n) + h*(n) = f(G) Hence f(n) ≤ f(G) < f(G2) by IV, so A* will never select G2 for expansion 96

Optimality of A* (consistent heuristics) Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal Lemma 1: if h(n) is consistent, then the f(n) values are non-decreasing along any path Proof: let n′ be a successor of n I. f(n′) = g(n′) + h(n′) II. g(n′) = g(n) + c(n, a, n′) III. I, II ⇒ f(n′) = g(n) + c(n, a, n′) + h(n′) IV. h is consistent ⇒ h(n) ≤ c(n, a, n′) + h(n′) V. III, IV ⇒ f(n′) ≥ g(n) + h(n) = f(n) 97

Optimality of A* (consistent heuristics) Lemma 2: If A* selects a node n for expansion, the optimal path to that node has been found. Proof by contradiction: otherwise, another frontier node n′ must exist on the optimal path from the initial node to n (using the graph separation property); moreover, by Lemma 1, f(n′) ≤ f(n), and thus n′ would have been selected first. Therefore, the sequence of nodes expanded by A* (using GRAPH-SEARCH) is in non-decreasing order of f(n), and since h = 0 for goal nodes, the first goal node selected for expansion is an optimal solution (f is the true cost for goal nodes) 98

Admissible vs. consistent (tree vs. graph search) Consistent heuristic: When selecting a node for expansion, the path with the lowest cost to that node has been found When an admissible heuristic is not consistent, a node will need repeated expansion, every time a new best (so-far) cost is achieved for it. 99

Contours in the state space A* (using GRAPH-SEARCH) expands nodes in order of increasing f value, gradually adding "f-contours" of nodes Contour i contains all nodes with f = f_i, where f_i < f_(i+1) A* expands all nodes with f(n) < C*, some nodes with f(n) = C* (nodes on the goal contour), and no nodes with f(n) > C* (pruning) 100

A* search vs. uniform-cost search Uniform-cost search (A* using h(n) = 0) produces circular bands around the initial state A* produces irregular bands: with more accurate heuristics, the bands stretch toward the goal (more narrowly focused around the optimal path) Example: states are points in 2-D Euclidean space, g(n) = distance from start, h(n) = estimate of distance to goal 101

Properties of A* Complete? Yes, if there are finitely many nodes with f(n) ≤ f(G) = C* (guaranteed when step costs ≥ ε > 0 and b is finite) Time? Exponential in the worst case, but a more accurate heuristic yields a smaller effective branching factor; polynomial when the heuristic error grows at most logarithmically, h(x) − h*(x) = O(log h*(x)) However, A* is optimally efficient for any given consistent heuristic: no other optimal algorithm of this type is guaranteed to expand fewer nodes than A* (except possibly on nodes with f = C*) Space? Keeps all frontier and/or explored nodes in memory Optimal? Yes (expands nodes in non-decreasing order of f) 102

Robot navigation example Initial state? the red cell States? cells of a rectangular grid (except obstacles) Actions? move to one of the 8 neighbors (if it is not an obstacle) Goal test? the green cell Path cost? each action costs the Euclidean length of the movement 103

A* vs. UCS: Robot navigation example Heuristic: Euclidean distance to goal Expanded nodes: filled circles in red & green Color indicating g value (red: lower, green: higher) Frontier: empty nodes with blue boundary Nodes falling inside the obstacle are discarded 104 Adopted from: http://en.wikipedia.org/wiki/talk%3aa*_search_algorithm

Robot navigation: admissible heuristic Is the Manhattan distance d_M(x, y) = |x1 − y1| + |x2 − y2| an admissible heuristic for the previous example? 105
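The answer the slide invites can be checked with a tiny computation (this worked example is an addition, not from the slides): on the 8-connected grid with Euclidean step costs, a single diagonal move reaches (1, 1) from (0, 0) at true cost √2 ≈ 1.414, but the Manhattan estimate is 2, an overestimate, so Manhattan distance is not admissible here, while Euclidean distance is.

```python
import math

def manhattan(a, b):
    """Manhattan distance d_M between two grid points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Straight-line (Euclidean) distance between two grid points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

true_cost = math.sqrt(2)                         # cost of one diagonal step
assert manhattan((0, 0), (1, 1)) > true_cost     # overestimates: NOT admissible
assert euclidean((0, 0), (1, 1)) <= true_cost    # never overestimates: admissible
```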

A*: inadmissible heuristic (figures: A* with h = 5·h_SLD vs. h = h_SLD) Adopted from: http://en.wikipedia.org/wiki/talk%3aa*_search_algorithm 106

A*, Greedy, UCS: Pacman Heuristic: Manhattan distance Color indicates the iteration in which a node was expanded (red: earlier) (figures: UCS, Greedy, A*) 107 Adapted from Dan Klein's slides

8-puzzle problem: state space b ≈ 3, average solution cost for a random 8-puzzle ≈ 22 Tree search: b ≈ 3, d ≈ 22 ⇒ 3^22 ≈ 3.1 × 10^10 states Graph search: 9!/2 = 181,440 distinct states for the 8-puzzle; about 10^13 for the 15-puzzle 108

Admissible heuristics: 8-puzzle h1(n) = number of misplaced tiles h2(n) = sum of Manhattan distances of the tiles from their target positions (i.e., the number of squares from the desired location of each tile) For the start state S shown: h1(S) = 8 h2(S) = 3+1+2+2+2+3+3+2 = 18 109
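Both heuristics are easy to compute; the sketch below assumes (as in the AIMA figure these slides follow) the start state 7 2 4 / 5 · 6 / 8 3 1 and goal · 1 2 / 3 4 5 / 6 7 8, with boards encoded as row-major tuples and 0 for the blank. The encoding and function names are illustrative assumptions.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of tiles 1..8 from their goal squares."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(state)}
    tgt = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    return sum(abs(pos[t][0] - tgt[t][0]) + abs(pos[t][1] - tgt[t][1])
               for t in range(1, 9))
```

On that start state these reproduce the values quoted above, h1(S) = 8 and h2(S) = 18; since every h2 value is at least the corresponding h1 value, h2 dominates h1, which the comparison table below makes visible.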

Effect of heuristic on accuracy N: number of nodes generated by A* d: solution depth Effective branching factor b*: branching factor of a uniform tree of depth d containing N + 1 nodes: N + 1 = 1 + b* + (b*)^2 + ⋯ + (b*)^d A well-designed heuristic has b* close to 1 110
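The defining equation has no closed-form solution for b*, but since the left side grows monotonically in b*, it can be solved numerically; this bisection sketch (function name assumed, not from the slides) is one way to compute the table on the next slide from (N, d) pairs.

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)**2 + ... + (b*)**d for b* by bisection."""
    def total(b):
        return sum(b ** i for i in range(d + 1))   # 1 + b + ... + b^d
    lo, hi = 1.0, float(N)                          # total() is increasing in b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

As a sanity check, a complete binary tree of depth 3 has 1 + 2 + 4 + 8 = 15 nodes, so N = 14 and d = 3 should recover b* = 2 exactly.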

Comparison on the 8-puzzle

Search cost (N):
d    IDS         A*(h1)   A*(h2)
6    680         20       18
12   3,644,035   227      73
24   --          39,135   1,641

Effective branching factor (b*):
d    IDS     A*(h1)   A*(h2)
6    2.87    1.34     1.30
12   2.78    1.42     1.24
24   --      1.48     1.26
111

Heuristic quality If ∀n, h2(n) ≥ h1(n) (both admissible), then h2 dominates h1 and is better for search Surely expanded nodes: those with f(n) < C*, i.e., h(n) < C* − g(n) If h2(n) ≥ h1(n), then every node surely expanded with h2 is also surely expanded with h1 (h1 may also cause some additional node expansions) 112

More accurate heuristics The max of admissible heuristics is admissible (and a more accurate estimate): h(n) = max(h1(n), h2(n)) How about using the actual cost as a heuristic, h(n) = h*(n) for all n? Search would go straight to the goal, but computing h* amounts to solving the problem itself: there is a trade-off between accuracy and computation time 113

Generating heuristics Relaxed problems Inventing admissible heuristics automatically Sub-problems (pattern databases) Learning heuristics from experience 114

Relaxed problem Relaxed problem: a problem with fewer restrictions on the actions The optimal solution of the relaxed problem may be computable easily (without search) The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem The relaxed optimal solution is the shortest path in a supergraph of the original state space. 115

Relaxed problem: 8-puzzle 8-Puzzle: move a tile from square A to B if A is adjacent (left, right, above, below) to B and B is blank Relaxed problems 1) can move from A to B if A is adjacent to B (ignore whether or not position is blank) 2) can move from A to B if B is blank (ignore adjacency) 3) can move from A to B (ignore both conditions) Admissible heuristics for original problem (h 1 (n) and h 2 (n)) are optimal path costs for relaxed problems First case: a tile can move to any adjacent square h 2 (n) Third case: a tile can move anywhere h 1 (n) 116

Sub-problem heuristic The cost to solve a sub-problem Store exact solution costs for every possible sub-problem Admissible? The cost of the optimal solution to this problem is a lower bound on the cost of the complete problem 117

Pattern database heuristics Store the exact solution cost for every possible sub-problem instance Combine (by taking the maximum) the heuristics resulting from different sub-problems 15-puzzle: 10^3 times reduction in the number of generated nodes vs. h2 118

Disjoint pattern databases Adding these pattern-database heuristics yields an admissible heuristic?! Divide up the problem such that each move affects only one sub-problem (disjoint sub-problems) and then add the heuristics 15-puzzle: 10^4 times reduction in the number of generated nodes vs. h2 24-puzzle: 10^6 times reduction in the number of generated nodes vs. h2 Can Rubik's cube be divided up into disjoint sub-problems? 119

Learning heuristics from experience Machine Learning Techniques Learn h(n) from samples of optimally solved problems (predicting solution cost for other states) Features of state (instead of raw state description) 8-puzzle number of misplaced tiles number of adjacent pairs of tiles that are not adjacent in the goal state Linear Combination of features 120

A* difficulties Space is the main problem of A* Overcoming the space problem while retaining completeness and optimality: IDA*, RBFS, MA*, SMA* A* time complexity: variants of A* try to find suboptimal solutions quickly, using more accurate but not strictly admissible heuristics 121