A.I.: Informed Search Algorithms. Chapter III: Part Deux


Outline: best-first search; greedy best-first search; A* search; heuristics.

Overview. Informed search uses problem-specific knowledge. General approach: best-first search, an instance of TREE-SEARCH (or GRAPH-SEARCH) in which the search strategy is defined by picking the order of node expansion. With best-first search, a node is selected for expansion based on an evaluation function f(n). The evaluation function is a cost estimate; we expand the lowest-cost node first (the same as uniform-cost search, but with g replaced by f).

Overview (cont'd). The choice of f determines the search strategy (one can show that best-first tree search includes DFS as a special case). Often, for best-first algorithms, f is defined in terms of a heuristic function h(n): the estimated cost of the cheapest path from the state at node n to a goal state (for a goal state, h(n) = 0). Heuristic functions are the most common form in which additional knowledge of the problem is passed to the search algorithm.

Overview (cont'd). Best-first search algorithms constitute a large family, distinguished by their evaluation functions; each incorporates a heuristic function h(n). Example: in route planning, the estimated cost of the cheapest path might be the straight-line distance between two cities. Recall: g(n) = cost from the initial state to the current node n; h(n) = estimated cost of the cheapest path from node n to a goal node; f(n) = evaluation function used to select a node for expansion (usually the lowest-cost node).

Best-First Search. Idea: use an evaluation function f(n) for each node; f(n) provides an estimate of the total cost. Expand the node n with the smallest f(n). Implementation: order the nodes in the frontier in increasing order of cost. Special cases: greedy best-first search and A* search.
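
As a sketch (not from the slides), generic best-first search can be written around a priority queue ordered by f; the function names and the tiny demo graph below are illustrative assumptions:

```python
import heapq

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search: always expand the frontier node with the
    smallest f. `successors(s)` yields (next_state, step_cost) pairs, and
    `f(state, g)` maps a state and its path cost g to a priority."""
    frontier = [(f(start, 0), 0, start, [start])]   # (priority, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier,
                               (f(nxt, g + cost), g + cost, nxt, path + [nxt]))
    return None

# Uniform-cost search is the special case f(n) = g(n):
graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(best_first_search('A', lambda s: s == 'C',
                        lambda s: graph[s], lambda s, g: g))
# (['A', 'B', 'C'], 2)
```

Plugging in different f functions yields the special cases below: f = h gives greedy best-first search, and f = g + h gives A*.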

Greedy best-first search. Evaluation function f(n) = h(n), the heuristic estimate of the cost from n to the goal. We use the straight-line distance heuristic: h_SLD(n) = straight-line distance from n to Bucharest. Note that the heuristic values cannot be computed from the problem description itself! In addition, we require extrinsic knowledge to understand that h_SLD is correlated with the actual road distances, making it a useful heuristic. Greedy best-first search expands the node that appears to be closest to the goal.
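
A minimal sketch of greedy best-first search on a fragment of the Romania map; the road distances and h_SLD values below are assumptions reproduced from the standard textbook example the slides draw on:

```python
import heapq

# Fragment of the Romania map (road distances in km) and straight-line
# distances to Bucharest, per the standard example.
ROADS = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def greedy_bfs(start, goal):
    """Greedy best-first search: expand the frontier node with smallest h."""
    frontier = [(H_SLD[start], start, [start], 0)]  # (h, state, path, cost)
    explored = set()
    while frontier:
        _, state, path, cost = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in explored:
            continue
        explored.add(state)
        for nxt, d in ROADS[state]:
            if nxt not in explored:
                heapq.heappush(frontier,
                               (H_SLD[nxt], nxt, path + [nxt], cost + d))
    return None

# Greedy takes Arad-Sibiu-Fagaras-Bucharest (450 km), missing the 418 km route.
print(greedy_bfs('Arad', 'Bucharest'))
```

Note how the search commits to Fagaras because it looks closest to Bucharest, even though the route through Rimnicu Vilcea and Pitesti is shorter overall.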

Romania with step costs in km

Greedy best-first search example


GBFS is incomplete! Why? The GRAPH-SEARCH version of greedy best-first search is, however, complete in finite spaces.

Properties of greedy best-first search. Complete? No: it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt. Time? O(b^m) in the worst case, but a good heuristic can give dramatic improvement (m is the maximum depth of the search space). Space? O(b^m): it keeps all nodes in memory. Optimal? No (not guaranteed to return the lowest-cost solution).

A* Search. The most widely known form of best-first search. It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal: f(n) = g(n) + h(n), the estimated cost of the cheapest solution through n. A reasonable strategy: try the node with the lowest g(n) + h(n) value! Provided the heuristic meets some basic conditions, A* is both complete and optimal.
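
The f = g + h strategy can be sketched on the same Romania fragment; as before, the map data and h_SLD values below are assumptions reproduced from the standard example:

```python
import heapq

# Fragment of the Romania map (road distances in km) and straight-line
# distances to Bucharest.
ROADS = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    """A* search: expand the frontier node with smallest f(n) = g(n) + h(n)."""
    frontier = [(H_SLD[start], 0, start, [start])]  # (f, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, d in ROADS[state]:
            if nxt not in explored:
                heapq.heappush(frontier,
                               (g + d + H_SLD[nxt], g + d, nxt, path + [nxt]))
    return None

# Unlike greedy search, A* finds the optimal 418 km route through Pitesti.
print(a_star('Arad', 'Bucharest'))
```

The only change from greedy best-first search is the priority: g + d + h instead of h alone, which is what restores optimality here (h_SLD is consistent on this map).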

A* search example: f(n) = g(n) + h(n)


Admissible heuristics. A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: h_SLD(n) never overestimates the actual road distance. Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal.
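
On a finite graph, admissibility can be checked numerically by computing the true costs h*(n) with Dijkstra's algorithm outward from the goal; the Romania-fragment data below is again an assumption drawn from the standard example:

```python
import heapq

# Fragment of the Romania map (road distances in km, traversable both ways)
# and straight-line distances to Bucharest.
ROADS = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def true_costs(goal):
    """h*(n) for every n: Dijkstra's algorithm outward from the goal."""
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, n = heapq.heappop(frontier)
        if d > dist.get(n, float('inf')):
            continue  # stale entry
        for n2, c in ROADS[n]:
            if d + c < dist.get(n2, float('inf')):
                dist[n2] = d + c
                heapq.heappush(frontier, (d + c, n2))
    return dist

h_star = true_costs('Bucharest')
print(all(H_SLD[n] <= h_star[n] for n in ROADS))  # h_SLD never overestimates
```

This only verifies admissibility on the fragment shown, of course; the general argument is geometric (a straight line is the shortest possible route).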

Optimality of A* (proof). Suppose some suboptimal goal G2 has been generated and is in the frontier. Let n be an unexpanded node in the frontier such that n is on a shortest path to an optimal goal G. Then: f(G2) = g(G2), since h(G2) = 0; g(G2) > g(G), since G2 is suboptimal; f(G) = g(G), since h(G) = 0. From the above, f(G2) > f(G).

Optimality of A* (proof, cont'd). With the suboptimal goal G2, optimal goal G, and frontier node n as above: f(G2) > f(G) (from the previous slide). Since h is admissible, h(n) ≤ h*(n), so g(n) + h(n) ≤ g(n) + h*(n), i.e., f(n) ≤ g(n) + h*(n) = g(G) = f(G) (because n lies on an optimal path to G). Hence f(G2) > f(n), and A* will never select G2 for expansion.

Consistent Heuristics. A heuristic is consistent (or monotonic) if for every node n and every successor n' of n generated by any action a: h(n) ≤ c(n,a,n') + h(n'). If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n), i.e., f(n) is non-decreasing along any path. Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
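
The edge-by-edge condition can likewise be verified directly on a finite graph; the Romania-fragment data below is an assumption reproduced from the standard example:

```python
# Fragment of the Romania map (road distances in km) and straight-line
# distances to Bucharest.
ROADS = {
    'Arad': [('Zerind', 75), ('Timisoara', 118), ('Sibiu', 140)],
    'Zerind': [('Arad', 75)],
    'Timisoara': [('Arad', 118)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}
H_SLD = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

def is_consistent(h, roads):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= c + h[n2]
               for n, edges in roads.items()
               for n2, c in edges)

print(is_consistent(H_SLD, ROADS))  # True on this fragment
```

For h_SLD the check is really the triangle inequality: the straight line from n to the goal can never be longer than the road to n' plus the straight line from n'.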

Optimality of A*. A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes. Contour i contains all nodes with f = f_i, where f_i < f_{i+1}; that is, nodes inside a given contour have f-costs less than or equal to the contour value.

Properties of A*. Complete: yes (unless there are infinitely many nodes with f ≤ f(G)). Time: exponential. Space: keeps all nodes in memory, so also exponential. Optimal: yes (provided h is admissible or consistent). Optimally efficient: yes (no algorithm with the same heuristic is guaranteed to expand fewer nodes). NB: every consistent heuristic is also admissible (Pearl). Q: What about the converse?

Admissible Heuristics. E.g., for the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance (i.e., the 1-norm: the number of squares each tile is from its desired location). Q: Why are these admissible heuristics? h1(S) = ? h2(S) = ?

Admissible Heuristics (cont'd). For the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance (the number of squares each tile is from its desired location). For the example state S: h1(S) = 8; h2(S) = 3+1+2+2+2+3+3+2 = 18.
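
Both heuristics are a few lines each. The board figure for S is not reproduced in the transcript, so the start state below is an assumption: it is the common textbook instance that yields exactly these values:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square."""
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)      # current (row, col) of this tile
            gr, gc = where[tile]     # goal (row, col) of this tile
            total += abs(r - gr) + abs(c - gc)
    return total

# Boards as 9-tuples read row by row; 0 marks the blank.
S    = (7, 2, 4, 5, 0, 6, 8, 3, 1)
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(S, GOAL), h2(S, GOAL))  # 8 18
```

Both are admissible because every move changes the position of exactly one tile by one square: at least h1 moves (each misplaced tile must move at least once) and at least h2 moves (each tile must cover at least its Manhattan distance) are required.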

Dominance. If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1. Domination translates directly into efficiency: h2 is better for search, and A* using h2 will never expand more nodes than A* using h1. Typical search costs (average number of nodes expanded): at d = 12, IDS = 3,644,035 nodes, A*(h1) = 227 nodes, A*(h2) = 73 nodes; at d = 24, IDS = too many nodes, A*(h1) = 39,135 nodes, A*(h2) = 1,641 nodes. (IDS = iterative deepening search.)

Memory-Bounded Heuristic Search: Recursive Best-First Search (RBFS). How can we solve the memory problem of A* search? Idea: try something like depth-first search, but let's not forget everything about the branches we have partially explored: we remember the best f-value found so far in each branch we delete.

Memory-Bounded Heuristic Search: Recursive Best-First Search (cont'd). RBFS changes its mind very often in practice. This is because f = g + h becomes more accurate (less optimistic) as we approach the goal, so higher-level nodes start with smaller f-values and are explored first; the search then backs up whenever the best alternative among frontier nodes that are not children of the current node looks better. Problem: RBFS uses too little memory and regenerates forgotten subtrees; we should keep in memory whatever we can.

Simple Memory-Bounded A* (SMA*). Like A*, but when memory is full we delete the worst node (largest f-value). Like RBFS, we remember the best f-value in the branch we delete. If there is a tie (equal f-values), we delete the oldest nodes first. SMA* finds the optimal reachable solution given the memory constraint (where reachable means the path from the root to the goal fits in memory). One can also use iterative deepening with A* (IDA*); time can still be exponential.

Relaxed Problems. A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. (Why? Any solution to the original problem is also a solution to the relaxed problem, so the relaxed optimum can cost no more than the original optimum.) If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.

Summary. Informed search methods may have access to a heuristic function h(n) that estimates the cost of a solution from n. The generic best-first search algorithm selects a node for expansion according to an evaluation function. Greedy best-first search expands nodes with minimal h(n); it is not optimal but is often efficient. A* search expands nodes with minimal f(n) = g(n) + h(n); A* is complete and optimal, provided that h(n) is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH). The space complexity of A* is still prohibitive. The performance of heuristic search algorithms depends on the quality of the h(n) function. One can sometimes construct good heuristics by relaxing the problem definition.