INTRODUCTION TO HEURISTIC SEARCH

What is heuristic search? Given a problem in which we must make a series of decisions, determine the sequence of decisions which provably optimizes some criterion.

What is NOT heuristic search? Any algorithm that cannot produce a provably correct (globally optimal) solution: greedy hill climbing, simulated annealing, genetic algorithms, gradient descent, EM, ...

Overview. Basics: brute-force search (depth-first search, breadth-first search). Heuristic search: the heuristic function and best-first search (A*). Other details. Conclusions.

What is a state space search problem? State: a node in a graph. Start state: where we begin our search. Goal state(s): where we want to go. Operators: how we move from one node to the next. Typical problem: what is the minimum-cost sequence of operators to move from the start to a goal? What is the shortest path from the start to a goal? Common variant: does there exist a path from the start to any goal?
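
To make the components concrete, here is a tiny illustrative encoding (my own, not from the slides) of a state space search problem as a Python dictionary; the node names and costs are hypothetical:

```python
# Hypothetical toy state space: a start state, a set of goal states, and operators
# encoded as (successor, cost) pairs for each node.
problem = {
    "start": "A",
    "goals": {"G"},
    "operators": {
        "A": [("B", 1), ("C", 4)],
        "B": [("D", 2)],
        "C": [("D", 1), ("G", 6)],
        "D": [("G", 2)],
    },
}
# Typical question: what is the minimum-cost sequence of operators from "A" to a goal?
```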

Dijkstra's algorithm. States: cities in North America. Start state: New York. Goal state: Hollywood. Operators: taking a bus; cost: the price of the bus ticket.

Sliding tile: the most studied puzzle. [Figure: several 8-puzzle boards reached by sliding tiles, from a scrambled configuration to the ordered goal configuration.]

What is an implicit state space? Observation: some state spaces are huge. Problem: we cannot store all of the nodes. Solution: define the start and goal states explicitly, but define all other states implicitly via operators. Example operator (pseudocode): moveBlankRight(state): copy(state, newstate); updateCol(newstate, blank, col(state, blank) + 1); updateCol(newstate, toRight(state, blank), col(state, blank)).
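
A minimal runnable sketch of such an operator (my own, not from the slides), assuming 8-puzzle states are represented as 9-tuples in row-major order with 0 standing for the blank:

```python
# Implicitly defined operator for the 3x3 sliding-tile puzzle: successors are
# computed on demand rather than stored up front.

def move_blank_right(state):
    """Return the successor with the blank moved one column right, or None if illegal."""
    i = state.index(0)                  # position of the blank
    if i % 3 == 2:                      # blank already in the rightmost column
        return None
    new_state = list(state)
    # Swap the blank with the tile to its right (the slide's two updateCol calls).
    new_state[i], new_state[i + 1] = new_state[i + 1], new_state[i]
    return tuple(new_state)

print(move_blank_right((3, 0, 6, 4, 7, 2, 8, 1, 5)))   # -> (3, 6, 0, 4, 7, 2, 8, 1, 5)
```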

Standard terminology. Expand: apply operators to a state (a.k.a. a node). Generate: create a state by expanding an existing one. Successor: a state created when expanding a node. Duplicate: a state generated more than once. OPEN: a sorted list of nodes that have been generated but not expanded. CLOSED: a list of nodes that have been expanded.

Standard notation: successors. g(n): the (known) distance from the start to node n. c(n, n'): the cost to move from node n to successor n'. g(n') = g(n) + c(n, n'): the distance from the start to successor n' through its predecessor n.

Another simple problem. State: a subset of {X_0, X_1, ..., X_n}. Start state: {}. Goal state: {X_0, X_1, ..., X_n}. Operators: add one new X_i; the cost c(n, n') is given as input.

Depth-first search (DFS). Intuition: expand a node, then one of its successors, then one of that successor's successors, and so on, backtracking when a branch is exhausted. Solution reconstruction: pop back up the call stack. Pros: memory-efficient; easy to implement. Cons: can follow infinite branches; not naively optimal; duplicate expansion!
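
A minimal sketch of DFS (my own, not from the slides), run over a small hypothetical graph given as adjacency lists; it returns the first path found, which illustrates why DFS is not naively optimal:

```python
# Depth-first search: recurse into one successor before trying its siblings.
# Returns the first start-to-goal path found (not necessarily the cheapest).

def dfs(graph, start, goal, path=None, visited=None):
    path = [start] if path is None else path + [start]
    visited = set() if visited is None else visited
    if start == goal:
        return path
    visited.add(start)
    for successor in graph.get(start, []):
        if successor not in visited:        # crude duplicate avoidance
            result = dfs(graph, successor, goal, path, visited)
            if result is not None:
                return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": ["G"]}
print(dfs(graph, "A", "G"))   # -> ['A', 'B', 'D', 'G'] even though A-C-G is shorter
```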

Breadth-first search (BFS). Intuition: expand all of the start's successors, then all of their successors, then all of their successors, etc. Solution reconstruction: store back-pointers that can be retraced. Pros: some duplicate detection; no problems with depth. Con: memory (needed for duplicate detection).
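
A minimal sketch of BFS with back-pointers (my own, not from the slides), over the same kind of hypothetical adjacency-list graph:

```python
from collections import deque

# Breadth-first search: expand the frontier layer by layer; parent stores the
# back-pointers used to reconstruct the solution path.

def bfs(graph, start, goal):
    parent = {start: None}                 # also serves as duplicate detection
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:        # retrace back-pointers to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for successor in graph.get(node, []):
            if successor not in parent:
                parent[successor] = node
                frontier.append(successor)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": ["G"]}
print(bfs(graph, "A", "G"))   # -> ['A', 'C', 'G'] (fewest edges)
```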

Heuristic functions. A heuristic function estimates the distance from a state to the (closest) goal state. We often create heuristics by relaxing the problem. Sliding tile puzzle heuristic: Manhattan distance. [Figure: an example 8-puzzle board whose heuristic value h is computed as the sum of the tiles' Manhattan distances to their goal positions.]
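
A minimal sketch of the Manhattan distance heuristic (my own, not from the slides), again assuming states are 9-tuples in row-major order with 0 as the blank:

```python
# Manhattan distance heuristic for the 3x3 sliding-tile puzzle: for each tile,
# add the number of rows plus columns it is away from its goal position.

def manhattan(state, goal):
    total = 0
    for pos, tile in enumerate(state):
        if tile == 0:                      # the blank does not count
            continue
        goal_pos = goal.index(tile)
        total += abs(pos // 3 - goal_pos // 3) + abs(pos % 3 - goal_pos % 3)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(manhattan((3, 6, 4, 7, 2, 8, 1, 5, 0), goal))   # -> 14
```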

Standard notation: node priority. g(n): the (known) distance from the start to n. h(n): a heuristic estimate of the distance from n to the goal. f(n) = g(n) + h(n): an estimate of the cost of a path from the start to the goal through n.

Admissible heuristic functions. An admissible heuristic function never overestimates the distance from a state to the goal. Such heuristics are sometimes called optimistic. Theorem: the Manhattan distance heuristic function is admissible for the sliding tile puzzle.

Consistent heuristic functions. A consistent heuristic function is admissible and satisfies h(n) <= c(n, n') + h(n') for every node n and successor n'. Consistent heuristics are sometimes called monotonic because f costs cannot decrease along a path from the start to the goal.
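
A one-step check (my own derivation, not from the slides) of why consistency makes f non-decreasing along a path, using g(n') = g(n) + c(n, n'):

```latex
f(n') = g(n') + h(n') = g(n) + c(n,n') + h(n') \ge g(n) + h(n) = f(n)
```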

Depth-first branch and bound (DFBnB). Intuition: perform DFS, but calculate f(n) for each node; if f(n) is worse than some bound, prune. Example: we have found a solution with cost 34 and continue to search; a later node has a lower bound of 34, so it cannot improve on the solution we already have, and we prune it. Advantage (over DFS): ignores parts of the search space worse than the current known solution.
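
A minimal sketch of DFBnB (my own, not from the slides) on a hypothetical weighted graph, with h given as a table of lower bounds to the goal:

```python
import math

# Depth-first branch and bound: plain DFS, but prune any branch whose f value
# cannot improve on the best solution found so far.

def dfbnb(graph, h, node, goal, g=0, best=math.inf, path=()):
    path = path + (node,)
    if node == goal:
        return g, path                        # a candidate solution; caller keeps the best
    best_path = None
    for successor, cost in graph.get(node, []):
        if g + cost + h[successor] >= best:   # prune: lower bound is no better than best
            continue
        cand_cost, cand_path = dfbnb(graph, h, successor, goal, g + cost, best, path)
        if cand_cost < best:
            best, best_path = cand_cost, cand_path
    return best, best_path

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
h = {"A": 2, "B": 2, "C": 1, "G": 0}
print(dfbnb(graph, h, "A", "G"))   # -> (5, ('A', 'C', 'G'))
```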

Best-first search (A*). Intuition: expand nodes in best-first order according to priority f(n) until the goal is expanded. Solution reconstruction: store back-pointers that can be retraced. Pros: duplicate detection; ignores states with worse f than the goal. Cons: memory (for duplicate detection); requires a priority queue.
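
A minimal A* sketch (my own, not from the slides) on the same kind of hypothetical weighted graph; OPEN is a heap ordered by f = g + h, and the hash tables provide duplicate detection and back-pointers:

```python
import heapq

# A*: pop the node with the smallest f = g + h; when the goal is popped
# (selected for expansion), the reconstructed path is optimal for admissible h.

def astar(graph, h, start, goal):
    open_list = [(h[start], 0, start)]          # entries are (f, g, node)
    g_best = {start: 0}
    parent = {start: None}
    while open_list:
        f, g, node = heapq.heappop(open_list)
        if node == goal:
            path = []
            while node is not None:             # retrace back-pointers
                path.append(node)
                node = parent[node]
            return g, path[::-1]
        if g > g_best.get(node, float("inf")):  # stale queue entry; skip it
            continue
        for successor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < g_best.get(successor, float("inf")):
                g_best[successor] = g2
                parent[successor] = node
                heapq.heappush(open_list, (g2 + h[successor], g2, successor))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
h = {"A": 2, "B": 2, "C": 1, "G": 0}
print(astar(graph, h, "A", "G"))   # -> (5, ['A', 'C', 'G'])
```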

A* theoretical properties. Given an admissible heuristic h: upon selecting the goal for expansion, the shortest path has been found. Given h, no state space search strategy can prove optimality and expand fewer nodes than A*. If h is also consistent, no intermediate node will be re-expanded.

Terminology and data structures. The function of the OPEN list is to determine the next node to expand. DFS: a stack. BFS: a queue (naively) or a hash table. A*: a priority queue (which can also require a hash table). The function of the CLOSED list is to detect nodes that have already been generated. BFS: a hash table, possibly one layer at a time. A*: a hash table.
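
Typical Python stand-ins for these structures (a sketch under the assumptions above, not from the slides):

```python
from collections import deque
import heapq

open_dfs   = []        # stack: append() to push, pop() to pop
open_bfs   = deque()   # FIFO queue: append() and popleft()
open_astar = []        # priority queue via heapq, keyed on (f, node)
closed     = set()     # hash table of states already generated/expanded

open_dfs.append("A");  open_dfs.pop()
open_bfs.append("A");  open_bfs.popleft()
heapq.heappush(open_astar, (3, "A"));  heapq.heappop(open_astar)
closed.add("A")
```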

Other search problems. Constraint-satisfaction problems: we specify conditions that the goal must satisfy, but there may be parts of the state we do not care about. Games against nature: operators are not deterministic; under typical assumptions, these are called Markov decision processes. Multi-player games: other agents apply operators that we cannot control.

Conclusions. Heuristic search algorithms use a heuristic function to guide a search from a start state to a goal state. A variety of search strategies can leverage the heuristic function in different ways: depth-first, breadth-first, best-first (A*). A* has several important theoretical properties.