Advanced A* Improvements


Iterative Deepening A* (IDA*)
Idea: reduce the memory requirement of A* by applying a cutoff on the values of f, using a consistent heuristic function h.
Algorithm IDA*:
1. Initialize the cutoff to f(initial-node)
2. Repeat:
   a. Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff
   b. Reset the cutoff to the smallest f-value among the non-expanded (leaf) nodes
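
A minimal sketch of the IDA* loop described above, assuming user-supplied successors(state), cost(state, succ), h(state), and is_goal(state) helpers (these names are illustrative, not from the slides):

```python
import math

def ida_star(start, successors, cost, h, is_goal):
    """Iterative Deepening A*: repeated depth-first searches under an f-cutoff."""
    cutoff = h(start)                          # f(initial-node) = 0 + h(start)
    while True:
        solution, next_cutoff = _dfs(start, 0.0, cutoff, [start],
                                     successors, cost, h, is_goal)
        if solution is not None:               # goal found within the current cutoff
            return solution
        if next_cutoff == math.inf:            # nothing was pruned: no solution exists
            return None
        cutoff = next_cutoff                   # smallest f-value of a pruned node

def _dfs(state, g, cutoff, path, successors, cost, h, is_goal):
    f = g + h(state)
    if f > cutoff:
        return None, f                         # pruned leaf: report its f-value
    if is_goal(state):
        return list(path), f
    smallest_pruned = math.inf
    for succ in successors(state):
        if succ in path:                       # avoid cycles along the current path
            continue
        path.append(succ)
        solution, pruned = _dfs(succ, g + cost(state, succ), cutoff, path,
                                successors, cost, h, is_goal)
        path.pop()
        if solution is not None:
            return solution, pruned
        smallest_pruned = min(smallest_pruned, pruned)
    return None, smallest_pruned
```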

8-Puzzle example: f(n) = g(n) + h(n), with h(n) = number of misplaced tiles. [The original slides step through IDA* on the 8-puzzle as a sequence of search-tree figures: a first depth-first pass with cutoff = 4 prunes nodes whose f-value exceeds 4, then the cutoff is raised to 5 and the depth-first pass is repeated with the larger cutoff.]

Advantages/Drawbacks of IDA*. Advantages: still complete and optimal; requires less memory than A*; avoids the overhead of sorting the fringe. Drawbacks: cannot avoid revisiting states that are not on the current path; the available memory is poorly used (→ memory-bounded search, see R&N pp. 101-104).

Recursive Best-First Search (RBFS). A recursive algorithm that attempts to mimic standard best-first search with only linear space. It keeps track of the f-value of the best alternative path available from any ancestor of the current node. If the current node's f-value exceeds this alternative f-value, the search backtracks to the alternative path. Upon backtracking, the f-value of each abandoned node is replaced by the best f-value of its children, so re-expansion of that subtree is still possible later.
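
A compact sketch of RBFS in the textbook style (R&N); successors(state), cost(state, succ), h(state), and is_goal(state) are assumed helper functions and are not part of the original slides:

```python
import math
from dataclasses import dataclass

@dataclass
class _Node:
    state: object
    g: float
    f: float
    parent: "_Node" = None

def rbfs(start, successors, cost, h, is_goal):
    """Recursive Best-First Search: mimics best-first search in linear space."""
    root = _Node(state=start, g=0.0, f=h(start))
    solution, _ = _rbfs(root, math.inf, successors, cost, h, is_goal)
    return _path(solution) if solution else None

def _rbfs(node, f_limit, successors, cost, h, is_goal):
    if is_goal(node.state):
        return node, node.f
    children = []
    for s in successors(node.state):
        if node.parent is not None and s == node.parent.state:
            continue                            # skip the trivial back-move (sketch-level cycle check)
        g = node.g + cost(node.state, s)
        # A child inherits the parent's backed-up f-value if that is larger than g + h.
        children.append(_Node(state=s, g=g, f=max(g + h(s), node.f), parent=node))
    if not children:
        return None, math.inf
    while True:
        children.sort(key=lambda c: c.f)
        best = children[0]
        if best.f > f_limit:
            return None, best.f                 # fail; report best.f so the caller can back it up
        alternative = children[1].f if len(children) > 1 else math.inf
        solution, best.f = _rbfs(best, min(f_limit, alternative),
                                 successors, cost, h, is_goal)
        if solution is not None:
            return solution, best.f

def _path(node):
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return states[::-1]
```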


(Simplified) Memory-Bounded A* (SMA*). Uses all available memory, i.e. expands the best leaves until the available memory is full. When memory is full, SMA* drops the worst leaf node (the one with the highest f-value) and, like RBFS, backs the forgotten node's value up to its parent. What if all leaves have the same f-value? The same node could then be selected for both expansion and deletion; SMA* solves this by expanding the newest best leaf and deleting the oldest worst leaf. SMA* is complete if a solution is reachable, and optimal if an optimal solution is reachable.


Further A* Variants: Anytime A*, Dynamic Weighting Anytime A*, Dynamic A* (D*), Theta*, Accelerated A*.

Anytime A*. Three changes make A* an anytime algorithm: 1) Use a non-admissible heuristic so that sub-optimal solutions are found quickly. 2) Continue the search after the first solution is found, using it to prune the open list. 3) When the open list is empty, the best solution generated so far is optimal. How do we choose a non-admissible heuristic?

Anytime A*: how to choose a non-admissible heuristic? Weighted evaluation functions: f'(N) = (1 - w) * g(N) + w * h(N). A higher weight on h(N) tends to drive the search deeper. The search remains admissible if h is admissible and w ≤ 0.5; otherwise it is non-admissible, but it normally finds solutions much faster. An appropriate w makes a trade-off between solution quality and computation time possible.
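
A minimal illustration of using the weighted evaluation function as the priority for the open list (the weight value and node data are illustrative):

```python
import heapq

def weighted_priority(g, h, w=0.7):
    """f'(N) = (1 - w) * g(N) + w * h(N); w <= 0.5 keeps the search admissible."""
    return (1.0 - w) * g + w * h

# Example: an open list ordered by the weighted f'-value.
open_list = []
heapq.heappush(open_list, (weighted_priority(g=4.0, h=6.0), "node-a"))
heapq.heappush(open_list, (weighted_priority(g=7.0, h=1.0), "node-b"))
best_f, best_node = heapq.heappop(open_list)   # pops the node with the lowest f'
```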

Dynamic Weighting Anytime A*. With dynamic weighting, you assume that at the beginning of the search it is more important to make progress quickly (anywhere), while at the end of the search it is more important to actually reach the goal. A weight w ≥ 1 is associated with the heuristic. As the search gets closer to the goal, the weight is decreased; this reduces the importance of the heuristic and increases the relative importance of the actual cost of the path.
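
One common way to realize this schedule is to shrink the weight with the node's depth relative to an anticipated solution depth; the sketch below makes that assumption (the slide does not fix a particular schedule):

```python
def dynamically_weighted_f(g, h, depth, anticipated_depth, epsilon=1.0):
    """f(n) = g(n) + (1 + epsilon * w(n)) * h(n), where the weight w(n) shrinks
    from 1 toward 0 as the node's depth approaches the anticipated solution
    depth, so the heuristic matters less and less near the goal."""
    w = max(0.0, 1.0 - depth / anticipated_depth)
    return g + (1.0 + epsilon * w) * h
```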

Dynamic A* (D*). There are variants of A* that allow for changes to the world after the initial path has been computed. D* is intended for use when you do not have complete information: without complete information A* can make mistakes, and D*'s contribution is that it can correct those mistakes without taking much time. However, D* requires a lot of space to keep its internal information around (OPEN/CLOSED sets, the path tree, g-values); when the map changes, D* tells you whether you need to adjust your path to take the changes into account. For a game with lots of moving units you usually do not want to keep all that information around, so D* is not really applicable there. D* was designed for robotics, where there is only one robot and the memory never needs to be reused for some other robot's path.

Theta*. Sometimes grids are used for pathfinding because the map is made on a grid, not because you actually want movement restricted to the grid. A* would run faster and produce better paths if it were given a graph of key points (such as corners) instead of the grid. However, if you do not want to precompute the graph of corners, you can use Theta*, a variant of A* that runs on square grids and finds paths that do not strictly follow the grid: when setting parent pointers, Theta* points a node directly at an ancestor whenever there is a line of sight to that ancestor, skipping the nodes in between.
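
The key difference from A* is the parent-update (relaxation) step. Below is a sketch of that step, assuming hypothetical helpers line_of_sight(a, b) and dist(a, b) plus per-node parent and g dictionaries (none of these names come from the slides):

```python
def relax_theta_star(current, neighbor, parent, g, line_of_sight, dist):
    """Theta*-style update: if the neighbor is visible from the current node's
    parent, link it directly to that parent, skipping `current`."""
    p = parent[current]
    if p is not None and line_of_sight(p, neighbor):
        # Any-angle shortcut: the neighbor's parent becomes the current node's parent.
        if g[p] + dist(p, neighbor) < g.get(neighbor, float("inf")):
            g[neighbor] = g[p] + dist(p, neighbor)
            parent[neighbor] = p
            return True
    else:
        # Ordinary A* update along the grid edge.
        if g[current] + dist(current, neighbor) < g.get(neighbor, float("inf")):
            g[neighbor] = g[current] + dist(current, neighbor)
            parent[neighbor] = current
            return True
    return False
```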

Accelerated A*. Trajectory planning in a 4D space (3D + time) represented by octant trees; the search step is dynamic, a function of the proximity to an obstacle or a waypoint.

Local Search

Local Search. A light-memory search method: no search tree, only the current state is represented. It is only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state. It has many similarities with optimisation techniques.

Gradient Descent
1) S ← initial state
2) Repeat:
   a) S′ ← the successor of S that minimizes h (arg min over SUCCESSORS(S) of h(S′))
   b) if GOAL?(S′) return S′
   c) if h(S′) < h(S) then S ← S′, else return failure
Similar to: hill climbing (with h as the objective to minimize); gradient descent over a continuous space.
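
A minimal sketch of this loop, assuming successors(state), h(state), and is_goal(state) helpers (illustrative names, not from the slides):

```python
def greedy_descent(start, successors, h, is_goal):
    """Steepest-descent local search on h: always move to the best successor,
    and fail as soon as no successor improves on the current state."""
    state = start
    while True:
        succs = successors(state)
        if not succs:
            return None                        # dead end
        best = min(succs, key=h)               # arg min over SUCCESSORS(S) of h
        if is_goal(best):
            return best
        if h(best) < h(state):
            state = best                       # strict improvement: keep descending
        else:
            return None                        # local minimum reached: failure
```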

Application: 8-Queens
Repeat n times:
1) Pick an initial state S at random, with one queen in each column
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) Pick an attacked queen Q at random
   c) Move Q within its column so as to minimize the number of attacking queens → new S [min-conflicts heuristic]
3) Return failure


Application: 8-Queens. Why does it work?
1) There are many goal states, and they are well-distributed over the state space
2) If no solution has been found after a few steps, it is better to start all over again; building a search tree would be much less efficient because of the high branching factor
3) The running time is almost independent of the number of queens
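
A small sketch of the min-conflicts procedure for n queens (state = one row index per column); this is an illustrative implementation, not code from the original slides:

```python
import random

def conflicts(state, col, row):
    """Number of queens attacking a queen placed at (col, row)."""
    return sum(1 for c, r in enumerate(state)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n=8, restarts=50, max_steps=200):
    for _ in range(restarts):                             # "Repeat n times"
        state = [random.randrange(n) for _ in range(n)]   # one queen per column
        for _ in range(max_steps):                        # "Repeat k times"
            attacked = [c for c in range(n) if conflicts(state, c, state[c]) > 0]
            if not attacked:                              # GOAL?(S)
                return state
            col = random.choice(attacked)                 # pick an attacked queen at random
            # Move it within its column to minimize the number of attacks.
            state[col] = min(range(n), key=lambda r: conflicts(state, col, r))
    return None                                           # failure

print(min_conflicts_queens())          # prints one solution (row indices), or None
```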

Gradient Descent (continued). The algorithm above may easily get stuck in local minima. Remedies: random restart (as in the n-queens example above), or Monte Carlo descent.

Monte Carlo Descent
1) S ← initial state
2) Repeat k times:
   a) If GOAL?(S) then return S
   b) S′ ← a successor of S picked at random
   c) if h(S′) ≤ h(S) then S ← S′
   d) else, let Δh = h(S′) − h(S) and set S ← S′ with probability ~ exp(−Δh/T), where T is called the temperature [Metropolis criterion]
3) Return failure

Simulated Annealing. The same procedure as Monte Carlo descent, except that T is lowered over the k iterations: it starts with a large T and slowly decreases it.
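
A compact sketch of this loop with a geometric cooling schedule (the schedule parameters and the helper names successors(state), h(state), is_goal(state) are assumptions made for the example):

```python
import math
import random

def simulated_annealing(start, successors, h, is_goal,
                        k=10_000, t_start=10.0, cooling=0.999):
    state, T = start, t_start
    for _ in range(k):
        if is_goal(state):
            return state
        succs = successors(state)
        if not succs:
            break
        candidate = random.choice(succs)           # successor picked at random
        dh = h(candidate) - h(state)
        if dh <= 0:                                # downhill (or level): always accept
            state = candidate
        elif random.random() < math.exp(-dh / T):  # uphill: Metropolis criterion
            state = candidate
        T *= cooling                               # slowly lower the temperature
    return None                                    # failure
```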

Parallel Local Search Techniques. These perform several local searches concurrently, but not independently: beam search and genetic algorithms (see R&N, pages 115-119).

Local Beam Search. Idea: keep k states instead of 1; choose the top k of all their successors. This is not the same as k searches run in parallel: searches that find good states recruit other searches to join them. Problem: quite often, all k states end up on the same local hill. Idea: choose the k successors randomly, biased towards good ones. Observe the close analogy to natural selection!
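
A minimal sketch of the deterministic (top-k) variant, again assuming successors(state), h(state), and is_goal(state) helpers:

```python
import heapq

def local_beam_search(starts, successors, h, is_goal, k=10, max_iters=1000):
    """Keep k states; at each step, pool all their successors and keep the k best by h."""
    beam = list(starts)[:k]
    for _ in range(max_iters):
        for state in beam:
            if is_goal(state):
                return state
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            return None
        beam = heapq.nsmallest(k, pool, key=h)     # top k of all successors
    return None
```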

Genetic Algorithms. Idea: stochastic local beam search + generate successors from pairs of states.

Genetic Algorithms. GAs require states encoded as strings (GPs use programs). Crossover helps iff substrings are meaningful components.
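
A tiny illustration of the string encoding and the crossover/mutation operators (an 8-queens state encoded as eight row digits; selection and fitness details are omitted, and all names are illustrative):

```python
import random

def crossover(parent_a: str, parent_b: str) -> str:
    """One-point crossover: take a prefix of one parent and the suffix of the other."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(individual: str, alphabet: str = "01234567", rate: float = 0.1) -> str:
    """With a small probability, replace each position by a random symbol."""
    return "".join(random.choice(alphabet) if random.random() < rate else ch
                   for ch in individual)

# Example: two 8-queens states, each encoded as one row index per column.
child = mutate(crossover("32752411", "24748552"))
```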

Recap: search problems; blind search; heuristic search (best-first and A*); construction of heuristics; variants of A*; local search.