Extending Heuristic Search


Extending Heuristic Search. Talk at Hebrew University, Critical MAS group. Roni Stern, Department of Information Systems Engineering, Ben Gurion University, Israel

Heuristic search

Outline
- Combining lookahead with optimal BFS [AAAI]
- Searching for bounded-cost solutions [SoCS]
- Merging PAC learning and heuristic search?

Roni Stern, Information Systems Engineering, Ben Gurion University; Tamar Kulberis, Information Systems Engineering, Ben Gurion University; Ariel Felner, Information Systems Engineering, Ben Gurion University and Deutsche Telekom Laboratories; Robert C. Holte, Computing Science Department, University of Alberta

Searching for the optimal path. [figure: two maps, each with a start node S and a goal node G]

Searching for the optimal path. [figure: a map with start node S and goal node G]

BFS finds the optimal path. Assumption: unit edge costs, no heuristic. [figure: binary tree rooted at A with nodes B-O; numbers 1-11 mark the breadth-first expansion order; legend: Generated, Goal]

    While Queue not empty:
        node <- pop Queue
        If goal: halt
        For each child:
            If not duplicate:
                Add child to Queue

Breadth-first with lookahead. Lookahead depth: k = 2. [figure: binary tree rooted at A with nodes B-O; legend: Generated, Goal, Lookahead]

    While Queue not empty:
        node <- pop Queue
        If goal: halt
        For each child:
            If not duplicate:
                Run Lookahead(k)
                Add child to Queue

Breadth-first with lookahead. Lookahead depth: k = 2. Notes on the figure: some nodes are searched twice; lookahead nodes are not inserted into the queue, so they cost only DFS time per node. [figure: binary tree rooted at A with nodes B-O; legend: Generated, Goal, Lookahead]

    While Queue not empty:
        node <- pop Queue
        If goal: halt
        For each child:
            If not duplicate:
                Run Lookahead(k)
                Add child to Queue
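
A minimal runnable sketch of the pseudocode above, assuming unit edge costs as on the slide; the names (bfs_with_lookahead, successors, is_goal) are illustrative, not from the talk:

    from collections import deque

    def bfs_with_lookahead(start, successors, is_goal, k):
        # Breadth-first search where each generated child runs a depth-k
        # DFS lookahead before being queued. Lookahead nodes are never
        # inserted into the queue: they cost only DFS time, no memory.
        def lookahead(node, depth):
            if is_goal(node):
                return True
            if depth == 0:
                return False
            return any(lookahead(c, depth - 1) for c in successors(node))

        queue, seen = deque([start]), {start}
        while queue:
            node = queue.popleft()
            if is_goal(node):
                return node
            for child in successors(node):
                if child not in seen:            # duplicate detection
                    seen.add(child)
                    if lookahead(child, k):
                        # A goal lies within k steps of child. (To guarantee
                        # optimality one would keep this as an upper bound;
                        # the sketch simply reports it.)
                        return child
                    queue.append(child)
        return None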

What have we gained? d = optimal goal depth, k = lookahead depth, b = branching factor, b_e = effective branching factor.

                BFS          BFS + Lookahead        Iterative Deepening
    Memory      O(b_e^d)     O(b_e^(d-k))           O(b·d)
    Visited     O(b_e^d)     O(b_e^(d-k) · b^k)     O(b^d)
    Time        ?            ?                      ?

Empirical evaluation. [tables: Memory, Expanded, LH-Visited, Time per lookahead depth] Lookahead of 0 = simple BFS; lookahead of 18 = almost limited DFS.

What if we have a heuristic?

A* with lookahead. Assume an admissible heuristic h exists. If a goal is found in the lookahead:
- Can't halt: optimality is not verified, but we have found an upper bound
- Can prune
- Improve the heuristic: use the minimum f value found in the lookahead

    While Open list not empty:
        node <- pop Open list
        If goal: halt
        For each child:
            If not duplicate:
                Run Lookahead(f(node)+k)
                Add child to Open list
            Else if stored f(child) < f'(child):
                Update child in Open list
                Run Lookahead(f(node)+k)
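
A runnable sketch of that loop, under the same assumptions; duplicate handling is simplified to g-based reopening, edge costs are assumed positive, and all names are illustrative:

    import heapq, itertools

    def a_star_with_lookahead(start, successors, h, is_goal, k):
        # AL* sketch: A* where each generated child runs a cost-bounded DFS
        # lookahead with threshold f(node) + k. The lookahead may find a
        # goal (only an upper bound, since optimality is not yet verified)
        # and returns the minimum f value on its frontier, which improves
        # the child's heuristic. successors(n) yields (child, cost) pairs.
        upper_bound = float('inf')
        counter = itertools.count()              # heap tie-breaker

        def lookahead(node, g, threshold):
            nonlocal upper_bound
            f = g + h(node)
            if f > threshold:
                return f                         # frontier value beyond threshold
            if is_goal(node):
                upper_bound = min(upper_bound, g)  # goal found: upper bound
                return g
            return min((lookahead(c, g + cost, threshold)
                        for c, cost in successors(node)),
                       default=float('inf'))

        g_of = {start: 0}
        open_list = [(h(start), next(counter), 0, start)]
        while open_list:
            f, _, g, node = heapq.heappop(open_list)
            if g > g_of[node]:
                continue                         # stale queue entry
            if f >= upper_bound:
                return upper_bound               # incumbent proven optimal
            if is_goal(node):
                return g
            for child, cost in successors(node):
                cg = g + cost
                if cg < g_of.get(child, float('inf')):
                    g_of[child] = cg
                    lf = lookahead(child, cg, f + k)
                    # the lookahead's minimum f improves the child's key
                    heapq.heappush(open_list,
                                   (max(cg + h(child), lf),
                                    next(counter), cg, child))
        return upper_bound if upper_bound < float('inf') else None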

Example: threshold = 10+2, edge cost = 2. [figure: lookahead tree rooted at A (f=10) with nodes B-M and their f values; legend: Generated, Goal, Lookahead] The upper bound UB is updated to 12 when a goal is found in the lookahead; f(B) is updated with the minimum f value of its children; nodes whose f value exceeds UB are pruned.

Inconsistent heuristics. Inconsistency: |h(a) - h(b)| > c(a,b). Bidirectional Pathmax (BPMX) [IJCAI'05, Felner et al.]: propagate inconsistent values. How to propagate? Natural in IDA*, difficult in A*. [figure: graph with nodes A-G, edge costs, and heuristic values h=10 and h=2 illustrating BPMX propagation]
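
BPMX propagates heuristic values across an edge in both directions: a child's estimate can raise its parent's (h(parent) >= h(child) - c), and vice versa. A minimal sketch of one-step propagation at node-generation time, assuming an undirected graph; function names are illustrative:

    def bpmx_update(h_values, parent, children, cost):
        # One-step Bidirectional Pathmax: consistency requires
        # |h(a) - h(b)| <= c(a, b) on every edge; BPMX restores this
        # locally by raising the lower side of each edge.
        # h_values: dict node -> current heuristic estimate
        # cost: function (a, b) -> edge cost
        for child in children:
            c = cost(parent, child)
            # child's value can raise the parent's (backward pathmax)...
            h_values[parent] = max(h_values[parent], h_values[child] - c)
        for child in children:
            c = cost(parent, child)
            # ...and the (possibly raised) parent's can raise each child's
            h_values[child] = max(h_values[child], h_values[parent] - c)

In IDA* this propagation falls out of the DFS backing up values; in A*, already-queued nodes would have to be re-keyed on the open list, which is what makes it difficult there.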

Example: AL* with BPMX. Lookahead = 3, threshold = 10+3. [figure: tree rooted at A (f=10) with nodes B, C, D; legend: Generated, Goal, Lookahead, BPMX] Final f values: f(B)=15, f(C)=13, f(D)=13.

Experimental results. [tables: Memory, Expanded, LH-Visited, Time per lookahead depth] First domain: 9 times faster than A* with 13 times less memory, and 2.5 times faster than IDA*. TopSpin: 2 times faster than A* with 13 times less memory, and 4 times faster than IDA*.

Conclusion: Optimal Search. Using lookahead is simple and effective. Just use it!

Challenges: choosing the best lookahead depth; dynamic lookahead; more domains.

Roni Stern, Information Systems Engineering, Ben Gurion University; Rami Puzis, Information Systems Engineering, Ben Gurion University and Deutsche Telekom Laboratories; Ariel Felner, Information Systems Engineering, Ben Gurion University and Deutsche Telekom Laboratories

Expanding the horizon of Best-First Search. Traditional BFS algorithms search for:
- Optimal solutions [A*, IDA*, RBFS]
- Solutions that are almost optimal (w-admissible) [WA*, wIDA*, Optimistic Search]
New goal: solutions with a bounded cost.

Finding a good enough solution. Classic search setting in AI: start node, goal function, operators, and a heuristic estimate of the distance to the goal. Task: find a path to a goal with cost less than X. Motivation:
- Limited budget: find a solution for a given budget
- Find a better solution than the current best: current best = UB, required cost = UB - ε

How to search toward a goal with cost less than X? It can be solved with:
- Optimal BFS: A*, IDA* [AIJ'85, Korf], RBFS [AIJ'93, Korf]
- Anytime variants of suboptimal BFS: Anytime Weighted A* [JAIR'07, Hansen & Zhou], wIDA* [AIJ'85, Korf], Optimistic Search [ICAPS'08, Thayer & Ruml]
- Local search: hill climbing, simulated annealing, GA, DFBnB
But all of these approaches ignore X during the search!

Who to expand? Desired goal cost (X) = 150. [figure: root S with two subtrees, A and B, each leading to goals G; one subtree reaches goals of cost 120 (probability 10%) and 150 (probability 90%), the other goals of cost 100 (probability 50%) and 200 (probability 50%)] Which subtree has the highest potential?

Where is the optimal path to G? vs. Where is a goal with cost < X?

Definition: potential of a node. Potential = the probability of finding a goal with cost < X through the node:

    Potential(n) = Pr(g(n) + h*(n) < X)

where g(n) is the distance from S to n and h*(n) is the distance from n to the closest goal. Potential Search is a BFS that expands the node with the highest potential. [figure: path from S through subtrees A and B to G]
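
A minimal sketch of that skeleton: a standard best-first loop whose priority is the node's potential, highest first. The potential function itself comes from an error model, as developed below; all names here (potential_search, successors, is_goal) are illustrative:

    import heapq, itertools

    def potential_search(start, successors, is_goal, potential, X):
        # Best-first search expanding the open node with the highest
        # Potential(n) = Pr(g(n) + h*(n) < X). potential(g, node) is any
        # estimate of that probability; heapq is a min-heap, so negate.
        counter = itertools.count()              # heap tie-breaker
        open_list = [(-potential(0, start), next(counter), 0, start)]
        best_g = {start: 0}
        while open_list:
            _, _, g, node = heapq.heappop(open_list)
            if is_goal(node) and g < X:
                return g                         # goal within the cost bound
            for child, cost in successors(node):
                cg = g + cost
                if cg < best_g.get(child, float('inf')):
                    best_g[child] = cg
                    heapq.heappush(open_list,
                                   (-potential(cg, child),
                                    next(counter), cg, child))
        return None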


Approach A: machine learning. Learning h*:
- Solve 1,000 instances optimally
- Extract domain-specific features from the node
- Use ML techniques to learn the h* value given the extracted features, and estimate the generalization error
- Potential(n) = Pr(g(n) + h*(n) < X | features of n)
Advantages: no need for admissibility; can use any feature and exploit domain knowledge.

Approach B: heuristic error models. [figure: S with children A and B; g(a)=10, h(a)=90; g(b)=100, h(b)=3; X=120] Which has the higher potential? What is the relation between h(n) and h*(n)? Common intuition: estimates are more accurate closer to a goal; long-range estimates may be very misleading. Potential(n) = Pr(g(n) + h*(n) < X | h(n)).

Additive heuristic error: h errs by a random constant, h*(n) = h(n) + α, with α an i.i.d. random variable. [figure: identical error distributions around h=3, h=10, h=17] Theorem: A* is PTS for any distribution of α.
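
Spelling out why the theorem holds, using only the definitions above: under the additive model,

    Potential(n) = Pr(g(n) + h(n) + α < X) = Pr(α < X - f(n))

and since α is identically distributed for every node, this probability grows as f(n) = g(n) + h(n) shrinks. Expanding the node with the highest potential is therefore exactly expanding the node with the lowest f, i.e., A*'s order, regardless of the distribution of α.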

Relative linear error: h errs by a random factor, h*(n) = α·h(n). [figure: error distributions that widen as h grows, at h=3, h=10, h=17] Potential cost function: p_lr(n) = h(n) / (X - g(n)), where h(n) is a lower bound of h* and X - g(n) is an upper bound on the relevant h*. Theorem: BFS with cost function p_lr is PTS.
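
The same calculation as in the additive case:

    Potential(n) = Pr(g(n) + α·h(n) < X) = Pr(α < (X - g(n)) / h(n))

so the potential is monotonically decreasing in p_lr(n) = h(n) / (X - g(n)), and a BFS that expands the node with the lowest p_lr expands the node with the highest potential, without ever computing the probability itself.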

Empirical example. [scatter plots of real distance to goal (h*) vs. heuristic function (h)] KPP-COM: fitted line y = 0.7319x + 8799.7. 15-tile puzzle: fitted line y = 1.2981x - 2.0344.

General error model. Given a heuristic error function h* = e(h, α) and its inverse e_r(h*, h) = α, let p_g(n) = e_r(X - g(n), h(n)). Then a BFS with utility function p_g is equal to PTS.

    Error model       e(h, α)        e_r(h*, h)       p_g(n)
    Additive          h* = h + α     α = h* - h       X - g - h     (order of g + h, i.e. A*)
    Relative linear   h* = α·h       α = h*/h         (X - g)/h     (order of h/(X - g), i.e. p_lr)
    Exponential       h* = h^α       α = log_h(h*)    log_h(X - g)
    General           h* = e(h, α)   α = e_r(h*, h)   e_r(X - g, h)
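
Read as code, the last column of the table becomes one-line priority functions (a sketch; higher value = higher potential, and the guards against degenerate h are assumptions of mine, not from the slides):

    import math

    def p_additive(g, h, X):          # h* = h + alpha: maximizing X - g - h
        return (X - g) - h            # is the same as A*'s minimizing g + h

    def p_relative_linear(g, h, X):   # h* = alpha * h: same order as
        return (X - g) / h if h > 0 else float('inf')   # minimizing p_lr

    def p_exponential(g, h, X):       # h* = h ** alpha
        return math.log(X - g, h) if h > 1 and X > g else float('inf')

With a heuristic function h, e.g. lambda g, n: p_relative_linear(g, h(n), X) plugs directly into the potential_search sketch above.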

PTS as a greedy anytime search. Key idea: find a better solution than the current one. If UB is the current best solution, Anytime PTS runs PTS with X set to UB - ε. [plots: results on KPP-COM (800 nodes, group size 20) and the 15-tile puzzle]
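
A sketch of that anytime loop, reusing the potential_search sketch above; initial_ub and epsilon are assumptions of the sketch (any known solution cost will do as the initial bound):

    def anytime_pts(start, successors, is_goal, make_potential,
                    initial_ub, epsilon=1.0):
        # Anytime PTS: repeatedly run PTS with X = UB - epsilon, so each
        # run must beat the incumbent. initial_ub is any known solution
        # cost, e.g. from a fast greedy search. make_potential(X) returns
        # the potential(g, node) function for that bound.
        upper_bound = initial_ub
        while True:
            X = upper_bound - epsilon
            cost = potential_search(start, successors, is_goal,
                                    make_potential(X), X)
            if cost is None:          # nothing cheaper than X was found
                return upper_bound
            upper_bound = cost        # incumbent improved; tighten and repeat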

Conclusion. A new type of search problem: search for a solution with bounded cost. PTS: expand nodes with high potential, i.e., potential to lead to a solution of the desired cost. Explore the error of the heuristic function; use the error model to implement PTS without explicitly calculating the potential.

Future work. Challenge: incorporating distance estimation. Desired goal cost (X) = 150. [figure: root S with two subtrees, A and B, leading to goals G] One subtree: goal cost 120 (Pr. 10%, d = 110) or 150 (Pr. 90%, d = 140). The other: goal cost 100 (Pr. 50%, d = 10) or 200 (Pr. 50%, d = 110). Which subtree has the highest expected search effort?

Questions?

Roni Stern, Information Systems Engineering, Ben Gurion University; Ariel Felner, Information Systems Engineering, Ben Gurion University and Deutsche Telekom Laboratories

Quality assurance: find a solution with (sub)optimal cost (ε-admissible). PAC Learning + Heuristic Search = PAC Heuristic Search:
- PAC Learning: learn a hypothesis that, with high probability (1 - δ), has a low generalization error (ε)
- PAC Heuristic Search: find a solution that, with high probability (1 - δ), is ε-admissible

Finding a good enough solution. Classic search setting in AI: start node, goal function, operators, and a heuristic estimate of the distance to the goal. In PAC search we search for a path that, with high probability (1 - δ), is almost optimal (within ε). Motivation: verifying that a solution is optimal is hard (A* is optimally effective).

PAC heuristic search. Given:
- ε, the desired suboptimality bound: we want a solution that is no more than (1 + ε) times the optimum
- δ, the desired confidence level: the solution must be ε-admissible with probability 1 - δ
Any anytime search algorithm will do (e.g., AWA*).

    PAC-Search:
    1. do: GoalCost <- search for a low-cost goal
    2. until GoalCost / OptimalCost < 1 + ε with probability 1 - δ

How to identify a PAC solution?
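
A sketch of that loop in Python. The anytime search and the probabilistic test are supplied from outside; is_pac stands for whatever estimate of Pr(GoalCost / OptimalCost < 1 + ε) >= 1 - δ the following slides develop, so every name here is illustrative:

    def pac_search(anytime_improve, is_pac, epsilon, delta):
        # PAC-Search skeleton: keep improving the incumbent with any
        # anytime algorithm (e.g. AWA*) until it is epsilon-admissible
        # with probability at least 1 - delta.
        # anytime_improve(incumbent) -> a strictly better goal cost or None
        # is_pac(cost, epsilon, delta) -> True if the PAC condition holds
        incumbent = None
        while True:
            improved = anytime_improve(incumbent)
            if improved is not None:
                incumbent = improved
            if incumbent is not None and is_pac(incumbent, epsilon, delta):
                return incumbent          # PAC solution identified
            if improved is None:          # search exhausted: incumbent is optimal
                return incumbent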

Finding a PAC solution. A goal cost is PAC if [condition lost in transcription]. Evaluating it requires knowing Pr(X < h*(n)).

Estimating Pr(h*(n) > X). Several ideas:
1) Gather statistics and generate a PDF for:
   - the h to h* relation
   - states abstracted to the same pattern
2) Accuracy advisors: use different features for accuracy

Identify PAC conditions a priori. Halt when [a condition fixed before the search; figure: search space from initial state S, goal cost GC = 100]. Advantages: simple, no overhead. Disadvantage: does not improve throughout the search.

Path length and error estimation [Ernandes et al. '04]: h inadmissible. Given Pr(h̄), the probability that h underestimates h*: search with A*, halt if Pr(h̄)^5 < δ. This assumes ε = 0; the straightforward extension searches with Weighted A* (w = 1 + ε). Disadvantages: searching wider does not improve accuracy, and Pr(h̄) is constant across all states. [figure: search space from S with two goals of cost GC = 100]

Knowledge propagation. Halt when [condition lost in transcription; figure: search space from S with goals of cost GC = 100]. Can be done efficiently: the tracked sum decreases when n is expanded and increases when nodes are generated.

Searching for a PAC goal. Expanding a node n has two effects: a better goal may be found, reducing GC; and the sum of probabilities changes, the expanded node's term being removed and its children's terms added. This yields a best-first search with value of information.

Conclusion. Finding a 100% optimal solution is hard. Adopting the PAC framework for search allows a more realistic notion of good-enough goals:
- Less effort on verifying optimality/suboptimality
- Allows using inadmissible heuristics: only probabilistic knowledge of the heuristic is required, and ML techniques can estimate the distance to the goal

Questions?