Uninformed Search. Day 1 & 2 of Search. Russell & Norvig Chap. 3. Material in part from http://www.cs.cmu.edu/~awm/tutorials


Examples of problems? The Oak Tree. Informed versus Uninformed. Heuristic versus Blind.

A Problem. [Figure: graph with states a, b, c, d, e, f, h, p, q, r and nodes START and GOAL.] Find a path from START to GOAL. Find the minimum number of transitions.

Example: the 8-puzzle. [Figure: START and GOAL tile configurations.]

Example: the 8-puzzle. [Figure: START and GOAL tile configurations.] State: configuration of the puzzle. Transitions: up to 4 possible moves (up, down, left, right). Solvable in 22 steps on average. But: 1.8 × 10^5 states (1.3 × 10^12 states for the 15-puzzle). Cannot represent the set of states explicitly.
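To make the state/transition idea concrete, here is a minimal Python sketch (not from the slides) of an 8-puzzle successor function; it assumes a state is a tuple of 9 tiles with 0 marking the blank.

# Hypothetical sketch: 8-puzzle states as length-9 tuples, 0 = blank.
def successors(state):
    """Return the states reachable by sliding a tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for drow, dcol in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        r, c = row + drow, col + dcol
        if 0 <= r < 3 and 0 <= c < 3:
            swap = 3 * r + c
            new_state = list(state)
            new_state[blank], new_state[swap] = new_state[swap], new_state[blank]
            result.append(tuple(new_state))
    return result

# Example: the blank in a corner has exactly 2 legal moves.
print(len(successors((0, 1, 2, 3, 4, 5, 6, 7, 8))))  # -> 2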

Example: Robot Navigation. States = positions in the map. Transitions = allowed motions. [Figure: map with START and GOAL positions and compass directions N, S, E, W.] Navigation: going from point START to point GOAL given a (deterministic) map.

Example Solution: Brushfire. [Figure: map with START and GOAL.]

Other Real-Life Examples:
Protein design: http://www.blueprint.org/proteinfolding/trades/trades_problem.html
Scheduling/Manufacturing: http://www.ozone.ri.cmu.edu/projects/dms/dmsmain.html
Route planning
Robot navigation: http://www.frc.ri.cmu.edu/projects/mars/dstar.html
We don't necessarily know explicitly the structure of a search problem.
Scheduling/Science: http://www.ozone.ri.cmu.edu/projects/hsts/hstsmain.html

Other Real-Life Examples:
Protein design: http://www.blueprint.org/proteinfolding/trades/trades_problem.html
Scheduling/Manufacturing: http://www.ozone.ri.cmu.edu/projects/dms/dmsmain.html
Route planning
Robot navigation: http://www.frc.ri.cmu.edu/projects/mars/dstar.html
We don't have a clue when we're doing well versus poorly!
Scheduling/Science: http://www.ozone.ri.cmu.edu/projects/hsts/hstsmain.html

10 cm resolution, 4 km² = 4 × 10^8 states

What we are not addressing (yet):
Uncertainty/chance: states and transitions are known and deterministic.
Games against an adversary.
Multiple agents/cooperation.
Continuous state spaces: for now, the set of states is discrete.

Overview:
Definition and formulation
Optimality, completeness, and complexity
Uninformed search: breadth first, trees, depth first, iterative deepening
Informed search: best first, greedy, heuristic, A*

A Problem: Square World

Formulation:
Q: finite set of states
S ⊆ Q: non-empty set of start states
G ⊆ Q: non-empty set of goal states
succs: function Q → P(Q); succs(s) = set of states that can be reached from s in one step
cost: function Q × Q → positive numbers; cost(s, s') = cost of taking a one-step transition from state s to state s'
Problem: find a sequence {s_1, ..., s_K} such that:
1. s_1 ∈ S
2. s_K ∈ G
3. s_{i+1} ∈ succs(s_i)
4. Σ cost(s_i, s_{i+1}) is the smallest among all possible sequences (desirable but optional)
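One way this formulation might be carried into code is the following minimal Python sketch; the class name SearchProblem and its methods are illustrative assumptions, not part of the slides.

# Hypothetical sketch of the (Q, S, G, succs, cost) formulation.
class SearchProblem:
    def __init__(self, start_states, goal_states, succs, cost):
        self.start_states = set(start_states)   # S, a subset of Q
        self.goal_states = set(goal_states)     # G, a subset of Q
        self.succs = succs                      # succs(s) -> set of next states
        self.cost = cost                        # cost(s, s2) -> positive number

    def is_goal(self, s):
        return s in self.goal_states

    def path_cost(self, path):
        # Sum of one-step transition costs along a sequence {s_1, ..., s_K}.
        return sum(self.cost(a, b) for a, b in zip(path, path[1:]))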

What about actions? The formulation above (Q, S, G, succs, cost) is unchanged: actions define the transitions from states to states. Example: Square World.

Example: Q = {AA, AB, AC, AD, AI, BB, BC, BD, BI, ...}, S = {AB}, G = {DD}, succs(AA) = {AI, BA}, cost(s, s') = 1 for each action (transition).
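The facts listed on this slide can be written down directly; a partial Python encoding (only the transitions the slide actually states are included, the rest of the successor table is omitted):

# Partial encoding of the Square World example; only what the slide states.
start_states = {"AB"}                        # S = {AB}
goal_states = {"DD"}                         # G = {DD}
succs_table = {"AA": {"AI", "BA"}}           # succs(AA) = {AI, BA}; other entries omitted
cost = lambda s, s2: 1                       # unit cost for each action (transition)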

Desirable Properties. [Figure: example graph with START and GOAL.]
Completeness: an algorithm is complete if it is guaranteed to find a path if one exists.
Optimality: the total cost of the path is the lowest among all possible paths from start to goal.
Time complexity.
Space complexity.

Breadth-First. [Figure: example graph.] Label all states that are 0 steps from S. Call that set V_0.

Breadth-First (0 steps, 1 step). [Figure.] Label the successors of the states in V_0 that are not yet labelled. V_1 = set of states that are 1 step away from the start.

Breadth-First (0 steps, 1 step, 2 steps). [Figure.] Label the successors of the states in V_1 that are not yet labelled. V_2 = set of states that are 2 steps away from the start.

Breadth-First (0 steps, 1 step, 2 steps, 3 steps). [Figure.] Label the successors of the states in V_2 that are not yet labelled. V_3 = set of states that are 3 steps away from the start.

Breadth-First (0 steps, 1 step, 2 steps, 3 steps, 4 steps). [Figure.] Stop when the goal is reached in the current expansion set: the goal can be reached in 4 steps.

Recovering the Path. [Figure.] Record the predecessor state when labeling a new state. When I labeled GOAL, I was expanding the neighbors of f, so f is the predecessor of GOAL. When I labeled f, I was expanding the neighbors of r, so r is the predecessor of f. Final solution: {START, e, r, f, GOAL}.

Using Backpointers. [Figure.] A backpointer previous(s) points to the node containing the state that was expanded to label s. The path is recovered by following the backpointers starting at the goal state.
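A small sketch of this backpointer idea in Python, assuming a dictionary named previous that was filled in during labeling (previous of the start state is None):

def recover_path(previous, goal):
    """Follow backpointers from the goal back to the start."""
    path = []
    s = goal
    while s is not None:
        path.append(s)
        s = previous[s]
    path.reverse()
    return path  # e.g. ['START', 'e', 'r', 'f', 'GOAL'] in the example above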

Example: Robot Navigation. States = positions in the map. Transitions = allowed motions. [Figure: map with START and GOAL positions and compass directions N, S, E, W.] Navigation: going from point START to point GOAL given a (deterministic) map.

Breadth First (pseudocode):
V_0 <- S (the set of start states)
previous(START) := NULL
k <- 0
while (V_k contains no goal state and V_k is not empty) do
    V_{k+1} <- empty set
    for each state s in V_k
        for each state s' in succs(s)
            if s' has not already been labeled
                set previous(s') <- s
                add s' into V_{k+1}
    k <- k+1
if V_k is empty, signal FAILURE
else build the solution path: define S_k = GOAL, and for all i <= k define S_{i-1} = previous(S_i); return path = {S_1, ..., S_k}
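The same procedure as a runnable Python sketch, assuming the graph is given as a dictionary mapping each state to a set of successors; the adjacency table in the usage example is illustrative only, since the full edge list of the lecture graph is not spelled out in the transcript.

from collections import deque

def bfs(succs, starts, goals):
    """Breadth-first search; returns a shortest path in steps, or None."""
    previous = {s: None for s in starts}
    frontier = deque(starts)
    while frontier:
        s = frontier.popleft()
        if s in goals:
            path = []
            while s is not None:
                path.append(s)
                s = previous[s]
            return path[::-1]
        for s2 in succs.get(s, ()):
            if s2 not in previous:          # not yet labelled
                previous[s2] = s
                frontier.append(s2)
    return None                             # FAILURE

# Illustrative adjacency table consistent with the slides' example.
graph = {"START": {"d", "e", "p"}, "e": {"r", "h"}, "r": {"f"}, "f": {"GOAL"}}
print(bfs(graph, {"START"}, {"GOAL"}))      # -> ['START', 'e', 'r', 'f', 'GOAL']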

Properties: BFS can handle multiple start and goal states (what does multiple start mean?). It can work either by searching forward from the start or backward from the goal (forward/backward chaining); which way is better? Is it guaranteed to find the lowest-cost path in terms of number of transitions? See the maze example.

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))

Bidirectional BFS: search simultaneously forward from START and backward from GOAL. When do the two searches meet? What stopping criterion should be used? Under what conditions is it optimal? [Figure: layers V_1, V_2, V_3 expanding from both START and GOAL.]
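One possible answer to these questions, sketched in Python under the assumption of an undirected graph (so the backward search can reuse the same successor table): expand the two frontiers layer by layer, preferring the smaller one, and stop the first time a newly generated state is already known to the other side. Returning only the number of steps keeps the sketch short; recovering the actual path would need backpointers on both sides.

from collections import deque

def bidirectional_bfs(succs, start, goal):
    """Alternate BFS layers from start and goal; stop when the frontiers meet.
    Returns the number of steps on a shortest path, or None."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Expand one full layer of the smaller frontier.
        if len(frontier_f) <= len(frontier_b):
            frontier, dist, other = frontier_f, dist_f, dist_b
        else:
            frontier, dist, other = frontier_b, dist_b, dist_f
        for _ in range(len(frontier)):
            s = frontier.popleft()
            for s2 in succs.get(s, ()):
                if s2 in other:                   # the two searches meet here
                    return dist[s] + 1 + other[s2]
                if s2 not in dist:
                    dist[s2] = dist[s] + 1
                    frontier.append(s2)
    return None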

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |
BIBFS (Bi-directional Breadth First) | | | |
Major savings when bidirectional search is possible because 2B^(L/2) << B^L. For B = 10, L = 6: 22,200 states generated vs. ~10^7.

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | | | |
Major savings when bidirectional search is possible because 2B^(L/2) << B^L. For B = 10, L = 6: 22,200 states generated vs. ~10^7.

BFS Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
Major savings when bidirectional search is possible because 2B^(L/2) << B^L. For B = 10, L = 6: 22,200 states generated vs. ~10^7.

Complexity. A note about island-driven search in general: what happens to complexity if you have L islands en route to the goal?
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))

Counting Transition Costs Instead of Transitions. [Figure: the example graph with a cost on each edge.]

Counting Transition Costs Instead of Transitions. [Figure: the example graph with edge costs.] BFS finds the shortest path in number of steps but does not take transition costs into account. A simple modification finds the least-cost path. New field: at iteration k, g(s) = cost of the least-cost path to s using k or fewer steps.

Uniform Cost. Strategy to select the state to expand next: use the state with the smallest value of g() so far. Use a priority queue for efficient access to the minimum g at every iteration.

Priority Queue. A priority queue is a data structure in which data of the form (item, value) can be inserted and the item of minimum value can be retrieved efficiently.
Operations:
Init(PQ): initialize an empty queue
Insert(PQ, item, value): insert a pair into the queue
Pop(PQ): return the pair with the minimum value
In our case: item = state, value = current cost g().
Complexity: O(log(number of pairs in PQ)) for insertion and pop operations, which is very efficient. See http://www.leekillough.com/heaps/ and Knuth & Sedgewick.
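In Python, the heapq module can serve as such a priority queue; a minimal sketch:

import heapq

pq = []                                  # the priority queue
heapq.heappush(pq, (0, "START"))         # Insert(PQ, item="START", value=0)
heapq.heappush(pq, (3, "d"))
heapq.heappush(pq, (1, "p"))
value, item = heapq.heappop(pq)          # Pop(PQ): pair with the minimum value
print(value, item)                       # -> 1 p
# Both push and pop cost O(log(number of pairs in the queue)).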

Uniform Cost. PQ = current set of evaluated states. Value (priority) of a state = g(s) = current cost of the path to s.
Basic iteration:
1. Pop the state s with the lowest path cost from PQ.
2. Evaluate the path cost to all the successors of s.
3. Add the successors of s to PQ.
We add the successors of s that have not yet been visited, and we update the cost of those currently in the queue.
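Putting the pieces together, a runnable sketch of uniform cost search, assuming the graph is a dictionary mapping each state to a {successor: transition cost} dictionary; instead of moving an entry up in the queue, this version pushes a new entry and skips stale ones when they are popped, which has the same effect as the update described above.

import heapq

def uniform_cost_search(graph, start, goals):
    """Return (cost, path) of a least-cost path from start to a goal, or None."""
    g = {start: 0}                        # best known path cost to each state
    previous = {start: None}
    pq = [(0, start)]
    while pq:
        cost, s = heapq.heappop(pq)       # 1. pop the state with lowest path cost
        if cost > g.get(s, float("inf")):
            continue                      # stale entry: a cheaper path was found
        if s in goals:                    # stop only when a goal is popped
            path = []
            while s is not None:
                path.append(s)
                s = previous[s]
            return cost, path[::-1]
        for s2, step in graph.get(s, {}).items():
            new_cost = cost + step        # 2. evaluate path cost to each successor
            if new_cost < g.get(s2, float("inf")):
                g[s2] = new_cost          # 3. add/update the successor in the queue
                previous[s2] = s
                heapq.heappush(pq, (new_cost, s2))
    return None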

[Figure: the example graph with a cost on each edge.] PQ = {(START, 0)}.
1. Pop the state s with the lowest path cost from PQ.
2. Evaluate the path cost to all the successors of s.
3. Add the successors of s to PQ.

PQ = {(p, 1), (d, 3), (e, 9)}

PQ = {(d, 3), (e, 9), (q, 16)}

PQ = {(b, 4), (e, 5), (c, 11), (q, 16)}

Important: we realized that going to e through d is cheaper than going to e directly, so the value of e is updated from 9 to 5 and it moves up in PQ.

PQ = {(e, 5), (a, 6), (c, 11), (q, 16)}

PQ = {(a, 6), (h, 6), (c, 11), (r, 14), (q, 16)}

PQ = {(h, 6), (c, 11), (r, 14), (q, 16)}

PQ = {(q, 10), (c, 11), (r, 14)}

Important: we realized that going to q through h is cheaper than going through p, so the value of q is updated from 16 to 10 and it moves up in PQ.

PQ = {(c, 11), (r, 13)}

PQ = {(r, 13)}; after popping r, PQ = {(f, 18)}

PQ = {(GOAL, 23)}

[Figure.] Final path: {START, d, e, h, q, r, f, GOAL}. This path is optimal in total cost even though it has more transitions than the one found by BFS. What should be the stopping condition? Under what conditions is UCS complete/optimal?

Example: Robot Navigation. States = positions in the map. Transitions = allowed motions. [Figure: grid map with START and GOAL; axis-aligned moves have cost 1, diagonal moves have cost sqrt(2).] Navigation: going from point START to point GOAL given a (deterministic) map.

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; Q = average size of the priority queue.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |
BIBFS (Bi-directional Breadth First) | | | |
UCS (Uniform Cost) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; ε = average cost per link?
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > ε > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))

Limitations of BFS. Memory usage is O(B^L) in general. This is limiting in many problems in which the states cannot be enumerated or stored explicitly, e.g., with a large branching factor. Alternative: find a search strategy that requires little storage, for use in large problems.

Philosophical limitation: we cannot shoot for perfection; we want good enough.

Depth First (expanding the leftmost successor first). Successive paths explored:
START
START d
START d b
START d b a
START d c
START d c a
START d e
START d e r
START d e r f
START d e r f c
START d e r f c a
START d e r f GOAL
[Figure: example graph.]
General idea: expand the most recently expanded node if it has successors; otherwise back up to the previous node on the current path.

DFS Implementation:
DFS(s):
    if s = GOAL, return SUCCESS
    for all s' in succs(s):
        if DFS(s') = SUCCESS, return SUCCESS
    return FAILURE
In a recursive implementation, the program stack keeps track of the states in the current path. s is the current state being expanded, starting with START.

Depth First. [Same trace and graph as above.] May explore the same state over again: a potential problem? Memory usage never exceeds the maximum length of a path through the graph.

BFS and DFS: Tree Interpretation. [Figure: the search tree rooted at START, explored breadth-first vs. depth-first.] Root: the START state. Children of the node containing state s: all states in succs(s). In the worst case the entire tree is explored: O(B^Lmax). Infinite branches if there are loops in the graph!

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |
BIBFS (Bi-directional Breadth First) | | | |
UCS (Uniform Cost) | | | |
DFS (Depth First) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y (if cost > 0) | Y | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
DFS (Depth First) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
DFS (Depth First) | Y (for graphs without cycles) | N | O(B^Lmax) | O(B * Lmax)

Is this a problem? Complexity, with Lmax = length of the longest path from START to any state:
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
DFS (Depth First) | Y (for graphs without cycles) | N | O(B^Lmax) | O(B * Lmax)

DFS Limitation 1: need to prevent DFS from looping, i.e., avoid visiting the same states repeatedly (because B^d may be much larger than the number of states d steps away from the start).
PC-DFS (Path-Checking DFS): don't use a state that is already in the current path.
MEMDFS (Memorizing DFS): keep track of all the states expanded so far; do not expand any state twice.
Comparison: PC-DFS vs. MEMDFS?
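The two variants might be sketched as follows (recursive Python, returning a path or None; the function names and the dictionary-of-sets graph representation are assumptions for illustration):

def pc_dfs(succs, s, goals, path=None):
    """Path-checking DFS: never revisit a state already on the current path."""
    path = (path or []) + [s]
    if s in goals:
        return path
    for s2 in succs.get(s, ()):
        if s2 not in path:                 # check against the current path only
            result = pc_dfs(succs, s2, goals, path)
            if result is not None:
                return result
    return None

def mem_dfs(succs, s, goals, visited=None):
    """Memorizing DFS: never expand any state twice, anywhere in the search."""
    visited = visited if visited is not None else set()
    visited.add(s)
    if s in goals:
        return [s]
    for s2 in succs.get(s, ()):
        if s2 not in visited:
            rest = mem_dfs(succs, s2, goals, visited)
            if rest is not None:
                return [s] + rest
    return None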

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |
BIBFS (Bi-directional Breadth First) | | | |
UCS (Uniform Cost) | | | |
PCDFS (Path-Checking DFS) | | | |
MEMDFS (Memorizing DFS) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
PCDFS (Path-Checking DFS) | | | |
MEMDFS (Memorizing DFS) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
PCDFS (Path-Checking DFS) | Y | N | O(B^Lmax) | O(B * Lmax)
MEMDFS (Memorizing DFS) | Y | N | O(min(N, B^Lmax)) | O(min(N, B^Lmax))

DFS Limitation 2: need to make DFS optimal.
IDS (Iterative Deepening Search): run DFS searching only paths of length 1 or less (DFS stops if the length of the path is greater than 1). If that doesn't find a solution, try again by running DFS on paths of length 2 or less. If that doesn't find a solution, try again on paths of length 3 or less, and so on, until a solution is found. Each pass is a depth-limited search.
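A sketch of the idea in Python, built from a depth-limited, path-checking DFS; with unit transition costs, the first path found at the smallest limit is shortest in number of steps. The max_depth cap is an arbitrary illustrative bound.

def depth_limited_dfs(succs, s, goals, limit, path=None):
    """Path-checking DFS that explores only paths of at most `limit` transitions."""
    path = (path or []) + [s]
    if s in goals:
        return path
    if len(path) - 1 >= limit:             # the allowed depth is used up
        return None
    for s2 in succs.get(s, ()):
        if s2 not in path:
            result = depth_limited_dfs(succs, s2, goals, limit, path)
            if result is not None:
                return result
    return None

def iterative_deepening(succs, start, goals, max_depth=50):
    """Run depth-limited DFS with limits 1, 2, 3, ... until a solution appears."""
    for limit in range(1, max_depth + 1):
        result = depth_limited_dfs(succs, start, goals, limit)
        if result is not None:
            return result
    return None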

Iterative Deepening. Sounds horrible: we need to run DFS many times. Actually not a problem: O(L*B + (L-1)*B^2 + ... + B^L) = O(B^L), where L*B counts the nodes generated at depth 1, (L-1)*B^2 the nodes generated at depth 2, ..., and B^L the nodes generated at depth L. Compare B^L with B^Lmax. Optimal if transition costs are equal.

Iterative Deepening (DFID). Memory usage is the same as DFS. Computation cost is comparable to BFS even with the repeated searches, especially for large B. Example: B = 10, L = 5: BFS: 111,111 expansions; IDS: 123,456 expansions.

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | | | |
BIBFS (Bi-directional Breadth First) | | | |
UCS (Uniform Cost) | | | |
PCDFS (Path-Checking DFS) | | | |
MEMDFS (Memorizing DFS) | | | |
IDS (Iterative Deepening) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
PCDFS (Path-Checking DFS) | Y | N | O(B^Lmax) | O(B * Lmax)
MEMDFS (Memorizing DFS) | Y | N | O(min(N, B^Lmax)) | O(min(N, B^Lmax))
IDS (Iterative Deepening) | | | |

Complexity. N = total number of states; B = average number of successors (branching factor); L = length from start to goal with the smallest number of steps; C = cost of the optimal path; Q = average size of the priority queue; Lmax = length of the longest path from START to any state.
Algorithm | Complete | Optimal | Time | Space
BFS (Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, B^L)) | O(min(N, B^L))
BIBFS (Bi-directional Breadth First) | Y | Y, if all transitions have the same cost | O(min(N, 2B^(L/2))) | O(min(N, 2B^(L/2)))
UCS (Uniform Cost) | Y, if cost > 0 | Y, if cost > 0 | O(log(Q) * min(N, B^(C/ε))) | O(min(N, B^(C/ε)))
PCDFS (Path-Checking DFS) | Y | N | O(B^Lmax) | O(B * Lmax)
MEMDFS (Memorizing DFS) | Y | N | O(min(N, B^Lmax)) | O(min(N, B^Lmax))
IDS (Iterative Deepening) | Y | Y, if all transitions have the same cost | O(B^L) | O(B * L)

Summary. Basic search techniques: BFS, UCS, PCDFS, MEMDFS, DFID. Properties of search algorithms: completeness, optimality, time and space complexity. Iterative deepening and bidirectional search ideas. Trade-offs between the different techniques and when they might be used.

Some Challenges: driving directions; robot navigation in Wean Hall; adversarial games (Tic Tac Toe, Chess).