A2 Uninformed Search


A2 Uninformed Search. Hantao Zhang, http://www.cs.uiowa.edu/~hzhang/c145, The University of Iowa, Department of Computer Science.

Example: The 8-puzzle. [Figure: a 3 x 3 board of eight numbered tiles and one blank.] It can be generalized to the 15-puzzle, the 24-puzzle, or the (n^2 - 1)-puzzle for n >= 6.

Example: The 8-puzzle. Go from state S = [2 8 3 / 1 6 4 / 7 _ 5] to state G = [1 2 3 / 8 _ 4 / 7 6 5]. [Figure: part of the search tree rooted at S, obtained by moving the blank Up, Down, Left, or Right at each step.]

Problem formulation. A problem is defined by four items: an initial state, e.g., the starting configuration; a successor function S(x) = the set of action-state pairs, e.g., for each configuration we may perform four actions (move the blank tile up, down, left, or right), each giving a new configuration; a goal test, e.g., x = the target configuration; a path cost (additive), e.g., the sum of distances, the number of actions executed, etc., usually given as c(x, a, y), the step cost from x to y by action a, assumed to be >= 0. A solution is a sequence of actions leading from the initial state to a goal state.
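As a rough illustration (not part of the original slides), the four items can be written as a small Python interface; the names SearchProblem, successors, is_goal and step_cost are my own choices.

    # A minimal sketch of the four-item problem formulation (names are illustrative).
    class SearchProblem:
        def __init__(self, initial_state):
            self.initial_state = initial_state   # e.g., the starting configuration

        def successors(self, state):
            """Successor function S(x): return a list of (action, next_state) pairs."""
            raise NotImplementedError

        def is_goal(self, state):
            """Goal test, e.g., state == the target configuration."""
            raise NotImplementedError

        def step_cost(self, state, action, next_state):
            """Step cost c(x, a, y), assumed to be >= 0; default: one unit per action."""
            return 1

The concrete problems below (the 8-puzzle, the water jugs) can be expressed through an interface of this kind.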

Selecting a State Space. The real world is absurdly complex, so the state space must be abstracted for problem solving. (Abstract) state = a set of real states. (Abstract) action = a complex combination of real actions; e.g., "Cedar Rapids -> Chicago" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state in Cedar Rapids must get to some real state in Chicago. Each abstract action should be easier than the original problem! (Abstract) solution: a (path-sensitive) solution is a set of paths that are solutions in the real world; a (path-insensitive) solution is a set of states that pass the goal test.

Formulating a Problem as a Graph. In the graph: each node represents a possible state; one node is designated as the initial state; one or more nodes represent goal states, states in which the agent's goal is considered accomplished; each edge represents a state transition caused by a specific agent action; associated with each edge is the cost of performing that transition.

Search Graph. How do we reach a goal state? [Figure: a weighted graph with initial state S, goal states F and G, intermediate nodes A, B, C, D, E, and edge costs between 2 and 7.] There may be several possible ways, or none! Factors to consider: the cost of finding a path; the cost of traversing a path.

Problem Solving as Search. Search space: the set of states reachable from an initial state S_0 via a (possibly empty/finite/infinite) sequence of state transitions. To achieve the problem's goal: search the space for a (possibly optimal) sequence of transitions starting from S_0 and leading to a goal state; then execute (in order) the actions associated with each transition in the identified sequence. Depending on the features of the agent's world, the two steps above can be interleaved.

Problem Solving as Search. Reduce the original problem to a search problem. A solution for the search problem is a path from the initial state to a goal state. The solution for the original problem is either the sequence of actions associated with the path or the description of the goal state.

Example: The 8-puzzle. States: configurations of tiles. Operators: move one tile Up/Down/Left/Right. There are 9! = 362,880 possible states (all permutations of {blank, 1, 2, 3, 4, 5, 6, 7, 8}). There are 16! possible states for the 15-puzzle. Not all states are directly reachable from a given state. (In fact, exactly half of them are reachable from a given state.) How can a computer represent the states and the state space for this problem?

Problem Formulation. 1. Choose an appropriate data structure to represent the world states. 2. Define each operator as a precondition/effects pair, where the precondition holds exactly in the states the operator applies to, and the effects describe how a state changes into a successor state when the operator is applied. 3. Specify an initial state. 4. Provide a description of the goal (used to check whether a reached state is a goal state).

Formulating the 8-puzzle Problem. States: each represented by a 3 x 3 array A of numbers in [0..8], where value 0 stands for the empty cell. For example, the board [2 8 3 / 1 6 4 / 7 _ 5] becomes A = [2 8 3 / 1 6 4 / 7 0 5].

Formulating the 8-puzzle Problem. Operators: 24 operators of the form Op(r,c,d), where r, c in {1, 2, 3} and d in {L, R, U, D}. Op(r,c,d) moves the empty space at position (r,c) in direction d. Example: Op(3,2,L) applied to [2 8 3 / 1 6 4 / 7 0 5] gives [2 8 3 / 1 6 4 / 0 7 5].

Preconditions and Effects. Example: Op(3,2,R) applied to [2 8 3 / 1 6 4 / 7 0 5] gives [2 8 3 / 1 6 4 / 7 5 0]. Preconditions: A[3,2] = 0. Effects: A[3,2] <- A[3,3]; A[3,3] <- 0. We have 24 operators in this problem formulation... 20 too many!

A Better Formulation. States: each represented by a pair (A, (i,j)), where A is a 3 x 3 array of numbers in [0..8] and (i,j) is the position of the empty space (0) in the array. For example, the board [2 8 3 / 1 6 4 / 7 _ 5] becomes ([2 8 3 / 1 6 4 / 7 0 5], (3,2)).

A Better Formulation. Operators: 4 operators of the form Op_d, where d in {L, R, U, D}. Op_d moves the empty space in direction d. Example: Op_L applied to [2 8 3 / 1 6 4 / 7 0 5] gives [2 8 3 / 1 6 4 / 0 7 5].

Preconditions and Effects. Example: Op_L applied to ([2 8 3 / 1 6 4 / 7 0 5], (3,2)) gives ([2 8 3 / 1 6 4 / 0 7 5], (3,1)). Let (r_0, c_0) be the position of 0 in A. Preconditions: c_0 > 1. Effects: A[r_0, c_0] <- A[r_0, c_0 - 1]; A[r_0, c_0 - 1] <- 0; (r_0, c_0) <- (r_0, c_0 - 1).
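As a sketch (not from the slides), the better formulation translates directly into Python: a state is a pair (A, (r0, c0)) with A stored as a tuple of tuples; the function and variable names are my own, and all four operators share one piece of code.

    # Sketch of the better formulation: state = (A, (r0, c0)),
    # where A is a 3x3 tuple of tuples and (r0, c0) is the 1-based position of 0.
    MOVES = {'L': (0, -1), 'R': (0, 1), 'U': (-1, 0), 'D': (1, 0)}

    def apply_op(state, d):
        """Apply Op_d if its precondition holds, otherwise return None."""
        A, (r0, c0) = state
        dr, dc = MOVES[d]
        r1, c1 = r0 + dr, c0 + dc
        if not (1 <= r1 <= 3 and 1 <= c1 <= 3):   # precondition, e.g. c0 > 1 for Op_L
            return None
        B = [list(row) for row in A]
        B[r0 - 1][c0 - 1] = A[r1 - 1][c1 - 1]     # A[r0, c0] <- A[r0, c0 - 1] for Op_L
        B[r1 - 1][c1 - 1] = 0                     # A[r0, c0 - 1] <- 0
        return (tuple(map(tuple, B)), (r1, c1))

    S = ((2, 8, 3), (1, 6, 4), (7, 0, 5))
    print(apply_op((S, (3, 2)), 'L'))   # (((2, 8, 3), (1, 6, 4), (0, 7, 5)), (3, 1))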

Half of the states are not reachable? Can this be done: go from [1 2 3 / 4 5 6 / 7 8 _] to [1 2 3 / 4 5 6 / 8 7 _] in any number of steps? A $1,000 award for anyone who can do it!

Half of the states are not reachable? Let the 8-puzzle be represented by (a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8, a_9), the board read row by row. We say (a_i, a_j) is an inversion if neither a_i nor a_j is blank, i < j, and a_i > a_j. For example, [1 2 3 / 4 5 6 / 7 8 _] has 0 inversions and [1 2 3 / 4 5 6 / 8 7 _] has 1. Claim: the number of inversions modulo two remains the same after each move.
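The claim gives a simple solvability test: two states are mutually reachable exactly when their inversion counts have the same parity. A small check in Python (my own encoding, not from the slides: the state listed row by row, 0 for the blank):

    def inversions(tiles):
        """Count inversions in an 8-puzzle state listed row by row, 0 = blank."""
        t = [x for x in tiles if x != 0]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t)) if t[i] > t[j])

    def same_half(s, g):
        """Mutually reachable iff the inversion counts have equal parity."""
        return inversions(s) % 2 == inversions(g) % 2

    print(inversions([1, 2, 3, 4, 5, 6, 7, 8, 0]))   # 0
    print(inversions([1, 2, 3, 4, 5, 6, 8, 7, 0]))   # 1
    print(same_half([1, 2, 3, 4, 5, 6, 7, 8, 0],
                    [1, 2, 3, 4, 5, 6, 8, 7, 0]))    # False -- the $1,000 is safe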

The Water Jugs Problem. [Figure: a 3-gallon jug and a 4-gallon jug.] Get exactly 2 gallons of water into the 4-gallon jug.

The Water Jugs Problem. States: determined by the amount of water in each jug. State representation: two real-valued variables, J3 and J4, indicating the amount of water in the two jugs, with the constraints 0 <= J3 <= 3 and 0 <= J4 <= 4. Initial state description: J3 = 0, J4 = 0. Goal state description: J4 = 2 (a non-exhaustive description).

The Water Jugs Problem: Operators.
F4: fill jug4 from the pump. Precond: J4 < 4. Effect: J4 = 4.
E4: empty jug4 on the ground. Precond: J4 > 0. Effect: J4 = 0.
P4-3: pour water from jug4 into jug3 until jug3 is full. Precond: J3 < 3, J4 >= 3 - J3. Effect: J3 = 3, J4 = J4 - (3 - J3).
P3-4: pour water from jug3 into jug4 until jug4 is full. Precond: J4 < 4, J3 >= 4 - J4. Effect: J4 = 4, J3 = J3 - (4 - J4).
E3-4: pour water from jug3 into jug4 until jug3 is empty. Precond: J3 + J4 <= 4, J3 > 0. Effect: J4 = J3 + J4, J3 = 0.
...
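These operators amount to a successor function over states (J3, J4). The sketch below (not from the slides) also adds F3 and E3 for jug3, which the trailing "..." suggests, and merges each pair of pour operators into a single pour that stops when either jug limit is reached.

    def successors(state):
        """Water-jugs successor function: state = (J3, J4), 0 <= J3 <= 3, 0 <= J4 <= 4."""
        J3, J4 = state
        result = []
        if J4 < 4: result.append(('F4', (J3, 4)))    # fill jug4 from the pump
        if J3 < 3: result.append(('F3', (3, J4)))    # fill jug3 (assumed, from the "...")
        if J4 > 0: result.append(('E4', (J3, 0)))    # empty jug4
        if J3 > 0: result.append(('E3', (0, J4)))    # empty jug3 (assumed, from the "...")
        if J3 < 3 and J4 > 0:                        # pour jug4 into jug3
            x = min(J4, 3 - J3)
            result.append(('P4-3', (J3 + x, J4 - x)))
        if J4 < 4 and J3 > 0:                        # pour jug3 into jug4
            x = min(J3, 4 - J4)
            result.append(('P3-4', (J3 - x, J4 + x)))
        return result

    print(successors((0, 0)))   # [('F4', (0, 4)), ('F3', (3, 0))]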

The Water Jugs Problem: Problem Search Graph. [Figure: the search graph rooted at the initial state (J3 = 0, J4 = 0). Applying F4, F3, P4-3, P3-4, E4, E3-4, ... generates states such as (J3, J4) = (0, 4), (3, 0), (3, 4), (3, 1), (0, 3), (3, 3), (2, 4), (2, 0), and eventually a goal state with J4 = 2.]

Real-World Search Problems. Route finding (computer networks, airline travel planning systems, ...). The Travelling Salesman optimization problem (package delivery, automatic drills, ...). Layout problems (VLSI layout, furniture layout, packaging, ...). Assembly sequencing (assembly of electric motors, ...). Task scheduling (manufacturing, timetables, ...).

Problem Solution. Path-sensitive: problems whose solution is a description of how to reach a goal state from the initial state, e.g., the n-puzzle, route-finding problems, assembly sequencing. Path-insensitive: problems whose solution is simply a description of the goal state itself, e.g., the 8-queens problem, scheduling problems, layout problems.

A Simple Search Problem. Finding a painting in a museum.

Hanoi Tower Problem. How can the disks be moved from one pile to another, one disk at a time, with no disk ever placed on a smaller disk?

More on Graphs. A graph is a set of nodes and edges (arcs) between them. [Figure: a directed graph with nodes S, A, B, C, D, E, F, G.] A graph is directed if an edge can be traversed only in a specified direction. When an edge is directed from n_i to n_j: it is uniquely identified by the pair (n_i, n_j); n_i is a parent (or predecessor) of n_j; n_j is a child (or successor) of n_i.

Directed Graphs. [Figure: the same directed graph with nodes S, A, B, C, D, E, F, G.] A path of length k >= 0 is a list of k + 1 nodes n_0, n_1, ..., n_k such that (n_i, n_{i+1}) is an edge for all 0 <= i < k; e.g., (S, D, E, B). For 0 <= i < j <= k, n_i is an ancestor of n_j and n_j is a descendant of n_i. A graph is cyclic if it has a path starting from and ending at the same node, e.g., (B, D, E, B).

From Search Graphs to Search Trees. The set of all possible paths of a graph can be represented as a tree. A tree is a directed acyclic graph all of whose nodes have at most one parent. The root of a tree is a node with no parents. A leaf is a node with no children. The branching factor of a node is the number of its children. Graphs can be turned into trees by duplicating nodes and breaking cyclic paths, if any.

From Graphs to Trees. To unravel a graph into a tree, choose a root node and trace every path from that node until you reach a leaf node or a node already in that path. [Figure: the search graph unraveled into a tree rooted at S; nodes such as D, E, F, G reappear at several depths (0 through 4).] Notes: we must remember which nodes have been visited; a node may get duplicated several times in the tree; the tree has infinite paths only if the graph has infinite non-cyclic paths.

Tree Search Algorithms. Basic idea: simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states). fringe = the nodes to be expanded.

function TREE-SEARCH(problem, strategy) returns a solution, or failure
    let the fringe contain the initial state of problem
    loop do
        if there are no candidates in fringe then return failure
        choose a node in fringe for expansion according to strategy
        if the node contains a goal state then return the solution
        else expand the node and add the resulting nodes to fringe
    end
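A minimal Python rendering of TREE-SEARCH (a sketch, assuming the SearchProblem interface introduced earlier). The strategy is supplied as the place where new entries are inserted into the fringe, which is all that distinguishes the uninformed strategies discussed below.

    # Sketch of TREE-SEARCH; `insert` decides where new entries go in the fringe.
    def tree_search(problem, insert):
        fringe = [(problem.initial_state, [])]          # (state, actions leading to it)
        while fringe:
            state, actions = fringe.pop(0)              # choose a node for expansion
            if problem.is_goal(state):
                return actions                          # solution: a sequence of actions
            for action, nxt in problem.successors(state):
                insert(fringe, (nxt, actions + [action]))   # expand, add resulting nodes
        return None                                     # no candidates left: failure

    fifo = lambda fringe, entry: fringe.append(entry)     # breadth-first: new entries at the end
    lifo = lambda fringe, entry: fringe.insert(0, entry)  # depth-first: new entries at the front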

Implementation: States vs. Nodes. A state is (a representation of) a physical configuration. A node is a data structure constituting part of a search tree; it includes a parent, children, a depth, and a path cost g(x). States do not have parents, children, depth, or path cost! [Figure: a Node with fields parent, action, state, depth = 6, g = 6, pointing to an 8-puzzle State.] The EXPAND function creates new nodes, filling in the various fields and using the SUCCESSOR-FN of the problem to create the corresponding states.
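The node data structure and EXPAND can be sketched as follows (field names mirror the slide; g is the path cost from the root; the helper solution recovers the action sequence by following parent links; all names are illustrative).

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # the physical configuration this node refers to
        parent: Optional["Node"] = None  # the node this one was generated from
        action: Any = None               # the action that produced this state
        depth: int = 0
        g: float = 0.0                   # path cost from the root

    def expand(node, problem):
        """Create child nodes using the problem's successor function."""
        return [Node(state=s, parent=node, action=a,
                     depth=node.depth + 1,
                     g=node.g + problem.step_cost(node.state, a, s))
                for a, s in problem.successors(node.state)]

    def solution(node):
        """Follow parent links back to the root to recover the action sequence."""
        actions = []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))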

Search Strategies. A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions: completeness (does it always find a solution if one exists?); time complexity (number of nodes generated/expanded); space complexity (maximum number of nodes in memory); optimality (does it always find a least-cost solution?). Time and space complexity are measured in terms of: b, the maximum branching factor of the search tree; d, the depth of the least-cost solution; m, the maximum depth of the state space (which may be infinite).

Uninformed Search Strategies. Uninformed strategies use only the information available in the problem definition: Breadth-first search, Depth-first search, Depth-limited search, Iterative deepening search, Bidirectional search, Uniform-cost search.

Breadth-first Search. Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end. [Figure, four frames: on a small binary tree, breadth-first search expands the root, then its two children, then the four nodes D, E, F, G, level by level.]
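Breadth-first search is the fringe-as-FIFO-queue instance of tree search. A sketch using collections.deque, assuming the Node/expand/solution helpers sketched earlier:

    from collections import deque

    def breadth_first_search(problem):
        """Expand the shallowest unexpanded node: the fringe is a FIFO queue."""
        fringe = deque([Node(problem.initial_state)])
        while fringe:
            node = fringe.popleft()                  # shallowest node first
            if problem.is_goal(node.state):
                return solution(node)
            fringe.extend(expand(node, problem))     # new successors go at the end
        return None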

Properties of Breadth-first Search. Complete??

Properties of Breadth-first Search. Complete?? Yes (if b is finite). Time??

Properties of Breadth-first Search. Complete?? Yes (if b is finite). Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d. Space??

Properties of Breadth-first Search. Complete?? Yes (if b is finite). Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d. Space?? O(b^(d+1)) (keeps every node in memory). Optimal??

Properties of Breadth-first Search. Complete?? Yes (if b is finite). Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d. Space?? O(b^(d+1)) (keeps every node in memory). Optimal?? Yes (if cost = 1 per step); not optimal in general. Space??

Properties of Breadth-first Search. Complete?? Yes (if b is finite). Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d - 1) = O(b^(d+1)), i.e., exponential in d. Space?? O(b^(d+1)) (keeps every node in memory). Optimal?? Yes (if cost = 1 per step); not optimal in general. Space?? It is the big problem; one can easily generate nodes at 10M/sec, so 24 hrs gives roughly 860G nodes.

Depth-First Search. Expand the deepest unexpanded node. Implementation: the fringe is a LIFO queue (a stack), i.e., successors are put at the front. [Figure, twelve frames: on a binary tree with nodes A through O, depth-first search repeatedly expands the deepest node in the fringe, diving to the bottom-left leaf, backing up, and continuing left to right.]
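Depth-first search differs only in the fringe discipline: a LIFO stack. A sketch, again assuming the Node/expand/solution helpers sketched earlier:

    def depth_first_search(problem):
        """Expand the deepest unexpanded node: the fringe is a LIFO stack."""
        fringe = [Node(problem.initial_state)]
        while fringe:
            node = fringe.pop()                      # deepest (most recently added) node first
            if problem.is_goal(node.state):
                return solution(node)
            fringe.extend(expand(node, problem))     # successors go on top of the stack
        return None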

Properties of depth-first search. Complete??

Properties of depth-first search. Complete?? No: it fails in infinite-depth spaces and in spaces with loops. If modified to avoid repeated states along the current path, it is complete in finite spaces. Time??

Properties of depth-first search. Complete?? No: it fails in infinite-depth spaces and in spaces with loops. If modified to avoid repeated states along the current path, it is complete in finite spaces. Time?? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first. Space??

Properties of depth-first search. Complete?? No: it fails in infinite-depth spaces and in spaces with loops. If modified to avoid repeated states along the current path, it is complete in finite spaces. Time?? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first. Space?? O(bm), i.e., linear space! Optimal??

Properties of depth-first search. Complete?? No: it fails in infinite-depth spaces and in spaces with loops. If modified to avoid repeated states along the current path, it is complete in finite spaces. Time?? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first. Space?? O(bm), i.e., linear space! Optimal?? No.

Depth-Limited Search = depth-first search with depth limit l, i.e., nodes at depth l have no successors.

function Depth-Limited-Search(problem, limit) returns soln/fail/cutoff
    return Recursive-DLS(Initial-Node(problem), problem, limit)
end function

function Recursive-DLS(node, problem, limit) returns soln/fail/cutoff
    cutoff-occurred := false
    if Goal-State(problem, State(node)) then return node
    else if Depth(node) = limit then return cutoff
    else for each successor in Expand(node, problem) do
        result := Recursive-DLS(successor, problem, limit)
        if result = cutoff then cutoff-occurred := true
        else if result != fail then return result
    end for
    if cutoff-occurred then return cutoff else return fail
end function

Iterative Deepening Search

function Iterative-Deepening-Search(problem) returns soln
    for depth from 0 to MAX-INT do
        result := Depth-Limited-Search(problem, depth)
        if result != cutoff then return result
    end for
end function
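A compact Python sketch of the two procedures above (assuming the Node/expand/solution helpers sketched earlier; the string "cutoff" plays the role of the cutoff value, and max_depth stands in for MAX-INT).

    CUTOFF, FAIL = "cutoff", None

    def depth_limited_search(problem, limit):
        return recursive_dls(Node(problem.initial_state), problem, limit)

    def recursive_dls(node, problem, limit):
        if problem.is_goal(node.state):
            return solution(node)
        if node.depth == limit:
            return CUTOFF                      # the limit was hit; deeper solutions may exist
        cutoff_occurred = False
        for child in expand(node, problem):
            result = recursive_dls(child, problem, limit)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not FAIL:
                return result
        return CUTOFF if cutoff_occurred else FAIL

    def iterative_deepening_search(problem, max_depth=1000):
        for depth in range(max_depth + 1):
            result = depth_limited_search(problem, depth)
            if result != CUTOFF:
                return result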

Iterative deepening search. [Figure, four rows of frames: depth-limited search is run with Limit = 0, then Limit = 1, then Limit = 2, then Limit = 3, each time restarting from the root of a binary tree with nodes A through O and exploring one level deeper.]

Properties of iterative deepening search. Complete??

Properties of iterative deepening search. Complete?? Yes. Time??

Properties of iterative deepening search. Complete?? Yes. Time?? (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d). Space??

Properties of iterative deepening search. Complete?? Yes. Time?? (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d). Space?? O(bd). Optimal??

Properties of iterative deepening search. Complete?? Yes. Time?? (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d). Space?? O(bd). Optimal?? Yes, if step cost = 1; can be modified to explore a uniform-cost tree. Numerical comparison for b = 10 and d = 5, with the solution at the far right of the last level: N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450; N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100.
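The two node counts can be reproduced directly; the snippet below (my own, for checking) sums exactly the terms listed on the slide (the (d + 1) * b^0 term for the root is omitted there).

    b, d = 10, 5
    # IDS regenerates shallow levels: level i is generated (d + 1 - i) times.
    n_ids = sum((d + 1 - i) * b**i for i in range(1, d + 1))
    # BFS generates every node down to depth d, plus b^(d+1) - b extra nodes at depth d + 1.
    n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)
    print(n_ids, n_bfs)   # 123450 1111100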

Bidirectional Search. Expand both the starting node (forward) and the goal node (backward) at the same time, using breadth-first search. Implementation: the fringe consists of two FIFO queues, one for forward nodes and one for backward nodes. Each node has a flag: f-visited, b-visited, or null. A solution is found if a b-visited node is reached in the forward search or an f-visited node is reached in the backward search. Complete?? Yes. Time?? the number of nodes visited, O(b^(d/2)), where d is the distance to the nearest solution. Space?? O(b^(d/2)). Optimal?? Yes.
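A simplified sketch of the idea (not the slide's flag-based bookkeeping): two breadth-first frontiers, each with its own distance table, stopping when they meet. It assumes an explicit, symmetric neighbors function, so that the same successor relation can be used backward from the goal.

    from collections import deque

    def bidirectional_search(start, goal, neighbors):
        """Two interleaved breadth-first frontiers; returns the length of a connecting path."""
        if start == goal:
            return 0
        f_seen, b_seen = {start: 0}, {goal: 0}              # distance from each end
        f_fringe, b_fringe = deque([start]), deque([goal])
        while f_fringe and b_fringe:
            for fringe, seen, other in ((f_fringe, f_seen, b_seen),
                                        (b_fringe, b_seen, f_seen)):
                state = fringe.popleft()
                for nxt in neighbors(state):
                    if nxt in other:                         # the frontiers meet: path found
                        return seen[state] + 1 + other[nxt]
                    if nxt not in seen:
                        seen[nxt] = seen[state] + 1
                        fringe.append(nxt)
        return None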

Bidirectional Search: Example.

Uniform-Cost Search. Expand the least-cost unexpanded node. Implementation: the fringe is a queue ordered by path cost (a priority queue). Equivalent to breadth-first search if all step costs are equal. Complete?? Yes, if every step cost is >= some epsilon > 0. Time?? the number of nodes with g <= the cost of the optimal solution, O(b^(C*/epsilon)), where C* is the cost of the optimal solution. Space?? the number of nodes with g <= the cost of the optimal solution, O(b^(C*/epsilon)). Optimal?? Yes: nodes are expanded in increasing order of g(n).
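Uniform-cost search replaces the FIFO queue with a priority queue ordered by g. A sketch using heapq, assuming the Node/expand/solution helpers sketched earlier (the counter is only a tie-breaker so the heap never compares Node objects):

    import heapq
    from itertools import count

    def uniform_cost_search(problem):
        """Expand the least-cost unexpanded node: the fringe is ordered by path cost g."""
        tie = count()
        fringe = [(0.0, next(tie), Node(problem.initial_state))]
        while fringe:
            g, _, node = heapq.heappop(fringe)           # cheapest node first
            if problem.is_goal(node.state):
                return solution(node)
            for child in expand(node, problem):
                heapq.heappush(fringe, (child.g, next(tie), child))
        return None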

Uniform-Cost Search: Example.

Repeated States. Failure to detect repeated states can turn a linear problem into an exponential one! [Figure: a linear chain of states whose search tree doubles in size at every level when repeated states are not detected.] We may also keep a list of closed nodes to avoid expanding the same node twice.
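Keeping such a closed list turns tree search into graph search. A sketch of the breadth-first variant, assuming the helpers above and hashable states: a state already placed on the fringe is never added again.

    from collections import deque

    def breadth_first_graph_search(problem):
        """Breadth-first search with a closed set, so each state is expanded at most once."""
        root = Node(problem.initial_state)
        fringe, closed = deque([root]), {root.state}
        while fringe:
            node = fringe.popleft()
            if problem.is_goal(node.state):
                return solution(node)
            for child in expand(node, problem):
                if child.state not in closed:            # skip repeated states
                    closed.add(child.state)
                    fringe.append(child)
        return None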

Summary. Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored. There is a variety of uninformed search strategies. Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms.

Uninformed Search Strategies

Strategy                      Time          Space         Complete?
Breadth-first Search          O(b^d)        O(b^d)        Yes
Depth-first Search            O(b^m)        O(bm)         No
Depth-limited Search          O(b^l)        O(bl)         No
Iterative Deepening Search    O(b^d)        O(bd)         Yes
Bidirectional Search          O(b^(d/2))    O(b^(d/2))    Yes
Uniform-Cost Search           O(b^d)        O(b^d)        Yes

where b is the branching factor, d is the depth of the optimal solution, m is the length of the longest path, and l is the limit set by the user.

Complexity of Iterative Deepening Search. (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d).
First, we show that Sum_{i=1..d} i b^(i-1) = (d + 1)b^d/(b - 1) - (b^(d+1) - 1)/(b - 1)^2.
Let F(x) = Sum_{i=0..d} x^i = (x^(d+1) - 1)/(x - 1). Taking the derivative of both sides, we have
F'(x) = Sum_{i=0..d} i x^(i-1) = (d + 1)x^d/(x - 1) - (x^(d+1) - 1)/(x - 1)^2.
So F'(b) = Sum_{i=0..d} i b^(i-1) = (d + 1)b^d/(b - 1) - (b^(d+1) - 1)/(b - 1)^2.

Complexity of Iterative Deepening Search. (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d):
(d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d
= Sum_{i=0..d} (d + 1 - i) b^i
= (d + 1) Sum_{i=0..d} b^i - b Sum_{i=1..d} i b^(i-1)
= (d + 1)(b^(d+1) - 1)/(b - 1) - b F'(b)
= (d + 1)(b^(d+1) - 1)/(b - 1) - b(d + 1)b^d/(b - 1) + b(b^(d+1) - 1)/(b - 1)^2
= -(d + 1)/(b - 1) + b(b^(d+1) - 1)/(b - 1)^2
= O(b^d)
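The derivation can be sanity-checked numerically for a few values of b and d (a quick check of my own, not part of the original slides).

    def lhs(b, d):
        return sum((d + 1 - i) * b**i for i in range(d + 1))

    def rhs(b, d):
        # -(d + 1)/(b - 1) + b*(b^(d+1) - 1)/(b - 1)^2, the closed form derived above
        return -(d + 1) / (b - 1) + b * (b**(d + 1) - 1) / (b - 1) ** 2

    for b in (2, 3, 10):
        for d in (1, 3, 6):
            assert abs(lhs(b, d) - rhs(b, d)) < 1e-6 * lhs(b, d)
            print(b, d, lhs(b, d), round(lhs(b, d) / b**d, 3))   # the ratio to b^d stays bounded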