Chapter 4. Uninformed Search Strategies


An uninformed (a.k.a. blind, brute-force) search algorithm generates the search tree without using any domain-specific knowledge. The two basic approaches differ as to whether you check for a goal when a node is generated or when it is expanded.

Checking at generation time:

    if start_state is a goal state
        return the empty action list
    fringe := [make_node(start_state, null, null)]
    while fringe is not empty
        n := select and remove some node from the fringe
        for each action a applicable to n.state
            s := succ(n.state, a)
            n' := make_node(s, a, n)
            if s is a goal state
                return n'.actionlistfromroot()
            add n' to fringe
    return failure

Checking at expansion time:

    fringe := [make_node(start_state, null, null)]
    while fringe is not empty
        n := select and remove some node from the fringe
        if n.state is a goal state
            return n.actionlistfromroot()
        for each action a applicable to n.state
            add make_node(succ(n.state, a), a, n) to fringe
    return failure

Breadth-First Search

BFS is the general search algorithm where the "insert" function is "enqueue-at-end". This means that newly generated nodes are added to the fringe at the end, so they are expanded last (FIFO queue). BFS first considers all paths of length 1, then all paths of length 2, and so on; this is why it is called "breadth-first". The picture below shows intuitively how BFS works.
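As a concrete sketch of the generation-time variant with a FIFO fringe, here is a minimal Python version; the toy graph and its state names are invented for illustration, not taken from the text:

```python
from collections import deque

# Hypothetical toy state space: reach 'G' from 'S'.
GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G'], 'G': []}

def bfs(start, is_goal, successors):
    """Breadth-first search, checking for a goal at generation time.

    Returns the list of states from start to goal, or None on failure.
    """
    if is_goal(start):
        return [start]
    fringe = deque([[start]])           # FIFO queue of paths (enqueue-at-end)
    while fringe:
        path = fringe.popleft()         # shallowest path is expanded first
        for s in successors(path[-1]):
            new_path = path + [s]
            if is_goal(s):              # goal test when the node is generated
                return new_path
            fringe.append(new_path)
    return None

print(bfs('S', lambda s: s == 'G', GRAPH.get))  # -> ['S', 'A', 'C', 'G']
```

Because the fringe is served oldest-first, the path returned is a shallowest one.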

Consider a search tree where expanding a node always gives exactly b children, where the branching factor b >= 2. Then there is 1 root node, b nodes at depth 1, b*b nodes at depth 2, b*b*b nodes at depth 3, and so on. The number of nodes at depth d or less is N = 1 + b + b^2 + ... + b^d. Somewhat surprisingly, N = O(b^d): intuitively, almost all the nodes are at the deepest level, and the number of shallower nodes is negligible (the sum is at most b^d * b/(b-1)).

For a problem with branching factor b where the first solution is at depth d, the time complexity of BFS is O(b^d). The space complexity of BFS is also O(b^d). BFS is complete (if b is finite). BFS is optimal if all path costs are the same, because it always finds the shallowest goal node first.

The big problem with breadth-first search is that it uses about as much space as it uses time. Any real computer will run out of space before it runs out of time at this rate.

Depth-First Search

DFS is the general search algorithm where the "insert" function is "enqueue-at-front". This means that newly generated nodes are added to the fringe at the beginning, so they are expanded immediately (LIFO queue, i.e. a stack).

DFS goes down a path until it reaches a node that has no children. Then DFS "backtracks" and expands a sibling of the node that had no children. If this node has no siblings, DFS looks for a sibling of the grandparent, and so on. See the picture below for an illustration of DFS. No matter how deep the current node is, DFS will always go deeper if the node has a child. Thus, if a solution is closer to the root, DFS may not find it first (DFS is not optimal). The major weakness of DFS is that it will fail to terminate if there is an infinite path "to the left of" the path to the first solution; in other words, for many problems DFS is not complete: a solution exists but DFS cannot find it. The major advantage of DFS is that it uses only O(bm) space if the branching factor is b and the maximum depth is m. (Explanation: there are m nodes on the current path, and for each of these at most b-1 siblings must be stored.)
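A stack-based Python sketch of the same idea, again on an invented toy graph (which is finite and acyclic, so the non-termination problem described above cannot arise here):

```python
# Hypothetical toy state space: reach 'G' from 'S'.
GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G'], 'G': []}

def dfs(start, is_goal, successors):
    """Depth-first tree search with an explicit stack (enqueue-at-front).

    Warning: on an infinite path this loops forever (DFS is not complete).
    """
    fringe = [[start]]                  # LIFO stack of paths
    while fringe:
        path = fringe.pop()             # newest path first: always go deeper
        if is_goal(path[-1]):           # goal test when the node is expanded
            return path
        # Push children in reverse so the leftmost child is expanded first.
        for s in reversed(successors(path[-1])):
            fringe.append(path + [s])
    return None
```

Only the current path and the unexpanded siblings along it sit on the stack, which is the O(bm) space bound mentioned above.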

Time complexity of DFS (worst case: the solution is at depth m): O(b^m), regardless of whether we test at generation or expansion time. Thus DFS can be better than BFS for a dense solution space.

Depth-Limited Search

This is just a depth-first search with a cutoff depth. Here is the algorithm (for the test-at-expansion-time case):

    fringe := [make_node(start_state, null, null)]
    reached_limit := false
    while fringe is not empty
        n := fringe.pop()
        if n.state is a goal state
            return n.actionlistfromroot()
        if n.depth == limit
            reached_limit := true
        else
            for each action a applicable to n.state
                fringe.push(make_node(succ(n.state, a), a, n))
    return reached_limit ? cutoff : failure

Properties of DLS: it won't run forever unless b is infinite; it is not complete, because a goal may lie below the cutoff; it is not optimal.
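A recursive Python sketch of depth-limited search, distinguishing the cutoff outcome from outright failure; the tiny graph in the usage line is an illustrative assumption:

```python
CUTOFF, FAILURE = 'cutoff', 'failure'

def dls(path, is_goal, successors, limit):
    """Depth-limited DFS on the path so far; limit counts remaining depth."""
    if is_goal(path[-1]):
        return path
    if limit == 0:
        return CUTOFF                   # the depth cutoff was reached here
    cutoff_occurred = False
    for s in successors(path[-1]):
        result = dls(path + [s], is_goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result               # a solution path
    return CUTOFF if cutoff_occurred else FAILURE

GRAPH = {'S': ['A'], 'A': ['G'], 'G': []}
print(dls(['S'], lambda s: s == 'G', GRAPH.get, 1))  # 'cutoff': goal is at depth 2
```

Returning cutoff rather than failure records that the search ran out of depth, not out of states, which matters for iterative deepening below.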

Time complexity (worst case: every path reaches the cutoff c): O(b^c), regardless of whether we test at generation or expansion time. Space complexity is O(bc), i.e. linear space.

Iterative Deepening Search

Iterative deepening is a very simple, very good, but counter-intuitive idea that was not discovered until the mid-1970s; then it was invented by many people simultaneously. The idea is to run depth-limited DFS repeatedly, with an increasing depth limit, until a solution is found. Intuitively this seems dubious, because each repetition of depth-limited DFS uselessly repeats all the work done by the previous repetitions. But the repeated work is not significant, because a branching factor b > 1 implies that the number of nodes at depth k far exceeds the number of nodes at depth k-1 or less.

Iterative deepening simulates BFS with linear space complexity. For a problem with branching factor b where the first solution is at depth d, the time complexity of iterative deepening is O(b^d), and its space complexity is O(bd). It is complete for finite b, and it is optimal in terms of solution depth (and optimal in general if path cost is a non-decreasing function of depth).

Uniform Cost Search

Uniform Cost Search is the best uninformed algorithm (i.e. one using no heuristics) for search problems with varying step costs: it finds an optimal-cost solution in any general graph. As the name suggests, it expands paths in order of cost, so the branches it is exploring at any moment are more or less the same in cost.
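Circling back to iterative deepening: a minimal Python sketch of the driver loop. The depth-limited routine from the previous section is repeated so the block stands alone, and the toy graph is an illustrative assumption:

```python
CUTOFF, FAILURE = 'cutoff', 'failure'

def dls(path, is_goal, successors, limit):
    """Depth-limited DFS, as in the previous section."""
    if is_goal(path[-1]):
        return path
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for s in successors(path[-1]):
        result = dls(path + [s], is_goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result
    return CUTOFF if cutoff_occurred else FAILURE

def ids(start, is_goal, successors):
    """Iterative deepening: run DLS with limits 0, 1, 2, ...

    Returns a shallowest solution path, or FAILURE once some limit is
    deep enough to exhaust a finite space without hitting the cutoff.
    """
    limit = 0
    while True:
        result = dls([start], is_goal, successors, limit)
        if result != CUTOFF:
            return result
        limit += 1

GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G'], 'G': []}
print(ids('S', lambda s: s == 'G', GRAPH.get))  # -> ['S', 'A', 'C', 'G']
```

Like BFS, this finds a shallowest goal, but it only ever stores one depth-limited path at a time.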

Uniform Cost Search again demands the use of a priority queue. Recall that depth-first search can be viewed as a priority queue where the priority is the depth of a node and the element stored is the path from the root to that node. The priority queue used here is similar, with the priority being the cumulative cost up to the node. Unlike depth-first search, where the maximum depth had the maximum priority, Uniform Cost Search gives the minimum cumulative cost the maximum priority. The algorithm using this priority queue is the following:

    Insert the root into the queue
    While the queue is not empty
        Dequeue the maximum-priority element from the queue
        (if priorities are equal, the alphabetically smaller path is chosen)
        If the path ends in the goal state, print the path and exit
        Else insert all the children of the dequeued element,
        with their cumulative costs as priorities

Now let us apply the algorithm to the search tree above and see what it gives us. We will go through each iteration and look at the final output. Each element of the priority queue is written as [path, cumulative cost].

    Initialization: { [S, 0] }
    Iteration 1: { [S->A, 1], [S->G, 12] }
    Iteration 2: { [S->A->C, 2], [S->A->B, 4], [S->G, 12] }
    Iteration 3: { [S->A->C->D, 3], [S->A->B, 4], [S->A->C->G, 4], [S->G, 12] }
    Iteration 4: { [S->A->B, 4], [S->A->C->G, 4], [S->A->C->D->G, 6], [S->G, 12] }
    Iteration 5: { [S->A->C->G, 4], [S->A->C->D->G, 6], [S->A->B->D, 7], [S->G, 12] }
    Iteration 6 gives the final output: S->A->C->G.

It is worth mentioning that:

- The creation of the tree is not part of the algorithm; it is shown only for visualization.
- The algorithm returns the first goal path it dequeues; it does not search for all paths.
- The returned path is optimal in terms of cost: at any point in the execution, the algorithm never expands a node whose cost is greater than the cost of the cheapest solution in the graph.

The elements in the priority queue have roughly the same cost at any given time, hence the name Uniform Cost Search. This may not be apparent from the small example above, but it becomes so on much larger graphs. Uniform Cost Search behaves exactly like Breadth First Search if all the edges are given a cost of 1.
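The trace above can be reproduced with a Python min-heap (heapq), in which the minimum cumulative cost is served first and, thanks to tuple comparison, ties fall back to the alphabetically smaller path. The edge costs below are read off the worked trace; the graph itself is not given explicitly in the text:

```python
import heapq

# Edge costs reconstructed from the worked trace above.
GRAPH = {
    'S': [('A', 1), ('G', 12)],
    'A': [('B', 3), ('C', 1)],
    'B': [('D', 3)],
    'C': [('D', 1), ('G', 2)],
    'D': [('G', 3)],
    'G': [],
}

def ucs(start, goal, graph):
    """Uniform-cost (tree) search: always expand the cheapest path.

    Returns (path, cost) for an optimal-cost path, or None on failure.
    heapq is a min-heap, so the minimum cumulative cost has the maximum
    priority; equal costs compare the paths, picking the alphabetically
    smaller one, exactly as in the trace.
    """
    queue = [(0, [start])]
    while queue:
        cost, path = heapq.heappop(queue)
        if path[-1] == goal:            # goal test when the path is dequeued
            return path, cost
        for state, step_cost in graph[path[-1]]:
            heapq.heappush(queue, (cost + step_cost, path + [state]))
    return None

print(ucs('S', 'G', GRAPH))  # -> (['S', 'A', 'C', 'G'], 4)
```

Running it pops the paths in the same order as Iterations 1-6 above and returns S->A->C->G with cost 4.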