Search I. slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig


Problem-Solving Agents
Intelligent agents can solve problems by searching a state space.
State space: the agent's model of the world, usually a set of discrete states. E.g., in driving, the states in the model could be towns/cities.
Goal state(s): a goal is defined as a desirable state for an agent. There may be many states which satisfy the goal test (e.g., drive to a town with a ski resort) or just one state which satisfies the goal (e.g., drive to Mammoth).
Operators (actions, successor function): operators are legal actions which the agent can take to move from one state to another.

Initial Simplifying Assumptions
Environment is static: no changes in the environment while the problem is being solved.
Environment is observable.
Environment and actions are discrete (typically assumed, but we will see some exceptions).
Environment is deterministic.

Example: Traveling in Romania. On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest. Formulate goal: be in Bucharest. Formulate problem: states are the various cities; actions/operators are drives between cities. Find a solution by searching through states to find a goal: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest. Execute the actions that lead to a solution.

Example: Traveling in Romania

State-Space Problem Formulation. A problem is defined by four items: 1. initial state, e.g., "at Arad". 2. actions or successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}. 3. goal test (or set of goal states), e.g., x = "at Bucharest", Checkmate(x). 4. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0. A solution is a sequence of actions leading from the initial state to a goal state.
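The four-part formulation above can be sketched in Python. The map fragment below uses the standard Romania road distances; the class and method names are illustrative assumptions, not part of the slides:

```python
# A sketch of the four-part problem formulation, using a fragment of the
# Romania map (distances from the standard AIMA figure).
ROADS = {  # successor data: city -> {neighboring city: step cost}
    "Arad":    {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu":   {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}

class RomaniaProblem:
    initial = "Arad"                      # 1. initial state
    def successors(self, x):              # 2. S(x) = set of action-state pairs
        return [(f"go({y})", y) for y in ROADS.get(x, {})]
    def goal_test(self, x):               # 3. goal test
        return x == "Bucharest"
    def step_cost(self, x, a, y):         # 4. c(x, a, y), here road distance
        return ROADS[x][y]
```

A solution is then any action sequence that the successor function allows from the initial state to a state passing the goal test, e.g., Arad → Sibiu → Fagaras → Bucharest.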

Example: Formulating the Navigation Problem. Set of states: individual cities, e.g., Irvine, SF, Las Vegas, Reno, Boise, Phoenix, Denver. Operators: freeway routes from one city to another, e.g., Irvine to SF via 5, SF to Seattle, etc. Start state: the current city where we are, Irvine. Goal states: the set of cities we would like to be in, e.g., cities which are closer than Irvine. Solution: a specific goal city, e.g., Boise, and a sequence of operators which get us there, e.g., Irvine to SF via 5, SF to Reno via 80, etc.

Abstraction. Definition of abstraction: the process of removing irrelevant detail to create an abstract representation: "high-level", ignoring irrelevant details. Navigation example: how do we define states and operators? The first step is to abstract the big picture, i.e., solve a map problem: nodes = cities, links = freeways/roads (a high-level description). This description is an abstraction of the real problem; we can later worry about details like freeway onramps, refueling, etc. Abstraction is critical for automated problem solving: we must create an approximate, simplified model of the world for the computer to deal with, since the real world is too detailed to model exactly. Good abstractions retain all important details.

The State-Space Graph Graphs: nodes, arcs, directed arcs, paths Search graphs: States are nodes operators are directed arcs solution is a path from start S to goal G Problem formulation: Give an abstract description of states, operators, initial state and goal state. Problem solving: Generate a part of the search space that contains a solution

The Traveling Salesperson Problem. Find the shortest tour that visits all cities without visiting any city twice and returns to the starting point. State: sequence of cities visited. S_0 = A. G = a complete tour.

Example: 8-queens problem

State-space problem formulation. States? Any arrangement of n ≤ 8 queens, or arrangements of n ≤ 8 queens in the leftmost n columns, 1 per column, such that no queen attacks any other. Initial state? No queens on the board. Actions? Add a queen to any empty square, or add a queen to the leftmost empty square such that it is not attacked by other queens. Goal test? 8 queens on the board, none attacked. Path cost? 1 per move.

Example: Robot Assembly States Initial state Actions Goal test Path Cost

Example: Robot Assembly States: configuration of robot (angles, positions) and object parts Initial state: any configuration of robot and object parts Actions: continuous motion of robot joints Goal test: object assembled? Path Cost: time-taken or number of actions

Learning a spam email classifier States Initial state Actions Goal test Path Cost

Learning a spam email classifier. States: settings of the parameters in our model. Initial state: random parameter settings. Actions: moving in parameter space. Goal test: optimal accuracy on the training data. Path cost: time taken to find optimal parameters. (Note: this is an optimization problem; many machine learning problems can be cast as optimization.)

Example: 8-puzzle states? initial state? actions? goal test? path cost?

Example: 8-puzzle. States? Locations of tiles. Initial state? Given. Actions? Move blank left, right, up, down. Goal test? Goal state (given). Path cost? 1 per move.
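The 8-puzzle successor function ("move blank left, right, up, down") can be sketched as follows; the tuple encoding of states (read row by row, 0 for the blank) is an assumption for illustration:

```python
def moves(state):
    """Successors of an 8-puzzle state, a 9-tuple read row by row with
    0 as the blank. Returns (action, new_state) pairs; each action
    slides the blank one square in the named direction."""
    i = state.index(0)                    # position of the blank
    row, col = divmod(i, 3)
    result = []
    for action, dr, dc in [("up", -1, 0), ("down", 1, 0),
                           ("left", 0, -1), ("right", 0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:     # stay on the board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]       # swap blank with adjacent tile
            result.append((action, tuple(s)))
    return result
```

A corner blank has 2 legal moves, an edge blank 3, and a center blank 4, which is where the branching factor of the search tree comes from.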

A Water Jug Problem You have a 4-gallon and a 3-gallon water jug You have a faucet with an unlimited amount of water You need to get exactly 2 gallons in 4-gallon jug

Puzzle-solving as Search. State representation: (x, y), where x = contents of the 4-gallon jug and y = contents of the 3-gallon jug. Start state: (0, 0). Goal state: (2, n). Operators: fill 3-gallon from faucet; fill 4-gallon from faucet; fill 3-gallon from 4-gallon; fill 4-gallon from 3-gallon; empty 3-gallon into 4-gallon; empty 4-gallon into 3-gallon; dump 3-gallon down drain; dump 4-gallon down drain.

Production Rules for the Water Jug Problem
1. (x, y) → (4, y) if x < 4 (fill the 4-gallon jug)
2. (x, y) → (x, 3) if y < 3 (fill the 3-gallon jug)
3. (x, y) → (x − d, y) if x > 0 (pour some water out of the 4-gallon jug)
4. (x, y) → (x, y − d) if y > 0 (pour some water out of the 3-gallon jug)
5. (x, y) → (0, y) if x > 0 (empty the 4-gallon jug on the ground)
6. (x, y) → (x, 0) if y > 0 (empty the 3-gallon jug on the ground)
7. (x, y) → (4, y − (4 − x)) if x + y ≥ 4 and y > 0 (pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full)
8. (x, y) → (x − (3 − y), 3) if x + y ≥ 3 and x > 0 (pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full)
9. (x, y) → (x + y, 0) if x + y ≤ 4 and y > 0 (pour all the water from the 3-gallon jug into the 4-gallon jug)
10. (x, y) → (0, x + y) if x + y ≤ 3 and x > 0 (pour all the water from the 4-gallon jug into the 3-gallon jug)
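These production rules translate directly into a successor function; a minimal sketch (rules 3 and 4 are omitted here, since they pour out an arbitrary amount d and would yield infinitely many successors):

```python
def successors(state):
    """Apply the deterministic water-jug production rules to state (x, y),
    where x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug.
    Returns (rule number, new state) pairs."""
    x, y = state
    out = []
    if x < 4: out.append((1, (4, y)))                       # fill 4-gallon
    if y < 3: out.append((2, (x, 3)))                       # fill 3-gallon
    if x > 0: out.append((5, (0, y)))                       # empty 4-gallon
    if y > 0: out.append((6, (x, 0)))                       # empty 3-gallon
    if x + y >= 4 and y > 0:
        out.append((7, (4, y - (4 - x))))                   # 3 -> 4 until full
    if x + y >= 3 and x > 0:
        out.append((8, (x - (3 - y), 3)))                   # 4 -> 3 until full
    if x + y <= 4 and y > 0:
        out.append((9, (x + y, 0)))                         # all of 3 into 4
    if x + y <= 3 and x > 0:
        out.append((10, (0, x + y)))                        # all of 4 into 3
    return out
```

For example, from (0, 0) the applicable rules are 1 and 2, and rule 2 followed by rule 9 takes (0, 0) to (0, 3) to (3, 0).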

One Solution to the Water Jug Problem
Gallons in the 4-gallon jug | Gallons in the 3-gallon jug | Rule applied
0 | 0 | 2
0 | 3 | 9
3 | 0 | 2
3 | 3 | 7
4 | 2 | 5
0 | 2 | 9
2 | 0 |

Tree-based Search. Basic idea: exploration of the state space by generating successors of already-explored states (a.k.a. expanding states). Every state is evaluated: is it a goal state? In practice, the solution space can be a graph, not a tree, e.g., the 8-puzzle; the more general approach is graph search. Tree search can end up repeatedly visiting the same nodes, unless it keeps track of all nodes visited, but this could take vast amounts of memory.

Tree search example


Tree search example. The order in which nodes are expanded is what differentiates different search algorithms.

States versus Nodes A state is a (representation of) a physical configuration A node is a data structure constituting part of a search tree contains info such as: state, parent node, action, path cost g(x), depth The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

State Spaces versus Search Trees State Space Set of valid states for a problem Linked by operators e.g., 20 valid states (cities) in the Romanian travel problem Search Tree Root node = initial state Child nodes = states that can be visited from parent Note that the depth of the tree can be infinite E.g., via repeated states Partial search tree Portion of tree that has been expanded so far Fringe Leaves of partial search tree, candidates for expansion Search trees = data structure to search state-space

Search Tree for the 8 puzzle problem

Search Strategies A search strategy is defined by picking the order of node expansion Strategies are evaluated along the following dimensions: completeness: does it always find a solution if one exists? time complexity: number of nodes generated space complexity: maximum number of nodes in memory optimality: does it always find a least-cost solution? Time and space complexity are measured in terms of b: maximum branching factor of the search tree d: depth of the least-cost solution m: maximum depth of the state space (may be )

Why Search Can Be Difficult
Assuming b = 10, 1000 nodes/sec, 100 bytes/node:
Depth of solution | Nodes to expand | Time | Memory
0 | 1 | 1 millisecond | 100 bytes
2 | 111 | 0.1 seconds | 11 kilobytes
4 | 11,111 | 11 seconds | 1 megabyte
8 | 10^8 | 31 hours | 11 gigabytes
12 | 10^12 | 35 years | 111 terabytes

Breadth-First Search (BFS). Expand the shallowest unexpanded node. Fringe: nodes waiting in a queue to be explored, also called OPEN. Implementation: for BFS, the fringe is a first-in-first-out (FIFO) queue; new successors go at the end of the queue. Repeated states? Simple strategy: do not add the parent of a node as a leaf.
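A minimal BFS sketch in Python using a FIFO queue. The small graph below plays the role of the map example in these slides; its adjacency is an assumption reconstructed for illustration:

```python
from collections import deque

# Small example map: S = start, G = goal (adjacency assumed for illustration)
GRAPH = {"S": ["A", "D"], "A": ["S", "B", "D"], "B": ["A", "C", "E"],
         "C": ["B", "G"], "D": ["S", "A", "E"], "E": ["B", "D", "F"],
         "F": ["E", "G"], "G": ["C", "F"]}

def bfs(graph, start, goal):
    """Breadth-first search: the fringe is a FIFO queue of paths, so the
    shallowest unexpanded node is always selected first. A visited set
    keeps repeated states out of the queue. Returns a path with the
    fewest edges, or None."""
    frontier = deque([[start]])            # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()          # shallowest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            if nxt not in visited:         # skip repeated states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because the queue is FIFO, all depth-k nodes are expanded before any depth-(k+1) node, which is why BFS finds a shallowest goal.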

Example: Map Navigation. State space: S = start, G = goal, other nodes (A, B, C, D, E, F) = intermediate states, links = legal transitions.

BFS Search Tree. Queue = {S}. Select S. Goal(S) = true? If not, Expand(S).

BFS Search Tree. Queue = {A, D}. Select A. Goal(A) = true? If not, Expand(A).

BFS Search Tree. Queue = {D, B, D}. Select D. Goal(D) = true? If not, Expand(D).

BFS Search Tree. Queue = {B, D, A, E}. Select B, etc.

BFS Search Tree, level 3. Queue = {C, E, S, E, S, B, B, F}.

BFS Search Tree, level 4. Expand the queue until G is at the front. Select G. Goal(G) = true.

Breadth-First Search

Depth-First Search (DFS). Expand the deepest unexpanded node. Implementation: for DFS, the fringe is a last-in-first-out (LIFO) queue, i.e., a stack; new successors go at the beginning of the queue. Repeated nodes? Simple strategy: do not add a state as a leaf if that state is on the path from the root to the current node.
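The same search with a stack instead of a FIFO queue gives DFS; a minimal sketch, again on a small example graph whose adjacency is assumed for illustration:

```python
# Small example map: S = start, G = goal (adjacency assumed for illustration)
GRAPH = {"S": ["A", "D"], "A": ["S", "B", "D"], "B": ["A", "C", "E"],
         "C": ["B", "G"], "D": ["S", "A", "E"], "E": ["B", "D", "F"],
         "F": ["E", "G"], "G": ["C", "F"]}

def dfs(graph, start, goal):
    """Depth-first search: the fringe is a stack (LIFO), so successors of
    the most recently expanded node are explored first. Repeated nodes
    are avoided by skipping any state already on the current path."""
    frontier = [[start]]                   # stack of paths
    while frontier:
        path = frontier.pop()              # deepest path first
        state = path[-1]
        if state == goal:
            return path
        for nxt in reversed(graph[state]): # reversed so the first-listed
            if nxt not in path:            # neighbor is popped first
                frontier.append(path + [nxt])
    return None
```

The only change from BFS is popping from the end of the list instead of the front; that single change turns level-by-level exploration into plunge-to-the-bottom exploration.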

DFS Search Tree. Queue = {A, D}

DFS Search Tree. Queue = {B, D, D}

DFS Search Tree. Queue = {C, E, D, D}

DFS Search Tree. Queue = {D, F, D, D}

DFS Search Tree. Queue = {G, D, D}

Note that the fringe argument must be an empty queue, and the type of the queue affects the order of the search.

Evaluation of Search Algorithms Completeness does it always find a solution if one exists? Optimality does it always find a least-cost (or min depth) solution? Time complexity number of nodes generated (worst case) Space complexity number of nodes in memory (worst case) Time and space complexity are measured in terms of b: maximum branching factor of the search tree d: depth of the least-cost solution m: maximum depth of the state space (may be )

Breadth-First Search (BFS) Properties. Complete? Yes. Optimal? Only if path cost is a non-decreasing function of depth. Time complexity: O(b^d). Space complexity: O(b^d). Main practical drawback? Exponential space complexity.

Complexity of Breadth-First Search. Time complexity: assume (worst case) that there is 1 goal leaf at the right-hand side at depth d, so BFS will generate b + b^2 + ... + b^d + (b^(d+1) − b) = O(b^(d+1)) nodes. Space complexity: how many nodes can be in the queue (worst case)? When expanding the last node at depth d there are b^(d+1) unexpanded nodes in the queue, so space is O(b^(d+1)).

Examples of Time and Memory Requirements for Breadth-First Search
Assuming b = 10, 10,000 nodes/sec, 1 kbyte/node:
Depth of solution | Nodes generated | Time | Memory
2 | 1,100 | 0.11 seconds | 1 MB
4 | 111,100 | 11 seconds | 106 MB
8 | 10^9 | 31 hours | 1 TB
12 | 10^13 | 35 years | 10 PB

What is the Complexity of Depth-First Search? Time complexity: maximum tree depth = m; assume (worst case) that there is 1 goal leaf at the right-hand side at depth d, so DFS will generate O(b^m) nodes. Space complexity: how many nodes can be in the queue (worst case)? At depth m we have b nodes, with b − 1 nodes at each earlier depth, so the total is b + (m − 1)(b − 1) = O(bm).

Examples of Time and Memory Requirements for Depth-First Search
Assuming b = 10, m = 12, 10,000 nodes/sec, 1 kbyte/node:
Depth of solution | Nodes generated | Time | Memory
2 | 10^12 | 3 years | 120 KB
4 | 10^12 | 3 years | 120 KB
8 | 10^12 | 3 years | 120 KB
12 | 10^12 | 3 years | 120 KB

Depth-First Search (DFS) Properties Complete? Not complete if tree has unbounded depth Optimal? No Time complexity? Exponential Space complexity? Linear

Comparing DFS and BFS. Time complexity: the same asymptotically, but in the worst case BFS is always better than DFS, while on average DFS can be better if there are many goals, no loops, and no infinite paths. BFS is much worse memory-wise: DFS uses linear space, while BFS may store the whole search space. In general: BFS is better if the goal is not deep, if there are infinite paths, if there are many loops, or if the search space is small; DFS is better if there are many goals and not many loops, and DFS is much better in terms of memory.

DFS with a depth limit L. Standard DFS, but the tree is not explored below some depth limit L. This solves the problem of infinitely deep paths with no solutions, but the search will be incomplete if the solution lies below the depth limit. The depth limit L can be selected based on problem knowledge, e.g., the diameter of the state space, such as the maximum number of steps between any 2 cities, but typically this is not known ahead of time in practice.

Depth-First Search with a depth-limit, L = 5

Depth-First Search with a depth-limit

Iterative Deepening Search (IDS). Run multiple DFS searches with increasing depth limits: set L = 1; while there is no solution, do DFS from the initial state S_0 with cutoff L; if the goal is found, stop and return the solution, else increment the depth limit L.
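The loop above can be sketched as a depth-limited DFS wrapped in an increasing-limit loop; the example graph's adjacency is assumed for illustration:

```python
# Small example map: S = start, G = goal (adjacency assumed for illustration)
GRAPH = {"S": ["A", "D"], "A": ["S", "B", "D"], "B": ["A", "C", "E"],
         "C": ["B", "G"], "D": ["S", "A", "E"], "E": ["B", "D", "F"],
         "F": ["E", "G"], "G": ["C", "F"]}

def depth_limited(graph, path, goal, limit):
    """DFS from the end of `path`, cutting off `limit` edges below it."""
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:                         # cutoff reached
        return None
    for nxt in graph[state]:
        if nxt not in path:                # no state repeated on this path
            found = depth_limited(graph, path + [nxt], goal, limit - 1)
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: repeat depth-limited DFS with L = 0, 1, 2, ...
    The first limit at which the goal appears is the shallowest depth,
    so IDS returns a minimum-depth solution like BFS."""
    for limit in range(max_depth + 1):
        found = depth_limited(graph, [start], goal, limit)
        if found:
            return found
    return None
```

Each iteration uses only DFS-sized memory, yet the outer loop restores the completeness and shallowest-solution guarantee of BFS.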

Iterative deepening search L=0

Iterative deepening search L=1

Iterative deepening search L=2

Iterative Deepening Search L=3

Iterative deepening search

Properties of Iterative Deepening Search. Space complexity = O(bd) (since it is like depth-first search run multiple times, with maximum depth limit d). Time complexity: b + (b + b^2) + ... + (b + ... + b^d) = O(b^d) (i.e., asymptotically the same as BFS, or DFS to limited depth d, in the worst case). Complete? Yes. Optimal? Only if path cost is a non-decreasing function of depth. IDS combines the small memory footprint of DFS with the completeness guarantee of BFS.

IDS in Practice. Isn't IDS wasteful? It repeats searches on different iterations. Compare IDS and BFS with, e.g., b = 10 and d = 5: N(IDS) ≈ db + (d − 1)b^2 + ... + b^d = 123,450, while N(BFS) ≈ b + b^2 + ... + b^d = 111,110. The difference is only about 10%: most of the time is spent at depth d, which costs the same amount in both algorithms. In practice, IDS is the preferred uninformed search method when the search space is large and the solution depth is unknown.

Bidirectional Search. Idea: simultaneously search forward from S and backwards from G, and stop when the two searches meet in the middle; this requires keeping track of the intersection of the 2 open sets of nodes. What does searching backwards from G mean? We need a way to specify the predecessors of G, which can be difficult (e.g., what are the predecessors of checkmate in chess?). What if there are multiple goal states? What if there is only a goal test and no explicit list of goals? Complexity: time complexity at best is O(2 b^(d/2)) = O(b^(d/2)); memory complexity is the same.

Bi-Directional Search

Uniform Cost Search. Optimality: the path found has the lowest cost; the algorithms so far are only optimal under restricted circumstances. Let g(n) = cost from the start state S to node n. Uniform cost search: always expand the node on the fringe with minimum cost g(n). Note that if costs are equal (or almost equal), it will behave similarly to BFS.
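A minimal uniform-cost search sketch using a priority queue ordered by g(n). The step costs below are assumptions added for illustration (the slides give no costs for this map):

```python
import heapq

# Example map with assumed step costs: S = start, G = goal
COSTS = {"S": {"A": 2, "D": 5}, "A": {"S": 2, "B": 2, "D": 1},
         "B": {"A": 2, "C": 4, "E": 1}, "C": {"B": 4, "G": 6},
         "D": {"S": 5, "A": 1, "E": 2}, "E": {"B": 1, "D": 2, "F": 2},
         "F": {"E": 2, "G": 1}, "G": {"C": 6, "F": 1}}

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the fringe node with minimum
    path cost g(n). The goal test is applied on expansion (not on
    generation), which is what makes the returned cost optimal.
    Returns (cost, path) or None."""
    frontier = [(0, [start])]              # priority queue ordered by g(n)
    explored = set()
    while frontier:
        g, path = heapq.heappop(frontier)  # cheapest path so far
        state = path[-1]
        if state == goal:
            return g, path
        if state in explored:              # cheaper path already expanded
            continue
        explored.add(state)
        for nxt, cost in graph[state].items():
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, path + [nxt]))
    return None
```

On this cost table the cheapest S-to-G route costs 8, even though a shallower route exists; BFS would return the fewest-edges path regardless of its cost, while UCS trades depth for total cost.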

Uniform Cost Search

Optimality of Uniform Cost Search? Assume that every step costs at least ε > 0 Proof of Completeness: Given that every step will cost more than 0, and assuming a finite branching factor, there is a finite number of expansions required before the total path cost is equal to the path cost of the goal state. Hence, we will reach it in a finite number of steps. Proof of Optimality given Completeness: Assume UCS is not optimal. Then there must be a goal state with path cost smaller than the goal state which was found (invoking completeness) However, this is impossible because UCS would have expanded that node first by definition. Contradiction.

Complexity of Uniform Cost Search. Let C* be the cost of the optimal solution, and assume that every step costs at least ε > 0. Worst-case time and space complexity is O(b^(1 + floor(C*/ε))). Why? floor(C*/ε) is approximately the depth of the solution if all step costs are approximately equal.

Comparison of Uninformed Search Algorithms