COMP9414/9814/3411: Artificial Intelligence. Week 3: Path Search. Russell & Norvig, Chapter 3.


Motivation

Reactive and Model-Based Agents choose their actions based only on what they currently perceive, or have perceived in the past. A Planning Agent can use search techniques to plan several steps ahead in order to achieve its goal(s).

There are two classes of search strategies:
Uninformed search strategies can only distinguish goal states from non-goal states.
Informed search strategies use heuristics to try to get closer to the goal.

Romania Street Map

(Figure: road map of Romania showing cities including Zerind, Timisoara, Lugoj, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti, Neamt, Iasi, Vaslui, Dobreta, Mehadia, Urziceni, Bucharest, Hirsova, Craiova, Giurgiu and Eforie.)

Example: Romania

On a touring holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest; non-refundable ticket.

Step 1 Formulate goal: be in Bucharest on time.
Step 2 Specify task:
  states: various cities
  operators or actions (= transitions between states): drive between cities
Step 3 Find solution (= action sequence): a sequence of cities, e.g. Sibiu, Fagaras, Bucharest.
Step 4 Execute: drive through all the cities given by the solution.

Single-State Task Specification

A task is specified by states and actions (a Python sketch of this specification for the Romania task follows after this slide group):
state space, e.g. the cities on the map
initial state, e.g. "at Arad"
actions or operators (or successor function S(x)), e.g. Arad → Zerind, Arad → Sibiu, etc.
goal test: check whether a state is a goal state; in this case there is only one goal specified ("at Bucharest")
path cost, e.g. sum of distances, number of actions, etc.

Choosing States and Actions

The real world is absurdly complex, so the state space must be abstracted for problem solving:
(abstract) state = set of real states
(abstract) action = complex combination of real actions; e.g. "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
(abstract) solution = set of real paths that are solutions in the real world.

Example Problems

Toy problems: concise, exact description.
Real world problems: don't have a single agreed description.

The 8-Puzzle

(Figure: Start State 7 2 4 / 5 _ 6 / 8 3 1 and Goal State 1 2 3 / 4 5 6 / 7 8 _.)

states:? operators:? goal test:? path cost:?
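
As a concrete illustration of the single-state task specification above (not from the original slides), the Romania route-finding task could be sketched in Python as follows; the names ROADS, successors, goal_test and path_cost are arbitrary choices, and only part of the road map is included.

# A minimal sketch of the single-state task specification for the Romania
# route-finding example. Names (ROADS, initial_state, successors, goal_test,
# path_cost) are illustrative choices, not from the lecture slides.

ROADS = {  # symmetric road segments with distances in km (subset of the map)
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211, ('Rimnicu Vilcea', 'Pitesti'): 97,
    ('Pitesti', 'Bucharest'): 101,
}

initial_state = 'Arad'

def successors(city):
    """Successor function S(x): the cities reachable by one drive action."""
    for (a, b), cost in ROADS.items():
        if a == city:
            yield b, cost
        elif b == city:
            yield a, cost

def goal_test(city):
    return city == 'Bucharest'      # the single goal: "at Bucharest"

def path_cost(cities):
    """Path cost = sum of distances along a sequence of cities."""
    return sum(dict(successors(a))[b] for a, b in zip(cities, cities[1:]))

print(path_cost(['Arad', 'Sibiu', 'Fagaras', 'Bucharest']))   # 450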

The 8-Puzzle

(Figure: the Start and Goal states again, as above.)

states: integer locations of the tiles (ignore intermediate positions)
operators: move blank left, right, up, down (ignore unjamming etc.)
goal test: state = goal state (given)
path cost: 1 per move

Robotic Assembly

(Figure: a robot arm, with joints labelled P and R.)

states:? operators:? goal test:? path cost:?

Rubik's Cube

states:? operators:? goal test:? path cost:?

Path Search Algorithms

Search: finding state-action sequences that lead to desirable states. Search is a function: solution = search(task).
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (i.e. expanding them).
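
To make the 8-puzzle specification concrete, here is a small Python sketch (not from the slides) of the state representation and the "move blank" operators; the helper names and the flat 9-tuple encoding are illustrative choices.

# A sketch of the 8-puzzle state and operators described above.
# A state is a tuple of 9 tile numbers (0 = blank), read row by row.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

MOVES = {'left': -1, 'right': +1, 'up': -3, 'down': +3}

def successors(state):
    """Apply 'move blank left/right/up/down' wherever legal; cost 1 per move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for move, delta in MOVES.items():
        if (move == 'left' and col == 0) or (move == 'right' and col == 2):
            continue
        if (move == 'up' and row == 0) or (move == 'down' and row == 2):
            continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield move, tuple(new)

def goal_test(state):
    return state == GOAL

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
for move, s in successors(start):
    print(move, s)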

Generating Action Sequences

1. Start with a priority queue consisting of just the initial state.
2. Choose a state from the queue of states which have been generated but not yet expanded.
3. Check if the selected state is a goal state. If it is, STOP (a solution has been found).
4. Otherwise, expand the chosen state by applying all possible transitions and generating all its children.
5. If the queue is empty, STOP (no solution exists).
6. Otherwise, go back to Step 2.

General Search Example

(Figure: the search tree being expanded step by step; the nodes shown include Sibiu, Fagaras, Rimnicu Vilcea and Bucharest.)

Search Tree

The search tree is superimposed over the state space.
Root: the search node corresponding to the initial state.
Leaf nodes: correspond to states that have no successors in the tree, either because they have not yet been expanded or because their expansion generated no new nodes.
The state space is not the same as the search tree: there are 20 states (= 20 cities) in the route-finding example, but there are infinitely many paths!

Data Structures for a Node

One possibility is to have a node data structure with five components (see the sketch below):
1. Corresponding state
2. Parent node: the node which generated the current node
3. Operator that was applied to generate the current node
4. Depth: number of nodes from the root to the current node
5. Path cost
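
The five-component node and the generate-and-expand loop above could be sketched in Python roughly as follows; the names Node, general_search and queueing_fn are illustrative, and the queueing function is left as a parameter because it is what distinguishes the different strategies discussed later.

# A sketch of the five-component node and the general search loop.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # 1. corresponding state
    parent: Optional['Node'] = None  # 2. node which generated this node
    operator: Any = None             # 3. operator applied to generate it
    depth: int = 0                   # 4. number of steps from the root
    path_cost: float = 0.0           # 5. cost of the path from the root

def general_search(initial_state, successors, goal_test, queueing_fn):
    """successors(state) yields (operator, next_state, step_cost) triples.
    queueing_fn(queue, new_nodes) decides where new nodes are inserted."""
    queue = [Node(initial_state)]
    while queue:
        node = queue.pop(0)              # choose a generated, unexpanded node
        if goal_test(node.state):
            return node                  # solution found
        children = [Node(s, node, op, node.depth + 1, node.path_cost + c)
                    for op, s, c in successors(node.state)]
        queueing_fn(queue, children)     # expand: insert children into queue
    return None                          # queue empty: no solution exists

def solution(node):
    """Read off the action sequence by following parent links back to the root."""
    ops = []
    while node.parent is not None:
        ops.append(node.operator)
        node = node.parent
    return list(reversed(ops))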

States vs. Nodes

A state is (a representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost g(x).
States do not have parents, children, depth, or path cost!
Note: two different nodes can contain the same state.

(Figure: an 8-puzzle board as a state, and the node containing it, with parent and action links, depth = 6, g = 6.)

Data Structures for Search Trees

Frontier: the collection of nodes waiting to be expanded.
It can be implemented as a priority queue with the following operations (sketched below):
MAKE-QUEUE(ITEMS) creates a queue with the given items.
Boolean EMPTY(QUEUE) returns TRUE if there are no items in the queue.
REMOVE-FRONT(QUEUE) removes the item at the front of the queue and returns it.
QUEUEING-FUNCTION(ITEMS, QUEUE) inserts new items into the queue.

Search Strategies

A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated/expanded
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?

How Fast and How Much Memory?

How to compare algorithms? Two approaches:
1. Benchmarking: run both algorithms on a computer and measure speed.
2. Analysis of algorithms: mathematical analysis of the algorithm.
Time and space complexity are measured in terms of:
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
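
As a rough illustration (not from the slides), the four frontier operations listed above could be realised in Python with the standard heapq module when the frontier is ordered by some priority such as path cost; the function names mirror the slides, but the implementation details are assumptions.

# A sketch of the frontier as a priority queue, ordered by a caller-supplied
# priority (e.g. path cost for Uniform-Cost Search).
import heapq
import itertools

counter = itertools.count()   # tie-breaker so heapq never compares items

def make_queue(items, priority):
    queue = []
    queueing_function(items, queue, priority)
    return queue

def empty(queue):
    return len(queue) == 0

def remove_front(queue):
    _, _, item = heapq.heappop(queue)     # item with the smallest priority
    return item

def queueing_function(items, queue, priority):
    for item in items:
        heapq.heappush(queue, (priority(item), next(counter), item))

# Usage: order (city, path cost) pairs by path cost.
frontier = make_queue([('Arad', 0)], priority=lambda n: n[1])
queueing_function([('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
                  frontier, priority=lambda n: n[1])
print(remove_front(frontier))   # ('Arad', 0)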

Benchmarking

Run two algorithms on a computer and measure speed. The result depends on the implementation, compiler, computer, data, network, ...
Measuring time: processor cycles, counting operations, statistical comparison, confidence intervals.

Analysis of Algorithms

T(n) is O(f(n)) means ∃ n0, k such that ∀ n > n0, T(n) ≤ k f(n),
where n = input size and T(n) = total number of steps of the algorithm.
This measure is independent of the implementation, compiler, ...
Asymptotic analysis: for large n, an O(n) algorithm is better than an O(n^2) algorithm.
O() abstracts over constant factors, e.g. T(n) = 100n + 1000 is better than T(n) = n^2 + 1 only for n > 110.
O() notation is a good compromise between precision and ease of analysis.

Uninformed Search Strategies

Uninformed (or "blind") search strategies use only the information available in the problem definition (they can only distinguish a goal state from a non-goal state):
Breadth First Search
Uniform Cost Search
Depth First Search
Depth Limited Search
Iterative Deepening Search
The strategies are distinguished by the order in which the nodes are expanded. Uninformed search systematically generates new states and tests them against the goal.

Informed Search Strategies

Informed (or "heuristic") search strategies use task-specific knowledge, for example the distance between cities on the map. Informed search is more efficient than uninformed search.

Breadth-First Search

All nodes at a given depth in the tree are expanded before any nodes at the next level are expanded: expand the root first, then all nodes generated by the root, then all nodes generated by those nodes, etc.
Expand the shallowest unexpanded node.
Implementation: QUEUEING-FUNCTION = put newly generated successors at the end of the queue.
Very systematic: finds the shallowest goal first.

(Figure: breadth-first expansion of the Romania search tree; the nodes shown include Fagaras, Rimnicu Vilcea and Lugoj.)

Properties of Breadth-First Search

Complete? Yes (if b is finite, the shallowest goal is at a fixed depth d and will be found before any deeper nodes are generated).
Time: 1 + b + b^2 + b^3 + ... + b^d = (b^(d+1) - 1)/(b - 1) = O(b^d)
Space: O(b^d) (keeps every node in memory; generates all nodes up to level d)
Optimal? Yes, but only if all actions have the same cost.
Space is the big problem for Breadth-First Search; it grows exponentially with depth!

Romania with Step Costs in km

(Figure: the Romania road map again, with each road labelled by its length in km.)

Breadth First Search assumes that all steps have equal cost. However, we are often looking for the path with the shortest total distance rather than the smallest number of steps.
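
A compact Python sketch of Breadth-First Search on the road map (not from the slides): newly generated successors go to the end of a FIFO queue. GRAPH is an illustrative subset of the Romania map, and only connections matter here because BFS ignores step costs.

# A minimal sketch of Breadth-First Search over an illustrative road graph.
from collections import deque

GRAPH = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Timisoara': ['Arad', 'Lugoj'],
    'Oradea': ['Zerind', 'Sibiu'],
    'Lugoj': ['Timisoara', 'Mehadia'],
    'Mehadia': ['Lugoj'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Bucharest': ['Fagaras', 'Pitesti'],
}

def breadth_first_search(start, goal):
    queue = deque([[start]])            # FIFO queue of paths
    while queue:
        path = queue.popleft()          # shallowest unexpanded node
        if path[-1] == goal:
            return path
        for city in GRAPH[path[-1]]:
            if city not in path:        # avoid loops along the current path
                queue.append(path + [city])   # successors go to the end
    return None

print(breadth_first_search('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']  (fewest steps, not shortest km)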

Uniform-Cost Search

Expand the root first, then expand the least-cost unexpanded node.
Implementation: QUEUEING-FUNCTION = insert nodes in order of increasing path cost.
Reduces to Breadth First Search when all actions have the same cost.
Finds the cheapest goal provided the path cost is monotonically increasing along each path (i.e. no negative-cost steps).

(Figure: uniform-cost expansion of the Romania search tree, with the accumulated path cost shown on each node.)

Properties of Uniform-Cost Search

Complete? Yes, if b is finite and every step cost is ≥ ε for some ε > 0.
Time: O(b^(C*/ε)), where C* = cost of the optimal solution and every action costs at least ε.
Space: O(b^(C*/ε)) (note that b^(C*/ε) = b^d if all step costs are equal).
Optimal? Yes.

Depth First Search

Expands one of the nodes at the deepest level of the tree.
Implementation: QUEUEING-FUNCTION = insert newly generated states at the front of the queue (thus making it a stack).
Can alternatively be implemented by recursive function calls.
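
A Python sketch of Uniform-Cost Search (not from the slides): the frontier is ordered by accumulated path cost, so the cheapest unexpanded node is always removed first. DIST is an illustrative subset of the Romania map with distances in km, and the pruning of dominated paths is an added implementation choice.

# A minimal sketch of Uniform-Cost Search over an illustrative weighted graph.
import heapq

DIST = {
    'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
    'Zerind': [('Arad', 75), ('Oradea', 71)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Timisoara': [('Arad', 118), ('Lugoj', 111)],
    'Oradea': [('Zerind', 71), ('Sibiu', 151)],
    'Lugoj': [('Timisoara', 111), ('Mehadia', 70)],
    'Mehadia': [('Lugoj', 70)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [('Fagaras', 211), ('Pitesti', 101)],
}

def uniform_cost_search(start, goal):
    frontier = [(0, [start])]                 # (path cost, path), kept as a heap
    best = {}                                 # cheapest cost seen per city
    while frontier:
        cost, path = heapq.heappop(frontier)  # least-cost unexpanded node
        city = path[-1]
        if city == goal:
            return cost, path
        if cost > best.get(city, float('inf')):
            continue                          # skip entries made obsolete by a cheaper path
        for nxt, step in DIST[city]:
            new_cost = cost + step
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, path + [nxt]))
    return None

print(uniform_cost_search('Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])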

Depth First Search

(Figure: depth-first expansion of the search tree.)

Properties of Depth First Search

Complete? No! It fails in infinite-depth spaces and in spaces with loops; if modified to avoid repeated states along the current path, it is complete in finite spaces.
Time: O(b^m) (terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first).
Space: O(bm), i.e. linear space!
Optimal? No, it can find suboptimal solutions first.

Depth Limited Search

Expands nodes like Depth First Search, but imposes a cutoff on the maximum depth of a path.
Complete? Yes (no infinite loops any more).
Time: O(b^k), where k is the depth limit.
Space: O(bk), i.e. linear space, similar to DFS.
Optimal? No, it can find suboptimal solutions first.
Problem: how do we pick a good limit?

Iterative Deepening Search

Tries to combine the benefits of depth-first search (low memory) and breadth-first search (optimal and complete) by doing a series of depth-limited searches to depth 1, 2, 3, etc.
Early states will be expanded multiple times, but that might not matter too much, because most of the nodes are near the leaves.
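
The following sketch (not from the slides) shows Depth Limited Search as a recursive depth-first search with a cutoff, and Iterative Deepening Search as a series of such searches with growing limits; GRAPH is the same illustrative Romania subset, repeated so the snippet stands alone, and max_depth is an arbitrary safety bound.

# A sketch of Depth Limited Search and Iterative Deepening Search.
GRAPH = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Timisoara': ['Arad', 'Lugoj'],
    'Oradea': ['Zerind', 'Sibiu'],
    'Lugoj': ['Timisoara', 'Mehadia'],
    'Mehadia': ['Lugoj'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Bucharest': ['Fagaras', 'Pitesti'],
}

def depth_limited_search(path, goal, limit):
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:
        return None                       # depth cutoff reached
    for nxt in GRAPH[node]:
        if nxt not in path:               # avoid loops along the current path
            result = depth_limited_search(path + [nxt], goal, limit - 1)
            if result is not None:
                return result
    return None

def iterative_deepening_search(start, goal, max_depth=20):
    for limit in range(max_depth + 1):    # re-run DLS with limits 0, 1, 2, ...
        result = depth_limited_search([start], goal, limit)
        if result is not None:
            return result
    return None

print(iterative_deepening_search('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']  (found at depth limit 3)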

Iterative Deepening Search

(Figure: successive depth-limited expansions of the Romania search tree; the nodes shown include Fagaras, Rimnicu Vilcea and Lugoj.)

Properties of Iterative Deepening Search

Complete? Yes.
Time: nodes at the bottom level are expanded once, nodes at the next level up twice, and so on (assuming b > 1):
depth-limited: 1 + b^1 + b^2 + ... + b^(d-1) + b^d = O(b^d)
iterative deepening: (d+1)b^0 + d b^1 + (d-1)b^2 + ... + 2 b^(d-1) + 1 b^d = O(b^d)
Example, b = 10, d = 5:
depth-limited: 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
iterative deepening: 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Only about 11% more nodes (for b = 10).
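
A quick, purely illustrative check of the b = 10, d = 5 arithmetic above:

# Verifying the node counts quoted above for b = 10, d = 5.
b, d = 10, 5

depth_limited = sum(b**i for i in range(d + 1))
iterative_deepening = sum((d + 1 - i) * b**i for i in range(d + 1))

print(depth_limited)        # 111111
print(iterative_deepening)  # 123456
print(f"{iterative_deepening / depth_limited - 1:.0%} more nodes")  # 11%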

Properties of Iterative Deepening Search

Complete? Yes.
Time: O(b^d)
Space: O(bd)
Optimal? Yes, if step costs are identical.

Bidirectional Search

(Figure: two search frontiers, one growing forward from the Start and one growing backward from the Goal, meeting in the middle.)

Idea: search both forward from the initial state and backward from the goal, and stop when the two searches meet in the middle.
We need an efficient way to check whether a new node already appears in the other half of the search. The complexity analysis assumes this can be done in constant time, using a hash table.
Assume the branching factor is b in both directions and that there is a solution at depth d. Then bidirectional search finds a solution in O(2 b^(d/2)) = O(b^(d/2)) time steps.

Bidirectional Search Issues

Searching backwards means generating predecessors starting from the goal, which may be difficult.
There can be several goals, e.g. checkmate positions in chess.
Space complexity is O(b^(d/2)), because the nodes of at least one half must be kept in memory.
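
A rough sketch (not from the slides) of bidirectional breadth-first search on the road map: two frontiers grow alternately from the start and from the goal, and the hash-table membership test mentioned above corresponds to the dictionary lookups. It works here because the road map is undirected, so predecessors equal successors; GRAPH is again the illustrative Romania subset.

# A sketch of bidirectional search with two breadth-first frontiers.
from collections import deque

GRAPH = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Timisoara': ['Arad', 'Lugoj'],
    'Oradea': ['Zerind', 'Sibiu'],
    'Lugoj': ['Timisoara', 'Mehadia'],
    'Mehadia': ['Lugoj'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Bucharest': ['Fagaras', 'Pitesti'],
}

def bidirectional_search(start, goal):
    # parents_f / parents_b act as the hash tables of visited nodes.
    parents_f, parents_b = {start: None}, {goal: None}
    queue_f, queue_b = deque([start]), deque([goal])

    def expand(queue, parents, others):
        city = queue.popleft()
        for nxt in GRAPH[city]:
            if nxt not in parents:
                parents[nxt] = city
                queue.append(nxt)
                if nxt in others:        # the two frontiers meet here
                    return nxt
        return None

    while queue_f and queue_b:
        meet = expand(queue_f, parents_f, parents_b) or \
               expand(queue_b, parents_b, parents_f)
        if meet:
            # stitch the two half-paths together at the meeting point
            forward, node = [], meet
            while node is not None:
                forward.append(node)
                node = parents_f[node]
            forward.reverse()
            node = parents_b[meet]
            while node is not None:
                forward.append(node)
                node = parents_b[node]
            return forward
    return None

print(bidirectional_search('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']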

Summary

Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
There is a variety of uninformed search strategies.
Iterative Deepening Search uses only linear space, and not much more time than the other uninformed algorithms.

Complexity Results for Uninformed Search

Criterion    Breadth-First   Uniform-Cost    Depth-First   Depth-Limited   Iterative Deepening
Time         O(b^d)          O(b^(C*/ε))     O(b^m)        O(b^k)          O(b^d)
Space        O(b^d)          O(b^(C*/ε))     O(bm)         O(bk)           O(bd)
Complete?    Yes (1)         Yes (2)         No            No              Yes (1)
Optimal?     Yes (3)         Yes             No            No              Yes (3)

b = branching factor, d = depth of the shallowest solution, m = maximum depth of the search tree, k = depth limit.
(1) = complete if b is finite.
(2) = complete if b is finite and every step cost is ≥ ε with ε > 0.
(3) = optimal if all actions have the same cost.