Chapter 3: Solving Problems by Searching

Chapter 3: Solving Problems by Searching. Prepared by: Dr. Ziad Kobti

Problem-Solving Agent. A reflex agent bases its actions on a direct mapping from states to actions. It cannot operate well in large environments, because the mapping would be too large to store and would take too long to learn. A goal-based agent instead considers future actions and the desirability of their outcomes. The problem-solving agent studied in this chapter is one kind of goal-based agent.

Problem-Solving Agent. Intelligent agents are supposed to maximize their performance measure; adopting a goal helps organize behaviour toward that end. Problem formulation is the process of deciding what actions and states to consider, given a goal. If the agent has no additional information about its environment (an unknown environment), it has no choice but to try one of the actions at random. Otherwise, the agent can examine sequences of future actions that eventually lead to states of known value.

Well-defined problems and solutions. A problem can be defined formally by five components: 1. The initial state that the agent starts in. 2. A description of the possible actions available to the agent (the actions applicable in a given state). 3. A description of what each action does: the transition model. Together, the initial state, actions, and transition model implicitly define the state space. 4. A goal test, which determines whether a given state is a goal state.

Well-defined problems and solutions (continued). The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions. 5. A path cost function that assigns a numeric cost to each path; the problem-solving agent chooses a cost function that reflects its own performance measure. The step cost is the cost of taking a single action from one state to another.
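As a concrete illustration of these five components, the following is a minimal sketch of a search-problem interface in Python. The class and method names (initial_state, actions, result, is_goal, step_cost) are illustrative choices for this chapter's sketches, not code from the textbook.

class Problem:
    """Minimal abstract search problem: the five components of a well-defined problem."""

    def __init__(self, initial_state):
        self.initial_state = initial_state      # 1. initial state

    def actions(self, state):
        """2. Actions applicable in the given state."""
        raise NotImplementedError

    def result(self, state, action):
        """3. Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def is_goal(self, state):
        """4. Goal test."""
        raise NotImplementedError

    def step_cost(self, state, action):
        """5. Contribution of one action to the path cost (default: unit cost)."""
        return 1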

Example.

Formulating Problems. A model is an abstract mathematical description, not the real thing. The process of removing detail from a representation is called abstraction. Exercise: can you formulate an abstract model for the wall-following robot problem? Specify the states, initial state, actions, transition model, goal test, and path cost.

Toy example: The 8-puzzle. States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares. Initial state: any state can be designated as the initial state. Actions: the simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down; different subsets of these are applicable depending on where the blank is. Transition model: given a state and an action, it returns the resulting state. Goal test: does the state match the goal configuration? Path cost: each step costs 1, so the path cost is the number of steps in the path.
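A hypothetical 8-puzzle formulation, building on the Problem sketch above, might look like the following. The state is a tuple of nine entries in row-major order, with 0 marking the blank; the goal configuration shown is one common convention and is an assumption here.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal: blank in the top-left corner

class EightPuzzle(Problem):
    def actions(self, state):
        """Moves of the blank; some are unavailable at the edges of the 3x3 board."""
        i = state.index(0)              # position of the blank (row-major)
        moves = []
        if i % 3 > 0:  moves.append('Left')
        if i % 3 < 2:  moves.append('Right')
        if i >= 3:     moves.append('Up')
        if i <= 5:     moves.append('Down')
        return moves

    def result(self, state, action):
        """Transition model: swap the blank with the neighbouring tile."""
        i = state.index(0)
        j = i + {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
        s = list(state)
        s[i], s[j] = s[j], s[i]
        return tuple(s)

    def is_goal(self, state):
        return state == GOAL            # goal test: exact match with the goal configuration

    # step_cost is inherited: every move costs 1, so path cost = number of moves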

Real-world problems. Route-finding problems: e.g., driving directions, routing video streams in computer networks, military operations planning, airline travel-planning systems. Touring problems: e.g., visit every city at least once, starting and ending in the same city. The TSP (Travelling Salesman Problem) requires visiting every city exactly once with the aim of finding the shortest (or least-cost) tour; it is NP-hard. A related application is planning the movement of an automatic circuit-board drill.

A bit of complexity! A problem is NP-hard if an algorithm for solving it can be translated into one for solving any problem in NP (nondeterministic polynomial time). NP-hard therefore means "at least as hard as any NP problem," although it might, in fact, be harder.

Real-world problems (continued). A VLSI layout problem requires positioning millions of components and connections on a chip so as to minimize area and circuit delays. Robot navigation is a generalization of the route-finding problem: rather than following a discrete set of routes, a robot can move in a continuous space with an infinite set of possible actions and states. Automatic assembly sequencing of complex objects by a robot aims to find an order in which to assemble the parts of some object. A further example is protein design: find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.

Searching for Solutions. Search algorithms work by considering various possible action sequences. These sequences form a search tree whose nodes correspond to states; expanding a node applies each legal action to its state, generating new successor states.

General tree-search and graph-search algorithms.

function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier

function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier only if they are not already in the frontier or the explored set
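A minimal Python rendering of GRAPH-SEARCH, using the Problem interface sketched earlier, is shown below. The expansion order is deliberately left to frontier.pop(), since choosing that order is exactly what distinguishes the concrete strategies later in the chapter; for brevity it returns only the goal state, and recovering the action sequence needs the node bookkeeping introduced on the next slide.

def graph_search(problem):
    """Generic graph search: returns a goal state, or None on failure."""
    frontier = [problem.initial_state]      # leaf nodes available for expansion
    explored = set()                        # states already expanded
    while frontier:
        state = frontier.pop()              # expansion order is the strategy's choice
        if problem.is_goal(state):
            return state
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None                             # frontier empty: no solution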

Infrastructure for search algorithms. Search algorithms require a data structure to keep track of the search tree being constructed. For each node n of the tree, we keep a structure with four components: n.STATE, the state in the state space to which the node corresponds; n.PARENT, the node in the search tree that generated this node; n.ACTION, the action that was applied to the parent to generate the node; and n.PATH-COST, the cost, traditionally denoted g(n), of the path from the initial state to the node, as indicated by the parent pointers.

Infrastructure for search algorithms (continued).

function CHILD-NODE(problem, parent, action) returns a node
  return a node with
    STATE = problem.RESULT(parent.STATE, action),
    PARENT = parent,
    ACTION = action,
    PATH-COST = parent.PATH-COST + problem.STEP-COST(parent.STATE, action)
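These four fields and CHILD-NODE translate directly into Python, as in the sketch below. The solution() helper, which follows parent pointers back to the root, is an added convenience rather than part of the textbook pseudocode.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node corresponds to
    parent: Optional['Node'] = None  # node that generated this one
    action: Any = None               # action applied to the parent
    path_cost: float = 0.0           # g(n): cost of the path from the initial state

def child_node(problem, parent, action):
    """CHILD-NODE: build the node reached by applying `action` in `parent.state`."""
    state = problem.result(parent.state, action)
    return Node(state, parent, action,
                parent.path_cost + problem.step_cost(parent.state, action))

def solution(node):
    """Follow parent pointers to recover the action sequence from the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))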

Measuring problem-solving performance. We evaluate an algorithm's performance in four ways. Completeness: is the algorithm guaranteed to find a solution when one exists? Optimality: does the strategy find the optimal solution? Time complexity: how long does it take to find a solution? Space complexity: how much memory is needed to perform the search? These are typically expressed in terms of the branching factor and the depth of the solution, and distinguish search cost from total cost.

Uninformed search strategies. Uninformed (blind) search strategies have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state. The strategies are distinguished by the order in which nodes are expanded. Informed (heuristic) search strategies, by contrast, know whether one non-goal state is more promising than another.

Uninformed Search: Breadth-first search. The root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. BFS is an instance of the general graph-search algorithm in which a FIFO queue keeps track of the node order.

Breadth-first search on a graph.

function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
  node <-- a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  frontier <-- a FIFO queue with node as the only element
  explored <-- an empty set
  loop do
    if EMPTY?(frontier) then return failure
    node <-- POP(frontier)   /* chooses the shallowest node in frontier */
    add node.STATE to explored
    for each action in problem.ACTIONS(node.STATE) do
      child <-- CHILD-NODE(problem, node, action)
      if child.STATE is not in explored or frontier then
        if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
        frontier <-- INSERT(child, frontier)

(Note: the frontier is the set of all leaf nodes available for expansion at any given point.)
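The same algorithm in Python, building on the Node, child_node, and solution helpers sketched above; collections.deque serves as the FIFO queue, and the auxiliary frontier_states set is an implementation convenience for fast membership tests. As in the pseudocode, the goal test is applied when a child is generated, not when it is later selected for expansion.

from collections import deque

def breadth_first_search(problem):
    node = Node(problem.initial_state)
    if problem.is_goal(node.state):
        return solution(node)
    frontier = deque([node])                      # FIFO queue
    frontier_states = {node.state}                # states currently in the frontier
    explored = set()
    while frontier:
        node = frontier.popleft()                 # shallowest node in the frontier
        frontier_states.discard(node.state)
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in explored and child.state not in frontier_states:
                if problem.is_goal(child.state):  # goal test at generation time
                    return solution(child)
                frontier.append(child)
                frontier_states.add(child.state)
    return None                                   # failure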

Time and Space Complexity: not so good! Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more, for a total of b^2 at the second level; each of these generates b more, yielding b^3 at the third level, and so on. Now suppose the solution is at depth d. In the worst case it is the last node generated at that level, so the total number of nodes generated is b + b^2 + b^3 + ... + b^d = O(b^d). (If the algorithm applied the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^(d+1)).) As for space: any graph search, which stores every expanded node in the explored set, has space complexity within a factor of b of its time complexity. For breadth-first graph search in particular, every node generated remains in memory: there are O(b^(d-1)) nodes in the explored set and O(b^d) nodes in the frontier, so the space complexity is O(b^d). This is exponential complexity, and the memory requirement is an even bigger problem for BFS than its execution time. Exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances!
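To see how quickly b^d grows, the snippet below tabulates the total number of generated nodes for an assumed branching factor of b = 10 at a few depths (the specific values are illustrative, not from the slide).

b = 10                                   # assumed branching factor
for d in (2, 4, 6, 8, 10):
    generated = sum(b**k for k in range(1, d + 1))   # b + b^2 + ... + b^d
    print(f"depth {d:2d}: about {generated:,} nodes generated")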

Uniform-cost search. BFS expands the shallowest unexpanded node, which is appropriate when all step costs are equal. If the step costs are not equal, uniform-cost search instead expands the node n with the lowest path cost g(n). This is done by storing the frontier as a priority queue ordered by g. Uniform-cost search expands nodes in order of their optimal path cost.
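A uniform-cost search sketch using heapq as the priority queue ordered by g(n), again building on the earlier Node helpers. In keeping with the idea above, the goal test is applied when a node is selected for expansion; rather than updating entries already in the frontier, this version simply allows duplicates and skips stale ones, which is an implementation choice of this sketch rather than the textbook's exact bookkeeping.

import heapq, itertools

def uniform_cost_search(problem):
    counter = itertools.count()              # tie-breaker so the heap never compares Nodes
    node = Node(problem.initial_state)
    frontier = [(node.path_cost, next(counter), node)]   # priority queue ordered by g(n)
    best_cost = {node.state: 0.0}            # cheapest known path cost per state
    explored = set()
    while frontier:
        g, _, node = heapq.heappop(frontier)
        if node.state in explored:
            continue                         # stale entry: a cheaper path was expanded already
        if problem.is_goal(node.state):      # goal test on expansion => least-cost solution
            return solution(node)
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if (child.state not in explored
                    and child.path_cost < best_cost.get(child.state, float('inf'))):
                best_cost[child.state] = child.path_cost
                heapq.heappush(frontier, (child.path_cost, next(counter), child))
    return None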

Example. Apply uniform-cost search from Sibiu to Bucharest (see p. 84). Is the search optimal, i.e., does it find the least-cost path?

Depth-first search. DFS always expands the deepest node in the current frontier of the search tree. DFS can be cast as an instance of the graph-search algorithm: where BFS uses a FIFO queue, DFS uses a LIFO queue (a stack). It is typically implemented as a recursive algorithm (careful with deep or infinite recursion!).
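A depth-first sketch using an explicit LIFO stack instead of recursion, again reusing the earlier helpers. This is the tree-search variant, with no explored set, so on graphs with cycles it can revisit states; adding an explored set turns it into the graph-search variant.

def depth_first_search(problem):
    """DFS tree search with an explicit LIFO stack (no explored set)."""
    frontier = [Node(problem.initial_state)]   # LIFO stack
    while frontier:
        node = frontier.pop()                  # deepest node in the frontier
        if problem.is_goal(node.state):
            return solution(node)
        for action in problem.actions(node.state):
            frontier.append(child_node(problem, node, action))
    return None                                # failure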


DFS complexity? The time complexity of DFS is bounded by the size of the state space (which may be infinite!): O(b^m), where m is the maximum depth. The space complexity of DFS is much better: we need to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes of each node on the path, giving O(bm), far better than BFS. A variant, backtracking search, uses only O(m) space.

Depth-limited search. DFS fails in infinite state spaces. Depth-limited search simply limits the depth and hopes the goal lies within that limit; here we rely on knowledge about the problem! Iterative deepening DFS is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit: it works by gradually increasing the limit until a goal is found, combining the benefits of DFS and BFS. Iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known.
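Depth-limited and iterative deepening search as Python sketches, using the same hypothetical helpers as before. The CUTOFF sentinel distinguishes "no solution within this limit" (so a deeper limit may still succeed) from a definite failure; the max_depth parameter is merely a safety bound for this sketch.

CUTOFF = object()   # sentinel: the depth limit was reached without finding a goal

def depth_limited_search(problem, limit):
    def recurse(node, limit):
        if problem.is_goal(node.state):
            return solution(node)
        if limit == 0:
            return CUTOFF
        cutoff_occurred = False
        for action in problem.actions(node.state):
            result = recurse(child_node(problem, node, action), limit - 1)
            if result is CUTOFF:
                cutoff_occurred = True
            elif result is not None:
                return result
        return CUTOFF if cutoff_occurred else None
    return recurse(Node(problem.initial_state), limit)

def iterative_deepening_search(problem, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a goal is found."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(problem, depth)
        if result is not CUTOFF:
            return result            # either a solution or a definite failure (None)
    return None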


Bidirectional search. Run two simultaneous searches, one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle. The motivation is that b^(d/2) + b^(d/2) is much less than b^d: for example, with b = 10 and d = 6, that is about 2,000 nodes instead of 1,000,000.

Comparing uninformed search strategies.

Criterion    Breadth-First   Uniform-Cost   Depth-First   Depth-Limited   Iterative Deepening   Bidirectional (if applicable)
Complete?    Yes             Yes            No            No              Yes                   Yes
Time         see p. 91       see p. 91      see p. 91     see p. 91       see p. 91             see p. 91
Space        see p. 91       see p. 91      see p. 91     see p. 91       see p. 91             see p. 91
Optimal?     Yes             Yes            No            No              Yes                   Yes

(The "Yes" entries for completeness and optimality carry the usual caveats: a finite branching factor, equal step costs for breadth-first, iterative deepening, and bidirectional search, and step costs bounded below by a positive constant for uniform-cost search.)