Goal-Based Agents: Problem Solving as Search


Vasant Honavar
Bioinformatics and Computational Biology Program
Center for Computational Intelligence, Learning, & Discovery
honavar@cs.iastate.edu
www.cs.iastate.edu/~honavar/  www.cild.iastate.edu/  www.bcb.iastate.edu/  www.igert.iastate.edu

Outline
- Goal-based agents
- Design of simple goal-based agents: discrete, fully observable states; discrete actions
- Problem formulation
- Problem solving as search: state space search
- Example problems
- (Review of) basic (uninformed) search algorithms

Problem formulation
- Formulate the goals: explicit specification, or implicit specification (a goal predicate)
- Formulate the actions: preconditions (before) and post-conditions (after)
- Design a representation that captures relevant aspects of the world and abstracts away unimportant details

Example: 8-puzzle
- States? Position of each tile on the board
- Initial state? Any state can be initial
- Actions? {Left, Right, Up, Down}
- Goal test? Check whether the goal configuration is reached
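
The 8-puzzle formulation above maps directly onto code. Below is a minimal Python sketch (illustrative, not the lecture's code): a state is a 9-tuple read row by row with 0 marking the blank, the four actions move the blank, and the goal test compares against a fixed goal configuration. The names GOAL, actions, result, and goal_test are my own choices.

```python
# A minimal sketch of the 8-puzzle formulation above (illustrative, not the lecture's code).
# A state is a 9-tuple read row by row; 0 marks the blank square.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}  # index offset of the blank

def actions(state):
    """Actions applicable in `state` (moves of the blank), honoring the board edges."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    legal = []
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    return legal

def result(state, action):
    """Post-condition: the blank swaps places with the neighboring tile."""
    blank = state.index(0)
    target = blank + MOVES[action]
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)

def goal_test(state):
    return state == GOAL
```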

Problem formulation: simplifying assumptions
- Discrete, fully observable states: "in class", "at home"
- Discrete actions: Mary executes the action "Go home" in the "in class" state to reach the "at home" state. In this setup we can't speak of Mary being on her way home.
- Passive environment: all state changes are due to the agent's actions. Mary can't end up at home because her mom picked her up.

Representation
A representation
- maps each (physical) state of the external environment into the corresponding abstract state via sensors,
- maps each (physical) action on an environmental state into an abstract action on the corresponding abstract state, and
- maps the effects of an abstract action on an abstract state into a corresponding effect on the corresponding environmental state via effectors.
The mapping from environmental states to abstract states is many to one; from an abstract state to environmental states it is one to many.

Representation (continued)
- The mapping from environmental states to abstract states is many to one; from an abstract state to environmental states it is one to many.
- A representation induces a partition over environmental states: e.g. 4 abstract states and 12 abstract actions (if we allow only lateral or vertical moves); not all environmental state transitions can be thought about by the agent.
- The effects of abstract actions in the abstract state space are fully deterministic and predictable, but the corresponding effects of the physical actions on the environmental state space are predictable only to the extent allowed by the resolution of the representation and the fidelity of the sensors and effectors.

Representation
A representation
- is a surrogate inside an agent's brain for entities that exist in the external world,
- is not just a data structure (why?),
- derives its semantics through grounding (sensors, effectors), and
- embodies a set of ontological commitments: assumptions about the entities, properties, relationships, and actions that we care about.
Properties of representations: expressive power, complexity, ... (more on representations later).

Problem formulation (recap)
- Formulate the goals: explicit specification, or implicit specification (a goal predicate)
- Formulate the actions: preconditions (before) and post-conditions (after)
- Design a representation that captures relevant aspects of the world and abstracts away unimportant details

Example: Missionaries and Cannibals
- Initial state: 3 missionaries, 3 cannibals, and the boat on the left bank of the river
- Goal: all on the right bank
- Constraints: the boat can carry at most 2 people at a time; if missionaries are outnumbered by cannibals, the cannibals will eat the missionaries
- States: the positions of missionaries, cannibals, and the boat on either side of the river
- Actions: movement of the boat with its occupants from one side of the river to the other
- Solution: a sequence of boat trips across the river, complete with their passenger lists
(A small formulation sketch follows.)

Example: Getting around in Romania (road map of Romania shown on the slide)
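
A small sketch (my own encoding, not the lecture's code) of this formulation: a state records how many missionaries and cannibals are on the left bank and whether the boat is there; everyone else is on the right bank. The names START, GOAL, is_safe, and successors are illustrative.

```python
# Missionaries-and-cannibals formulation sketch (illustrative encoding).
from itertools import product

START = (3, 3, True)           # 3 missionaries, 3 cannibals, boat on the left bank
GOAL = (0, 0, False)           # everyone (and the boat) on the right bank

def is_goal(state):
    return state == GOAL

def is_safe(m_left, c_left):
    """Missionaries may never be outnumbered on either bank (unless absent there)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    left_ok = m_left == 0 or m_left >= c_left
    right_ok = m_right == 0 or m_right >= c_right
    return left_ok and right_ok

def successors(state):
    """All states reachable by one boat trip carrying 1 or 2 people."""
    m_left, c_left, boat_left = state
    sign = -1 if boat_left else +1            # a trip removes people from the boat's bank
    for dm, dc in product(range(3), repeat=2):
        if 1 <= dm + dc <= 2:                 # boat carries at least 1, at most 2
            m, c = m_left + sign * dm, c_left + sign * dc
            if 0 <= m <= 3 and 0 <= c <= 3 and is_safe(m, c):
                yield ((dm, dc), (m, c, not boat_left))
```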

Example: Getting around in Romania
- On holiday in Romania; currently in Arad. The flight leaves tomorrow from Bucharest.
- Formulate the goal: be in Bucharest
- Formulate the problem: states are the various cities; actions are drives between cities
- Find a solution: a sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest

Problem formulation in the observable, deterministic case
A problem is defined by:
- An initial state, e.g. Arad
- A successor function S(X) = set of action-state pairs, e.g. S(Arad) = {<Arad → Zerind, Zerind>, ...}
- A goal test, which can be explicit (e.g. x = "at Bucharest") or implicit (e.g. Checkmate(x))
The initial state and the successor function together define a state space. A solution is a sequence of actions leading from the initial state to a goal state.

Basic state space search problem
A state space search problem is specified by a 3-tuple (s, O, G) where
- s is a start state, s ∈ S, the set of possible start states,
- O is the set of actions (operators): partial functions that map a state into another state, and
- G is the set of goal states; G may be explicitly enumerated or implicitly specified using a goal predicate, goal(g) = True iff g ∈ G.
A solution to a state space search problem is a sequence of action applications leading from the start state s to a goal g ∈ G.

Problem formulation: finding an optimal solution
A problem is defined by:
- An initial state, e.g. Arad
- A successor function S(X) = set of action-state pairs, e.g. S(Arad) = {<Arad → Zerind, Zerind>, ...}; the initial state and the successor function together define the state space
- A goal test, explicit (e.g. x = "at Bucharest") or implicit (e.g. Checkmate(x))
- A path cost (additive), e.g. the sum of distances or the number of actions executed; c(x, a, y) is the step cost, assumed to be ≥ 0
An optimal solution has the lowest path cost.

Finding an optimal solution
- Not all operator applications are equally expensive. Suppose we have a cost function c : Q × O × Q → R+, where c(q, o, r) is the cost of applying operator o in state q to reach state r.
- The path cost is typically assumed to be the sum of the costs of the operator applications along the path.
- An optimal solution is one with the lowest-cost path from the specified start state s to a goal g ∈ G.

State space representation
- The real world can be absurdly complex; a state space representation is an abstraction.
- An (abstract) state corresponds to a set of real-world states.
- An (abstract) action corresponds to a complex combination of real-world actions; e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.
- The abstraction is valid if a path between two abstract states is reflected in the real world.
- An (abstract) solution corresponds to a set of real paths that are solutions in the real world.
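
The four ingredients (initial state, successor function, goal test, step cost) can be bundled into one object. Below is a sketch of my own (SearchProblem, ROADS, and romania are illustrative names, not from the lecture); the road fragment uses distances as commonly quoted from the AIMA Romania map, included only for illustration.

```python
# A compact problem-definition sketch: initial state, successor function, goal test, step cost.
class SearchProblem:
    def __init__(self, initial, successors, is_goal, step_cost=lambda q, o, r: 1):
        self.initial = initial
        self.successors = successors      # state -> iterable of (action, next_state) pairs
        self.is_goal = is_goal            # state -> bool
        self.step_cost = step_cost        # c(q, o, r), assumed >= 0

    def path_cost(self, start, steps):
        """Additive path cost: sum of step costs along a path of (action, state) pairs."""
        total, q = 0, start
        for o, r in steps:
            total += self.step_cost(q, o, r)
            q = r
        return total

# Usage with a small fragment of the Romania road map (edges oriented toward Bucharest).
ROADS = {"Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
         "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Bucharest": 211},
         "Rimnicu Vilcea": {"Pitesti": 97},
         "Pitesti": {"Bucharest": 101}}

romania = SearchProblem(
    initial="Arad",
    successors=lambda city: [(f"{city} -> {nxt}", nxt) for nxt in ROADS.get(city, {})],
    is_goal=lambda city: city == "Bucharest",
    step_cost=lambda q, o, r: ROADS[q][r])
```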

Importance of representation: the Sticks and Squares problem
- 17 sticks arranged in 6 squares
- Goal: remove 5 sticks so that we are left with exactly 3 squares (no extra sticks)
- What is the size of the state space? It depends on the representation.

Importance of representation
- Ontological commitment matters; abstraction matters; the granularity of the representation matters.
- Good representations preserve the relevant aspects of the problem and expose the relevant problem structure.
- Bad representations lose potentially relevant information and obscure the relevant problem structure.
- How to automatically discover good representations is a fundamental problem in AI; millions of years of evolution have given humans a head start.

Example: vacuum world
States? Initial state? Actions? Goal test? Path cost?

Example: vacuum world
- States? Two locations, each with or without dirt, and the vacuum cleaner in either location: 2 × 2^2 = 8 states
- Initial state? Any state can be initial
- Actions? {Left, Right, Suck}
- Goal test? Check whether both locations are clean
- Path cost? Number of actions to reach the goal

Example: 8-puzzle
States? Initial state? Actions? Goal test? Path cost?
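
The 2 × 2^2 = 8 count can be made concrete by enumerating the states directly; a tiny sketch with my own encoding (agent location plus one dirt flag per square):

```python
# Enumerate the 8 vacuum-world states: (agent location, dirt at A, dirt at B).
from itertools import product

STATES = [(loc, dirt_a, dirt_b)
          for loc, dirt_a, dirt_b in product(("A", "B"), (True, False), (True, False))]
assert len(STATES) == 8          # 2 locations x 2^2 dirt configurations

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b     # both squares clean
```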

Example: 8-puzzle
- States? Integer location of each tile
- Initial state? Any state can be initial
- Actions? {Left, Right, Up, Down}
- Goal test? Check whether the goal configuration is reached
- Path cost? Number of actions to reach the goal

Example: 8-queens problem
- Constraints: no two queens can share a row, a column, or a diagonal
- States? Initial state? Actions? Goal test? Path cost?

Example: 8-queens problem
Problem formulation:
- States? Any arrangement of 0 to 8 queens on the board
- Initial state? Empty board (no queens)
- Actions? Add a queen to an empty square
- Goal test? 8 queens on the board and none under attack

State space representation: 8-queens problem, Solution 1
- Any arrangement of 0 to 8 queens on the board
- 64 squares, 8 queens: (64)(63)(62)(61)...(57) ≈ 1.8 × 10^14 (≈ 1.2681 × 2^47) states!

State space representation: 8-queens problem, Solution 2
- One queen per row: 8 rows, and we need only specify the column in which the queen of each row is placed
- (8)(7)(6)(5)(4)(3)(2)(1) = 8! = 40,320 ≈ 1.23 × 2^15 states!
- We have absorbed the "no two queens can share a row" constraint into the representation!

Example: 8-queens problem, Solution 3
- States: n (0 ≤ n ≤ 8) queens on the board, one per column in the n leftmost columns, with no queen attacking another
- Actions: add a queen to the leftmost empty column so that it does not attack the other queens
- Number of states = 2057
- Representation matters! (A small sketch verifying this count follows.)
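
The 2057 figure can be checked by brute force. A short sketch of my own (not the lecture's code) that counts every legal partial placement of Solution 3, level by level:

```python
# Count the states of Solution 3: partial placements with one non-attacking queen
# per column in the leftmost columns (a placement is a tuple of row indices).
def count_states(n=8):
    total = 0
    frontier = [()]                          # start from the empty placement
    while frontier:
        total += len(frontier)               # count every legal partial placement
        next_frontier = []
        for placement in frontier:
            col = len(placement)
            if col == n:
                continue
            for row in range(n):
                safe = all(row != r and abs(row - r) != col - c
                           for c, r in enumerate(placement))
                if safe:
                    next_frontier.append(placement + (row,))
        frontier = next_frontier
    return total

print(count_states())                        # prints 2057, matching the slide
```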

Finding a solution: state space search
Let L be a list of nodes yet to be expanded.
1. Let L = (s).
2. If L is empty, return failure; otherwise pick a node n from L (which node?).
3. If n is a goal node, return the path from s to n and stop. Otherwise:
   a. delete n from L;
   b. expand n: add all of n's successors to L (where?).
4. Return to step 2.

State space search
- A state is an (internal representation of) a physical configuration.
- A node is a data structure used to construct a search tree; it has a parent, successors, and bookkeeping information such as depth: node = <state, parent-node, action, depth, ...>.
- Each arc corresponds to an operator application.
- Each node in the search tree implicitly represents a candidate partial solution.
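
The template above can be written generically; the two open questions on the slide (which node to pick, and where to insert successors) are exactly what distinguishes the strategies that follow. A sketch of my own, assuming a problem object like the SearchProblem sketch earlier; Node, solution_path, and tree_search are illustrative names:

```python
# Generic state-space search skeleton parameterized by the list discipline.
from collections import namedtuple

Node = namedtuple("Node", "state parent action depth")

def solution_path(node):
    """Walk parent pointers back to the start to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def tree_search(problem, pick_and_remove, insert_successors):
    L = [Node(problem.initial, None, None, 0)]
    while L:
        n = pick_and_remove(L)                       # "which node?"
        if problem.is_goal(n.state):
            return solution_path(n)
        children = [Node(s, n, a, n.depth + 1) for a, s in problem.successors(n.state)]
        insert_successors(L, children)               # "where?"
    return None                                      # failure

# BFS: tree_search(p, lambda L: L.pop(0), lambda L, kids: L.extend(kids))
# DFS: tree_search(p, lambda L: L.pop(),  lambda L, kids: L.extend(kids))
```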

Basic search strategies
A search strategy specifies a particular order of node expansion. Search strategies are evaluated in terms of:
- Completeness: does it always find a solution if one exists?
- Admissibility (Russell and Norvig call this optimality): does it always find an optimal solution?
- Time complexity: number of nodes generated or expanded
- Space complexity: memory needed to store L during the search
- Optimality: is it optimal in its use of space, time, or both?

Analysis of basic search strategies
Time and space complexity are measured in terms of problem difficulty, defined by:
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be infinite)
Assumptions: uniform, finite branching factor b; a single goal node exists at a finite depth d; the goal is uniformly distributed among the nodes at depth d; the maximum depth of the search space is m.

Uninformed (blind) search strategies
Use only the information available in the problem definition. Blind search strategies:
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
- Bidirectional search

Breadth-first (BF) search, an example
Expand the shallowest unexpanded node. Implementation: L is a FIFO queue. Initially only the root A is on the list: L = (A).

BF search, an example (continued)
Expand the shallowest unexpanded node; L is a FIFO queue.
- After expanding A (children B and C): L = (B, C)
- After expanding B (children D and E): L = (C, D, E)

BF search, an example (continued)
- After expanding C (children F and G): L = (D, E, F, G)

BF search (BFS): completeness
Is BFS complete, i.e. guaranteed to find a solution if one exists? Yes, provided b and d are finite.
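
Putting the FIFO discipline into the earlier skeleton gives a concrete BFS. A sketch of my own (reusing the Node and solution_path helpers and a SearchProblem-style object from the earlier sketches; checking the goal when a child is generated is my choice):

```python
# Breadth-first search: L is a FIFO queue, so the shallowest node is picked next.
from collections import deque

def breadth_first_search(problem):
    root = Node(problem.initial, None, None, 0)
    if problem.is_goal(root.state):
        return solution_path(root)
    L = deque([root])                     # FIFO queue
    while L:
        n = L.popleft()                   # shallowest unexpanded node
        for action, state in problem.successors(n.state):
            child = Node(state, n, action, n.depth + 1)
            if problem.is_goal(child.state):
                return solution_path(child)
            L.append(child)               # enqueue at the back
    return None
```

On the Romania fragment defined earlier, breadth_first_search(romania) would return the three-action route through Sibiu and Fagaras, since BFS finds a shallowest goal.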

BFS space complexity
Worst-case space complexity: every node at depth d must be on the list L before a solution at depth d can be found, and in the worst case all successors of the non-goal nodes at depth d must also be on the list before the solution is found, so L can hold O(b^(d+1)) nodes.

BFS time complexity
Worst-case time complexity:
- Number of nodes generated: b + b^2 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))
- Number of nodes expanded: 1 + b + b^2 + ... + b^d = O(b^d)

BFS time complexity (continued)
Average-case time complexity (number of nodes expanded), given that the goal is uniformly distributed among the nodes at depth d:
- Best case: all nodes above depth d are expanded and the goal is the first depth-d node examined, roughly 1 + b + b^2 + ... + b^(d-1) expansions
- Worst case: all non-goal nodes at depth d are also expanded, roughly 1 + b + b^2 + ... + b^d expansions
- Expected case: roughly midway between the two, which is still O(b^d)

BFS admissibility and optimality
- Is BFS admissible? Yes, if all operator costs are equal; otherwise, in general, no.
- Is BFS optimal (in its use of time and space)? As we will see later, no: we can do significantly better than BFS in terms of space requirements.

BFS summary
- Memory requirements are a bigger problem than execution time.
- Uninformed search methods are infeasible for all but the smallest problem instances.

Time and memory requirements for BFS (b = 10, processing speed = 10,000 nodes/sec, space = 1000 bytes per node):

DEPTH   NODES     TIME           MEMORY
2       1,100     0.11 seconds   1 megabyte
4       111,100   11 seconds     106 megabytes
6       10^7      19 minutes     10 gigabytes
8       10^9      31 hours       1 terabyte
10      10^11     129 days       101 terabytes
12      10^13     35 years       10 petabytes
14      10^15     3,523 years    1 exabyte

Depth-first search (DFS)
Expand the deepest unexpanded node. Implementation: L is a LIFO queue (a stack). Initially L = (A).
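
The first rows of the table can be reproduced with a few lines of arithmetic (my own check under the table's stated assumptions, counting generated nodes as b + b^2 + ... + b^d + (b^(d+1) - b), i.e. all depth-(d+1) children are generated except those of the goal itself):

```python
# Sanity-check the BFS resource table: b = 10, 10,000 nodes/sec, 1,000 bytes per node.
def bfs_cost(d, b=10, nodes_per_sec=10_000, bytes_per_node=1_000):
    nodes = sum(b ** i for i in range(1, d + 1)) + (b ** (d + 1) - b)
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

for d in (2, 4):
    nodes, secs, mem = bfs_cost(d)
    print(d, nodes, f"{secs:.2f} s", f"{mem / 2**20:.0f} MiB")
# 2 1100 0.11 s 1 MiB
# 4 111100 11.11 s 106 MiB
```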

DFS, an example
Expand the deepest unexpanded node; L is a LIFO queue (a stack). The next several slides step through DFS on a binary example tree (root A with children B and C; B's children are D and E, whose leaf children are H, I and J, K; C's children are F and G, with F's children L and M):
- Expand A: L = (B, C)
- Expand B: L = (D, E, C)
- Expand D: L = (H, I, E, C); the leaves H and I are examined and removed in turn
- Expand E: L = (J, K, C); J and K are examined and removed in turn
- Only then is C expanded, followed by F (with its children L and M) and finally G
The entire subtree below B is explored before the search ever considers C.

DFS completeness and admissibility
- Is DFS guaranteed to find a solution if one exists? No! (unless the search space is finite and no loops are possible)
- Is DFS guaranteed to find an optimal solution? No!

DFS space complexity
Along the current path, at most b - 1 unexpanded siblings remain on L at each of the m levels, so the space complexity is O(bm), where m = maximum depth of the search space.

DFS space complexity (continued)
- Unlike BFS, DFS has space complexity that grows only linearly in b and m.
- The space complexity of DFS can be further improved to O(m) with backtracking search: each partially expanded node remembers which successor to generate next, avoiding the need to put all successors on the list.

DFS summary
- Worst-case time complexity: O(b^m)
- Space complexity: O(bm), or even O(m) with backtracking
- Complete? No. Admissible? No. Optimal? No.
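
The same loop as BFS with a stack instead of a queue gives DFS; adding an optional depth limit yields the depth-limited variant used by iterative deepening below. A sketch of my own, reusing the earlier Node and solution_path helpers:

```python
# Depth-first search: L is a stack, so the deepest node is picked next.
def depth_first_search(problem, max_depth=None):
    L = [Node(problem.initial, None, None, 0)]         # Python list used as a LIFO stack
    while L:
        n = L.pop()                                     # deepest unexpanded node
        if problem.is_goal(n.state):
            return solution_path(n)
        if max_depth is not None and n.depth >= max_depth:
            continue                                    # depth-limited variant: do not expand
        for action, state in problem.successors(n.state):
            L.append(Node(state, n, action, n.depth + 1))
    return None
```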

BFS and DFS
BFS is complete but has terrible space complexity; DFS has attractive space complexity but is not complete. Can we get the best of both worlds?

Iterative deepening search (IDS)
Combines the benefits of DFS and BFS:
1. Set the depth limit l = 1.
2. Perform DFS with depth limit l.
3. If a solution is found, return it.
4. Otherwise, increment the depth limit l and return to step 2.
Space complexity: O(bd), or O(d) if backtracking is used. Time complexity: O(b^d).
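
IDS is a thin wrapper around the depth-limited DFS sketched above (illustrative code; the slide starts the limit at 1, while starting at 0 would also handle the case where the initial state is itself a goal):

```python
# Iterative deepening: run depth-limited DFS with limits 1, 2, 3, ... until a solution appears.
def iterative_deepening_search(problem, max_limit=50):
    for limit in range(1, max_limit + 1):
        result = depth_first_search(problem, max_depth=limit)
        if result is not None:
            return result
    return None

# e.g. iterative_deepening_search(romania) finds the same three-action route as BFS.
```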

Summary of IDS
- IDS is complete.
- IDS is admissible (if actions have equal cost).
- IDS is an optimal blind search algorithm:
  - Time complexity: with the solution located at depth d, a blind search cannot avoid examining O(b^d) nodes, so the O(b^d) time complexity of IDS is optimal.
  - Space complexity: any algorithm with run time O(b^d) must be able to count up to b^d, and such a counter needs O(d) bits (for fixed b), so the O(bd) (or O(d)) space of IDS is essentially optimal as well.
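
The repeated re-expansion of shallow levels costs surprisingly little. A quick check (my own arithmetic, b = 10, d = 5): a node at depth j is generated (d + 1 - j) times by IDS, versus once by a single breadth-first pass over the same levels.

```python
# Compare nodes generated by IDS against a single pass over depths 1..d (b = 10, d = 5).
b, d = 10, 5
ids_nodes = sum((d + 1 - j) * b ** j for j in range(1, d + 1))
bfs_nodes = sum(b ** j for j in range(1, d + 1))
print(ids_nodes, bfs_nodes)   # 123450 vs 111110, only about 11% more work
```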

IDS, an example: the iterations run depth-limited DFS with limits 0, 1, and 2 in turn, each restarting the search one level deeper.

IDS, an example (continued): depth limit = 3.

Bidirectional search
- Two simultaneous searches, one from the start state and one from the goal state
- Check whether a node belongs to the other search's fringe before expanding it
- The predecessors of a node must be easy to compute; this helps if the actions are reversible
- Complete and optimal if both searches are breadth-first

Summary of algorithms

Criterion      Breadth-first   Depth-first   Iterative deepening   Bidirectional
Complete?      YES             NO            YES                   YES*
Time           b^(d+1)         b^m           b^d                   b^(d/2)
Space          b^(d+1)         bm            bd                    b^(d/2)
Admissible?    YES             NO            YES                   YES*
Optimal?       NO              NO            YES                   NO

Assuming all arc costs are equal; b = finite branching factor, d = depth of the solution, m = maximum depth of the search space. * Assuming both the forward and backward searches are BFS.

Repeated states
Failure to detect repeated states can turn a solvable problem into an unsolvable one.

Graph search algorithm
- Use an open list to store the nodes yet to be expanded.
- Use a closed list to store previously expanded nodes.
- Maintaining the closed list means that the space complexity of DFS and IDS can no longer be linear.
- Cycles need to be dealt with.

Search with partial information
Previous assumptions: the environment is fully observable, the environment is deterministic, and the agent knows the effects of its actions. What if knowledge of states or actions is incomplete?
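
A sketch of the open/closed-list idea (my own code, reusing the earlier helpers): identical to the BFS sketch except that a state already expanded, or already waiting on the open list, is never added again.

```python
# Graph-search variant of BFS with explicit open and closed lists.
from collections import deque

def graph_breadth_first_search(problem):
    root = Node(problem.initial, None, None, 0)
    open_list = deque([root])                  # nodes waiting to be expanded
    on_open = {root.state}                     # states currently on the open list
    closed = set()                             # states already expanded
    while open_list:
        n = open_list.popleft()
        on_open.discard(n.state)
        if problem.is_goal(n.state):
            return solution_path(n)
        closed.add(n.state)
        for action, state in problem.successors(n.state):
            if state not in closed and state not in on_open:   # skip repeated states
                on_open.add(state)
                open_list.append(Node(state, n, action, n.depth + 1))
    return None
```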

Search with partial information
Partial knowledge of states and actions gives rise to:
- Sensorless (conformant) problems: the agent may have no idea what state it is in, but it knows the effects of its actions.
- Contingency problems: percepts provide new information about the current state; the solution is a tree or policy, and search and execution are interleaved. If the uncertainty is caused by the actions of another agent, we have an adversarial problem.
- Exploration problems: the states and actions of the environment are themselves unknown.

Sensorless problems
- When the world is not observable, the agent reasons about the set of states it might be in; actions correspond to transitions between sets of states (belief states).
- Example (vacuum world): start in {1,2,3,4,5,6,7,8}; e.g. Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck].
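
The belief-state update can be sketched without the slides' 1-8 numbering by encoding a physical state as (agent location, dirt at A, dirt at B); a belief state is then a set of such states, and an action maps a belief state to the union of its per-state outcomes. Running the plan [Right, Suck, Left, Suck] from complete ignorance ends in a single, fully clean state (illustrative code, my own encoding):

```python
# Belief-state prediction for the sensorless vacuum world.
from itertools import product

ALL_STATES = frozenset(product(("A", "B"), (True, False), (True, False)))   # 8 physical states

def apply_action(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc, False if loc == "A" else dirt_a, False if loc == "B" else dirt_b)

def predict(belief, action):
    return frozenset(apply_action(s, action) for s in belief)

belief = ALL_STATES
for a in ["Right", "Suck", "Left", "Suck"]:
    belief = predict(belief, a)
print(belief)   # a single state: agent at A, both squares clean
```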

Search in the space of belief states
- A solution is a belief state all of whose members are goal states.
- If the state space has |S| states, then there are 2^|S| belief states in the worst case.
- What if the environment is non-deterministic?

Conformant problems: the belief-state space of the vacuum world (the slide shows the reachable belief states, each a set of the 8 physical states, and the transitions between them).

Contingency problems
- Contingency example: start in {1,3}. Murphy's law: Suck can dirty a clean carpet. Local sensing: dirt and location only.
- Percept [L, Dirty] gives {1,3}; [Suck] leads to {5,7}; [Right] leads to {6,8}; [Suck] in state 6 gives state 8 (success), BUT [Suck] in state 8 leads to failure.
- Solution? [Suck, Right, if [R, Dirty] then Suck]
- It is hard to account for every possible contingency before acting; instead, select actions based on the contingencies that arise during execution.

Exploration problems
Exploration problems can be viewed as an extreme case of contingency problems; they are often solved using reinforcement learning.