521495A: Artificial Intelligence


521495A: Artificial Intelligence: Search. Lectured by Abdenour Hadid, Associate Professor, CMVS, University of Oulu. Slides adapted from http://ai.berkeley.edu

Agent. An agent is an entity that perceives its environment through sensors (yielding percepts) and acts on it through actuators (producing actions) to maximize its (expected) utility. The goal of the course is to learn how to design rational agents.


Search. In principle, the agent function could be presented by tabulating all combinations of percepts and their associated actions. For most cases, however, this table would be huge or even infinite, and hence impractical.

Search. For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance. Computational limitations make perfect rationality unachievable.


# Environment types
Observable: determines whether the agent's sensors give access to the complete state of the environment at each point in time; in that case the environment is fully observable. Fully observable environments are convenient, since the agent does not need memory to keep track of changes in the environment. Unfortunately, such environments are rare in practice. Otherwise, environments are called partially observable or non-observable.
Deterministic: if the next state of the environment is completely determined by the current state and the action performed by the agent, the environment is deterministic; otherwise it is stochastic. Most real-world environments are unfortunately stochastic.
Static: a static environment does not change while the agent is deliberating (e.g., many board games). If this is not true (e.g., driving a car), the environment is dynamic.
Discrete: the discrete/continuous distinction applies to the way time is handled and to the percepts and actions of the agent. For instance, a chess game has a finite number of distinct states, percepts and actions, so it is a discrete environment. Taxi driving, on the other hand, is a continuous-state and continuous-time problem.
Single-agent: determines whether the agent is acting alone in the environment or there are multiple agents (a multi-agent environment). E.g., a crossword puzzle is a single-agent environment.
The environment type largely determines the agent design.


Reflex Agents. A reflex agent is the simplest agent design. It chooses an action based on the current percept (and maybe memory); it may have memory or a model of the world's current state; it does not consider the future consequences of its actions. It considers how the world IS. Can a reflex agent be rational?


Video of Demo Reflex Optimal

Video of Demo Reflex Odd

Agents that Plan

Agents that Plan. Planning agents ask "what if?": decisions are based on the (hypothesized) consequences of actions. The agent must have a model of how the world evolves in response to actions, and must formulate a goal (test). It considers how the world WOULD BE.


Agents that Learn. In addition to planning, an agent can adapt its behavior during operation based on an external performance standard: the agent observes how well it is doing and adapts its decision-making process.


Problem Solving Agents via Search

Search Problems. A search problem consists of: a state space; a successor function (with actions and costs); a start state; and a goal test. A solution is a sequence of actions (a plan) which transforms the start state into a goal state.

Search Problems Are Models

Example: Traveling in Romania. State space: cities. Successor function: roads (go to an adjacent city; cost = distance). Start state: Arad. Goal test: is state == Bucharest? Solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.
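As a sketch, the Romania problem can be written down directly in Python. The road-map fragment uses distances from the AIMA map; the `successors` and `is_goal` helper names are illustrative, not from the course code:

```python
# A hand-coded fragment of the Romania road map (distances in km, AIMA Fig. 3.2).
ROADS = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def successors(city):
    """Successor function: (action, next_state, step_cost) for each road out of city."""
    return [("go to " + nxt, nxt, d) for nxt, d in ROADS[city].items()]

def is_goal(city):
    return city == "Bucharest"

# The plan Arad -> Sibiu -> Fagaras -> Bucharest costs 140 + 99 + 211 = 450 km.
plan = ["Sibiu", "Fagaras", "Bucharest"]
cost = sum(ROADS[a][b] for a, b in zip(["Arad"] + plan, plan))
```

A plan's cost is simply the sum of its road distances; the familiar Arad, Sibiu, Fagaras, Bucharest solution costs 450 km.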

What's in a State Space? The world state includes every last detail of the environment; a search state keeps only the details needed for planning (abstraction). Problem: Pathing. States: (x, y) location. Actions: N, S, E, W. Successor: update location only. Goal test: is (x, y) the end location?

What's in a State Space? Problem: Eat-All-Dots. States: {(x, y) location, dot booleans}. Actions: N, S, E, W. Successor: update location and possibly a dot boolean. Goal test: all dot booleans false.

State Space Sizes? World state: agent positions: 120; food count: 30; ghost positions: 12; agent facing: N, S, E, W. How many world states? 120 × 2^30 × 12^2 × 4. States for pathing? 120. States for eat-all-dots? 120 × 2^30.
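The counts on this slide can be checked with a few lines of arithmetic (a sketch; the numbers are exactly the ones given above):

```python
# World states combine every detail: position x food config x ghost positions x facing.
agent_positions = 120
food_count = 30       # one eaten/uneaten boolean per dot -> 2**30 configurations
ghost_positions = 12  # two ghosts, each in one of 12 positions -> 12**2
facings = 4           # N, S, E, W

world_states = agent_positions * 2 ** food_count * ghost_positions ** 2 * facings
states_for_pathing = agent_positions                          # only (x, y) matters
states_for_eat_all_dots = agent_positions * 2 ** food_count   # (x, y) plus dot booleans
```

The search state for pathing is vastly smaller than the full world state, which is the whole point of the abstraction.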

Search Trees. The root is "this is now" (the start state); below it are the possible futures. A search tree is a "what if" tree of plans and their outcomes: the start state is the root node; children correspond to successors; nodes show states, but correspond to PLANS that achieve those states. For most problems, we can never actually build the whole tree.

Tree Search

Search Example: Romania

General Tree Search The simplest approach to problem solving using search algorithms is a tree search. Basic idea: offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)

Searching with a Search Tree Search: Expand out potential plans (tree nodes) Maintain a fringe of partial plans under consideration Try to expand as few tree nodes as possible

General Tree Search The simplest approach to problem solving using search algorithms is a tree search. Important ideas: Fringe Expansion Exploration strategy Main question: which fringe nodes to explore? Strategy!!
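A minimal sketch of this idea in Python: the search loop is fixed, and swapping the fringe-selection rule swaps the strategy (the tiny acyclic state graph here is illustrative only):

```python
from collections import deque

def tree_search(successors, start, is_goal, fringe_pop):
    """Generic tree search: the fringe holds (state, plan) pairs, and the
    single choice of which fringe node to pop defines the strategy."""
    fringe = deque([(start, [start])])
    while fringe:
        state, plan = fringe_pop(fringe)
        if is_goal(state):
            return plan
        for nxt in successors(state):       # expand: generate successors
            fringe.append((nxt, plan + [nxt]))
    return None

# Toy state graph (acyclic, so plain tree search terminates).
graph = {"S": ["a", "d"], "a": ["b"], "d": ["e"], "b": ["G"], "e": []}
succ = lambda s: graph.get(s, [])
goal = lambda s: s == "G"

dfs_plan = tree_search(succ, "S", goal, lambda f: f.pop())      # LIFO -> depth-first
bfs_plan = tree_search(succ, "S", goal, lambda f: f.popleft())  # FIFO -> breadth-first
```

With `pop()` the fringe behaves as a LIFO stack (depth-first); with `popleft()` it behaves as a FIFO queue (breadth-first).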

Search Strategies. The efficiency of a search algorithm is dictated by its strategy. A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions: completeness (does it always find a solution if one exists?); time complexity (number of nodes generated/expanded); space complexity (maximum number of nodes in memory); optimality (does it always find a least-cost solution?).

Uninformed search strategies. Uninformed (blind) strategies use only the information available in the problem definition. Uninformed search methods: Depth-First Search, Breadth-First Search, Uniform-Cost Search.

Uninformed (blind) vs. Informed Search


General Tree Search: worked example. (Figure: the example state graph and the search tree built from it, with the fringe holding the frontier of partial plans.)

Uninformed search strategies: Depth-First Search, Breadth-First Search, Uniform-Cost Search. (Figure: cartoon search trees for the three strategies.)

Search Algorithm Properties. Complete: guaranteed to find a solution if one exists. Optimal: guaranteed to find the least-cost path. Time complexity and space complexity. Cartoon of a search tree: b is the branching factor, m is the maximum depth, with solutions at various depths. The m tiers hold 1 node, b nodes, b^2 nodes, ..., b^m nodes, so the number of nodes in the entire tree is 1 + b + b^2 + ... + b^m = O(b^m).

Depth-First Search

Depth-First Search. Strategy: expand a deepest node first. Implementation: the fringe is a LIFO stack, i.e., put successors at the front.
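A minimal LIFO-stack implementation as a sketch. It adds a visited set (strictly, the graph-search variant) because maps like Romania contain cycles; the small map fragment and function names are illustrative:

```python
def dfs(successors, start, is_goal):
    """Depth-first search: the fringe is a LIFO stack, so the deepest
    (most recently generated) node is expanded first."""
    stack = [(start, [start])]      # (state, plan that reaches it)
    visited = set()
    while stack:
        state, plan = stack.pop()   # LIFO: most recently added first
        if is_goal(state):
            return plan
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            stack.append((nxt, plan + [nxt]))
    return None

# Illustrative fragment of the Romania map (cities only, with cycles).
GRAPH = {
    "Arad": ["Sibiu", "Zerind"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Bucharest": [],
}

dfs_path = dfs(lambda c: GRAPH[c], "Arad", lambda c: c == "Bucharest")
```

On this fragment DFS returns a 5-action plan even though a 3-action one exists, illustrating that DFS is not optimal.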

Depth-First Search (DFS) Properties. What nodes does DFS expand? Some left prefix of the tree; it could process the whole tree! If m is finite, it takes time O(b^m). How much space does the fringe take? Only the siblings on the path to the root, so O(bm): linear space. Is it complete? No: it fails in infinite-depth spaces; in a finite space, yes. Is it optimal? No: it finds the leftmost solution, regardless of depth or cost. Note that most cases (like the Romania example) have infinite depth (in the Romania example you can travel in a circle).

Breadth-First Search

Breadth-First Search. Strategy: expand a shallowest node first. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end. The tree is processed in search tiers.
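The same loop with a FIFO queue gives breadth-first search. This sketch runs on an illustrative fragment of the Romania map (names and map fragment are illustrative, not from the course code):

```python
from collections import deque

def bfs(successors, start, is_goal):
    """Breadth-first search: the fringe is a FIFO queue, so the
    shallowest node is expanded first."""
    queue = deque([(start, [start])])   # (state, plan that reaches it)
    visited = {start}
    while queue:
        state, plan = queue.popleft()   # FIFO: oldest first
        if is_goal(state):
            return plan
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [nxt]))
    return None

# Illustrative fragment of the Romania map (cities only, with cycles).
GRAPH = {
    "Arad": ["Sibiu", "Zerind"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Bucharest": [],
}

bfs_path = bfs(lambda c: GRAPH[c], "Arad", lambda c: c == "Bucharest")
```

BFS finds the 3-action plan, the fewest actions possible on this fragment.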

Breadth-First Search (BFS) Properties. What nodes does BFS expand? It processes all nodes above the shallowest solution. Let the depth of the shallowest solution be s; the search takes time O(b^s). How much space does the fringe take? It keeps all nodes in memory, so O(b^s). Is it complete? s must be finite if a solution exists, so yes! Is it optimal? Yes if cost = 1 per step; not optimal in general. Space is the big problem: nodes can easily be generated at 100 MB/sec, so 24 hours = 8640 GB.

Iterative Deepening. Idea: get DFS's space advantage together with BFS's time / shallow-solution advantages. Run a DFS with depth limit 1; if no solution, run a DFS with depth limit 2; if no solution, run a DFS with depth limit 3; and so on. Isn't that wastefully redundant? Generally most of the work happens in the lowest level searched, so it is not so bad!
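A sketch of iterative deepening in Python; the recursive depth-limited helper and the map fragment are illustrative, not from the course code:

```python
def depth_limited_dfs(successors, state, is_goal, limit, plan):
    """DFS that gives up once the plan would exceed `limit` actions."""
    if is_goal(state):
        return plan
    if limit == 0:
        return None
    for nxt in successors(state):
        found = depth_limited_dfs(successors, nxt, is_goal, limit - 1, plan + [nxt])
        if found is not None:
            return found
    return None

def iterative_deepening(successors, start, is_goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... Space stays O(bm)
    like DFS, while the shallowest solution is found first, like BFS."""
    for limit in range(max_depth + 1):
        plan = depth_limited_dfs(successors, start, is_goal, limit, [start])
        if plan is not None:
            return plan
    return None

# Illustrative fragment of the Romania map (cities only, with cycles;
# the depth limit keeps the recursion from looping forever).
GRAPH = {
    "Arad": ["Sibiu", "Zerind"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Bucharest": [],
}

ids_path = iterative_deepening(lambda c: GRAPH[c], "Arad", lambda c: c == "Bucharest")
```

Like BFS, it returns the shallowest solution, but the fringe at any moment is only a single DFS stack.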

Cost-Sensitive Search. BFS finds the shortest path in terms of the number of actions, but it does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path. (Figure: the example state graph with action costs on the edges, from START to GOAL.)

Uniform Cost Search

Uniform Cost Search. Strategy: expand the cheapest node first (the least-cost unexpanded node). Implementation: the fringe is a priority queue ordered by path cost, lowest first (e.g., continue exploration from the city that is closest to Arad). The search expands in cost contours. (Figure: the example state graph with cumulative path costs at each expanded node.)
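A sketch of UCS using Python's heapq as the priority queue, run on a weighted fragment of the Romania map (distances from the AIMA map; function and variable names are illustrative):

```python
import heapq

def uniform_cost_search(successors, start, is_goal):
    """Uniform-cost search: the fringe is a priority queue ordered by
    path cost g(n); the cheapest node is expanded first. Doing the goal
    test when a node is popped (not when it is generated) is what makes
    the returned solution optimal."""
    fringe = [(0, start, [start])]          # (cost so far, state, plan)
    expanded = set()
    while fringe:
        cost, state, plan = heapq.heappop(fringe)
        if is_goal(state):
            return cost, plan
        if state in expanded:
            continue
        expanded.add(state)
        for nxt, step in successors(state):
            if nxt not in expanded:
                heapq.heappush(fringe, (cost + step, nxt, plan + [nxt]))
    return None

# Weighted fragment of the Romania map (distances in km).
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Oradea", 151), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest": [],
}

ucs_cost, ucs_path = uniform_cost_search(lambda c: ROADS[c], "Arad",
                                         lambda c: c == "Bucharest")
```

UCS finds the 418 km route through Rimnicu Vilcea and Pitesti, cheaper than the 450 km route through Fagaras that has fewer actions.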

Uniform Cost Search (UCS) Properties. What nodes does UCS expand? It processes all nodes with cost less than that of the cheapest solution! If that solution costs C* and arcs cost at least ε, then the effective depth is roughly C*/ε, and the search takes time O(b^(C*/ε)) (exponential in effective depth). How much space does the fringe take? Roughly the last tier, so O(b^(C*/ε)). Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes! Is it optimal? Yes.

Uniform Cost Issues. Remember: UCS explores increasing cost contours. The good: UCS is complete and optimal! The bad: it explores options in every direction and uses no information about the goal location. We'll fix that soon!

Search and Models. Search operates over models of the world; the agent doesn't actually try all the plans out in the real world! Planning is all in simulation. Your search is only as good as your models.