PEAS: Medical diagnosis system
- Performance measure: patient health, cost, reputation
- Environment: patients, medical staff, insurers, courts
- Actuators: screen display, email
- Sensors: keyboard/mouse

Environment types
For each of Pacman, Backgammon, Diagnosis, and Taxi, classify the environment along these dimensions:
- Fully or partially observable
- Single-agent or multi-agent
- Deterministic or stochastic
- Static or dynamic
- Discrete or continuous
- Known or unknown

Agent design
The environment type largely determines the agent design:
- Partially observable => agent requires memory (internal state), takes actions to obtain information
- Stochastic => agent may have to prepare for contingencies, must pay attention while executing plans
- Multi-agent => agent may need to behave randomly
- Static => agent has enough time to compute a rational decision
- Continuous time => continuously operating controller

Agent types
In order of increasing generality and complexity:
- Simple reflex agents
- Reflex agents with state
- Goal-based agents
- Utility-based agents

Simple reflex agents
[Diagram: sensors report "what the world is like now"; condition-action rules select "what action I should do now"; actuators act on the environment.]

A reflex Pacman agent in Python

class TurnLeftAgent(Agent):
    def getAction(self, percept):
        legal = percept.getLegalPacmanActions()
        current = percept.getPacmanState().configuration.direction
        if current == Directions.STOP:
            current = Directions.NORTH
        left = Directions.LEFT[current]
        if left in legal:
            return left
        if current in legal:
            return current
        if Directions.RIGHT[current] in legal:
            return Directions.RIGHT[current]
        if Directions.LEFT[left] in legal:
            return Directions.LEFT[left]
        return Directions.STOP

Pacman agent contd. Can we (in principle) extend this reflex agent to behave well in all standard Pacman environments?

Handling complexity
Writing behavioral rules or environment models becomes more difficult for more complex environments. E.g., the rules of chess (32 pieces, 64 squares, ~100 moves):
- ~10^38 pages as a state-to-state transition matrix (cf. HMMs, automata); e.g., one board position written out as R.B.KB.RPPP..PPP..N..N..PP.q.pp..Q..n..n..ppp..pppr.b.kb.r
- ~100,000 pages in propositional logic (cf. circuits, graphical models); e.g., WhiteKingOnC4@Move12
- 1 page in first-order logic; e.g., ∀ x, y, t, color, piece: On(color, piece, x, y, t) ...

Reflex agents with state
[Diagram: sensors plus an internal state, updated using "how the world evolves" and "what my actions do", determine "what the world is like now"; condition-action rules then select "what action I should do now" for the actuators.]

Goal-based agents
[Diagram: the agent uses its state, "how the world evolves", and "what my actions do" to predict "what it will be like if I do action A"; its goals then determine "what action I should do now".]

Utility-based agents
[Diagram: the agent predicts "what it will be like if I do action A" and uses a utility function to judge "how happy I will be in such a state", choosing "what action I should do now" accordingly.]

Summary
- An agent interacts with an environment through sensors and actuators.
- The agent function, implemented by an agent program running on a machine, describes what the agent does in all circumstances.
- PEAS descriptions define task environments; precise PEAS specifications are essential.
- More difficult environments require more complex agent designs and more sophisticated representations.

CS 188: Artificial Intelligence
Search
Instructor: Stuart Russell

Today
- Agents that Plan Ahead
- Search Problems
- Uninformed Search Methods: Depth-First Search, Breadth-First Search, Uniform-Cost Search

Agents that plan ahead
Planning agents:
- Decisions based on predicted consequences of actions
- Must have a transition model: how the world evolves in response to actions
- Must formulate a goal
Spectrum of deliberativeness:
- Generate a complete, optimal plan offline, then execute it
- Generate a simple, greedy plan, start executing, and replan when something goes wrong

Video of Demo Replanning

Video of Demo Mastermind

Search Problems

Search Problems
A search problem consists of:
- A state space
- For each state, a set Actions(s) of allowable actions
- A transition model Result(s, a)
- A step cost function c(s, a, s')
- A start state and a goal test
[Figure: a small grid example in which Actions(s) = {N, E} and each step costs 1.]
A solution is a sequence of actions (a plan) which transforms the start state to a goal state.
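
To make these components concrete, here is a minimal sketch of a search-problem interface in Python. It is an illustrative skeleton, not the course's code; the names SearchProblem, actions, result, step_cost, and is_goal are assumptions of this sketch.

    class SearchProblem:
        """Abstract search problem: states, actions, transition model, costs, goal test."""

        def __init__(self, start):
            self.start = start  # the start state

        def actions(self, s):
            """The set Actions(s) of allowable actions in state s."""
            raise NotImplementedError

        def result(self, s, a):
            """Transition model Result(s, a): the state reached by doing a in s."""
            raise NotImplementedError

        def step_cost(self, s, a, s2):
            """Step cost c(s, a, s')."""
            return 1

        def is_goal(self, s):
            """Goal test."""
            raise NotImplementedError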

Search Problems Are Models

Example: Travelling in Romania
[Figure: road map of Romania with distances along road links (Russell & Norvig).]
- State space: cities
- Actions: go to an adjacent city
- Transition model: Result(A, Go(B)) = B
- Step cost: distance along the road link
- Start state: Arad
- Goal test: is state == Bucharest?
Solution?
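
As a rough illustration, part of the map can be written down as an adjacency table; the encoding below is an assumption of this text (the slides do not give code), with distances taken from the Russell & Norvig map.

    # Part of the Romania road map as a dict of (city, adjacent city) -> distance.
    ROADS = {
        ("Arad", "Zerind"): 75, ("Arad", "Timisoara"): 118, ("Arad", "Sibiu"): 140,
        ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
        ("Fagaras", "Bucharest"): 211,
        ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
    }

    def plan_cost(plan, roads=ROADS):
        """Sum of step costs along a sequence of cities (a plan)."""
        return sum(roads[(a, b)] for a, b in zip(plan, plan[1:]))

    print(plan_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))                    # 450
    print(plan_cost(["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]))  # 418

Both are solutions; the second is cheaper, which is exactly the distinction uniform cost search will care about later.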

What's in a State Space?
The real world state includes every last detail of the environment; a search state abstracts away details not needed to solve the problem.
Problem: Pathing
- States: (x, y) location
- Actions: NSEW
- Transition model: update location
- Goal test: is (x, y) = END
- MN states
Problem: Eat-All-Dots
- States: {(x, y), dot booleans}
- Actions: NSEW
- Transition model: update location and possibly a dot boolean
- Goal test: dots all false
- MN * 2^(MN) states
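
A quick back-of-the-envelope check of those counts, using an assumed 10 x 10 grid (the dimensions are illustrative, not from the slides; the dot count assumes every cell may hold a dot, matching the MN * 2^(MN) figure above):

    M, N = 10, 10
    pathing_states = M * N                       # (x, y) locations only
    eat_all_dots_states = M * N * 2 ** (M * N)   # locations times one boolean per cell
    print(pathing_states)        # 100
    print(eat_all_dots_states)   # 100 * 2**100, roughly 1.3e32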

Quiz: Safe Passage Problem: eat all dots while keeping the ghosts perma-scared What does the state space have to specify? (agent position, dot booleans, power pellet booleans, remaining scared time)

State Space Graphs and Search Trees

State Space Graphs
A state space graph is a mathematical representation of a search problem:
- Nodes are (abstracted) world configurations
- Arcs represent transitions resulting from actions
- The goal test is a set of goal nodes (maybe only one)
In a state space graph, each state occurs only once!
We can rarely build this full graph in memory (it's too big), but it's a useful idea.

More examples
[Figure: the Romania road map, viewed as a state space graph.]

More examples
[Figure: a small state space graph with arcs labeled L, R, and S.]

Search Trees
[Figure: the current state ("this is now") at the root, with children reached by N (cost 1.0) and E (cost 1.0) as possible futures.]
A search tree is a "what if" tree of plans and their outcomes:
- The start state is the root node
- Children correspond to possible action outcomes
- Nodes show states, but correspond to PLANS that achieve those states
- For most problems, we can never actually build the whole tree

State Space Graphs vs. Search Trees

Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph (states S, a, b, G): how big is its search tree (from S)?
[Figure: the 4-state graph and its unrolled search tree.]
Important: lots of repeated structure in the search tree!

Tree Search

Search Example: Romania
[Figure: the Romania road map.]

Searching with a Search Tree
Search:
- Expand out potential plans (tree nodes)
- Maintain a frontier of partial plans under consideration
- Try to expand as few tree nodes as possible

General Tree Search

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

Important ideas: frontier, expansion, exploration strategy
Main question: which frontier nodes to explore?
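
A minimal executable rendering of this pseudocode, assuming the SearchProblem interface sketched earlier and a frontier object with push, pop, and is_empty methods (those method names are assumptions of this sketch):

    def tree_search(problem, frontier):
        frontier.push((problem.start, []))           # a node is (state, plan that reaches it)
        while True:
            if frontier.is_empty():
                return None                          # failure
            state, plan = frontier.pop()             # choose a leaf node and remove it
            if problem.is_goal(state):
                return plan                          # the corresponding solution
            for action in problem.actions(state):    # expand the chosen node
                child = problem.result(state, action)
                frontier.push((child, plan + [action]))

The choice of frontier data structure is the exploration strategy: a stack, a queue, and a priority queue give DFS, BFS, and UCS respectively.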

Depth-First Search

Depth-First Search
[Figure: example graph and the search tree expanded by DFS, deepest node first.]
Strategy: expand a deepest node first
Implementation: frontier is a LIFO stack
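
With the generic tree_search sketched above, DFS just supplies a LIFO frontier; this Stack class is an illustrative stand-in, not the course's util.Stack.

    class Stack:
        """LIFO frontier: pop returns the most recently pushed node, i.e. the deepest plan."""
        def __init__(self):
            self.items = []
        def push(self, item):
            self.items.append(item)
        def pop(self):
            return self.items.pop()
        def is_empty(self):
            return not self.items

    # depth-first search = tree_search(problem, Stack())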

Search Algorithm Properties

Search Algorithm Properties
- Complete: guaranteed to find a solution if one exists?
- Optimal: guaranteed to find the least-cost path?
- Time complexity?
- Space complexity?
Cartoon of a search tree: b is the branching factor, m is the maximum depth, with solutions at various depths. [Figure: tiers of the tree containing 1 node, b nodes, b^2 nodes, ..., b^m nodes over m tiers.]
Number of nodes in the entire tree: 1 + b + b^2 + ... + b^m = O(b^m)

Depth-First Search (DFS) Properties
- What nodes does DFS expand? Some left prefix of the tree; it could process the whole tree! If m is finite, it takes time O(b^m).
- How much space does the frontier take? Only has siblings on the path to the root, so O(bm).
- Is it complete? m could be infinite, so only if we prevent cycles (more later).
- Is it optimal? No, it finds the leftmost solution, regardless of depth or cost.

Breadth-First Search

Breadth-First Search
Strategy: expand a shallowest node first
Implementation: frontier is a FIFO queue
[Figure: example graph and the search tree expanded tier by tier (search tiers).]
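
Under the same tree_search sketch, BFS only swaps in a FIFO frontier; here collections.deque provides an O(1) queue (again an illustrative helper, not the course's code).

    from collections import deque

    class Queue:
        """FIFO frontier: pop returns the oldest pushed node, i.e. the shallowest plan."""
        def __init__(self):
            self.items = deque()
        def push(self, item):
            self.items.append(item)
        def pop(self):
            return self.items.popleft()
        def is_empty(self):
            return not self.items

    # breadth-first search = tree_search(problem, Queue())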

Breadth-First Search (BFS) Properties
- What nodes does BFS expand? It processes all nodes above the shallowest solution. If the depth of the shallowest solution is s, search takes time O(b^s).
- How much space does the frontier take? Has roughly the last tier, so O(b^s).
- Is it complete? s must be finite if a solution exists, so yes!
- Is it optimal? Only if costs are all 1 (more on costs later).

Quiz: DFS vs BFS

Quiz: DFS vs BFS When will BFS outperform DFS? When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]

Video of Demo Maze Water DFS/BFS (part 1)

Video of Demo Maze Water DFS/BFS (part 2)

Iterative Deepening
Idea: get DFS's space advantage with BFS's time / shallow-solution advantages.
- Run a DFS with depth limit 1. If no solution...
- Run a DFS with depth limit 2. If no solution...
- Run a DFS with depth limit 3. ...
Isn't that wastefully redundant? Generally most work happens in the lowest level searched, so it's not so bad!
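
A compact sketch of that loop, assuming the same problem interface as above; depth_limited_dfs and iterative_deepening are illustrative names, and the recursion stands in for an explicit stack.

    def depth_limited_dfs(problem, state, limit, plan=()):
        """DFS that refuses to look more than `limit` actions deep."""
        if problem.is_goal(state):
            return list(plan)
        if limit == 0:
            return None
        for action in problem.actions(state):
            child = problem.result(state, action)
            found = depth_limited_dfs(problem, child, limit - 1, plan + (action,))
            if found is not None:
                return found
        return None

    def iterative_deepening(problem, max_depth=50):
        """Run depth-limited DFS with limits 1, 2, 3, ... until a solution appears."""
        for limit in range(1, max_depth + 1):
            plan = depth_limited_dfs(problem, problem.start, limit)
            if plan is not None:
                return plan
        return None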

Finding a least-cost path
[Figure: example graph with weighted edges from START to GOAL.]
BFS finds the shortest path in terms of number of actions; it does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

Uniform Cost Search

Uniform Cost Search
Strategy: expand a cheapest node first
Implementation: frontier is a priority queue (priority: cumulative cost)
[Figure: weighted example graph and its search tree annotated with cumulative costs (cost contours).]
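
The same idea written out directly with Python's heapq, with the cumulative-cost bookkeeping inlined; it assumes the problem interface sketched earlier, and uniform_cost_search is an illustrative name.

    import heapq
    import itertools

    def uniform_cost_search(problem):
        """Expand the cheapest frontier node first; returns a least-cost plan or None."""
        tie = itertools.count()                            # tie-breaker so heapq never compares states
        frontier = [(0, next(tie), problem.start, [])]     # (cumulative cost, _, state, plan)
        while frontier:
            cost, _, state, plan = heapq.heappop(frontier)
            if problem.is_goal(state):
                return plan
            for action in problem.actions(state):
                child = problem.result(state, action)
                child_cost = cost + problem.step_cost(state, action, child)
                heapq.heappush(frontier, (child_cost, next(tie), child, plan + [action]))
        return None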

Uniform Cost Search (UCS) Properties
- What nodes does UCS expand? It processes all nodes with cost less than the cheapest solution! If that solution costs C* and arcs cost at least ε, then the effective depth is roughly C*/ε. Takes time O(b^(C*/ε)) (exponential in effective depth).
- How much space does the frontier take? Has roughly the last tier, so O(b^(C*/ε)).
- Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
- Is it optimal? Yes! (Proof next lecture via A*.)

Uniform Cost Issues
Remember: UCS explores increasing cost contours.
- The good: UCS is complete and optimal!
- The bad: explores options in every direction; no information about goal location.
[Figure: cost contours c1, c2, c3 expanding around the start, with the goal off to one side.]
We'll fix that soon!

Search Gone Wrong?