CS 5100: Foundations of Artificial Intelligence


CS 5100: Foundations of Artificial Intelligence. AI Planning. Prof. Amy Sliva, October 13, 2011

Outline
- Review: A* search
- AI planning
  - State-space search
  - Planning graphs
  - Situation calculus

Best-first search
- Use an evaluation function f(n) for each node: an estimate of desirability
- Includes a heuristic function to add more knowledge to the problem:
  h(n) = estimated cost of the cheapest path from the state at n to the goal
- Heuristics are problem-specific
- Expand the most desirable unexpanded node
- Implementation: order the nodes in the frontier in decreasing order of desirability

Heuristic for path-finding
Romania example: estimate the straight-line distance from each node to Bucharest (the goal).

[Figure: Romania road map with edge costs]

Straight-line distance to Bucharest:
Arad 366, Bucharest 0, Craiova 160, Drobeta 242, Eforie 161, Fagaras 176,
Giurgiu 77, Hirsova 151, Iasi 226, Lugoj 244, Mehadia 241, Neamt 234,
Oradea 380, Pitesti 100, Rimnicu Vilcea 193, Sibiu 253, Timisoara 329,
Urziceni 80, Vaslui 199, Zerind 374

A* Search
- Avoid expanding paths that are already expensive
- Evaluation function: f(n) = g(n) + h(n)
  - g(n) = cost of the path so far to reach n
  - h(n) = estimated cost from n to the goal
  - f(n) = estimated total cost of the path through n to the goal

A* search trace (Arad to Bucharest), showing f = g + h at each frontier node:
(a) Initial state: Arad 366=0+366
(b) After expanding Arad: Sibiu 393=140+253, Timisoara 447=118+329, Zerind 449=75+374
(c) After expanding Sibiu: Arad 646=280+366, Fagaras 415=239+176, Oradea 671=291+380, Rimnicu Vilcea 413=220+193
(d) After expanding Rimnicu Vilcea: Craiova 526=366+160, Pitesti 417=317+100, Sibiu 553=300+253
(e) After expanding Fagaras: Sibiu 591=338+253, Bucharest 450=450+0
(f) After expanding Pitesti: Bucharest 418=418+0, Craiova 615=455+160, Rimnicu Vilcea 607=414+193
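The trace above can be reproduced in code. Below is a minimal A* sketch in Python (my own, not from the lecture), using the straight-line-distance table and a fragment of the Romania road map covering the cities the trace touches:

```python
import heapq

# Road-map fragment (edge costs) and straight-line distances to Bucharest,
# copied from the lecture's Romania example.
GRAPH = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Sibiu": 80, "Craiova": 146, "Pitesti": 97},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138, "Drobeta": 120},
    "Timisoara": {"Arad": 118, "Lugoj": 111},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Lugoj": {"Timisoara": 111, "Mehadia": 70},
    "Mehadia": {"Lugoj": 70, "Drobeta": 75},
    "Drobeta": {"Mehadia": 75, "Craiova": 120},
    "Bucharest": {},
}
SLD = {"Arad": 366, "Bucharest": 0, "Craiova": 160, "Drobeta": 242,
       "Fagaras": 176, "Lugoj": 244, "Mehadia": 241, "Oradea": 380,
       "Pitesti": 100, "Rimnicu Vilcea": 193, "Sibiu": 253,
       "Timisoara": 329, "Zerind": 374}

def a_star(start, goal):
    """Always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(SLD[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node at least as cheaply
        best_g[node] = g
        for nbr, cost in GRAPH[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + SLD[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

path, cost = a_star("Arad", "Bucharest")
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

The expansion order matches the trace: Sibiu (393), Rimnicu Vilcea (413), Fagaras (415), Pitesti (417), and then Bucharest at cost 418.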

AI Planning
- Description of a planning problem:
  - A way to describe the world
  - An initial state
  - A goal description
  - A set of possible actions to change the world
- Result: generate a plan, a sequence of actions for changing the initial state into a goal state
- Note the similarity to state-space search; planning extends to more complex worlds

Applications of planning
- Mobile robots: the initial motivator of the field, and still being developed
- Simulated environments: goal-directed agents for training or games
- Web and grid environments: intelligent web bots, workflows on a computational grid
- Managing crisis situations: e.g., forest fires, oil spills, urban evacuations
- And many more: automation in factories, UAVs, playing bridge, military logistics, ...

Planning is representing change
- As an agent considers actions, it needs to:
  - Know how an action will change the world
  - Keep track of the history of world states (to avoid loops)
- Three planning approaches:
  - STRIPS with state-space search
  - STRIPS with a planning graph
  - Situation calculus

Assumptions in classical planning
- Discrete time
- Instantaneous actions
- Deterministic effects
- Omniscience
- Sole agent of change
- Goals of attainment (not avoidance); negated goals are possible with a slight extension to classical planning
- Uses a factored representation: the state of the world is a collection of variables

Language for planning representation
- STRIPS: a highly influential representation for actions
- The Planning Domain Definition Language (PDDL) is a STRIPS variant
- Instead of a transition function T: state x action -> new state, uses planning operators to achieve goals and subgoals
  - Preconditions: list of propositions that must be true
  - Effects:
    - Delete list: propositions that will become false
    - Add list: propositions that will become true
  - A subset of first-order logic
- More efficient at capturing known strategies than searching the state space

Example planning problem
- Initial state: at(home)
- Goal: at(home), have(beer), have(chips)
- Operators:
  - Buy(x):   Pre: at(store)   Add: have(x)
  - Go(x, y): Pre: at(x)   Add: at(y)   Del: at(x)
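As a sketch of how these operator schemas ground out, here is a minimal STRIPS-style encoding in Python (the `Operator` type and helper names are my own, not from the lecture). Each action carries precondition, add, and delete lists; applying an action computes (state - delete list) + add list:

```python
from collections import namedtuple

# A ground STRIPS operator: precondition, add, and delete lists,
# each a set of ground propositions.
Operator = namedtuple("Operator", ["name", "pre", "add", "delete"])

def applicable(op, state):
    """An operator applies when all its preconditions hold in the state."""
    return op.pre <= state

def apply_op(op, state):
    """Successor state = (state - delete list) + add list."""
    return (state - op.delete) | op.add

# Ground instances of the Buy(x) and Go(x, y) operators, starting, as in
# the classic version of this example, from at(home) alone.
ops = [
    Operator("Go(home, store)", {"at(home)"}, {"at(store)"}, {"at(home)"}),
    Operator("Buy(beer)", {"at(store)"}, {"have(beer)"}, set()),
    Operator("Buy(chips)", {"at(store)"}, {"have(chips)"}, set()),
    Operator("Go(store, home)", {"at(store)"}, {"at(home)"}, {"at(store)"}),
]

state = {"at(home)"}
for op in ops:
    assert applicable(op, state)
    state = apply_op(op, state)

print(sorted(state))  # ['at(home)', 'have(beer)', 'have(chips)']
```

After the four actions, the state satisfies the goal exactly.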

States of the world
- States are assertions of the agent's beliefs
- Given by fluents: ground (no variables), functionless logical atoms representing aspects of the world that change
- States contain partial descriptions: the set (or conjunction) of fluents that are true
- State space: a graph of world states and actions, e.g.:
  - S0: holds(at(home)), holds(color(door, red))
  - go(store) leads to S1: holds(at(store)), holds(color(door, red)) [no longer holds(at(home))]
  - paint(door, green) leads to S2: holds(at(home)), holds(color(door, green))

Frame problem (again!)
- I go from home to the store, producing a new situation S'. In S':
  - The store still sells chips
  - My age is still the same
  - Los Angeles is still the largest city in California
- How can we efficiently represent everything that doesn't change?
- STRIPS/PDDL provides a good solution for simple actions

Ramification problem
- I go from home to the store, producing a new situation S'. In S':
  - I am now in Marina del Rey
  - The number of people in the store went up by 1
  - The contents of my pockets are now in the store
- Do we want to say all that in the action definition?

Solutions to the frame and ramification problems
- Some facts are inferred within a world state, e.g., the number of people in the store
- All other facts, e.g., at(home), persist between states unless changed: they remain unless explicitly on the delete list
- A challenge for the knowledge engineer not to make mistakes!

Blocks world example
[Figure: start state (C on A, B on the table) and goal state (A on B on C)]
- Initial state: on(A, table), on(C, A), on(B, table), clear(B), clear(C)
- Goal: on(A, B), on(B, C)
- Naive planning algorithm output: [Put(C, table), Put(A, B) (goal 1 accomplished), Put(A, table), Put(B, C) (goal 2 accomplished)]. DONE!! But achieving goal 2 undid goal 1; this interleaving failure is the Sussman anomaly.

Planning as a search problem
- Nodes: world states; arcs: actions
- Solution: a path from the initial state to one that satisfies the goal
- The initial state is fully specified; there may be many goal states
- Total-order planning
- Search algorithms:
  - Progression: forward search
  - Regression: backward search

Progression
Search forward from the initial state to the goal, e.g.:
  {At(P1, A), At(P2, A)} via Fly(P1, A, B) gives {At(P1, B), At(P2, A)}
  {At(P1, A), At(P2, A)} via Fly(P2, A, B) gives {At(P1, A), At(P2, B)}

Progression algorithm
  ProgWS(state, goals, actions, path)
    if state satisfies goals then return path
    else a = choose(actions) s.t. preconditions(a) are satisfied in state
      if no such a then return failure
      else return ProgWS(apply(a, state), goals, actions, concatenate(path, a))
  First call: ProgWS(initState, G, Actions, ())

Progression example
- Initial state: on(A, table), on(C, A), on(B, table), clear(B), clear(C)
- Goal: on(A, B), on(B, C)
- Execution (each choice of a is a non-deterministic choice):
  1. P(I, G, BlocksWorldActions, ())
  2. P(S1, G, BWA, (move-C-from-A-to-table))
  3. P(S2, G, BWA, (move-C-from-A-to-table, move-B-from-table-to-C))
  4. P(S3, G, BWA, (move-C-from-A-to-table, move-B-from-table-to-C, move-A-from-table-to-B))
  S3 satisfies G: success!
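A concrete version of this run: the Python sketch below (my own encoding of the ground blocks-world moves, not the lecture's) replaces ProgWS's non-deterministic choose with a breadth-first search over states, and finds exactly the three-step plan shown above:

```python
from collections import deque

# Ground blocks-world actions:
# (name, preconditions, add list, delete list), all sets of ground fluents.
ACTIONS = [
    ("move-C-from-A-to-table",
     {"on(C,A)", "clear(C)"},
     {"on(C,table)", "clear(A)"},
     {"on(C,A)"}),
    ("move-B-from-table-to-C",
     {"on(B,table)", "clear(B)", "clear(C)"},
     {"on(B,C)"},
     {"on(B,table)", "clear(C)"}),
    ("move-A-from-table-to-B",
     {"on(A,table)", "clear(A)", "clear(B)"},
     {"on(A,B)"},
     {"on(A,table)", "clear(B)"}),
]

INIT = frozenset({"on(A,table)", "on(C,A)", "on(B,table)", "clear(B)", "clear(C)"})
GOAL = {"on(A,B)", "on(B,C)"}

def progression(init, goal):
    """Forward state-space search; BFS stands in for choose()."""
    frontier = deque([(init, [])])
    seen = {init}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # state satisfies goals: return path
            return plan
        for name, pre, add, delete in ACTIONS:
            if pre <= state:       # preconditions satisfied in state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                    # failure

plan = progression(INIT, GOAL)
print(plan)
# ['move-C-from-A-to-table', 'move-B-from-table-to-C', 'move-A-from-table-to-B']
```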

Regression
Start at the goal and apply actions backward until reaching the initial state; only consider actions that are relevant to the goal, e.g.:
  {At(P1, B), At(P2, B)} regresses through Fly(P1, A, B) to {At(P1, A), At(P2, B)}
  {At(P1, B), At(P2, B)} regresses through Fly(P2, A, B) to {At(P1, B), At(P2, A)}

Regression algorithm
  RegWS(state, current-goals, actions, path)
    if state satisfies current-goals then return path
    else a = choose(actions) s.t. some effect of a satisfies one of current-goals
      if no such a then return failure                 # unachievable
      if some effect of a contradicts some of current-goals then return failure   # inconsistent state
      CG = current-goals - effects(a) + preconditions(a)
      if current-goals is a subset of CG then return failure   # useless
      return RegWS(state, CG, actions, concatenate(a, path))
  First call: RegWS(initState, G, Actions, ())

Regression example
- Initial state: on(A, table), on(C, A), on(B, table), clear(B), clear(C)
- Goal: on(A, B), on(B, C)
- Execution (each choice of a is a non-deterministic choice):
  1. R(I, G, BlocksWorldActions, ())
  2. R(I, ((clear A) (on-table A) (clear B) (on B C)), BWA, (move-A-from-table-to-B))
  3. R(I, ((clear A) (on-table A) (clear B) (clear C) (on-table B)), BWA, (move-B-from-table-to-C, move-A-from-table-to-B))
  4. R(I, ((on-table A) (clear B) (clear C) (on-table B) (on C A)), BWA, (move-C-from-A-to-table, move-B-from-table-to-C, move-A-from-table-to-B))
  I satisfies the current-goals: success!
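The core of each RegWS step is the subgoal computation CG = current-goals - effects(a) + preconditions(a), guarded by the relevance and consistency checks. A small Python sketch of one such step (names are mine), checked against step 2 of the run above:

```python
def regress(goals, pre, add, delete):
    """Regress a goal set through one ground action.

    Returns the new current-goals, or None if the action is irrelevant
    (achieves no current goal) or inconsistent (would undo one)."""
    if not (add & goals):
        return None  # irrelevant: no effect satisfies a current goal
    if delete & goals:
        return None  # inconsistent: an effect contradicts a current goal
    return (goals - add) | pre

# Goal {on(A,B), on(B,C)} regressed through move-A-from-table-to-B:
new_goals = regress(
    goals={"on(A,B)", "on(B,C)"},
    pre={"on(A,table)", "clear(A)", "clear(B)"},
    add={"on(A,B)"},
    delete={"on(A,table)", "clear(B)"},
)
print(new_goals)
# {'on(B,C)', 'on(A,table)', 'clear(A)', 'clear(B)'} (set order may vary)
```

The result matches the current-goals in step 2: (clear A), (on-table A), (clear B), (on B C).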

Progression vs. regression
- Both algorithms are:
  - Sound: the resulting plan is valid
  - Complete: if a valid plan exists, they find one
- The non-deterministic choice makes each a search problem
  - Uninformed: BFS, DFS, iterative deepening, etc.
  - Heuristic: A*
- Regression is often faster because it is focused by the goals
- Progression often has more accurate heuristics because it uses the full state

Planning graphs
- A directed graph organized as levels
- Nodes: fluents and actions, in alternating levels of states and actions
  - All literals/actions that could occur at each level
  - A_i contains actions applicable in S_i; S_(i+1) contains the literals resulting from A_i
- Edges: preconditions, effects, and conflicts
- Provide more accurate heuristics: the first level where a literal appears estimates the difficulty of reaching it from the initial state
- Search the graph for a planning solution: the GRAPHPLAN algorithm

Persistence and mutex
- Persistence actions (no-ops): a literal persists if not negated; a no-op action carries it to the next graph level
- Mutual exclusion (mutex) links: edges recording conflicts between actions
  - Inconsistent effects: one action negates an effect of the other
  - Interference: an effect of one action negates a precondition of the other
  - Competing needs: a precondition of one action is mutually exclusive with a precondition of the other
- Mutex between literals (state level):
  - One literal is the negation of another at the same level
  - Inconsistent support: every pair of actions that could support the two literals is mutex

Example: have your cake and eat it too!
- Initial state: Have(Cake)
- Goal: Have(Cake), Eaten(Cake)
- Operators:
  - Eat(Cake):  Pre: Have(Cake)    Eff: ¬Have(Cake), Eaten(Cake)
  - Bake(Cake): Pre: ¬Have(Cake)   Eff: Have(Cake)

Example: have your cake and eat it too!
[Figure: planning graph for the cake problem]
  S0: Have(Cake), ¬Eaten(Cake)
  A0: Eat(Cake), plus no-ops
  S1: Have(Cake), ¬Have(Cake), Eaten(Cake), ¬Eaten(Cake)
  A1: Eat(Cake), Bake(Cake), plus no-ops
  S2: identical to S1
Add states/actions until the graph levels off: two consecutive levels are identical.

GRAPHPLAN algorithm
  function GRAPHPLAN(problem) returns solution or failure
    graph <- INITIAL-PLANNING-GRAPH(problem)
    goals <- GOALS[problem]
    loop do
      if goals are all non-mutex in the last level of graph then do
        solution <- EXTRACT-SOLUTION(graph, goals, LENGTH(graph))
        if solution != failure then return solution
      else if NO-SOLUTION-POSSIBLE(graph) then return failure
      graph <- EXPAND-GRAPH(graph, problem)
- Builds the planning graph until the goals show up as non-mutex
- Searches for a plan that solves the problem: a regression search from the goals

The situation calculus
- Planning using logic and resolution
  - An FOL representation of the planning problem
  - Resolution to find a solution
- Key idea: explicitly represent a snapshot of the world, called a situation
- Fluents: statements that are true or false in any given situation, e.g., "I am at home"
- Actions map situations to situations

Blocks world example
- A move action: Move(x, loc)
- Use of the Result function: Result(Move(x, loc), s) denotes the state resulting from doing the Move action in state s
- An axiom about moving:
    ∀x ∀loc ∀s [ At(x, loc, Result(Move(x, loc), s)) ]
  If you move some object to a location, then in the resulting state that object is at that location
- From At(B1, Table, S0), using the axiom:
    At(B1, Top(B2), Result(Move(B1, Top(B2)), S0))

Monkeys and bananas problem
The monkey-and-bananas problem is faced by a monkey standing under some bananas, which are hanging out of reach from the ceiling. There is a box in the corner of the room that will enable the monkey to reach the bananas if he climbs on it. Use the situation calculus to represent this problem and solve it using resolution theorem proving.

Representation of the monkey/bananas problem
- Fluents: At(x, loc, s), On(x, y, s), Reachable(x, BANANAS, s), Has(x, y, s)
- Other predicates: Moveable(x), Climbable(x), Can-move(x)
- Constants: BANANAS, MONKEY, BOX, S0, CORNER, UNDER-BANANAS
- Actions: Climb-on(x, y), Move(x, loc), Reach(x, y), Push(x, y, loc)

Monkey/bananas axioms
1. ∀x1, s1 [ Reachable(x1, BANANAS, s1) → Has(x1, BANANAS, Result(Reach(x1, BANANAS), s1)) ]
   If a person (or monkey!) can reach the bananas, then the result of reaching them is to have them.
2. ∀s2 [ At(BOX, UNDER-BANANAS, s2) ∧ On(MONKEY, BOX, s2) → Reachable(MONKEY, BANANAS, s2) ]
   If the box is under the bananas and the monkey is on the box, then the monkey can reach the bananas.
3. ∀x3, loc3, s3 [ Can-move(x3) → At(x3, loc3, Result(Move(x3, loc3), s3)) ]
   The result of moving to a location is to be at that location.

Monkey/bananas axioms (cont.)
4. ∀x4, y4, s4 [ ∃loc4 [ At(x4, loc4, s4) ∧ At(y4, loc4, s4) ] ∧ Climbable(y4) → On(x4, y4, Result(Climb-on(x4, y4), s4)) ]
   The result of climbing on an object (at your location) is to be on that object.
5. ∀x5, y5, loc5, s5 [ ∃loc0 [ At(x5, loc0, s5) ∧ At(y5, loc0, s5) ] ∧ Moveable(y5) → At(y5, loc5, Result(Push(x5, y5, loc5), s5)) ]
   The result of x pushing y to a location is that y is at that location.
6. (same antecedent as 5) → At(x6, loc6, Result(Push(x6, y6, loc6), s6))
   The result of x pushing y to a location is that x is at that location.

Monkey/bananas problem (cont.)
- Initial state (S0):
  F1. Moveable(BOX)
  F2. Climbable(BOX)
  F3. Can-move(MONKEY)
  F4. At(BOX, CORNER, S0)
  F5. At(MONKEY, UNDER-BANANAS, S0)
- Goal: ∃s Has(MONKEY, BANANAS, s)
- How to solve? Convert to clause form, then apply resolution to prove
    Has(MONKEY, BANANAS, Result(Reach(... ), Result(..)..), S0)
  which gives the plan in reverse order
- Don't forget to standardize variables!
- For the proof we need two additional frame axioms

Frame axioms for the monkey/bananas problem
7. ∀x, y, loc, s [ At(x, loc, s) → At(x, loc, Result(Move(y, loc), s)) ]
   The location of an object does not change as a result of someone moving to the same location.
8. ∀x, y, loc, s [ At(x, loc, s) → At(x, loc, Result(Climb-on(y, x), s)) ]
   The location of an object does not change as a result of someone climbing on it.

Limitation of the situation calculus
- The frame problem (it's back again!)
- I go from home to the store, producing a new situation S'. In S':
  - The store still sells chips
  - My age is still the same
  - Los Angeles is still the largest city in California
- How can we efficiently represent everything that doesn't change?
- Solution: successor state axioms

Successor state axioms
- Normally, things stay true from one state to the next unless an action changes them:
    At(p, loc, Result(a, s)) iff a = Go(p, loc) or [At(p, loc, s) and a != Go(p, x) for any other location x]
- We need one of these axioms for every fluent
- Now we can use theorem proving (or possibly backward chaining) to deduce a plan
- Still not very practical; effective heuristics for the situation calculus are needed
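In standard notation, the successor state axiom for At can be written as follows (a reconstruction consistent with the slide's informal statement, not a verbatim quote):

```latex
\forall p,\, loc,\, a,\, s \quad
At(p, loc, \mathrm{Result}(a, s)) \;\Leftrightarrow\;
  a = Go(p, loc)
  \;\lor\;
  \bigl[\, At(p, loc, s) \,\land\,
    \neg \exists loc' \,\bigl( a = Go(p, loc') \land loc' \neq loc \bigr) \bigr]
```

The fluent becomes true if the action just achieved it, and stays true if it already held and the action did not move p elsewhere; everything not mentioned persists by default.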