Hierarchical Task Networks


Planning to perform tasks rather than to achieve goals

Literature
- Malik Ghallab, Dana Nau, and Paolo Traverso. Automated Planning: Theory and Practice, chapter 11. Elsevier/Morgan Kaufmann, 2004.
- E. Sacerdoti. The nonlinear nature of plans. In: Proc. IJCAI, pages 206-214, 1975.
- A. Tate. Generating project networks. In: Proc. IJCAI, pages 888-893, 1977.

HTN Planning
- objective: perform a given set of tasks
- input includes:
  - a set of operators
  - a set of methods: recipes for decomposing a complex task into more primitive subtasks
- planning process: decompose non-primitive tasks recursively until primitive tasks are reached

Overview
- Simple Task Networks
- HTN Planning
- Extensions
- State-Variable Representation

STN Planning
- STN: Simple Task Network
- what remains: terms, literals, operators, actions, state transition function, plans
- what's new:
  - tasks to be performed
  - methods describing ways in which tasks can be performed
  - organized collections of tasks called task networks

DWR Stack Moving Example
- task: move the stack of containers from pallet p1 to pallet p3 in a way that preserves the order
- (figure: a crane and the containers c1, c2, c3 stacked on pallet p1; pallets p2 and p3 are empty)
- (informal) methods:
  - move via intermediate: move the stack to an intermediate pile (reversing the order) and then to the final destination (reversing the order again)
  - move stack: repeatedly move the topmost container until the stack is empty
  - move topmost: a take action followed by a put action

Tasks
- task symbols: T_S = {t1, …, tn}
  - operator names ⊆ T_S: primitive tasks
  - non-primitive task symbols: T_S minus the operator names
- task: ti(r1, …, rk)
  - ti: task symbol (primitive or non-primitive)
  - r1, …, rk: terms, the objects manipulated by the task
  - ground task: r1, …, rk are ground
- an action a accomplishes a ground primitive task ti(r1, …, rk) in state s iff name(a) = ti and a is applicable in s

Simple Task Networks
A simple task network w is an acyclic directed graph (U, E) in which the node set U = {t1, …, tn} is a set of tasks and the edges in E define a partial ordering of the tasks in U.
A task network w is ground/primitive if all tasks t_u ∈ U are ground/primitive; otherwise it is unground/non-primitive.

Totally Ordered STNs
- ordering: t_u ≺ t_v in w = (U, E) iff there is a path from t_u to t_v
- an STN w is totally ordered iff E defines a total order on U
  - w is then a sequence of tasks: ⟨t1, …, tn⟩
- Let w = ⟨t1, …, tn⟩ be a totally ordered, ground, primitive STN. Then the plan π(w) is defined as:
  π(w) = ⟨a1, …, an⟩ where ai = ti, 1 ≤ i ≤ n.

STNs: DWR Example
- tasks:
  - t1 = take(crane,loc,c1,c2,p1): primitive, ground
  - t2 = take(crane,loc,c2,c3,p1): primitive, ground
  - t3 = move-stack(p1,q): non-primitive, unground
- task networks:
  - w1 = ({t1,t2,t3}, {(t1,t2), (t1,t3)}): partially ordered, non-primitive, unground
  - w2 = ({t1,t2}, {(t1,t2)}): totally ordered (w2 = ⟨t1,t2⟩), ground, primitive
    π(w2) = ⟨take(crane,loc,c1,c2,p1), take(crane,loc,c2,c3,p1)⟩
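To make the definition of π(w) concrete, here is a minimal Python sketch (an illustration, not part of the slides; representing tasks as (symbol, args) tuples and a network as a task map U plus a set E of precedence pairs is an assumption):

# Minimal illustrative sketch (assumed encoding, not the slides' notation):
# U maps task ids to ground primitive tasks, E is a set of (u, v) pairs
# meaning that task u must precede task v.

def plan_of_totally_ordered_stn(U, E):
    """Return pi(w) = <a1, ..., an> if w = (U, E) is totally ordered, else None."""
    remaining, plan = set(U), []
    while remaining:
        # tasks with no predecessors among the remaining tasks
        frontier = [t for t in remaining
                    if not any(v == t and u in remaining for (u, v) in E)]
        if len(frontier) != 1:        # zero or several choices: not a total order
            return None
        t = frontier[0]
        plan.append(U[t])             # ai = ti for ground primitive tasks
        remaining.remove(t)
    return plan

# DWR example w2 = ({t1, t2}, {(t1, t2)})
U = {"t1": ("take", ("crane", "loc", "c1", "c2", "p1")),
     "t2": ("take", ("crane", "loc", "c2", "c3", "p1"))}
E = {("t1", "t2")}
print(plan_of_totally_ordered_stn(U, E))

For w1 above, which is only partially ordered, the sketch returns None: once t1 is removed, both t2 and t3 are unordered with respect to each other.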

STN Methods
Let M_S be a set of method symbols. An STN method is a 4-tuple m = (name(m), task(m), precond(m), network(m)) where:
- name(m): the name of the method, a syntactic expression of the form n(x1, …, xk)
  - n ∈ M_S: a unique method symbol
  - x1, …, xk: all the variable symbols that occur in m
- task(m): a non-primitive task
- precond(m): a set of literals called the method's preconditions
- network(m): a task network (U, E) containing the set of subtasks U of m

STN Methods: DWR Example (1)
move topmost: a take action followed by a put action
take-and-put(c,k,l,p_o,p_d,x_o,x_d)
- task: move-topmost(p_o,p_d)
- precond: top(c,p_o), on(c,x_o), attached(p_o,l), belong(k,l), attached(p_d,l), top(x_d,p_d)
- subtasks: ⟨take(k,l,c,x_o,p_o), put(k,l,c,x_d,p_d)⟩

STN Methods: DWR Example (2)
move stack: repeatedly move the topmost container until the stack is empty
recursive-move(p_o,p_d,c,x_o)
- task: move-stack(p_o,p_d)
- precond: top(c,p_o), on(c,x_o)
- subtasks: ⟨move-topmost(p_o,p_d), move-stack(p_o,p_d)⟩
no-move(p_o,p_d)
- task: move-stack(p_o,p_d)
- precond: top(pallet,p_o)
- subtasks: ⟨⟩

STN Methods: DWR Example (3)
move via intermediate: move the stack to an intermediate pile (reversing the order) and then to the final destination (reversing the order again)
move-stack-twice(p_o,p_i,p_d)
- task: move-ordered-stack(p_o,p_d)
- precond: -
- subtasks: ⟨move-stack(p_o,p_i), move-stack(p_i,p_d)⟩

Applicability and Relevance
- A method instance m is applicable in a state s if precond⁺(m) ⊆ s and precond⁻(m) ∩ s = ∅.
- A method instance m is relevant for a task t if there is a substitution σ such that σ(t) = task(m).
- The decomposition of a task t by a relevant method m under σ is
  δ(t,m,σ) = σ(network(m)), or δ(t,m,σ) = σ(⟨subtasks(m)⟩) if m is totally ordered.

Method Applicability and Relevance: DWR Example
- task t = move-stack(p1,q)
- state s as shown in the DWR figure (c1, c2, c3 stacked on p1; p2 and p3 empty)
- method instance m_i = recursive-move(p1,p2,c1,c2)
  - m_i is applicable in s
  - m_i is relevant for t under σ = {q ← p2}
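The two conditions above are easy to state operationally. The following illustrative Python sketch (set-based states and a '?'-prefix convention for variables are assumptions, not the slides' notation) checks applicability and computes the substitution σ that makes a method instance relevant for a task:

# Illustrative sketch (assumed encoding): a state is a set of ground atoms,
# a method instance carries its instantiated task and its positive/negative
# preconditions, and task arguments starting with '?' are variables.

def applicable(method, state):
    """precond+(m) is a subset of s and precond-(m) is disjoint from s."""
    return method["precond_pos"] <= state and not (method["precond_neg"] & state)

def relevant(method, task):
    """Return a substitution sigma with sigma(task) = task(m), or None."""
    m_name, m_args = method["task"]
    t_name, t_args = task
    if m_name != t_name or len(m_args) != len(t_args):
        return None
    sigma = {}
    for t_arg, m_arg in zip(t_args, m_args):
        if t_arg.startswith("?"):                     # variable in the task
            if sigma.setdefault(t_arg, m_arg) != m_arg:
                return None
        elif t_arg != m_arg:                          # constants must match
            return None
    return sigma

# DWR example: m_i = recursive-move(p1,p2,c1,c2), t = move-stack(p1, ?q)
m_i = {"task": ("move-stack", ("p1", "p2")),
       "precond_pos": {("top", "c1", "p1"), ("on", "c1", "c2")},
       "precond_neg": set()}
state = {("top", "c1", "p1"), ("on", "c1", "c2"), ("on", "c2", "c3")}
print(applicable(m_i, state), relevant(m_i, ("move-stack", ("p1", "?q"))))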

Method Decomposition: DWR Example
δ(t,m_i,σ) = ⟨move-topmost(p1,p2), move-stack(p1,p2)⟩
(figure: move-stack(p1,q) is decomposed by recursive-move(p1,p2,c1,c2) under {q ← p2} into move-topmost(p1,p2) followed by move-stack(p1,p2))

Decomposition of Tasks in STNs
Let w = (U,E) be an STN, t ∈ U a task with no predecessors in w, and m a method that is relevant for t under some substitution σ with network(m) = (U_m,E_m). The decomposition of t in w by m under σ is the STN δ(w,t,m,σ) where:
- t is replaced in U by σ(U_m), and
- edges in E involving t are replaced by edges to the appropriate nodes in σ(U_m).

STN Planning Domains
An STN planning domain is a pair D = (O,M) where:
- O is a set of STRIPS planning operators, and
- M is a set of STN methods.
D is a total-order STN planning domain if every m ∈ M is totally ordered.

STN Planning Problems
An STN planning problem is a 4-tuple P = (s_i,w_i,O,M) where:
- s_i is the initial state (a set of ground atoms),
- w_i is a task network called the initial task network, and
- D = (O,M) is an STN planning domain.
P is a total-order STN planning problem if w_i and D are both totally ordered.

STN Solutions
A plan π = ⟨a1, …, an⟩ is a solution for an STN planning problem P = (s_i,w_i,O,M) if:
- w_i is empty and π is empty; or
- there is a primitive task t ∈ w_i that has no predecessors in w_i, a1 = t is applicable in s_i, and π' = ⟨a2, …, an⟩ is a solution for P' = (γ(s_i,a1), w_i − {t}, O, M); or
- there is a non-primitive task t ∈ w_i that has no predecessors in w_i, some m ∈ M is relevant for t (i.e. σ(t) = task(m)) and applicable in s_i, and π is a solution for P' = (s_i, δ(w_i,t,m,σ), O, M).

Decomposition Tree: DWR Example
(decomposition tree, shown indented; each task is followed by the method that decomposes it)
move-stack(p1,q) → recursive-move(p1,p2,c1,c2)
  move-topmost(p1,p2) → take-and-put(…)
    take(crane,loc,c1,c2,p1)
    put(crane,loc,c1,pallet,p2)
  move-stack(p1,p2) → recursive-move(p1,p2,c2,c3)
    move-topmost(p1,p2) → take-and-put(…)
      take(crane,loc,c2,c3,p1)
      put(crane,loc,c2,c1,p2)
    move-stack(p1,p2) → recursive-move(p1,p2,c3,pallet)
      move-topmost(p1,p2) → take-and-put(…)
        take(crane,loc,c3,pallet,p1)
        put(crane,loc,c3,c2,p2)
      move-stack(p1,p2) → no-move(p1,p2)

Ground-TFD: Pseudocode

function Ground-TFD(s, ⟨t1, …, tk⟩, O, M)
  if k = 0 then return ⟨⟩
  if t1.isPrimitive() then
    actions ← {(a,σ) | a = σ(t1) and a is applicable in s}
    if actions.isEmpty() then return failure
    (a,σ) ← actions.chooseOne()
    plan ← Ground-TFD(γ(s,a), σ(⟨t2, …, tk⟩), O, M)
    if plan = failure then return failure
    else return ⟨a⟩ ⋅ plan
  else
    methods ← {(m,σ) | m is relevant for σ(t1) and m is applicable in s}
    if methods.isEmpty() then return failure
    (m,σ) ← methods.chooseOne()
    plan ← subtasks(m) ⋅ σ(⟨t2, …, tk⟩)
    return Ground-TFD(s, plan, O, M)

TFD vs. Forward/Backward Search
- choosing actions:
  - TFD considers only applicable actions, like forward search
  - TFD considers only relevant actions, like backward search
- plan generation: TFD generates actions in execution order; the current world state is always known
- lifting: Ground-TFD can be generalized to Lifted-TFD, with the same advantages as lifted backward search
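The pseudocode's nondeterministic chooseOne can be realised by backtracking over the candidate actions and methods. The following is a compact, runnable Python sketch in that spirit (the ground, variable-free encoding of operators and methods and the tiny example domain are assumptions for illustration, not the DWR domain of the slides):

# Compact, runnable sketch of Ground-TFD with backtracking in place of the
# nondeterministic chooseOne (assumptions: everything is ground; operators
# map a task name to (precond, add, delete) sets of atoms; methods map a
# task name to a list of (precond, subtask list) alternatives).

def ground_tfd(state, tasks, operators, methods):
    if not tasks:
        return []                                     # k = 0: empty plan
    t, rest = tasks[0], tasks[1:]
    if t in operators:                                # primitive task
        pre, add, dele = operators[t]
        if not pre <= state:
            return None                               # not applicable: failure
        plan = ground_tfd((state - dele) | add, rest, operators, methods)
        return None if plan is None else [t] + plan
    for pre, subtasks in methods.get(t, []):          # non-primitive task
        if pre <= state:                              # method applicable in s
            plan = ground_tfd(state, subtasks + rest, operators, methods)
            if plan is not None:
                return plan
    return None

# Tiny illustrative domain in the spirit of the DWR stack-moving example
operators = {"move-c1": ({"c1-on-p1"}, {"c1-on-p2"}, {"c1-on-p1"}),
             "move-c2": ({"c2-on-p1"}, {"c2-on-p2"}, {"c2-on-p1"})}
methods = {"move-stack": [({"c1-on-p1"}, ["move-c1", "move-stack"]),
                          ({"c2-on-p1"}, ["move-c2", "move-stack"]),
                          (set(), [])]}               # empty stack: no subtasks
print(ground_tfd({"c1-on-p1", "c2-on-p1"}, ["move-stack"], operators, methods))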

Ground-PFD: Pseudocode

function Ground-PFD(s, w, O, M)
  if w.U = ∅ then return ⟨⟩
  task ← {t ∈ U | t has no predecessors in w.E}.chooseOne()
  if task.isPrimitive() then
    actions ← {(a,σ) | a = σ(task) and a is applicable in s}
    if actions.isEmpty() then return failure
    (a,σ) ← actions.chooseOne()
    plan ← Ground-PFD(γ(s,a), σ(w − {task}), O, M)
    if plan = failure then return failure
    else return ⟨a⟩ ⋅ plan
  else
    methods ← {(m,σ) | m is relevant for σ(task) and m is applicable in s}
    if methods.isEmpty() then return failure
    (m,σ) ← methods.chooseOne()
    return Ground-PFD(s, δ(w,task,m,σ), O, M)

Overview
- Simple Task Networks
- HTN Planning
- Extensions
- State-Variable Representation
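The only new ingredient compared to TFD is the choice of a task that has no predecessors in the partially ordered network. A small illustrative helper (the (U, E) dictionary-and-set encoding is an assumption):

# Illustrative helper (assumed encoding: w = (U, E) with U a dict of task
# ids and E a set of precedence pairs): the tasks Ground-PFD may select are
# exactly those without predecessors in the network.

def frontier(U, E):
    """Tasks of w = (U, E) that have no predecessors."""
    return [t for t in U if not any(v == t for (_, v) in E)]

U = {"t1": ("move-topmost", ("p1", "p2")), "t2": ("move-stack", ("p1", "p2"))}
E = {("t1", "t2")}
print(frontier(U, E))   # ['t1']: only t1 may be executed or decomposed next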

Preconditions in STN Planning
- STN planning constraints:
  - ordering constraints: maintained in the network
  - preconditions: enforced by the planning procedure
    - must know the state to test for applicability
    - must perform forward search
- HTN planning: additional bookkeeping maintains general constraints explicitly

First and Last Network Nodes
Let π = ⟨a1, …, an⟩ be a solution for w, U' ⊆ U a set of tasks in w, and A(U') the subset of actions in π such that each ai ∈ A(U') is a descendant of some t ∈ U' in the decomposition tree. Then we define:
- first(U',π) = the action ai ∈ A(U') that occurs first in π, and
- last(U',π) = the action ai ∈ A(U') that occurs last in π.

Hierarchical Task Networks
A (hierarchical) task network is a pair w = (U,C), where U is a set of tasks and C is a set of constraints of the following types:
- t1 ≺ t2: precedence constraint between tasks; satisfied if in every solution π last({t1},π) occurs before first({t2},π);
- before(U',l): satisfied if in every solution π the literal l holds in the state just before first(U',π);
- after(U',l): satisfied if in every solution π the literal l holds in the state just after last(U',π);
- between(U',U'',l): satisfied if in every solution π the literal l holds in every state after last(U',π) and before first(U'',π).

HTN Methods
Let M_S be a set of method symbols. An HTN method is a 4-tuple m = (name(m), task(m), subtasks(m), constr(m)) where:
- name(m): the name of the method, a syntactic expression of the form n(x1, …, xk)
  - n ∈ M_S: a unique method symbol
  - x1, …, xk: all the variable symbols that occur in m
- task(m): a non-primitive task
- (subtasks(m), constr(m)): a task network
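Given the sequence of states visited by a plan, the three state constraints can be checked directly. A small illustrative Python sketch (the list-of-states encoding, the 0-based action indices, and the set-based γ in the usage example are assumptions):

# Illustrative sketch (assumed encoding: a plan is a list of actions, a state
# is a set of ground atoms, and gamma is supplied by the caller; action
# positions are 0-based, so states[i] is the state just before the (i+1)-th
# action and states[i+1] the state just after it).

def states_along(plan, s0, gamma):
    """s0, gamma(s0,a1), gamma(gamma(s0,a1),a2), ..."""
    states = [s0]
    for a in plan:
        states.append(gamma(states[-1], a))
    return states

def holds_before(states, i, atom):      # before(U', l), first(U', pi) at index i
    return atom in states[i]

def holds_after(states, i, atom):       # after(U', l), last(U', pi) at index i
    return atom in states[i + 1]

def holds_between(states, i, j, atom):  # between(U', U'', l): last at i, first at j
    return all(atom in s for s in states[i + 1: j + 1])

# tiny usage with a STRIPS-style gamma
gamma = lambda s, a: (s - a["del"]) | a["add"]
a1 = {"add": {"holding-c1"}, "del": {"top-c1-p1"}}
states = states_along([a1], {"top-c1-p1"}, gamma)
print(holds_before(states, 0, "top-c1-p1"), holds_after(states, 0, "holding-c1"))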

HTN Methods: DWR Example (1)
move topmost: a take action followed by a put action
take-and-put(c,k,l,p_o,p_d,x_o,x_d)
- task: move-topmost(p_o,p_d)
- network:
  - subtasks: {t1 = take(k,l,c,x_o,p_o), t2 = put(k,l,c,x_d,p_d)}
  - constraints: {t1 ≺ t2, before({t1}, top(c,p_o)), before({t1}, on(c,x_o)), before({t1}, attached(p_o,l)), before({t1}, belong(k,l)), before({t2}, attached(p_d,l)), before({t2}, top(x_d,p_d))}

HTN Methods: DWR Example (2)
move stack: repeatedly move the topmost container until the stack is empty
recursive-move(p_o,p_d,c,x_o)
- task: move-stack(p_o,p_d)
- network:
  - subtasks: {t1 = move-topmost(p_o,p_d), t2 = move-stack(p_o,p_d)}
  - constraints: {t1 ≺ t2, before({t1}, top(c,p_o)), before({t1}, on(c,x_o))}
move-one(p_o,p_d,c)
- task: move-stack(p_o,p_d)
- network:
  - subtasks: {t1 = move-topmost(p_o,p_d)}
  - constraints: {before({t1}, top(c,p_o)), before({t1}, on(c,pallet))}

HTN Decomposition
Let w = (U,C) be a task network, t ∈ U a task, and m a method such that σ(task(m)) = t. Then the decomposition of t in w using m under σ is defined as:
δ(w,t,m,σ) = ((U − {t}) ∪ σ(subtasks(m)), C' ∪ σ(constr(m)))
where C' is obtained from C as follows:
- for every precedence constraint in C that contains t, replace it with precedence constraints containing the tasks in σ(subtasks(m)) instead of t; and
- for every before-, after-, or between-constraint over a task set U' containing t, replace U' with (U' − {t}) ∪ σ(subtasks(m)).

HTN Decomposition: Example
network: w = ({t1 = move-stack(p1,q)}, {})
δ(w, t1, recursive-move(p_o,p_d,c,x_o), {p_o ← p1, p_d ← q}) = w' =
  ({t2 = move-topmost(p1,q), t3 = move-stack(p1,q)},
   {t2 ≺ t3, before({t2}, top(c,p1)), before({t2}, on(c,x_o))})
δ(w', t2, take-and-put(c,k,l,p_o,p_d,x_o,x_d), {p_o ← p1, p_d ← q}) =
  ({t3 = move-stack(p1,q), t4 = take(k,l,c,x_o,p1), t5 = put(k,l,c,x_d,q)},
   {t4 ≺ t3, t5 ≺ t3, before({t4,t5}, top(c,p1)), before({t4,t5}, on(c,x_o))}
   ∪ {t4 ≺ t5, before({t4}, top(c,p1)), before({t4}, on(c,x_o)), before({t4}, attached(p1,l)), before({t4}, belong(k,l)), before({t5}, attached(q,l)), before({t5}, top(x_d,q))})
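The constraint bookkeeping of δ can be written out as a short function. The sketch below is illustrative only (the tuple encoding of constraints is an assumption, and between-constraints, which range over two task sets, would be rewritten analogously):

# Illustrative sketch of the constraint rewriting in delta((U,C), t, m, sigma)
# (assumed encoding: precedence constraints as ("prec", u, v) and before/after
# constraints as (kind, frozenset_of_tasks, literal); the method's subtasks
# and constraints are assumed to be already instantiated by sigma).

def decompose(U, C, t, subtasks, sub_constraints):
    new_U = (U - {t}) | subtasks
    new_C = list(sub_constraints)
    for c in C:
        if c[0] == "prec":
            _, u, v = c
            if t not in (u, v):
                new_C.append(c)
            else:                      # replace t by every new subtask
                for s in subtasks:
                    new_C.append(("prec", s if u == t else u, s if v == t else v))
        else:
            kind, tasks, lit = c
            if t in tasks:
                tasks = (tasks - {t}) | subtasks
            new_C.append((kind, frozenset(tasks), lit))
    return new_U, new_C

# first decomposition step of the example above, with the constraints of
# recursive-move already instantiated
print(decompose({"t1"}, [], "t1", {"t2", "t3"},
                [("prec", "t2", "t3"), ("before", frozenset({"t2"}), "top(c,p1)")]))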

HTN Planning Domains and Problems
An HTN planning domain is a pair D = (O,M) where:
- O is a set of STRIPS planning operators, and
- M is a set of HTN methods.
An HTN planning problem is a 4-tuple P = (s_i,w_i,O,M) where:
- s_i is the initial state (a set of ground atoms),
- w_i is a task network called the initial task network, and
- D = (O,M) is an HTN planning domain.

Solutions for Primitive HTNs
Let (U,C) be a primitive HTN. A plan π = ⟨a1, …, an⟩ is a solution for P = (s_i,(U,C),O,M) if there is a ground instance (σ(U),σ(C)) of (U,C) and a total ordering ⟨t1, …, tn⟩ of the tasks in σ(U) such that:
- for i = 1, …, n: name(ai) = ti;
- π is executable in s_i, i.e. γ(s_i,π) is defined;
- the ordering ⟨t1, …, tn⟩ respects the ordering constraints in σ(C);
- for every constraint before(U',l) in σ(C) with tk = first(U',π): l must hold in γ(s_i, ⟨a1, …, ak−1⟩);
- for every constraint after(U',l) in σ(C) with tk = last(U',π): l must hold in γ(s_i, ⟨a1, …, ak⟩);
- for every constraint between(U',U'',l) in σ(C) with tk = last(U',π) and tm = first(U'',π): l must hold in every state γ(s_i, ⟨a1, …, aj⟩), j ∈ {k, …, m−1}.

Solutions for Non-Primitive HTNs
Let w = (U,C) be a non-primitive HTN. A plan π = ⟨a1, …, an⟩ is a solution for P = (s_i,w,O,M) if there is a sequence of task decompositions that can be applied to w such that:
- the result of the decompositions is a primitive HTN w'; and
- π is a solution for P' = (s_i,w',O,M).

Abstract-HTN: Pseudocode

function Abstract-HTN(s, U, C, O, M)
  if (U,C).isInconsistent() then return failure
  if U.isPrimitive() then
    return extractSolution(s, U, C, O)
  else
    return decomposeTask(s, U, C, O, M)

extractSolution: Pseudocode

function extractSolution(s, U, C, O)
  ⟨t1, …, tn⟩ ← U.chooseSequence(C)
  ⟨a1, …, an⟩ ← ⟨t1, …, tn⟩.chooseGrounding(s, C, O)
  if ⟨a1, …, an⟩.satisfies(C) then return ⟨a1, …, an⟩
  return failure

decomposeTask: Pseudocode

function decomposeTask(s, U, C, O, M)
  t ← U.nonPrimitives().selectOne()
  methods ← {(m,σ) | m ∈ M and σ(task(m)) = σ(t)}
  if methods.isEmpty() then return failure
  (m,σ) ← methods.chooseOne()
  (U',C') ← δ((U,C), t, m, σ)
  (U',C') ← (U',C').applyCritic()
  return Abstract-HTN(s, U', C', O, M)

HTN vs. STRIPS Planning
- HTN planning is a generalization of STN planning, and STN problems can encode undecidable problems while STRIPS cannot: the STN/HTN formalism is strictly more expressive.
- A non-recursive STN problem can be translated into an equivalent STRIPS problem, but the translation can be exponentially larger in the worst case.
- Regular STN planning is equivalent to STRIPS planning.

Overview
- Simple Task Networks
- HTN Planning
- Extensions
- State-Variable Representation

Functions in Terms
- allow function terms in world states and method constraints
- ground versions of all planning algorithms may fail: potentially infinite number of ground instances of a given term
- lifted algorithms can be applied with a most general unifier
  - least-commitment approach: instantiates only as far as necessary
- plan existence may not be decidable

Axiomatic Inference
- use a theorem prover to infer derived knowledge within world states
  - undecidability of first-order logic in general
- idea: use a restricted (decidable) subset of first-order logic: Horn clauses
  - only positive preconditions can be derived
  - a precondition p is satisfied in state s iff p can be proved in s
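For the Horn-clause restriction, proving a positive precondition amounts to forward chaining over the state's atoms. A minimal propositional Python sketch (the rule encoding and the example rule are assumptions, not part of the slides):

# Minimal propositional sketch of proving a derived precondition with Horn
# clauses by forward chaining (assumed encoding: a rule is (body_set, head);
# the state supplies the base facts; only positive atoms are handled).

def proves(state, rules, goal):
    """True iff goal can be derived from the atoms in state via the rules."""
    known = set(state)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and body <= known:
                known.add(head)
                changed = True
    return goal in known

# hypothetical derived predicate: a container sitting on the pallet is at the bottom
rules = [({"on-c3-pallet"}, "bottom-c3")]
print(proves({"on-c3-pallet", "on-c2-c3"}, rules, "bottom-c3"))   # True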

Attached Procedures
- associate predicates with procedures
- modify the planning algorithm: evaluate a precondition by
  - calling the procedure attached to the predicate symbol, if there is such a procedure
  - testing against the world state (set relation, theorem prover) otherwise
- soundness and completeness: depend on the procedures

High-Level Effects
- allow the user to declare effects for non-primitive methods
- aim:
  - establish preconditions
  - prune partial plans if high-level effects threaten preconditions
- increases efficiency
- problem: semantics

Other Extensions
- other constraints:
  - time constraints
  - resource constraints
- extended goals:
  - states to be avoided
  - required intermediate states
  - limited plan length
  - visiting states multiple times

Overview
- Simple Task Networks
- HTN Planning
- Extensions
- State-Variable Representation

State Variables
- some relations are functions
- example: at(r1,loc1) relates robot r1 to location loc1 in some state
  - its truth value changes from state to state
  - it will be true for exactly one location l in each state
- idea: represent such relations using state-variable functions mapping states into objects
- example functional representation: rloc: robots × S → locations

States in the State-Variable Representation
Let X be a set of state-variable functions.
- A k-ary state variable is an expression of the form x(v1, …, vk) where x ∈ X is a state-variable function and each vi is either an object constant or an object variable.
- A state-variable state description is a set of expressions of the form x_s = c where x_s is a ground state variable x(v1, …, vk) and c is an object constant.
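A state-variable state description maps each ground state variable to exactly one value, so a dictionary is a natural encoding. A small illustrative sketch (the tuple keys are an assumption):

# Illustrative sketch (assumed encoding): a state-variable state description
# as a dict whose keys are ground state variables and whose values are the
# objects they are mapped to in the current state.

state = {("rloc", "r1"): "loc1",
         ("rload", "r1"): "nil",
         ("cpos", "c1"): "loc1",
         ("cpos", "c2"): "loc2",
         ("cpos", "c3"): "loc2"}

# each state variable has exactly one value per state, so "where is r1?"
# is a lookup rather than a truth-value test over many at(r1, l) atoms
print(state[("rloc", "r1")])   # loc1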

DWR Example: State-Variable State Descriptions
simplified: no cranes, no piles
- state-variable functions:
  - rloc: robots × S → locations
  - rload: robots × S → containers ∪ {nil}
  - cpos: containers × S → locations ∪ robots
- sample state-variable state descriptions:
  - {rloc(r1)=loc1, rload(r1)=nil, cpos(c1)=loc1, cpos(c2)=loc2, cpos(c3)=loc2}
  - {rloc(r1)=loc1, rload(r1)=c1, cpos(c1)=r1, cpos(c2)=loc2, cpos(c3)=loc2}

Operators in the State-Variable Representation
A state-variable planning operator is a triple (name(o), precond(o), effects(o)) where:
- name(o) is a syntactic expression of the form n(x1, …, xk) where n is a (unique) symbol and x1, …, xk are all the object variables that appear in o,
- precond(o) is the union of a state-variable state description and some rigid relations, and
- effects(o) is a set of expressions of the form x_s ← v_k+1 where x_s is a state variable x(v1, …, vk) and v_k+1 is an object constant or an object variable.

DWR Example: State-Variable Operators
move(r,l,m)
- precond: rloc(r)=l, adjacent(l,m)
- effects: rloc(r) ← m
load(r,c,l)
- precond: rloc(r)=l, cpos(c)=l, rload(r)=nil
- effects: cpos(c) ← r, rload(r) ← c
unload(r,c,l)
- precond: rloc(r)=l, rload(r)=c
- effects: rload(r) ← nil, cpos(c) ← l

Applicability and State Transitions
Let a be an action and s a state. Then a is applicable in s iff:
- all rigid relations mentioned in precond(a) hold, and
- if x_s = c ∈ precond(a) then x_s = c ∈ s.
The state transition function γ for an action a in state s is defined as γ(s,a) = {x_s = c | x ∈ X} where:
- x_s ← c ∈ effects(a), or
- x_s = c ∈ s otherwise.
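Applicability and γ then become a membership test plus a dictionary update. A runnable Python sketch under the same assumed dictionary encoding (the rigid-relation set and the example action instance are illustrative):

# Runnable sketch of applicability and gamma for state-variable actions
# (assumed encoding: states as dicts of ground state variables, equality
# preconditions and assignment effects as dicts, rigid relations such as
# adjacent(l,m) in a separate static set).

RIGID = {("adjacent", "loc1", "loc2"), ("adjacent", "loc2", "loc1")}

def applicable(state, action):
    return (all(r in RIGID for r in action["rigid"]) and
            all(state.get(x) == c for x, c in action["precond"].items()))

def gamma(state, action):
    if not applicable(state, action):
        return None                       # undefined otherwise
    new_state = dict(state)
    new_state.update(action["effects"])   # x_s <- c; unassigned variables persist
    return new_state

move_r1 = {"rigid": {("adjacent", "loc1", "loc2")},
           "precond": {("rloc", "r1"): "loc1"},
           "effects": {("rloc", "r1"): "loc2"}}
s = {("rloc", "r1"): "loc1", ("rload", "r1"): "nil"}
print(gamma(s, move_r1))                  # rloc(r1) becomes loc2, rload unchanged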

State-Variable Planning Domains
Let X be a set of state-variable functions. A state-variable planning domain on X is a restricted state-transition system Σ = (S,A,γ) such that:
- S is a set of state-variable state descriptions,
- A is a set of ground instances of some state-variable planning operators O,
- γ: S × A → S where
  - γ(s,a) = {x_s = c | x ∈ X and (x_s ← c ∈ effects(a), or x_s = c ∈ s otherwise)} if a is applicable in s,
  - γ(s,a) is undefined otherwise,
- S is closed under γ.

State-Variable Planning Problems
A state-variable planning problem is a triple P = (Σ,s_i,g) where:
- Σ = (S,A,γ) is a state-variable planning domain on some set of state-variable functions X,
- s_i ∈ S is the initial state, and
- g is a set of expressions of the form x_s = c describing the goal, such that the set of goal states is S_g = {s ∈ S | x_s = c ∈ s for every x_s = c ∈ g}.

Relevance and Regression Sets
Let P = (Σ,s_i,g) be a state-variable planning problem.
- An action a ∈ A is relevant for g if g ∩ effects(a) ≠ ∅ and for every x_s = c ∈ g there is no x_s ← d ∈ effects(a) such that c ≠ d.
- The regression set of g for a relevant action a ∈ A is:
  γ⁻¹(g,a) = (g − ϑ(a)) ∪ precond(a), where ϑ(a) = {x_s = c | x_s ← c ∈ effects(a)}.
- The set of all regression sets Γ<(g) is defined exactly as in the propositional case.

Statement of a State-Variable Planning Problem
A statement of a state-variable planning problem is a triple P = (O,s_i,g) where:
- O is a set of planning operators in an appropriate state-variable planning domain Σ = (S,A,γ) on X,
- s_i is the initial state of an appropriate state-variable planning problem P = (Σ,s_i,g), and
- g is the goal of the same state-variable planning problem P.
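Relevance and the regression set γ⁻¹(g,a) can be computed directly on the dictionary encoding used above. An illustrative sketch (dictionaries for goals, preconditions and effects are assumptions; rigid relations are omitted for brevity):

# Illustrative sketch of relevance and the regression set gamma^-1(g, a)
# (assumed encoding: goal, preconditions and effects as dicts from ground
# state variables to values).

def relevant(goal, action):
    """Some goal variable is assigned its goal value and none is contradicted."""
    eff = action["effects"]
    touches = any(eff.get(x) == c for x, c in goal.items())
    contradicts = any(x in eff and eff[x] != c for x, c in goal.items())
    return touches and not contradicts

def regression_set(goal, action):
    """gamma^-1(g,a) = (g - theta(a)) union precond(a)."""
    if not relevant(goal, action):
        return None
    new_goal = {x: c for x, c in goal.items() if action["effects"].get(x) != c}
    new_goal.update(action["precond"])
    return new_goal

# regressing the goal cpos(c1)=loc2 through an unload action instance
unload_r1_c1 = {"precond": {("rloc", "r1"): "loc2", ("rload", "r1"): "c1"},
                "effects": {("rload", "r1"): "nil", ("cpos", "c1"): "loc2"}}
print(regression_set({("cpos", "c1"): "loc2"}, unload_r1_c1))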

Translation: STRIPS to State-Variable Representation
Let P = (O,s_i,g) be a statement of a classical planning problem. In the operators O, in the initial state s_i, and in the goal g:
- replace every positive literal p(t1, …, tn) with the state-variable expression p(t1, …, tn) = 1, or with the assignment p(t1, …, tn) ← 1 in the operators' effects, and
- replace every negative literal ¬p(t1, …, tn) with the state-variable expression p(t1, …, tn) = 0, or with the assignment p(t1, …, tn) ← 0 in the operators' effects.

Translation: State-Variable to STRIPS Representation
Let P = (O,s_i,g) be a statement of a state-variable planning problem.
- In the operators' preconditions, in the initial state s_i, and in the goal g: replace every state-variable expression p(t1, …, tn) = v with an atom p(t1, …, tn, v).
- In the operators' effects: replace every state-variable assignment p(t1, …, tn) ← v with the pair of literals p(t1, …, tn, v), ¬p(t1, …, tn, w), and add p(t1, …, tn, w) to the respective operator's preconditions.
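As a worked illustration of the state-variable-to-STRIPS direction, the sketch below translates a single assignment p(t1, …, tn) ← v into the added atom, the deleted atom, and the extra precondition over the old value w (the function name and the dictionary output format are assumptions, not part of the slides):

# Illustrative sketch of translating one state-variable assignment
# p(t1, ..., tn) <- v into STRIPS effects (the placeholder variable for the
# old value is an assumption).

def translate_assignment(pred, args, new_value, old_value_var="w"):
    added = (pred, *args, new_value)           # p(t1, ..., tn, v) becomes true
    deleted = (pred, *args, old_value_var)     # p(t1, ..., tn, w) becomes false
    extra_precond = (pred, *args, old_value_var)
    return {"add": {added}, "delete": {deleted}, "precond": {extra_precond}}

# move's effect rloc(r) <- m translated to STRIPS-style add/delete lists
print(translate_assignment("rloc", ("r",), "m", old_value_var="l"))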

Overview
- Simple Task Networks
- HTN Planning
- Extensions
- State-Variable Representation