Planning (under complete Knowledge)


1 CSC384: Intro to Artificial Intelligence Planning (under complete Knowledge) We cover Sections 11.1 and 11.2. Section 11.3 talks about partial-order planning, an interesting idea that hasn't really done well in practice. A later section talks about planning by converting planning problems into SAT and then using backtracking search to solve the problem. This also is an interesting idea, but its practical performance has been lacking. Chapter 12 talks about some practical extensions to planning needed in real-world applications. We will not cover that material, but it is interesting reading.

2 Why Planning Intelligent agents must operate in the world. They are not simply passive reasoners (Knowledge Representation, reasoning under uncertainty) or problem solvers (Search); they must also act on the world. We want intelligent agents to act in intelligent ways: taking purposeful actions, predicting the expected effects of such actions, and composing actions together to achieve complex goals.

3 Why Planning E.g., if we have a robot, we want the robot to decide what to do; how to act to achieve our goals.

4 A Planning Problem How to change the world to suit our needs. Critical issue: we need to reason about what the world will be like after doing a few actions, not just what it is like now. GOAL: Craig has coffee. CURRENTLY: robot in mailroom, has no coffee, coffee not made, Craig in office, etc. TO DO: goto lounge, make coffee, ...

5 Planning Reasoning about what the world will be like after doing a few actions is similar to what we have already examined. However, now we want to reason about dynamic environments. If in(robby,room1) and lighton(room1) are true: will they be true after robby performs the action turnofflights? If in(robby,room1) is true: what does robby need to do to make in(robby,room3) true? Reasoning about the effects of actions, and computing what actions can achieve certain effects, is at the heart of decision making.

6 Planning under Uncertainty One of the major complexities in planning that we will deal with later is planning under uncertainty. Our knowledge of the world is probabilistic. Sensing is subject to noise (especially in robots). Actions and effectors are also subject to error (uncertainty in their effects).

7 Planning But for now we will confine our attention to the deterministic case. We will examine: determining the effects of actions, and finding sequences of actions that can achieve a desired set of effects. This will in some ways be a lot like search, but we will see that representation also plays an important role.

8 Situation Calculus First we look at how to model dynamic worlds within first-order logic. The situation calculus is an important formalism developed for this purpose. The situation calculus is a first-order language. We include in the domain of individuals a special set of objects called situations. Of these, s0 is a special distinguished constant which denotes the initial situation.

9 Situation Calculus Situations are used to index states of the world. When dealing with dynamic environments, the world has different properties at different points in time. E.g., in(robby,room1,s0), ¬in(robby,room3,s0); in(robby,room3,s1), ¬in(robby,room1,s1). Different things are true in situation s1 than in the initial situation s0. Contrast this with the previous kinds of knowledge we examined.

10 Fluents The basic idea is that properties that change from situation to situation (called fluents) take an extra situation argument: clear(b) becomes clear(b,s). clear(b) is no longer statically true; it is true contingent on what situation we are talking about.

11 Blocks World Example. [Figure: block C on block A, block B on the table, with a robot hand above.] clear(c,s0), on(c,a,s0), clear(b,s0), handempty(s0).

12 Actions in the Situation Calculus Actions are also part of the language. There is a set of primitive action objects in the (semantic) domain of individuals. In the syntax they are represented as functions mapping objects to primitive action objects. pickup(x): a function mapping blocks to actions; pickup(c) = the primitive action object corresponding to picking up block c. stack(x,y): stack(a,b) = the primitive action object corresponding to stacking a on top of b.

13 Actions modify situations. There is a generic action application function do(a,s). do maps a primitive action and a situation to a new situation. The new situation is the situation that results from applying a to s. do(pickup(c), s0) = the new situation that is the result of applying action pickup(c) to the initial situation s0.

14 What do Actions do? Actions affect the situation by changing what is true. E.g., on(c,a,s0) holds; after pickup(c), clear(a, do(pickup(c),s0)) holds (and on(c,a) no longer does). We want to represent the effects of actions; this is done in the situation calculus with two components.

15 Specifying the effects of actions Action preconditions: certain things must hold for actions to have a predictable effect. pickup(c): this action is only applicable to situations S where clear(c,S) ∧ handempty(S) are true. Action effects: actions make certain things true and certain things false. holding(c, do(pickup(c),S)); ∀X. ¬handempty(do(pickup(X),S)).

16 Specifying the effects of actions Action effects are conditional on their precondition being true. ∀S,X. ontable(X,S) ∧ clear(X,S) ∧ handempty(S) → [holding(X, do(pickup(X),S)) ∧ ¬handempty(do(pickup(X),S)) ∧ ¬ontable(X, do(pickup(X),S)) ∧ ¬clear(X, do(pickup(X),S))].

17 Reasoning with the Situation Calculus. [Figure: C on A, B on the table.]
1. clear(c,s0)
2. on(c,a,s0)
3. clear(b,s0)
4. ontable(a,s0)
5. ontable(b,s0)
6. handempty(s0)
Query: ∃Z. holding(b,Z), added as the clause 7. (¬holding(b,Z), ans(Z)). Does there exist a situation in which we are holding b? And if so, what is the name of that situation?

18 Resolution Convert the pickup action axiom into clause form:
∀S,Y. ontable(Y,S) ∧ clear(Y,S) ∧ handempty(S) → [holding(Y, do(pickup(Y),S)) ∧ ¬handempty(do(pickup(Y),S)) ∧ ¬ontable(Y, do(pickup(Y),S)) ∧ ¬clear(Y, do(pickup(Y),S))]
8. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), holding(Y, do(pickup(Y),S)))
9. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬handempty(do(pickup(Y),S)))
10. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬ontable(Y, do(pickup(Y),S)))
11. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬clear(Y, do(pickup(Y),S)))

19 Resolution
12. R[8d,7] {Y=b, Z=do(pickup(b),S)}: (¬ontable(b,S), ¬clear(b,S), ¬handempty(S), ans(do(pickup(b),S)))
13. R[12a,5] {S=s0}: (¬clear(b,s0), ¬handempty(s0), ans(do(pickup(b),s0)))
14. R[13a,3] {}: (¬handempty(s0), ans(do(pickup(b),s0)))
15. R[14a,6] {}: ans(do(pickup(b),s0))

20 The answer? ans(do(pickup(b),s0)). This says that a situation in which you are holding b is called do(pickup(b),s0). This name is informative: it tells you what actions to execute to achieve holding(b).

21 Two types of reasoning. In general we can answer questions of the form: on(b,c, do(stack(b,c), do(pickup(b), s0)))? and ∃S. on(b,c,S) ∧ on(c,a,S)? The first involves predicting the effects of a sequence of actions; the second involves computing a sequence of actions that can achieve a goal condition.

22 The Frame Problem Unfortunately, logical reasoning won't immediately yield the answer to these kinds of questions. E.g., query: on(c,a, do(pickup(b),s0))? Is c still on a after we pick up b? Intuitively it should be. Can logical reasoning reach this conclusion?

23 The Frame Problem
1. clear(c,s0) 2. on(c,a,s0) 3. clear(b,s0) 4. ontable(a,s0) 5. ontable(b,s0) 6. handempty(s0)
8. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), holding(Y, do(pickup(Y),S)))
9. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬handempty(do(pickup(Y),S)))
10. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬ontable(Y, do(pickup(Y),S)))
11. (¬ontable(Y,S), ¬clear(Y,S), ¬handempty(S), ¬clear(Y, do(pickup(Y),S)))
12. ¬on(c,a, do(pickup(b),s0)) {QUERY}
Nothing can resolve with 12!

24 Logical Consequence Remember that resolution only computes logical consequences. We stated the effects of pickup(b), but did not state that it doesn't affect on(c,a). Hence there are models in which on(c,a) no longer holds after pickup(b) (as well as models where it does hold). The problem is that representing the non-effects of actions is very tedious and in general is not feasible. Think of all of the things that pickup(b) does not affect!

25 The Frame Problem Finding an effective way of specifying the non-effects of actions, without having to explicitly write them all down, is the frame problem. Very good solutions have been proposed, and the situation calculus has been a very powerful way of dealing with dynamic worlds, e.g., logic-based high-level robotic programming languages.

26 Computation Problems Although the situation calculus is a very powerful representation, it is not always efficient enough to use to compute sequences of actions. The problem of computing a sequence of actions to achieve a goal is planning. Next we will study some less rich representations which support more efficient planning.

27 CWA Classical Planning: no incomplete or uncertain knowledge. We use the Closed World Assumption (CWA) in our knowledge representation and reasoning. The knowledge base used to represent a state of the world is a list of positive ground atomic facts. The CWA is the assumption that (a) if a ground atomic fact is not in our list of known facts, its negation must be true, and (b) the constants mentioned in the KB are all the domain objects.

28 CWA The CWA makes our knowledge base much like a database: if employed(john,CIBC) is not in the database, we conclude that ¬employed(john,CIBC) is true.

29 CWA Example [Figure: C on A, B on the table.] KB = {handempty, clear(c), clear(b), on(c,a), ontable(a), ontable(b)}
1. clear(c) ∧ clear(b)?
2. on(b,c)?
3. on(a,c) ∨ on(b,c)?
4. ∃X. on(X,c)? (D = {a,b,c})
5. ∀X. ontable(X) → X = a ∨ X = b?

30 Querying a Closed World KB With the CWA, we can evaluate the truth or falsity of arbitrarily complex first-order formulas. This process is very similar to query evaluation in databases. Just as databases are useful, so are CW KBs.

31 Querying a CW KB Query(F, KB) /* return whether or not KB ⊨ F */ if F is atomic: return(F ∈ KB)

32 Querying a CW KB
if F = F1 ∧ F2: return(Query(F1) && Query(F2))
if F = F1 ∨ F2: return(Query(F1) || Query(F2))
if F = ¬F1: return(!Query(F1))
if F = F1 → F2: return(!Query(F1) || Query(F2))

33 Querying a CW KB
if F = ∃X. F1: for each constant c ∈ KB, if (Query(F1{X=c})) return(true); return(false).
if F = ∀X. F1: for each constant c ∈ KB, if (!Query(F1{X=c})) return(false); return(true).

34 Querying a CW KB Guarded quantification (for efficiency). if F = ∀X. F1: for each constant c ∈ KB, if (!Query(F1{X=c})) return(false); return(true). E.g., consider checking ∀X. apple(X) → sweet(X): we already know that the formula is true for all non-apples.

35 Querying a CW KB Guarded quantification (for efficiency).
∀X:[p(X)] F1 ≡ ∀X. p(X) → F1: for each constant c s.t. p(c) ∈ KB, if (!Query(F1{X=c})) return(false); return(true).
∃X:[p(X)] F1 ≡ ∃X. p(X) ∧ F1: for each constant c s.t. p(c) ∈ KB, if (Query(F1{X=c})) return(true); return(false).
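The recursive evaluation above can be sketched directly in code. Below is a minimal, hypothetical Python rendering of Query for a CW-KB of ground atoms; the formula encoding (nested tuples) and helper names are my own illustration, not the slides' notation.

```python
# Minimal sketch of CWA query evaluation (formula encoding is an assumed convention):
# atoms are strings like "clear(c)"; compound formulas are tuples such as
# ("and", f1, f2), ("or", f1, f2), ("not", f1), ("implies", f1, f2),
# ("exists", "X", f1), ("forall", "X", f1).

def query(f, kb, constants):
    """Return True iff the closed-world KB (a set of ground atoms) entails f."""
    if isinstance(f, str):                      # atomic: membership test
        return f in kb
    op = f[0]
    if op == "and":
        return query(f[1], kb, constants) and query(f[2], kb, constants)
    if op == "or":
        return query(f[1], kb, constants) or query(f[2], kb, constants)
    if op == "not":
        return not query(f[1], kb, constants)
    if op == "implies":
        return (not query(f[1], kb, constants)) or query(f[2], kb, constants)
    if op == "exists":
        var, body = f[1], f[2]
        return any(query(substitute(body, var, c), kb, constants) for c in constants)
    if op == "forall":
        var, body = f[1], f[2]
        return all(query(substitute(body, var, c), kb, constants) for c in constants)
    raise ValueError(f"unknown formula {f!r}")

def substitute(f, var, const):
    """Replace occurrences of variable `var` by constant `const` (string-based here)."""
    if isinstance(f, str):
        return f.replace(var, const)
    if f[0] in ("exists", "forall"):
        return (f[0], f[1], substitute(f[2], var, const))
    return (f[0],) + tuple(substitute(g, var, const) for g in f[1:])

# Example: the blocks KB from the slides
kb = {"handempty", "clear(c)", "clear(b)", "on(c,a)", "ontable(a)", "ontable(b)"}
consts = ["a", "b", "c"]
print(query(("and", "clear(c)", "clear(b)"), kb, consts))   # True
print(query(("exists", "X", "on(X,c)"), kb, consts))        # False under CWA
```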

36 STRIPS representation. STRIPS (Stanford Research Institute Problem Solver) is a way of representing actions. Actions are modeled as ways of modifying the world. Since the world is represented as a CW-KB, a STRIPS action represents a way of updating the CW-KB. Now actions yield new KBs, each describing the new world: the world as it is once the action has been executed.

37 Sequences of Worlds In the situation calculus, one logical sentence can refer to two different situations at the same time, e.g., a sentence mentioning both on(a,b,s0) and on(a,b,s1). In STRIPS, we would have two separate CW-KBs: one representing the initial state, and another one representing the next state (much like search, where each state was represented in a separate data structure).

38 STRIPS Actions STRIPS represents actions using 3 lists: 1. a list of action preconditions, 2. a list of action add effects, 3. a list of action delete effects. These lists contain variables, so that we can represent a whole class of actions with one specification. Each ground instantiation of the variables yields a specific action.

39 STRIPS Actions: Example pickup(x): Pre: {handempty, clear(x), ontable(x)}; Adds: {holding(x)}; Dels: {handempty, clear(x), ontable(x)}. pickup(x) is called a STRIPS operator; a particular instance, e.g. pickup(a), is called an action.

40 Operation of a STRIPS action. For a particular STRIPS action (a ground instance) to be applicable to a state (a CW-KB), every fact in its precondition list must be true in the KB. This amounts to testing membership, since we have only atomic facts in the precondition list. If the action is applicable, the new state is generated by removing all facts in Dels from the KB, then adding all facts in Adds to the KB.

41 Operation of a STRIPS Action: Example [Figure: C on A with B on the table; after pickup(b), the hand holds B.] pickup(b): pre = {handempty, clear(b), ontable(b)}, add = {holding(b)}, del = {handempty, clear(b), ontable(b)}. KB = {handempty, clear(c), clear(b), on(c,a), ontable(a), ontable(b)} becomes KB = {holding(b), clear(c), on(c,a), ontable(a)}.
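As a concrete illustration, here is a small Python sketch (names and data layout are my own, not from the slides) that represents a ground STRIPS action as precondition/add/delete sets of atoms and applies it to a CW-KB exactly as described: check the precondition by set containment, then delete, then add.

```python
from typing import FrozenSet, Optional, Tuple

# A ground STRIPS action: (name, preconditions, add list, delete list).
Action = Tuple[str, FrozenSet[str], FrozenSet[str], FrozenSet[str]]

def applicable(action: Action, kb: FrozenSet[str]) -> bool:
    """An action is applicable iff every precondition atom is in the CW-KB."""
    _, pre, _, _ = action
    return pre <= kb

def apply(action: Action, kb: FrozenSet[str]) -> Optional[FrozenSet[str]]:
    """Return the successor CW-KB (delete, then add), or None if not applicable."""
    name, pre, add, dele = action
    if not pre <= kb:
        return None
    return (kb - dele) | add

# The pickup(b) example from the slides:
pickup_b: Action = ("pickup(b)",
                    frozenset({"handempty", "clear(b)", "ontable(b)"}),
                    frozenset({"holding(b)"}),
                    frozenset({"handempty", "clear(b)", "ontable(b)"}))

kb0 = frozenset({"handempty", "clear(c)", "clear(b)", "on(c,a)",
                 "ontable(a)", "ontable(b)"})
print(sorted(apply(pickup_b, kb0)))
# ['clear(c)', 'holding(b)', 'on(c,a)', 'ontable(a)']
```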

42 STRIPS Blocks World Operators.
pickup(x): Pre: {clear(x), ontable(x), handempty}; Add: {holding(x)}; Del: {clear(x), ontable(x), handempty}
putdown(x): Pre: {holding(x)}; Add: {clear(x), ontable(x), handempty}; Del: {holding(x)}

43 STRIPS Blocks World Operators.
unstack(x,y): Pre: {clear(x), on(x,y), handempty}; Add: {holding(x), clear(y)}; Del: {clear(x), on(x,y), handempty}
stack(x,y): Pre: {holding(x), clear(y)}; Add: {on(x,y), handempty, clear(x)}; Del: {holding(x), clear(y)}

44 STRIPS has no Conditional Effects
putdown(x): Pre: {holding(x)}; Add: {clear(x), ontable(x), handempty}; Del: {holding(x)}
stack(x,y): Pre: {holding(x), clear(y)}; Add: {on(x,y), handempty, clear(x)}; Del: {holding(x), clear(y)}
The table has infinite space, so it is always clear. If we stack(X,Y) with Y = Table we cannot delete clear(Table), but if Y is an ordinary block c we must delete clear(c).

45 Conditional Effects Since STRIPS has no conditional effects, we must sometimes utilize extra actions: one for each type of condition. We embed the condition in the precondition, and then alter the effects accordingly.

46 ADL Operators. ADL operators add a number of features to STRIPS. 1. Their preconditions can be arbitrary formulas, not just a conjunction of facts. 2. They can have conditional and universal effects. 3. Open world assumption: states can contain negative literals, and the effect ¬P ∧ Q means add ¬P and Q but delete P and ¬Q. But they must still specify atomic changes to the knowledge base (add or delete ground atomic facts).

47 ADL Operators: Examples.
move(X,Y,Z):
Pre: on(X,Y) ∧ clear(Z)
Effs: ADD[on(X,Z)]; DEL[on(X,Y)]; Z ≠ table → DEL[clear(Z)]; Y ≠ table → ADD[clear(Y)]

48 ADL Operators: Example [Figure: C on A, B on the table; after move(c,a,b), C is on B and A is on the table.]
move(c,a,b): Pre: on(c,a) ∧ clear(b); Effs: ADD[on(c,b)]; DEL[on(c,a)]; b ≠ table → DEL[clear(b)]; a ≠ table → ADD[clear(a)]
KB = {clear(c), clear(b), on(c,a), on(a,table), on(b,table)} becomes KB = {on(c,b), clear(c), clear(a), on(a,table), on(b,table)}

49 ADL Operators: Examples. cleartable(): Pre: {}; Effs: ∀X. on(X,table) → DEL[on(X,table)]

50 ADL Operators. 1. Arbitrary formulas as preconditions: in a CW-KB we can evaluate whether or not the precondition holds, for an arbitrary precondition. 2. Conditional and universal effects: similarly, we can evaluate the condition to see if the effect should be applied, and find all bindings for which it should be applied. 3. Atomic changes to the knowledge base: the CW-KB can be updated just as before.
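To make the update procedure concrete, here is a small hypothetical Python sketch of applying the move(X,Y,Z) operator from the earlier slide to a CW-KB; the conditional effects are evaluated against the current state before the adds and deletes are collected (the encoding is my own illustration, not the slides' notation).

```python
def apply_move(kb: set, x: str, y: str, z: str):
    """Apply the ADL-style move(X,Y,Z) operator to a CW-KB of ground atoms.

    Pre:  on(X,Y) and clear(Z)          (clear(table) is treated as always true)
    Effs: ADD on(X,Z); DEL on(X,Y);
          if Z != table: DEL clear(Z);  if Y != table: ADD clear(Y)
    """
    def clear(b):  # the table is always clear
        return b == "table" or f"clear({b})" in kb

    if f"on({x},{y})" not in kb or not clear(z):
        return None                       # precondition fails

    adds = {f"on({x},{z})"}
    dels = {f"on({x},{y})"}
    if z != "table":                      # conditional effect, evaluated now
        dels.add(f"clear({z})")
    if y != "table":
        adds.add(f"clear({y})")
    return (kb - dels) | adds

# The move(c,a,b) example from the slides:
kb = {"clear(c)", "clear(b)", "on(c,a)", "on(a,table)", "on(b,table)"}
print(sorted(apply_move(kb, "c", "a", "b")))
# ['clear(a)', 'clear(c)', 'on(a,table)', 'on(b,table)', 'on(c,b)']
```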

51 Other Example Domains The 8-Puzzle as a planning problem. A constant representing each position: P1, ..., P9 (arranged in a 3x3 grid: P1 P4 P7 / P2 P5 P8 / P3 P6 P9). A constant for each tile: B (the blank), T1, ..., T8.

52 8-Puzzle at(T,P): tile T is at position P, e.g., at(t1,p1), at(t2,p2), at(t5,p3), ... adjacent(P1,P2): P1 is next to P2 (i.e., we can slide the blank from P1 to P2 in one move), e.g., adjacent(p5,p2), adjacent(p5,p8), ...

53 8-Puzzle slide(T,X,Y): Pre: {at(T,X), at(B,Y), adjacent(X,Y)}; Add: {at(B,X), at(T,Y)}; Del: {at(T,X), at(B,Y)}. E.g., applying slide(t8,p5,p6) to the state {at(t1,p1), ..., at(t5,p3), at(t8,p5), at(b,p6), ...} yields {at(t1,p1), ..., at(t5,p3), at(b,p5), at(t8,p6), ...}.

54 Elevator Control

55 Elevator Control Schindler Lifts: a central panel to enter your elevator request. Your request is scheduled and an elevator is assigned to you. You can't travel with someone going to a secure floor, emergency travel has priority, etc. Modeled as a planning problem and fielded in one of Schindler's high-end elevators.

56 Solving Planning Problems Given a CW-KB representing the initial state, a set of STRIPS or ADL (Action Description Language) operators, and a goal condition we want to achieve (specified either as a conjunction of facts or as a formula), the planning problem is to determine a sequence of actions that, when applied to the initial CW-KB, yield an updated CW-KB which satisfies the goal.

57 Solving Planning Problems We cover two approaches. GraphPlan: building a graph to represent the planning problem and finding the solution using the graph. Planning as heuristic search: casting the problem as a standard heuristic search (using reachability analysis). The performance of GraphPlan is not as good as heuristic search, but it does perform better than partial-order planning and it can be generalized to other types of planning, e.g., finding optimal plans, planning with sensing, etc.

58 GraphPlan Operates in two phases. Phase I: guess a concurrent plan length k, then build a leveled graph with k alternating layers. Phase II: search this leveled graph for a plan. If no plan is found, return to Phase I and build a bigger leveled graph with k+1 alternating layers. The final plan, if found, consists of a sequence of sets of actions {a^1_1, a^2_1, ...} {a^1_2, a^2_2, ...} {a^1_3, a^2_3, ...}, where a^j_i is the j-th action of the i-th set. The plan is concurrent in the sense that at stage i, all actions in the i-th set are executed in parallel.

59 GraphPlan The leveled graph alternates between levels containing propositional nodes and levels containing action nodes (similar to the reachability graph). Three types of edges: precondition-edges, add-edges, and delete-edges.

60 GraphPlan Level Graph [Figure: a leveled graph. State level S0: ontable(a), ontable(b), clear(a), clear(b), handempty. Action level A0: pickup(a), pickup(b), and the no-ops no-op(ontable(a)), no-op(ontable(b)), no-op(clear(a)), no-op(clear(b)), no-op(handempty). State level S1: holding(a), holding(b), ontable(a), ontable(b), clear(a), clear(b), handempty. Edges mark preconditions, adds, and deletes.] Initial state: only the propositions true in the initial state. Possible actions: only the actions whose preconditions are in the previous level; no-ops are also included, for capturing non-changes. The next state level contains all propositions added by actions in the previous level.

61 GraphPlan Level Graph [Same figure as above.] Level S0 contains all facts true in the initial state. Level A0 contains all actions whose preconditions are true in S0. Included in the set of actions are no-ops: one no-op for every ground atomic fact; the precondition of the no-op is its fact, and its add effect is its fact. Level Si contains all facts that are added by actions at level Ai-1. Level Ai contains all actions whose preconditions are true in Si.

62 GraphPlan Mutexes. [Same figure as above.] In addition to the facts/actions, GraphPlan also computes and adds mutexes to the graph. Mutexes are edges between two labels, indicating that these two labels cannot be true at the same time. Mutexes are added as we construct each layer, and in fact alter the set of labels that eventually appear in a layer.

63 Mutexes A mutex between two actions a1 and a2 in the same layer Ai means that a1 and a2 cannot be executed simultaneously (in parallel) at the i-th step of a concurrent plan. A mutex between two facts F1 and F2 in the same state layer Si means that F1 and F2 cannot be simultaneously true after i stages of parallel action execution.

64 Mutexes It is not possible to compute all mutexes. That is as hard as solving the planning problem, and we want to perform mutex computation as a precursor to solving a planning instance. However, we can quickly compute a subset of the set of all mutexes. Although incomplete, these mutexes are still very useful. This is what GraphPlan does.

65 Mutexes [Same figure as above.] Two actions are mutex if either action deletes a precondition or add effect of the other. Note that no-ops participate in mutexes. Intuitively, these actions have to be sequenced; they can't be executed in parallel.

66 Mutexes [Same figure as above.] Two propositions p and q are mutex if all actions adding p are mutex with all actions adding q. We must look at all pairs of actions that add p and q. Intuitively, we can't achieve p and q together at this stage because we can't concurrently execute achieving actions for them at the previous stage.

67 Mutexes [Figure: the next layer of the graph. State level S1: holding(a), holding(b), ontable(a), ontable(b), clear(a), clear(b), handempty. Action level A1: putdown(a), putdown(b), and the no-ops for each fact.] Two actions are mutex if two of their preconditions are mutex. Intuitively, we can't execute these two actions concurrently at this stage because their preconditions can't simultaneously hold at the previous stage.

68 How Mutexes affect the level graph. Rules: 1. Two actions are mutex if either action deletes a precondition or add effect of the other. 2. Two propositions p and q are mutex if all actions adding p are mutex with all actions adding q. 3. Two actions are mutex if two of their preconditions are mutex. We compute mutexes as we add levels. S0 is the set of facts true in the initial state (contains no mutexes). A0 is the set of actions whose preconditions are true in S0; mark as mutex any action pair where one deletes a precondition or add effect of the other. S1 is the set of facts added by actions at level A0; mark as mutex any pair of facts p and q if all actions adding p are mutex with all actions adding q. A1 is the set of actions whose preconditions are true and non-mutex in S1; mark as mutex any action pair with preconditions that are mutex in S1, or where one deletes a precondition or add effect of the other.

69 How Mutexes affect the level graph. (Using the same rules 1-3 as above.) In general: Si is the set of facts added by actions in level Ai-1; mark as mutex all fact pairs satisfying rule 2 (where we look at the action mutexes of Ai-1). Ai is the set of actions whose preconditions are true and non-mutex in Si; mark as mutex any action pair satisfying rule 1 or rule 3.
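The following Python sketch (my own encoding, not from the slides) expands one action/state level of a GraphPlan-style graph using the three mutex rules above; actions, including no-ops, are (name, pre, add, del) tuples over ground atoms.

```python
from itertools import combinations

def expand_level(facts, fact_mutex, actions):
    """From state level S_i (facts + fact mutexes) and the ground actions
    (no-ops included), build action level A_i and state level S_{i+1}."""
    # A_i: actions whose preconditions are present and pairwise non-mutex in S_i.
    a_i = [a for a in actions
           if a[1] <= facts and
           not any(frozenset(p) in fact_mutex for p in combinations(a[1], 2))]

    # Action mutexes: rule 1 (one deletes a precondition/add effect of the other)
    # and rule 3 (two of their preconditions are mutex at S_i).
    act_mutex = set()
    for a, b in combinations(a_i, 2):
        rule1 = bool(a[3] & (b[1] | b[2])) or bool(b[3] & (a[1] | a[2]))
        rule3 = any(frozenset((p, q)) in fact_mutex for p in a[1] for q in b[1])
        if rule1 or rule3:
            act_mutex.add(frozenset((a[0], b[0])))

    # S_{i+1}: union of the add lists of A_i.
    s_next = frozenset().union(*(a[2] for a in a_i)) if a_i else frozenset()

    # Fact mutexes: rule 2 -- every achiever of p is mutex with every achiever of q.
    achievers = {f: [a[0] for a in a_i if f in a[2]] for f in s_next}
    fact_mutex_next = {frozenset((p, q))
                       for p, q in combinations(s_next, 2)
                       if all(frozenset((x, y)) in act_mutex
                              for x in achievers[p] for y in achievers[q])}
    return a_i, act_mutex, s_next, fact_mutex_next
```

A no-op for a fact f is just the action ('no-op(f)', {f}, {f}, {}), so persistence of facts and the mutexes involving no-ops fall out of the same rules.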

70 How Mutexes affect the level graph. Hence, mutexes will prune actions and facts from levels of the graph. They also record useful information about impossible combinations.

71 Example [Figure: part of a level graph, with edges marked as precondition, add effect, and del effect. State level: on(a,b), on(b,c), ontable(c), ontable(b), clear(a), clear(c), handempty. Action level: NoOp-on(a,b), unstack(a,b), pickup(c). Next state level: on(a,b), on(b,c), ontable(c), ontable(d), clear(a), clear(c), handempty, holding(a), clear(b), holding(c).] pickup(c) deletes handempty, one of unstack(a,b)'s preconditions, so these actions are mutex. unstack(a,b) deletes the add effect of NoOp-on(a,b), so those actions are mutex as well.

72 Example [Same figure as above.] unstack(a,b) is the only action that adds clear(b), and it is mutex with pickup(c), which is the only way of adding holding(c). So clear(b) and holding(c) are mutex facts at the next level.

73 Example [Same figure as above, highlighting another pair of facts.] These two facts are mutex for the same reason.

74 Example [Same figure as above.] unstack(a,b) is also mutex with NoOp-on(a,b). So these two facts are mutex (the NoOp is the only way on(a,b) can be created).

75 Phase II: Searching the GraphPlan Graph [Figure: the goal facts on(a,b), on(b,c), ontable(c), ontable(b), clear(a), clear(b), handempty at the final state level K.] Build the graph to level K, such that every member of the goal is present at level K and no two are mutex. Why?

76 Searching the GraphPlan Graph [Figure: actions stack(a,b), stack(b,c), no-op(on(b,c)), ... at level K-1 feeding the goal facts at level K.] Find a non-mutex collection of actions that add all of the facts in the goal.

77 Searching the GraphPlan Graph [Figure: the preconditions holding(a), clear(b), on(b,c) at state level K-1 of the chosen actions stack(a,b), stack(b,c), no-op(on(b,c)).] The preconditions of these actions at level K-1 become the new goal at level K-1. Recursively try to solve this new goal. If this fails, backtrack and try a different set of actions for solving the goal at level K.

78 Phase II: Search
Solve(G, K):
  forall sets of actions A = {a_i} such that no pair (a_i, a_j) in A is mutex and the actions in A suffice to add all facts in G:
    Let P = the union of the preconditions of the actions in A
    If Solve(P, K-1): report PLAN FOUND
  At the end of the forall (exhausted all possible action sets A): report NO PLAN
This is a depth-first search.
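Here is a compact, hypothetical Python sketch of this Phase II backward search; it assumes the leveled graph has already been built as per-level lists of actions (with their preconditions/adds) and per-level action mutex sets, which is my own encoding rather than the slides'.

```python
from itertools import combinations

def solve(goal, k, action_levels, mutex_levels, s0):
    """Backward search over a leveled graph.

    goal:          set of facts to achieve at state level k
    action_levels: action_levels[i] = list of (name, pre, add) tuples at level A_i
    mutex_levels:  mutex_levels[i]  = set of frozenset({name1, name2}) mutex pairs at A_i
    s0:            the initial state (state level S_0)
    Returns a list of action-name sets (one per stage), or None if no plan exists.
    """
    if k == 0:
        return [] if goal <= s0 else None
    for subset in candidate_sets(goal, action_levels[k - 1], mutex_levels[k - 1]):
        new_goal = set().union(*(pre for _, pre, _ in subset)) if subset else set()
        rest = solve(new_goal, k - 1, action_levels, mutex_levels, s0)
        if rest is not None:
            return rest + [{name for name, _, _ in subset}]
    return None

def candidate_sets(goal, actions, mutexes):
    """Enumerate non-mutex action sets whose add lists cover the goal (brute force)."""
    relevant = [a for a in actions if a[2] & goal]
    for r in range(len(relevant) + 1):
        for subset in combinations(relevant, r):
            adds = set().union(*(a[2] for a in subset)) if subset else set()
            names = [a[0] for a in subset]
            if goal <= adds and not any(frozenset(p) in mutexes
                                        for p in combinations(names, 2)):
                yield list(subset)
```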

79 GraphPlan Algorithm Phase I: build the leveled graph. Phase II: search the leveled graph. Phase I: While the last state level does not contain all goal facts with no pair being mutex: add a new state/action level to the graph; if the last state/action level equals the previous state/action level (including all mutexes), the graph has leveled off: report NO PLAN. Phase II: Starting at the last state level, search backwards in the graph for a plan. Try all ways of moving the goal back to the initial state. If successful, report PLAN FOUND. Else go to Phase I.

80 Dinner Date Example Initial State: {dirty, cleanhands, quiet}. Goal: {dinner, present, clean}. Actions:
Cook: Pre: {cleanhands} Add: {dinner}
Wrap: Pre: {quiet} Add: {present}
Tidy: Pre: {} Add: {clean} Del: {cleanhands, dirty}
Vac: Pre: {} Add: {clean} Del: {quiet, dirty}

81 Dinner example: rule-1 action mutexes. Actions (including all No-Op actions):
Cook: Pre: {H} Add: {D} Del: {}
Wrap: Pre: {Q} Add: {P} Del: {}
Tidy: Pre: {} Add: {C} Del: {H, R}
Vac: Pre: {} Add: {C} Del: {Q, R}
NO(C): Pre: {C} Add: {C} Del: {}
NO(D): Pre: {D} Add: {D} Del: {}
NO(H): Pre: {H} Add: {H} Del: {}
NO(P): Pre: {P} Add: {P} Del: {}
NO(Q): Pre: {Q} Add: {Q} Del: {}
NO(R): Pre: {R} Add: {R} Del: {}
Legend: NO: No-Op, C: clean, D: dinner, H: cleanhands, P: present, Q: quiet, R: dirty.
Look at the actions with a non-empty Del list, and find the other actions that have those deleted facts in their Pre or Add. The rule-1 action mutexes are as follows (these are fixed): (Tidy, Cook), (Tidy, NO(H)), (Tidy, NO(R)), (Vac, Wrap), (Vac, NO(Q)), (Vac, NO(R)). Rule-3 action mutexes depend on the state layer, and you have to build the graph. The short sketch below recomputes these rule-1 pairs.
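As a sanity check, a few lines of Python (hypothetical encoding) recover exactly the rule-1 pairs listed above:

```python
from itertools import combinations

# (name, pre, add, delete) for the dinner-date actions plus all no-ops.
acts = [("Cook", {"H"}, {"D"}, set()), ("Wrap", {"Q"}, {"P"}, set()),
        ("Tidy", set(), {"C"}, {"H", "R"}), ("Vac", set(), {"C"}, {"Q", "R"})]
acts += [(f"NO({f})", {f}, {f}, set()) for f in "CDHPQR"]

# Rule 1: mutex if one action deletes a precondition or add effect of the other.
rule1 = [(a[0], b[0]) for a, b in combinations(acts, 2)
         if a[3] & (b[1] | b[2]) or b[3] & (a[1] | a[2])]
print(rule1)
# [('Cook', 'Tidy'), ('Wrap', 'Vac'), ('Tidy', 'NO(H)'), ('Tidy', 'NO(R)'),
#  ('Vac', 'NO(Q)'), ('Vac', 'NO(R)')]
```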

82 Dinner Example [Figure: the level graph S0-A0-S1 for the dinner problem. Legend: blue arrows = pre, green = add, red = del, black = mutex; D: dinner, C: clean, H: cleanhands, Q: quiet, P: present, R: dirty. Init = {R,H,Q}, Goal = {D,P,C}. S0 = {R, H, Q}; A0 = {Cook, Wrap, Tidy, Vac, NO(R), NO(H), NO(Q)}; S1 = {D, P, C, R, H, Q}.] Note: at layer S1 all goals are present and no pair forms a mutex, so go to Phase II and search the graph: find a set of non-mutex actions that adds all goals {D,P,C}. {Cook, Wrap, Tidy} fails: Tidy and Cook are mutex. {Cook, Wrap, Vac} fails: Vac and Wrap are mutex. No such set exists and there is nothing to backtrack to, so go back to Phase I and add one more action and state layer.

83 Dinner Example [Figure: the graph extended to S0-A0-S1-A1-S2, same legend. At layer A1, first draw the rule-1 action mutexes, then find the rule-3 action mutexes (for these, only look at the mutex facts at level S1). At S2, apply rule 2 for fact mutexes. S2 = {D, P, C, R, H, Q}; A1 = {Cook, Wrap, Tidy, Vac, NO(D), NO(P), NO(C), NO(R), NO(H), NO(Q)}.] At layer S2 all goals are present and no pair forms a mutex, so go to Phase II: find a set of non-mutex actions that adds all goals {D,P,C}. {Cook, Wrap, Tidy} and {Cook, Wrap, Vac} fail as before. {Cook, Wrap, NO(C)} gives a new goal at S1, but no non-mutex action set in A0 can achieve it. Backtrack to S2 and try another action set: {Cook, NO(P), Vac}.

84 Dinner Example [Same figure as above.] So try {Cook, NO(P), Vac} at S2. Its preconditions become the new goal at S1; find a non-mutex set of actions in A0 that achieves it: {NO(H), Wrap}. The new goal that set generates at S0 is contained in the initial state. Done! So the actions are Wrap, then {Cook, Vac} in parallel: Wrap, Cook, Vac or Wrap, Vac, Cook. Note that we could still backtrack to S2 and try the remaining action sets.

85 Second Approach: Planning as Search Planning can be treated as a search problem. The initial CW-KB is the initial state. The actions are operators mapping a state (i.e., a CW-KB) to a new state (an updated CW-KB). The goal is satisfied by any state (CW-KB) that satisfies the goal condition.

86 Example [Figure: a fragment of the state-space search tree for a three-block world; arcs are labeled with moves such as move(b,c), move(c,b), move(c,table), and move(a,b), each leading to a different block configuration.]

87 Problems The search tree is generally quite large: randomly reconfiguring 9 blocks takes thousands of CPU seconds. The representation suggests some structure: each action only affects a small set of facts, and actions depend on each other via their preconditions. Planning algorithms are designed to take advantage of this special nature of the representation.

88 Planning as Heuristic Search We look at heuristics computed from relaxed plans. The idea is to consider what happens if we ignore the delete lists of actions. This yields a relaxed problem that can produce a useful heuristic estimate.

89 Finding Relaxed Plans In the relaxed problem, actions add new facts but never delete facts. Given the relaxed planning problem, we use GraphPlan to build a plan for solving the relaxed problem. As we will see, this will be very easy. We then take the number of steps in the GraphPlan plan as a heuristic estimate of the distance from the current state to the goal. This is the method used by the Fast Forward (FF) system you are employing in your assignment. It turns out that this heuristic is not admissible (we will see why). It is possible to compute admissible heuristics, but in general they don't perform as well: they force the planner to find optimal plans.

90 GraphPlan with no delete lists Recall: 1. Two actions are mutex if either action deletes a precondition or add effect of the other. 2. Two propositions p and q are mutex if all actions adding p are mutex with all actions adding q. 3. Two actions are mutex if two of their preconditions are mutex. With no deletes in the actions, rule 1 can never apply. With no action mutexes, rule 2 cannot apply, and thus rule 3 can never apply. So with no deletes we never need to propagate mutexes. We also do not need to keep track of no-op actions (as these were only used to create mutexes).

91 GraphPlan with no delete lists The GraphPlan procedure on relaxed problems then simplifies to the following (a sketch in code follows): 1. We start with the initial state S0. 2. We alternate between state and action layers. 3. We find all actions whose preconditions are contained in S0; these actions comprise the first action layer A0. 4. The next state layer consists of all of S0 as well as the adds of all of the actions in A0. 5. In general, Ai is the set of actions whose preconditions are contained in Si, and Si+1 is Si union the add lists of all of the actions in Ai.
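A minimal Python sketch of this relaxed (delete-free) expansion, assuming actions are (name, pre, add) tuples over ground atoms (my encoding, not the slides'):

```python
def relaxed_levels(s0, actions, goal):
    """Expand relaxed-plan levels S_0, A_0, S_1, ... (no deletes, no mutexes)
    until the goal is contained in a state level or the levels stop growing."""
    state_levels, action_levels = [frozenset(s0)], []
    while not (goal <= state_levels[-1]):
        s = state_levels[-1]
        applicable = [act for act in actions if act[1] <= s]   # pre contained in S_i
        s_next = s.union(*(act[2] for act in applicable))      # S_i plus all adds
        if s_next == s:
            return None                   # leveled off without the goal: unreachable
        action_levels.append(applicable)
        state_levels.append(frozenset(s_next))
    return state_levels, action_levels

# Tiny example: two relaxed actions reaching goal {"g"} from {"p"}.
acts = [("a1", frozenset({"p"}), frozenset({"q"})),
        ("a2", frozenset({"q"}), frozenset({"g"}))]
print(relaxed_levels({"p"}, acts, frozenset({"g"})) is not None)   # True
```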

92 Example [Figure: blocks a on b on c, with d on the table; in the relaxed successor the hand may hold a or d.] S0 = {on(a,b), on(b,c), ontable(c), ontable(d), clear(a), clear(d), handempty}. A0 = {unstack(a,b), pickup(d)}. S1 = {on(a,b), on(b,c), ontable(c), ontable(d), clear(a), clear(d), handempty, holding(a), clear(b), holding(d)}. Note: S1 is not a state!

93 Example S1 = {on(a,b), on(b,c), ontable(c), ontable(d), clear(a), clear(d), handempty, holding(a), clear(b), holding(d)}. A1 = {putdown(a), putdown(d), stack(a,b), stack(a,a), stack(d,b), stack(d,a), pickup(d), unstack(b,c), ...}

94 Reachability We continue until the goal G is contained in the state layer, or until the state layer no longer changes. Intuitively, the actions at level Ai are the actions that could be executed at the i-th step of some plan, and the facts in level Si are the facts that could be made true after some (i-1)-step plan. Some of the actions/facts have this property, but not all! This process corresponds to doing a reachability analysis.

95 Reachability [Figure: blocks a on b on c.] S0 = {on(a,b), on(b,c), ontable(c), ontable(b), clear(a), clear(c), handempty}. A0 = {unstack(a,b), pickup(c)}. S1 = {on(a,b), on(b,c), ontable(c), ontable(b), clear(a), clear(c), handempty, holding(a), clear(b), holding(c)}. A1 contains stack(c,b), which adds on(c,b); but stack(c,b) cannot actually be executed after one step, and achieving on(c,b) really needs 4 actions.

96 Heuristics from Reachability Analysis Grow the levels until the goal is contained in the final state level S_K. If the state level stops changing and the goal is not present, the goal is unachievable. (The goal is a set of positive facts, and in STRIPS all preconditions are positive facts.) Now we search the graph to extract a plan (i.e., Phase II of the GraphPlan process).

97 Recall Phase II of GraphPlan Solve(G,K): forall sets of actions A = {a_i} such that no pair (a_i, a_j) in A is mutex and the actions in A suffice to add all facts in G: let P = the union of the preconditions of the actions in A; if Solve(P, K-1), report PLAN FOUND. At the end of the forall (exhausted all possible action sets A), report NO PLAN. The relaxed plan graph contains no mutexes, so the recursive call to Solve(P, K-1) will never fail. Additionally, since we don't have no-ops in the graph, we need to change the test "suffice to add all facts in G": some facts were previously created at earlier levels and carried forward by no-ops; without the no-ops we have to delay these facts to the earlier level.

98 Heuristics from Reachability Analysis CountActions(G, S_K): /* Compute the number of actions contained in a relaxed plan achieving the goal. */ Split G into the facts already in S_{K-1} and the facts in S_K only; these sets are the previously achieved and just achieved parts of G. Find a minimal set of actions A whose add effects cover the just achieved part of G (the set contains no redundant actions, but it might not be the minimum-sized set). Replace the just achieved part of G with the preconditions of A; call this updated goal NewG. Now return CountActions(NewG, S_{K-1}) + the number of actions needed to cover the just achieved part of G.

99 Example (legend: [pre]act[add]) S0 = {f1, f2, f3}; A0 = {[f1]a1[f4], [f2]a2[f5]}; S1 = {f1, f2, f3, f4, f5}; A1 = {[f2,f4,f5]a3[f6]}; S2 = {f1, f2, f3, f4, f5, f6}. G = {f6, f5, f1}. CountActs(G, S2): we split G into GP and GN: GP = {f5, f1} (already in S1), GN = {f6} (new in S2). A = {a3} (adds all of GN). The new goal is G1 = GP ∪ Pre(A) = {f5, f1, f2, f4}. Return 1 + CountActs(G1, S1).

100 Example Now we are at level S1. S0 = {f1, f2, f3}; A0 = {[f1]a1[f4], [f2]a2[f5]}; S1 = {f1, f2, f3, f4, f5}; A1 = {[f2,f4,f5]a3[f6]}; S2 = {f1, f2, f3, f4, f5, f6}. G1 = {f5, f1, f2, f4}. CountActs(G1, S1): we split G1 into GP and GN: GP = {f1, f2} (already in S0), GN = {f4, f5} (new in S1). A = {a1, a2} (adds all of GN). The new goal is G2 = GP ∪ Pre(A) = {f1, f2}. Return 2 + CountActs(G2, S0) = 2 + 0. So in total CountActs(G, S2) = 1 + 2 = 3. A sketch of CountActions in code follows.
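A hypothetical Python rendering of CountActions over relaxed levels like those built earlier (actions are (name, pre, add) tuples; the greedy cover below is one reasonable choice of a "minimal but not necessarily minimum" achiever set):

```python
def count_actions(goal, k, state_levels, action_levels):
    """Relaxed-plan heuristic: count the actions in a relaxed plan for `goal`,
    extracted backwards over state_levels[0..k] / action_levels[0..k-1]."""
    if k == 0:
        return 0                                        # goal already in S_0
    prev = state_levels[k - 1]
    previously_achieved = goal & prev                   # facts already in S_{k-1}
    just_achieved = goal - prev                         # facts new at level S_k
    chosen, covered = [], set()
    for act in action_levels[k - 1]:                    # greedy, non-redundant cover
        new = (act[2] & just_achieved) - covered
        if new:
            chosen.append(act)
            covered |= new
        if covered == just_achieved:
            break
    preconds = set().union(*(a[1] for a in chosen)) if chosen else set()
    new_goal = previously_achieved | preconds
    return len(chosen) + count_actions(new_goal, k - 1, state_levels, action_levels)

# The example from the slides: CountActs({f6,f5,f1}, S2) = 3.
a1 = ("a1", {"f1"}, {"f4"})
a2 = ("a2", {"f2"}, {"f5"})
a3 = ("a3", {"f2", "f4", "f5"}, {"f6"})
S = [{"f1", "f2", "f3"},
     {"f1", "f2", "f3", "f4", "f5"},
     {"f1", "f2", "f3", "f4", "f5", "f6"}]
A = [[a1, a2], [a3]]
print(count_actions({"f6", "f5", "f1"}, 2, S, A))   # 3
```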

101 Using the Heuristic 1. To use CountActions as a heuristic, we build a layered structure from a state S that reaches the goal. 2. Then we use CountActions to see how many actions are required in a relaxed plan. 3. We use this count as our heuristic estimate of the distance of S to the goal. 4. This heuristic tends to work better as a best-first search, i.e., when the cost of getting to the current state is ignored. A sketch of such a search loop is given below.
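Putting the pieces together, here is a small greedy best-first search skeleton (hypothetical; `successors` and `heuristic` stand in for the STRIPS successor generator and the relaxed-plan count sketched above):

```python
import heapq
from itertools import count

def best_first_plan(init, goal, successors, heuristic):
    """Greedy best-first search ordered purely by h(state), path cost ignored,
    as suggested for the relaxed-plan heuristic.
    successors(state) -> iterable of (action_name, next_state) pairs
    heuristic(state)  -> estimated number of actions to the goal"""
    tie = count()                                      # tie-breaker for the heap
    frontier = [(heuristic(init), next(tie), init, [])]
    seen = set()
    while frontier:
        h, _, state, plan = heapq.heappop(frontier)
        if goal <= state:
            return plan                                # list of action names
        if state in seen:
            continue
        seen.add(state)
        for name, nxt in successors(state):
            if nxt not in seen:
                heapq.heappush(frontier, (heuristic(nxt), next(tie), nxt, plan + [name]))
    return None

# States are expected to be frozensets of ground atoms so they are hashable.
```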

102 Admissibility An optimal-length plan in the relaxed problem (actions have no deletes) is a lower bound on the optimal length of a plan in the real problem. However, CountActions does NOT compute the length of the optimal relaxed plan. The choice of which action set to use to achieve the "just achieved" part of G is not necessarily optimal. In fact, it is NP-hard to compute the optimal-length plan even in the relaxed plan space. So CountActions will not be admissible.

103 Empirically However, empirically, refinements of CountActions perform very well on a number of sample planning domains.


More information

Efficient Planning with State Trajectory Constraints

Efficient Planning with State Trajectory Constraints Efficient Planning with State Trajectory Constraints Stefan Edelkamp Baroper Straße 301 University of Dortmund email: stefan.edelkamp@cs.uni-dortmund.de Abstract. This paper introduces a general planning

More information

Artificial Intelligence II

Artificial Intelligence II Artificial Intelligence II 2013/2014 - Prof: Daniele Nardi, Joachim Hertzberg Exercitation 3 - Roberto Capobianco Planning: STRIPS, Partial Order Plans, Planning Graphs 1 STRIPS (Recap-1) Start situation;

More information

Artificial Intelligence Planning

Artificial Intelligence Planning Artificial Intelligence Planning Gholamreza Ghassem-Sani Sharif University of Technology Lecture slides are mainly based on Dana Nau s AI Planning University of Maryland 1 For technical details: Related

More information

AUTOMATED PLANNING Automated Planning sequence of actions goal

AUTOMATED PLANNING Automated Planning sequence of actions goal AUTOMATED PLANNING Automated Planning is an important problem solving activity which consists in synthesizing a sequence of actions performed by an agent that leads from an initial state of the world to

More information

SOFTWARE ENGINEERING DESIGN I

SOFTWARE ENGINEERING DESIGN I 2 SOFTWARE ENGINEERING DESIGN I 3. Schemas and Theories The aim of this course is to learn how to write formal specifications of computer systems, using classical logic. The key descriptional technique

More information

Cognitive Robotics. Planning: Plan Domain Description Language. Matteo Matteucci

Cognitive Robotics. Planning: Plan Domain Description Language. Matteo Matteucci Cognitive Robotics Planning: Plan Domain Description Language Matteo Matteucci matteo.matteucci@polimi.it Artificial Intelligence and Robotics Lab - Politecnico di Milano Planning Problems in Artificial

More information

An Evaluation of Blackbox Graph Planning

An Evaluation of Blackbox Graph Planning An Evaluation of Blackbox Graph Planning John Duchi, James Connor, and Bruce Lo jduchi@stanford.edu, jconnor@stanford.edu, brucelo@cs.stanford.edu, I. Introduction Stanford University CS227: Assignment

More information

Where are we? Informatics 2D Reasoning and Agents Semester 2, Planning with state-space search. Planning with state-space search

Where are we? Informatics 2D Reasoning and Agents Semester 2, Planning with state-space search. Planning with state-space search Informatics 2D Reasoning and Agents Semester 2, 2018 2019 Alex Lascarides alex@inf.ed.ac.uk Where are we? Last time... we defined the planning problem discussed problem with using search and logic in planning

More information

4.1 Review - the DPLL procedure

4.1 Review - the DPLL procedure Applied Logic Lecture 4: Efficient SAT solving CS 4860 Spring 2009 Thursday, January 29, 2009 The main purpose of these notes is to help me organize the material that I used to teach today s lecture. They

More information

15-780: Graduate AI Homework Assignment #2 Solutions

15-780: Graduate AI Homework Assignment #2 Solutions 15-780: Graduate AI Homework Assignment #2 Solutions Out: February 12, 2015 Due: February 25, 2015 Collaboration Policy: You may discuss the problems with others, but you must write all code and your writeup

More information

Using Temporal Logics to Express Search Control Knowledge for Planning Λ

Using Temporal Logics to Express Search Control Knowledge for Planning Λ Using Temporal Logics to Express Search Control Knowledge for Planning Λ Fahiem Bacchus Dept. Of Computer Science University of Toronto Toronto, Ontario Canada, M5S 3G4 fbacchus@logos.uwaterloo.ca Froduald

More information

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT 1 Satisfiability (SAT) Problem 2 Boolean Expressions Boolean,

More information

Planning: Wrap up CSP Planning Logic: Intro

Planning: Wrap up CSP Planning Logic: Intro Planning: Wrap up CSP Planning Logic: Intro Alan Mackworth UBC CS 322 Planning 3 ebruary 25, 2013 Textbook 8.4, 5.1 GAC vs. AC GAC = Generalized Arc Consistency (for k-ary constraints) AC = Arc Consistency

More information

LECTURE 4: PRACTICAL REASONING AGENTS. An Introduction to Multiagent Systems CIS 716.5, Spring 2010

LECTURE 4: PRACTICAL REASONING AGENTS. An Introduction to Multiagent Systems CIS 716.5, Spring 2010 LECTURE 4: PRACTICAL REASONING AGENTS CIS 716.5, Spring 2010 What is Practical Reasoning? Practical reasoning is reasoning directed towards actions the process of figuring out what to do: Practical reasoning

More information

Boolean Representations and Combinatorial Equivalence

Boolean Representations and Combinatorial Equivalence Chapter 2 Boolean Representations and Combinatorial Equivalence This chapter introduces different representations of Boolean functions. It then discusses the applications of these representations for proving

More information

What is On / Off Policy?

What is On / Off Policy? What is On / Off Policy? Q learns how to perform optimally even when we are following a non-optimal policy In greedy, leaves no trace in Q SARSA is on-policy Learns the best policy given our systematic

More information

Foundations of AI. 9. Predicate Logic. Syntax and Semantics, Normal Forms, Herbrand Expansion, Resolution

Foundations of AI. 9. Predicate Logic. Syntax and Semantics, Normal Forms, Herbrand Expansion, Resolution Foundations of AI 9. Predicate Logic Syntax and Semantics, Normal Forms, Herbrand Expansion, Resolution Wolfram Burgard, Andreas Karwath, Bernhard Nebel, and Martin Riedmiller 09/1 Contents Motivation

More information

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989 University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

More information

AXIOMS OF AN IMPERATIVE LANGUAGE PARTIAL CORRECTNESS WEAK AND STRONG CONDITIONS. THE AXIOM FOR nop

AXIOMS OF AN IMPERATIVE LANGUAGE PARTIAL CORRECTNESS WEAK AND STRONG CONDITIONS. THE AXIOM FOR nop AXIOMS OF AN IMPERATIVE LANGUAGE We will use the same language, with the same abstract syntax that we used for operational semantics. However, we will only be concerned with the commands, since the language

More information

CS 2750 Foundations of AI Lecture 17. Planning. Planning

CS 2750 Foundations of AI Lecture 17. Planning. Planning S 2750 Foundations of I Lecture 17 Planning Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Planning Planning problem: find a sequence of actions that achieves some goal an instance of a search

More information

Propositional Logic. Part I

Propositional Logic. Part I Part I Propositional Logic 1 Classical Logic and the Material Conditional 1.1 Introduction 1.1.1 The first purpose of this chapter is to review classical propositional logic, including semantic tableaux.

More information

Planning. Planning. What is Planning. Why not standard search?

Planning. Planning. What is Planning. Why not standard search? Based on slides prepared by Tom Lenaerts SWITCH, Vlaams Interuniversitair Instituut voor Biotechnologie Modifications by Jacek.Malec@cs.lth.se Original slides can be found at http://aima.cs.berkeley.edu

More information

3. Knowledge Representation, Reasoning, and Planning

3. Knowledge Representation, Reasoning, and Planning 3. Knowledge Representation, Reasoning, and Planning 3.1 Common Sense Knowledge 3.2 Knowledge Representation Networks 3.3 Reasoning Propositional Logic Predicate Logic: PROLOG 3.4 Planning Planning vs.

More information

Agent Design Example Problems State Spaces. Searching: Intro. CPSC 322 Search 1. Textbook Searching: Intro CPSC 322 Search 1, Slide 1

Agent Design Example Problems State Spaces. Searching: Intro. CPSC 322 Search 1. Textbook Searching: Intro CPSC 322 Search 1, Slide 1 Searching: Intro CPSC 322 Search 1 Textbook 3.0 3.3 Searching: Intro CPSC 322 Search 1, Slide 1 Lecture Overview 1 Agent Design 2 Example Problems 3 State Spaces Searching: Intro CPSC 322 Search 1, Slide

More information

Chapter 3. Set Theory. 3.1 What is a Set?

Chapter 3. Set Theory. 3.1 What is a Set? Chapter 3 Set Theory 3.1 What is a Set? A set is a well-defined collection of objects called elements or members of the set. Here, well-defined means accurately and unambiguously stated or described. Any

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

Lecture 14: Lower Bounds for Tree Resolution

Lecture 14: Lower Bounds for Tree Resolution IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 14: Lower Bounds for Tree Resolution David Mix Barrington and Alexis Maciel August

More information

Cognitive Robotics Introduction Matteo Matteucci

Cognitive Robotics Introduction Matteo Matteucci Cognitive Robotics Introduction About me and my lectures Lectures given by Matteo Matteucci +39 02 2399 3470 matteo.matteucci@polimi.it http://www.deib.polimi.it/people/matteucci Research Topics (several

More information

Knowledge Representation. CS 486/686: Introduction to Artificial Intelligence

Knowledge Representation. CS 486/686: Introduction to Artificial Intelligence Knowledge Representation CS 486/686: Introduction to Artificial Intelligence 1 Outline Knowledge-based agents Logics in general Propositional Logic& Reasoning First Order Logic 2 Introduction So far we

More information

INF Kunstig intelligens. Agents That Plan. Roar Fjellheim. INF5390-AI-06 Agents That Plan 1

INF Kunstig intelligens. Agents That Plan. Roar Fjellheim. INF5390-AI-06 Agents That Plan 1 INF5390 - Kunstig intelligens Agents That Plan Roar Fjellheim INF5390-AI-06 Agents That Plan 1 Outline Planning agents Plan representation State-space search Planning graphs GRAPHPLAN algorithm Partial-order

More information

Definition: A context-free grammar (CFG) is a 4- tuple. variables = nonterminals, terminals, rules = productions,,

Definition: A context-free grammar (CFG) is a 4- tuple. variables = nonterminals, terminals, rules = productions,, CMPSCI 601: Recall From Last Time Lecture 5 Definition: A context-free grammar (CFG) is a 4- tuple, variables = nonterminals, terminals, rules = productions,,, are all finite. 1 ( ) $ Pumping Lemma for

More information

6.001 Notes: Section 1.1

6.001 Notes: Section 1.1 6.001 Notes: Section 1.1 Slide 1.1.1 This first thing we need to do is discuss the focus of 6.001. What is this course all about? This seems quite obvious -- this is a course about computer science. But

More information

Planning, Execution & Learning 1. Heuristic Search Planning

Planning, Execution & Learning 1. Heuristic Search Planning Planning, Execution & Learning 1. Heuristic Search Planning Reid Simmons Planning, Execution & Learning: Heuristic 1 Simmons, Veloso : Fall 2001 Basic Idea Heuristic Search Planning Automatically Analyze

More information

Why Search. Things to consider. Example, a holiday in Jamaica. CSE 3401: Intro to Artificial Intelligence Uninformed Search

Why Search. Things to consider. Example, a holiday in Jamaica. CSE 3401: Intro to Artificial Intelligence Uninformed Search CSE 3401: Intro to Artificial Intelligence Uninformed Search Why Search Required Readings: R & N Chapter 3, Sec. 1-4. Lecture slides adapted from those of Fahiem Bacchus. Successful Success in game playing

More information

Lecture 10: Planning and Acting in the Real World

Lecture 10: Planning and Acting in the Real World Lecture 10: Planning and Acting in the Real World CS 580 (001) - Spring 2018 Amarda Shehu Department of Computer Science George Mason University, Fairfax, VA, USA Apr 11, 2018 Amarda Shehu (580) 1 1 Outline

More information

Vorlesung Grundlagen der Künstlichen Intelligenz

Vorlesung Grundlagen der Künstlichen Intelligenz Vorlesung Grundlagen der Künstlichen Intelligenz Reinhard Lafrenz / Prof. A. Knoll Robotics and Embedded Systems Department of Informatics I6 Technische Universität München www6.in.tum.de lafrenz@in.tum.de

More information