EMPLOYING DOMAIN KNOWLEDGE TO IMPROVE AI PLANNING EFFICIENCY *


Iranian Journal of Science & Technology, Transaction B, Engineering, Vol. 29, No. B1. Printed in The Islamic Republic of Iran, 2005. Shiraz University.

G. GHASSEM-SANI ** AND R. HALAVATI
Dept. of Computer Engineering, Sharif University of Technology, Tehran, I. R. of Iran
sani@sharif.edu

Abstract: One of the most important problems of traditional AI planning methods such as non-linear planning is the control of the planning process itself. A non-linear planner confronts many choice points at different steps of the planning process (i.e., selecting the next goal to work on, selecting an action to achieve that goal, and selecting the right order in which to resolve a conflict), and ideally it should choose the best option in each case. The partial order planner (POP), introduced by Weld in 1994, assumes a magical function called "Choose" to select the best option at each planning step. There have been previous efforts to realize this function; however, most of them ignore the valuable information that can be extracted from the problem's domain. This paper introduces several general heuristics for extracting useful information contained in problem domains by an automatic preprocessing step. These heuristics have been incorporated into a planner called H2POP and tested on a number of different domains.

Keywords: Artificial intelligence, planning, non-linear planning, domain knowledge, heuristic

1. INTRODUCTION

AI planning has advanced considerably in recent years with the introduction of several new efficient methods, such as planning by graph analysis [1] and SAT planning [2]. However, traditional planning methods such as non-linear planning have the advantage of simplicity. The idea of non-linear planning was first introduced by NOAH [3] and Nonlin [4].
POP [5] uses this idea in the following manner: using a depth-first search, in each iteration the planner chooses one of the remaining goals, then finds an action satisfying that goal and tries to resolve possible conflicts. These choices are assumed to be made by a magical function called "Choose", which has been addressed by several researchers [6-8]. The main approach of most of these works is to find general, domain-independent heuristics that can help in making better choices. However, they fail to employ the valuable information that can be automatically elicited from the problem's domain. This paper presents several heuristics for the automatic extraction of domain-dependent knowledge. H2POP's operation begins with a one-time preprocessing of each new domain, prior to working on any problem in that domain. The outcomes of this preprocessing are a set of macro actions, sequences of actions that will never be used in any plan, a hierarchy of predicates, and the hidden conflicts among the predicates of the problem domain. The problem-solving operation of H2POP is almost the same as that of POP, except for the following. The first task at the beginning of solving any problem is to put the goals in a hierarchy that states which goals are more important and should be satisfied sooner. As long as there are still unsatisfied goals in the higher levels of this hierarchy, lower-level ones are not considered (i.e., hierarchical planning). Instead of the choose functions used by POP (i.e., 'choose goal' and 'choose action'), we employ two functions called 'generate action choices' and 'rate action choices'. The first produces all possible choices at the current step, and the second rates them in such a way that the most promising choice is then selected.

* Received by the editors November 13, 2002, and in final revised form November 9, 2004
** Corresponding author

In the next section, we briefly explain partial order planning (POP), and then introduce our automatic, domain-independent heuristic-generator procedures in a section titled 'Domain Preprocessing'. Next, we introduce H2POP and explain its differences from POP. Finally, in the last section, we conclude our work and propose some future work. For the sake of simplicity, all examples are given from the Blocks World domain, but the proposed methods can also be applied to other domains.

2. PARTIAL ORDER PLANNING (POP)

In a partial order planner, a plan is defined as a triple <S, O, L>, where S is the list of actions in the plan, O is a list of ordering constraints between action pairs, and L is a list of the causal links between the actions' preconditions and effects. An ordering constraint <A, B> ∈ O means that action A must be executed before action B. A causal link <A, B, Q> ∈ L denotes that the precondition Q of action B has been produced by action A and must not be violated between the execution of actions A and B. L is used to detect and resolve possible conflicts among actions. POP begins with a list of goals called the agenda. At first, S includes two nodes: Begin and End. The problem's main goals are represented by the preconditions of End, and the initial state of the problem is represented by the postconditions of Begin. Initially, there is only one ordering constraint <Begin, End> ∈ O, and L is empty. At each planning step, POP chooses a goal from the agenda and tries to achieve it by using either an existing action in the plan or a new action from the list of available actions given to the planner. If a new action is added to the plan, its preconditions are added to the agenda as new subgoals. The planner also detects and resolves possible threats to previously achieved goals.
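As a concrete illustration, the <S, O, L> triple and the agenda can be sketched as small data structures. This is a minimal Python sketch, not code from the paper; the `Action` and `Plan` structures and the one-block sample problem are our own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

@dataclass
class Plan:
    steps: list       # S: actions currently in the plan
    orderings: set    # O: pairs (A, B) meaning "A before B"
    links: set        # L: triples (A, B, Q): A supplies precondition Q of B
    agenda: list      # open goals, as pairs (Q, consumer action)

# Begin's add-effects encode the initial state; End's preconditions encode the goals.
begin = Action("Begin", frozenset(),
               frozenset({"Clear(A)", "OnTable(A)", "HandEmpty"}), frozenset())
end = Action("End", frozenset({"Holding(A)"}), frozenset(), frozenset())

# The initial plan: only Begin and End, a single ordering, no links yet,
# and every goal of End on the agenda.
plan = Plan(steps=[begin, end],
            orderings={(begin, end)},
            links=set(),
            agenda=[(g, end) for g in end.preconditions])
```

Each planning cycle would then pop a goal from `agenda`, add a supporting action (or reuse one from `steps`), and record the new causal link in `links`.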
There are several choice points in each planning cycle (i.e., choosing the next goal to achieve, choosing an action to support that goal, and choosing a means of resolving a possible threat) where a proper decision can have a major effect on the efficiency of the planning process and the quality of the produced plan. However, in the original POP algorithm there are no criteria for making proper decisions, and decisions are made by performing a blind search.

3. DOMAIN PREPROCESSING

As stated before, the first task of H2POP, whenever it encounters a new problem domain, is a pre-analysis of that domain. The aim of this analysis is to extract heuristics and to construct a number of macro actions. The following subsections describe the different procedures used in this operation. The way these heuristics are used in the planning process is discussed later.

a) Detecting hidden mutual exclusions

A domain may contain predicate pairs that are mutually exclusive even though this fact is not stated directly in the domain definition and therefore remains hidden. For example, Clear(X) and On(Y, X) in the Blocks World domain are mutually exclusive. This property is not explicitly stated in the domain definition, but it can be derived by analyzing the appropriate actions. The aim of this procedure is to identify such hidden mutual exclusions in an automatic and domain-independent manner. The outcome of this procedure for a given domain is a table with one row and one column for each predicate of that domain; the table for the Blocks World domain is shown in Table 1. In this table, the variables that occur in the columns are disjoint from the variables in the rows. At the end of the execution of this procedure, each cell of the table shows both the conditions under which the two related predicates are mutually exclusive and the situations in which they are not. For instance, consider the information given in Table 2 for two cells of Table 1.

Table 1. The initial table for hidden mutual exclusion detection

             On(X,Y)   Clear(X)   OnTable(X)   HandEmpty   Holding(X)
On(Z,T)
Clear(Z)
OnTable(Z)
HandEmpty
Holding(Z)

Table 2. Partial mutual exclusion table

             On(X,Y)                            Clear(X)
On(Z,T)      (1) Y=Z, X=T: mutually exclusive   (4) X=Z: Ok
             (2) Y=Z, X≠T: Ok                   (5) X=T: mutually exclusive
             (3) X=Z, Y≠T: mutually exclusive

Table 2's information is interpreted as follows: 1) On(X,Y) and On(Z,T) are mutually exclusive whenever X and Y are equal to T and Z, respectively. 2) On(X,Y) and On(Z,T) are compatible whenever X is not equal to T but Y is equal to Z. 3) On(X,Y) and On(Z,T) are mutually exclusive whenever X is equal to Z but Y is not equal to T. 4) Clear(X) and On(Z,T) are compatible if X is equal to Z. 5) Clear(X) and On(Z,T) are incompatible if X is equal to T.

The mutual exclusion detection algorithm, shown in Fig. 1, has two stages. The aim of stage 1 is to initialize the table with predicate pairs that are consistent. This knowledge can be extracted from the definition of the domain actions. If an action has two preconditions A and B, we regard these predicates as consistent. (Note that we assume the domain definition has no errors.) Furthermore, if an action has two add-effects C and D, we again regard C and D as consistent. Based on these assumptions, the first stage of the algorithm iterates over all domain actions and marks every two predicates that appear in the same precondition list or add-effect list of an action as consistent. For example, if we find an action with two preconditions On(X,Y) and Clear(X), we add the statement "X=Z, Ok" to the cross reference of On(X,Y) and Clear(Z) in Table 1. We must also add the statement "X=Z, Ok" to the cross reference of Clear(X) and On(Z,T). In stage 2, the algorithm performs a recursive search to detect mutual exclusion relations between predicate pairs.
A pair of predicates is regarded as mutually exclusive if every action that produces the first predicate also has an effect or a (possibly indirect) precondition that is inconsistent with the other predicate, and vice versa.

// Stage 1: Initializing the table.
1- For every action A,
   a. Set M to the set of all preconditions of A.
   b. For every two predicates P and Q from set M,
      i. If P and Q have no common variable and neither of them is variable-free, continue from 1-c.
      ii. Add the relation between the variables of P and the variables of Q to the cross reference of P and Q as Ok.
   c. Repeat the above steps for every two predicates P and Q from the set of add-effects of A.
// Stage 2: Searching for mutual exclusions.
2- Found ← false
3- For every two predicates A(a1, a2, ...) and B(b1, b2, ...) whose compatibility is not yet known,
   a. x ← CheckComp([A(a1, a2, ...)], B(b1, b2, ...))
   b. y ← CheckComp([B(b1, b2, ...)], A(a1, a2, ...))
   c. If x is false and y is false, then
      i. Add "A(a1, a2, ...) and B(b1, b2, ...) are incompatible" to the table.
      ii. Found ← true.
4- If Found is true, then go to line 2.

Fig. 1. Mutual exclusion detection algorithm
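Stage 1 of Fig. 1 can be sketched in Python for ground (variable-free) predicates. This is an illustrative simplification of the paper's variable-relation bookkeeping, using a hand-written two-action Blocks World fragment rather than a parsed domain definition:

```python
from itertools import combinations

# Each action maps to (preconditions, add-effects); ground Blocks World fragment.
actions = {
    "PickUp(A)":  ({"Clear(A)", "OnTable(A)", "HandEmpty"}, {"Holding(A)"}),
    "Stack(A,B)": ({"Holding(A)", "Clear(B)"},
                   {"On(A,B)", "Clear(A)", "HandEmpty"}),
}

consistent = set()
for pre, add in actions.values():
    # Stage 1: predicates co-occurring in the precondition list or in the
    # add-effect list of the same action are marked as consistent ("Ok").
    for group in (pre, add):
        for p, q in combinations(sorted(group), 2):
            consistent.add(frozenset({p, q}))
```

Stage 2 would then run the recursive `CheckComp` search over every pair that is still unmarked; pairs that never become consistent, such as Holding(A) and HandEmpty here, are candidates for mutual exclusion.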

As shown in Fig. 1, each direction of this analysis is performed by one call to a recursive procedure called CheckComp. For instance, in order to prove that Clear(Y) and On(X,Y) are mutually exclusive, CheckComp performs the following analysis. Clear(Y) can be achieved by PutDown(Y), Stack(Y, Z), or Unstack(Z, Y). PutDown(Y) needs Holding(Y), which can be achieved by either Unstack(Y, Z) or PickUp(Y); both of these actions need Clear(Y) as a precondition. Stack(Y, Z) needs Holding(Y) as well, which in turn needs Clear(Y) again. Finally, Unstack(Z, Y) needs On(Z, Y), which can be achieved by Stack(Z, Y), which in turn needs Clear(Y) again. Thus, all these cases contain a vicious cycle that should be discontinued. Considering the reverse direction, On(X, Y) can only be achieved by Stack(X, Y), which produces ~Clear(Y) as an effect.

b) Detection of alpha-beta preconditions

Let us start with an example from the Blocks World domain. Suppose our goals are On(A, B) and On(B, C). In this case, it is clear that in the final plan On(B, C) must be achieved prior to On(A, B). Due to the non-linear nature of POP, the order of selecting these two goals has no limiting effect on the planner's ability to find a solution. However, if the planner attacks On(A, B) first, it has to perform a time-consuming conflict resolution in order to achieve On(B, C); whereas if it attacks the goals in the reverse order, the conflict resolution process is not necessary. Finding the correct order in which planning goals should be attacked has a major effect on the performance of the planning process. The correct order of achieving these types of goals, which we call alpha-beta pairs, can be determined with a simple rule.

Alpha-Beta Pair: If every action producing predicate A also produces predicate C, where C is mutually exclusive with predicate B, we call A and B an alpha-beta pair. A is called the alpha part and B the beta part.
Whenever we find such a case in a problem, we can conclude that the alpha part should be achieved prior to the beta part, because otherwise the construction of alpha will definitely undo beta. For instance, every action producing On(B, C) also produces Clear(B). We know that Clear(B) is mutually exclusive with On(A, B). Hence On(B, C) and On(A, B) form an alpha-beta precondition pair, and therefore On(B, C) should be achieved prior to On(A, B). Figure 2 shows the algorithm that identifies alpha-beta precondition pairs.

1. For every two predicates A and B,
   a) For every predicate C incompatible with B,
      i. If there is an action that can produce A without C, continue from 1-a.
      ii. Otherwise, add (A, B) as an alpha-beta precondition pair. Continue from 1.

Fig. 2. Detection of alpha-beta precondition pairs

c) Checking action parameters

Many actions with more than one parameter may not work when their parameters are set to equal values (e.g., Stack(X, X)). Identifying these cases can prevent later conflicts. Figure 3 shows the algorithm used to identify them.

1. For any action A,
   a. L ← {}
   b. M ← {x | x is a precondition of action A}
   c. N ← {x | x is an effect of action A}
   d. P ← {x | x is a member of the delete list of A}
   e. Q ← N ∪ (M − P)
   f. For every pair of variables X and Y of action A, if there are two members of Q that are in conflict when X is equal to Y, then add (X, Y) to L.
   g. L is the list of all variable pairs of action A that cannot be set equal.

Fig. 3. Finding action parameter restrictions
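The test of Fig. 2 can be sketched for ground predicates as follows. This is an illustrative simplification: the `producers`, `add_effects`, and `mutex` tables below are hand-written stand-ins for the output of the preprocessing stages, restricted to the On(B,C)/On(A,B) example.

```python
# What each producing action adds, which actions produce each predicate,
# and the known mutually exclusive pairs (all illustrative assumptions).
add_effects = {
    "Stack(B,C)": {"On(B,C)", "Clear(B)", "HandEmpty"},
    "Stack(A,B)": {"On(A,B)", "Clear(A)", "HandEmpty"},
}
producers = {"On(B,C)": {"Stack(B,C)"}, "On(A,B)": {"Stack(A,B)"}}
mutex = {frozenset({"Clear(B)", "On(A,B)"})}

def is_alpha_beta(a, b):
    """(a, b) is an alpha-beta pair if every action producing `a`
    also adds some predicate that is mutually exclusive with `b`."""
    return all(
        any(frozenset({c, b}) in mutex for c in add_effects[act])
        for act in producers[a]
    )
```

With these tables, `is_alpha_beta("On(B,C)", "On(A,B)")` holds (every producer of On(B,C) adds Clear(B), which excludes On(A,B)), while the reverse direction does not, so On(B,C) should be scheduled first.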

d) Generation of macro actions

Some actions naturally follow each other. For example, in Blocks World, Stack(X, Y) always follows either PickUp(X) or Unstack(X, Z). If such sequences are treated as macro actions that can be chosen during the action selection phase, the number of choice points and of items in the goal agenda is reduced. Figure 4 describes our method for constructing macro actions.

Repeat for MaxDepth times:
  For every two actions or macro actions A and B,
    For every variable combination of A and B,
      1. M ← {x | x is an effect of A} ∪ ({x | x is a precondition of A} − {x | x is in the delete list of A})
      2. N ← {x | x is a precondition of B}
      3. If any two members of M and N are mutually exclusive, go to 8.
      4. P ← (M − {x | x is in the delete list of B}) ∪ {x | x is in the add list of B}
      5. Q ← P − ({x | x is a precondition of A} ∪ {x | x is a precondition of B and not an effect of A})
      6. If Q is empty, go to 8.
      7. Add AB as a macro action.
      8. Continue.
Prune the generated macro actions.

Fig. 4. Macro action generator

The algorithm has a MaxDepth parameter in its first line, which limits the maximum size of the generated macros. The higher this number, the more and the longer the macro actions that may be generated. More macro actions can speed up the planning process, but they also have the side effect of a higher branching factor, which in turn slows the planning process down. The trade-off between the two is in fact domain-dependent. We chose a single iteration for the Blocks World domain, which worked quite well. The first three lines of the algorithm ensure that every two actions, with every possible set of common variables between them, are checked.
For example, Stack and Unstack are checked 6 times, as follows: 1) Stack(x, y), Unstack(x, y); 2) Stack(x, y), Unstack(x, z); 3) Stack(x, y), Unstack(z, y); 4) Stack(x, y), Unstack(y, x); 5) Stack(x, y), Unstack(z, x); 6) Stack(x, y), Unstack(y, z). Line 1 of the algorithm constructs the set M as all add-effects of action A, plus those preconditions of A that A does not delete. Line 3 checks whether there is any mutual exclusion relation between a member of M and a precondition of action B. Line 4 finds all predicates that hold after the sequential occurrence of the two actions. Line 5 computes the net effects of the AB sequence on the world state. Line 6 checks the usefulness of the AB sequence. The last stage of this algorithm is a pruning procedure that uses measures such as 'the number of predicates that are produced by one action of the macro and used by another' or 'the number of predicates that are produced by more than one action of the macro' in order to eliminate some of the less promising macro actions. A better method for fulfilling this job is proposed later in the paper.

e) Finding opposite actions

Some actions, like Stack(X, Y) and Unstack(X, Y), are considered opposite actions, because one of them undoes the effects of the other. In other words, a sequence of these two actions has no effect on the current state of the problem. We can use the algorithm of Fig. 4 to identify these cases simply by replacing line 6 with the statement 'if Q is not empty'.
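The core of Fig. 4 (and its opposite-action variant) can be sketched for ground actions as follows. The set algebra mirrors lines 1-8 above; the pruning step is omitted, and the three ground Blocks World actions are illustrative assumptions rather than a parsed domain:

```python
def compose(a, b, mutex=frozenset()):
    """Try to chain ground STRIPS actions a -> b, each given as a triple
    (preconditions, add list, delete list).  Returns the macro's
    (preconditions, net effects, delete list), or None if the pair is
    incompatible or has no net effect (i.e., a and b are opposites)."""
    pre_a, add_a, del_a = a
    pre_b, add_b, del_b = b
    m = add_a | (pre_a - del_a)            # facts holding right after a
    if any(frozenset({x, y}) in mutex for x in m for y in pre_b):
        return None                        # a cannot directly enable b
    p = (m - del_b) | add_b                # facts holding after a;b
    q = p - (pre_a | (pre_b - add_a))      # net new effects of the sequence
    if not q:
        return None                        # a;b changes nothing: opposites
    pre = pre_a | (pre_b - add_a)          # what the macro requires up front
    dele = (pre_a | pre_b) - p             # required facts no longer holding
    return (pre, q, dele)

unstack = ({"On(A,B)", "Clear(A)", "HandEmpty"},
           {"Holding(A)", "Clear(B)"},
           {"On(A,B)", "Clear(A)", "HandEmpty"})
stack   = ({"Holding(A)", "Clear(B)"},
           {"On(A,B)", "Clear(A)", "HandEmpty"},
           {"Holding(A)", "Clear(B)"})
putdown = ({"Holding(A)"},
           {"OnTable(A)", "Clear(A)", "HandEmpty"},
           {"Holding(A)"})
```

Here `compose(unstack, stack)` returns None, since the pair undoes itself, while `compose(unstack, putdown)` yields a useful Unstack-then-PutDown macro.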

4. HIERARCHICAL/HEURISTIC POP

This section describes the differences between POP and H2POP. Our planner uses an iterative deepening depth-first search on the maximum number of actions needed to solve the given problem. This helps the planner avoid being trapped in fruitless branches of the search space.

a) Goal hierarchy

As stated before, when there is an alpha-beta relation between two goals, it is better to achieve the alpha goal before trying to achieve the beta goal. For this purpose, the goals are checked for alpha-beta relations prior to beginning the planning process. After the computation of the alpha-beta relations, the goals are put in different layers such that in each alpha-beta pair, the alpha goal is put in a higher layer than the beta goal (here, by a higher layer we mean a higher priority). The planning process begins by considering only the goals of the highest layer. Any precondition of the actions of a layer is also assumed to be in that layer. Only when all the goals in one layer have been achieved does the planning proceed to the next layer. If the planner fails to achieve any goal of any layer, it backtracks to the previous layer.

b) Priority of action selection over goal selection

In POP, goal selection is separated from action selection and takes place before it. In H2POP, however, these two tasks are united. The planner first determines all actions that are applicable to any remaining goal of the current layer, then rates them with some measurement factors, and finally selects the most promising action. (Note that selecting an action entails the selection of the goal(s) that should be achieved, i.e., those goals that are supported by the chosen action; in other words, in H2POP there is no separate step for choosing goals.) The procedure that performs this task is shown in Fig. 5.

1- M ← {x | x is an action currently selected in the plan} ∪ {x | x is a new action from the domain space}.
2- L0 ← {}
3- For all members of M such as A,
   a. L1 ← {}
   b. For all unsatisfied goals G belonging to the current layer, if A supports G with some variable combination, add the combination to L1.
   c. If L1 is not empty, then for any combination of variables stored in L1, rate it.
4- Sort list L0 with the following priorities:
   a. The highest priority measure is the number of satisfiable goals.
   b. Then, actions with a lower number of requirements are favored.
   c. Finally, macro actions have higher priority than simple ones.

Fig. 5. Generate action choices

Line 1: Set M is constructed as the set of all actions that are already used in the current plan (i.e., old actions), plus all actions that can be added to the plan (i.e., new actions).
Line 2: List L0 collects all actions that are checked and proved to satisfy at least one of the remaining goals in the current layer. Each member of L0 carries some rating measures with it; these ratings are described later.
Lines 3-a and 3-b: This part finds all possible variable assignments of the selected action that may satisfy at least one goal. For instance, a variable assignment can be of the form 'x ← A, y ← B for Stack(x, y)', in which A and B are two objects of the current problem. A variable assignment may be partial, in that some of the variables may still have no value. These variable assignments are stored in L1.
Line 3-c: It is checked whether the current action can satisfy at least one goal. If the check passes, the procedure goes through all possible combinations of the variable assignments stored in L1 for the current action and rates these combinations. The rating process considers the following measures:

1- Is the variable assignment valid?
2- How many new subgoals will be added to the goal agenda if this action is selected?
3- How many goals does it satisfy?
4- If the current action is a new action supporting precondition G of action A, is it the opposite of A?

A point worth noting here is that the above procedure generates many choices. Although the procedure sorts these choices, there may still be too many alternatives to be tried in future backtracking. Thus, we check only some of the highest ranked choices (by using an iterative deepening search) and neglect the rest.

c) Checking unrecoverable conflicts

Suppose there are two actions A and B in the current plan, such that A has unsatisfied preconditions X and Y, and B has effects X and Z. If ~Z and X form an alpha-beta precondition pair, and Z and Y are mutually exclusive, then B cannot be the supporter of precondition X for A (see Fig. 6a).

[Fig. 6a: action B supplies X to A while also producing Z, where X and ~Z form an alpha-beta pair and Z is mutually exclusive with A's other precondition Y. Fig. 6b: M2 supplies On(A, B) to M1 while also producing On(B, C), which is mutually exclusive with M1's precondition Clear(C).]

Fig. 6. The pattern (a) and sample (b) of an unresolvable conflict

For instance, suppose there is an action called M1 (either a primitive or a macro action) in our plan which has two unsatisfied preconditions, On(A, B) and Clear(C). Suppose there is also another action called M2 which produces On(A, B) and On(B, C). In this case, if the planner chooses M2 as the supplier of On(A, B) for M1, there must be a range from M2 to M1 protecting On(A, B) (see Fig. 6b). On(B, C) and Clear(C) are mutually exclusive, and thus On(B, C) must be undone before we reach M1. However, ~On(B, C) and On(A, B) form an alpha-beta pair, which means that undoing On(B, C) cannot be accomplished without undoing On(A, B). Therefore, any action chosen to undo On(B, C) will definitely be in conflict with the above range and must be either promoted or demoted, whereas we need it exactly inside that range.
Hence, such cases definitely lead to an unresolvable conflict and require backtracking, and therefore should be prevented. The algorithm shown in Fig. 7 finds such cases.

1- For any new range from B to A protecting X,
2- M ← {x | x is an unsatisfied precondition of A}.
3- N ← {~x | x is an effect of B and x is in conflict with a member of M}.
4- If any member of N forms an alpha-beta precondition pair with X, the link has an unrecoverable conflict.

Fig. 7. Finding unrecoverable conflicts

d) Preventing linear conflicts

The final major difference between POP and H2POP is that H2POP avoids adding actions that would create a linear conflict (i.e., a conflict that cannot be resolved by linearization), whereas POP first adds such actions and only later detects the conflict and has to backtrack.

5. IMPLEMENTATION RESULTS

Some of the theoretical aspects of this paper have been previously introduced elsewhere [11]; the implementation results, however, are presented here. Table 3 depicts the comparison between H2POP and POP in solving several problems from the Blocks World domain. Both algorithms were implemented in Visual C++, and the tests were run on a Pentium III, 600 MHz.

As shown, POP performs slightly better than H2POP on very simple problems (i.e., cases 1-3), but on larger ones (i.e., cases 4-10) H2POP highly outperforms POP. The lower performance of H2POP in the easier cases is due to the extra work H2POP does in each planning cycle to choose the appropriate goal and action; POP, which uses a brute-force search, often solves small problems faster. However, in tackling larger problems, POP, with its blind search, goes through a lot of unnecessary backtracking, whereas H2POP prevents this by making more appropriate decisions. Since planning (using POP or H2POP) is a highly iterative operation, the number of backtrackings has a major effect on planning efficiency. Although the comparison between H2POP and POP in this paper was based on problems from a classical planning domain (i.e., Blocks World), the ideas presented here have also been applied to a number of other domains [12].

Table 3. Comparison between POP and H2POP
[The extracted table is garbled. It lists ten Blocks World problems, giving each problem's initial state, goal state, and the solution times of H2POP and POP in milliseconds. Among the recoverable entries, the largest H2POP times are 37,934 ms and 43,482 ms, while POP exceeded 300,000 ms on the three hardest problems.]

6. CONCLUSION

In this paper, we proposed some changes to partial order planning in order to make it more efficient. POP was claimed to be sound and complete [5], and the following statements show that our modifications do not harm these properties. 1) H2POP uses an iterative deepening approach, which does not change the soundness or the completeness.
2) We use hierarchical goal ordering in H2POP. The order in which goals are selected has no effect on the completeness of POP; it only affects efficiency. We improve efficiency by forcing the planner to select goals in a more rational order. 3) We combined two main steps of POP (i.e., choosing the next goal to work on, and choosing an action to achieve that goal). Since H2POP can still select all possible actions, this does not affect the completeness of H2POP, either. 4) We use macro actions in addition to primitive actions. As macro actions are just sequences of primitive actions, selecting a macro action can be viewed as a sequence of action selections in POP. Moreover, H2POP is not restricted to macro actions; whenever necessary, primitive actions can be selected as well.

5) We prevent the occurrence of 'unrecoverable conflicts'. As stated before, an unrecoverable conflict is a case that definitely causes backtracking. If POP selects such a case, it has to backtrack and eliminate it; we prevent such cases in advance. 6) We also prevent linear conflicts beforehand, whereas POP does not recognize such cases until it is too late and has to backtrack.

We have implemented and tested our heuristics on a number of problem domains. Although there have been some improvements, there is still work to be done. For instance, the procedure that detects hidden mutual exclusion relations is not yet complete; completing it could have a major effect on the planner's efficiency. We could also use statistical information gathered from previously solved problems to discard useless macros, rate goals and predicates better, and guide selections more properly. The planner also needs a method to identify planning loops. Furthermore, the preprocessing engine could be enhanced to derive meta-rules that control the planning process. Finally, we still have no means of making a wiser choice between promotion and demotion.

REFERENCES

1. Blum, A. & Furst, M. (1997). Fast planning through planning graph analysis. Artificial Intelligence, 90.
2. Kautz, H., McAllester, D. & Selman, B. (1996). Encoding plans in propositional logic. Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR'96).
3. Sacerdoti, E. (1975). The non-linear nature of plans. Proceedings of the Fourth International Joint Conference on Artificial Intelligence (IJCAI-75).
4. Tate, A. (1977). Generating project networks. Proceedings of the Fifth International Joint Conference on Artificial Intelligence (IJCAI-77).
5. Weld, D. (1994). An introduction to least commitment planning. AI Magazine, 15(4).
6. Pollack, M., Joslin, D. & Paolucci, M. (1997). Flaw selection strategies for partial-order planning. Journal of Artificial Intelligence Research, 6.
7. Gerevini, A. & Schubert, L. (1996). Accelerating partial-order planners: Some techniques for effective search control and planning. Journal of Artificial Intelligence Research, 5.
8. Koehler, J. & Hoffmann, J. (2000). On reasonable and forced goal orderings and their use in an agenda-driven planning algorithm. Journal of Artificial Intelligence Research, 12.
9. Fikes, R., Hart, P. & Nilsson, N. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3(4).
10. Sacerdoti, E. (1974). Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5.
11. Ghassem-Sani, G., Halavati, R. & Hashemian, E. (2002). Hierarchical/heuristic partial order planner (H2POP). Proceedings of the International Conference on Artificial Intelligence (IC-AI 2002), 3.
12. Salari, R., Ghassem-Sani, G. & Halavati, R. (2004). Domain preprocessing in AI planning. Proceedings of the 9th Annual International CSI Computer Conference (CSICC'2004), I, Sharif University of Technology, Tehran, Iran.


More information

Set 9: Planning Classical Planning Systems. ICS 271 Fall 2013

Set 9: Planning Classical Planning Systems. ICS 271 Fall 2013 Set 9: Planning Classical Planning Systems ICS 271 Fall 2013 Outline: Planning Classical Planning: Situation calculus PDDL: Planning domain definition language STRIPS Planning Planning graphs Readings:

More information

cis32-ai lecture # 21 mon-24-apr-2006

cis32-ai lecture # 21 mon-24-apr-2006 cis32-ai lecture # 21 mon-24-apr-2006 today s topics: logic-based agents (see notes from last time) planning cis32-spring2006-sklar-lec21 1 What is Planning? Key problem facing agent is deciding what to

More information

CS 621 Artificial Intelligence. Lecture 31-25/10/05. Prof. Pushpak Bhattacharyya. Planning

CS 621 Artificial Intelligence. Lecture 31-25/10/05. Prof. Pushpak Bhattacharyya. Planning CS 621 Artificial Intelligence Lecture 31-25/10/05 Prof. Pushpak Bhattacharyya Planning 1 Planning Definition : Planning is arranging a sequence of actions to achieve a goal. Uses core areas of AI like

More information

Intelligent Agents. State-Space Planning. Ute Schmid. Cognitive Systems, Applied Computer Science, Bamberg University. last change: 14.

Intelligent Agents. State-Space Planning. Ute Schmid. Cognitive Systems, Applied Computer Science, Bamberg University. last change: 14. Intelligent Agents State-Space Planning Ute Schmid Cognitive Systems, Applied Computer Science, Bamberg University last change: 14. April 2016 U. Schmid (CogSys) Intelligent Agents last change: 14. April

More information

Planning as Search. Progression. Partial-Order causal link: UCPOP. Node. World State. Partial Plans World States. Regress Action.

Planning as Search. Progression. Partial-Order causal link: UCPOP. Node. World State. Partial Plans World States. Regress Action. Planning as Search State Space Plan Space Algorihtm Progression Regression Partial-Order causal link: UCPOP Node World State Set of Partial Plans World States Edge Apply Action If prec satisfied, Add adds,

More information

Plan Generation Classical Planning

Plan Generation Classical Planning Plan Generation Classical Planning Manuela Veloso Carnegie Mellon University School of Computer Science 15-887 Planning, Execution, and Learning Fall 2016 Outline What is a State and Goal What is an Action

More information

Planning and Acting. CITS3001 Algorithms, Agents and Artificial Intelligence. 2018, Semester 2

Planning and Acting. CITS3001 Algorithms, Agents and Artificial Intelligence. 2018, Semester 2 Planning and Acting CITS3001 Algorithms, Agents and Artificial Intelligence Tim French School of Computer Science and Software Engineering The University of Western Australia 2018, Semester 2 Summary We

More information

Validating Plans with Durative Actions via Integrating Boolean and Numerical Constraints

Validating Plans with Durative Actions via Integrating Boolean and Numerical Constraints Validating Plans with Durative Actions via Integrating Boolean and Numerical Constraints Roman Barták Charles University in Prague, Faculty of Mathematics and Physics Institute for Theoretical Computer

More information

CPS 270: Artificial Intelligence Planning

CPS 270: Artificial Intelligence   Planning CPS 270: Artificial Intelligence http://www.cs.duke.edu/courses/fall08/cps270/ Planning Instructor: Vincent Conitzer Planning We studied how to take actions in the world (search) We studied how to represent

More information

Artificial Intelligence 2004 Planning: Situation Calculus

Artificial Intelligence 2004 Planning: Situation Calculus 74.419 Artificial Intelligence 2004 Planning: Situation Calculus Review STRIPS POP Hierarchical Planning Situation Calculus (John McCarthy) situations actions axioms Review Planning 1 STRIPS (Nils J. Nilsson)

More information

Where are we? Informatics 2D Reasoning and Agents Semester 2, Planning with state-space search. Planning with state-space search

Where are we? Informatics 2D Reasoning and Agents Semester 2, Planning with state-space search. Planning with state-space search Informatics 2D Reasoning and Agents Semester 2, 2018 2019 Alex Lascarides alex@inf.ed.ac.uk Where are we? Last time... we defined the planning problem discussed problem with using search and logic in planning

More information

Artificial Intelligence. Planning

Artificial Intelligence. Planning Artificial Intelligence Planning Planning Planning agent Similar to previous problem solving agents Constructs plans that achieve its goals, then executes them Differs in way it represents and searches

More information

Formalizing the PRODIGY Planning Algorithm

Formalizing the PRODIGY Planning Algorithm Formalizing the PRODIGY Planning Algorithm Eugene Fink eugene@cs.cmu.edu http://www.cs.cmu.edu/~eugene Manuela Veloso veloso@cs.cmu.edu http://www.cs.cmu.edu/~mmv Computer Science Department, Carnegie

More information

Artificial Intelligence Planning

Artificial Intelligence Planning Artificial Intelligence Planning Instructor: Vincent Conitzer Planning We studied how to take actions in the world (search) We studied how to represent objects, relations, etc. (logic) Now we will combine

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols The International Arab Journal of Information Technology, Vol. 5, No. 4, October 2008 381 Incompatibility Dimensions and Integration of Atomic Commit Protocols Yousef Al-Houmaily Department of Computer

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence CSC348 Unit 4: Reasoning, change and planning Syedur Rahman Lecturer, CSE Department North South University syedur.rahman@wolfson.oxon.org Artificial Intelligence: Lecture Notes

More information

3. Knowledge Representation, Reasoning, and Planning

3. Knowledge Representation, Reasoning, and Planning 3. Knowledge Representation, Reasoning, and Planning 3.1 Common Sense Knowledge 3.2 Knowledge Representation Networks 3.3 Reasoning Propositional Logic Predicate Logic: PROLOG 3.4 Planning Planning vs.

More information

Components of a Planning System

Components of a Planning System Planning Components of a Planning System In any general problem solving systems, elementary techniques to perform following functions are required Choose the best rule (based on heuristics) to be applied

More information

Planning in a Single Agent

Planning in a Single Agent What is Planning Planning in a Single gent Sattiraju Prabhakar Sources: Multi-agent Systems Michael Wooldridge I a modern approach, 2 nd Edition Stuart Russell and Peter Norvig Generate sequences of actions

More information

Set 9: Planning Classical Planning Systems. ICS 271 Fall 2014

Set 9: Planning Classical Planning Systems. ICS 271 Fall 2014 Set 9: Planning Classical Planning Systems ICS 271 Fall 2014 Planning environments Classical Planning: Outline: Planning Situation calculus PDDL: Planning domain definition language STRIPS Planning Planning

More information

I/O Efficieny of Highway Hierarchies

I/O Efficieny of Highway Hierarchies I/O Efficieny of Highway Hierarchies Riko Jacob Sushant Sachdeva Departement of Computer Science ETH Zurich, Technical Report 531, September 26 Abstract Recently, Sanders and Schultes presented a shortest

More information

An Appropriate Search Algorithm for Finding Grid Resources

An Appropriate Search Algorithm for Finding Grid Resources An Appropriate Search Algorithm for Finding Grid Resources Olusegun O. A. 1, Babatunde A. N. 2, Omotehinwa T. O. 3,Aremu D. R. 4, Balogun B. F. 5 1,4 Department of Computer Science University of Ilorin,

More information

Automated Planning. Plan-Space Planning / Partial Order Causal Link Planning

Automated Planning. Plan-Space Planning / Partial Order Causal Link Planning Automated Planning Plan-Space Planning / Partial Order Causal Link Planning Jonas Kvarnström Automated Planning Group Department of Computer and Information Science Linköping University Partly adapted

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2013 Soleymani Course material: Artificial Intelligence: A Modern Approach, 3 rd Edition,

More information

Issues in Interleaved Planning and Execution

Issues in Interleaved Planning and Execution From: AAAI Technical Report WS-98-02. Compilation copyright 1998, AAAI (www.aaai.org). All rights reserved. Issues in Interleaved Planning and Execution Scott D. Anderson Spelman College, Atlanta, GA andorson

More information

Structure and Complexity in Planning with Unary Operators

Structure and Complexity in Planning with Unary Operators Structure and Complexity in Planning with Unary Operators Carmel Domshlak and Ronen I Brafman ½ Abstract In this paper we study the complexity of STRIPS planning when operators have a single effect In

More information

6.034 Quiz 1, Spring 2004 Solutions

6.034 Quiz 1, Spring 2004 Solutions 6.034 Quiz 1, Spring 2004 Solutions Open Book, Open Notes 1 Tree Search (12 points) Consider the tree shown below. The numbers on the arcs are the arc lengths. Assume that the nodes are expanded in alphabetical

More information

COMP310 Multi-Agent Systems Chapter 4 - Practical Reasoning Agents. Dr Terry R. Payne Department of Computer Science

COMP310 Multi-Agent Systems Chapter 4 - Practical Reasoning Agents. Dr Terry R. Payne Department of Computer Science COMP310 Multi-Agent Systems Chapter 4 - Practical Reasoning Agents Dr Terry R. Payne Department of Computer Science Pro-Active Behaviour Previously we looked at: Characteristics of an Agent and its Environment

More information

Constraint (Logic) Programming

Constraint (Logic) Programming Constraint (Logic) Programming Roman Barták Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic bartak@ktiml.mff.cuni.cz Sudoku Combinatorial puzzle, whose goal is to enter

More information

15-780: Graduate AI Homework Assignment #2 Solutions

15-780: Graduate AI Homework Assignment #2 Solutions 15-780: Graduate AI Homework Assignment #2 Solutions Out: February 12, 2015 Due: February 25, 2015 Collaboration Policy: You may discuss the problems with others, but you must write all code and your writeup

More information

Operational Semantics

Operational Semantics 15-819K: Logic Programming Lecture 4 Operational Semantics Frank Pfenning September 7, 2006 In this lecture we begin in the quest to formally capture the operational semantics in order to prove properties

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

3. Knowledge Representation, Reasoning, and Planning

3. Knowledge Representation, Reasoning, and Planning 3. Knowledge Representation, Reasoning, and Planning 3.1 Common Sense Knowledge 3.2 Knowledge Representation Networks 3.3 Reasoning Propositional Logic Predicate Logic: PROLOG 3.4 Planning Introduction

More information

Using State-Based Planning Heuristics for Partial-Order Causal-Link Planning

Using State-Based Planning Heuristics for Partial-Order Causal-Link Planning Using State-Based Planning Heuristics for Partial-Order Causal-Link Planning Pascal Bercher, Thomas Geier, and Susanne Biundo Institute of Artificial Intelligence, Ulm University, D-89069 Ulm, Germany,

More information

Planning (What to do next?) (What to do next?)

Planning (What to do next?) (What to do next?) Planning (What to do next?) (What to do next?) (What to do next?) (What to do next?) (What to do next?) (What to do next?) CSC3203 - AI in Games 2 Level 12: Planning YOUR MISSION learn about how to create

More information

An Action Model Learning Method for Planning in Incomplete STRIPS Domains

An Action Model Learning Method for Planning in Incomplete STRIPS Domains An Action Model Learning Method for Planning in Incomplete STRIPS Domains Romulo de O. Leite Volmir E. Wilhelm Departamento de Matemática Universidade Federal do Paraná Curitiba, Brasil romulool@yahoo.com.br,

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

A New Algorithm for Singleton Arc Consistency

A New Algorithm for Singleton Arc Consistency A New Algorithm for Singleton Arc Consistency Roman Barták, Radek Erben Charles University, Institute for Theoretical Computer Science Malostranské nám. 2/25, 118 Praha 1, Czech Republic bartak@kti.mff.cuni.cz,

More information

CS-171, Intro to A.I. Mid-term Exam Fall Quarter, 2014

CS-171, Intro to A.I. Mid-term Exam Fall Quarter, 2014 CS-171, Intro to A.I. Mid-term Exam Fall Quarter, 2014 YOUR NAME: YOUR ID: ID TO RIGHT: ROW: SEAT: The exam will begin on the next page. Please, do not turn the page until told. When you are told to begin

More information

Probabilistic Belief. Adversarial Search. Heuristic Search. Planning. Probabilistic Reasoning. CSPs. Learning CS121

Probabilistic Belief. Adversarial Search. Heuristic Search. Planning. Probabilistic Reasoning. CSPs. Learning CS121 CS121 Heuristic Search Planning CSPs Adversarial Search Probabilistic Reasoning Probabilistic Belief Learning Heuristic Search First, you need to formulate your situation as a Search Problem What is a

More information

Search and Optimization

Search and Optimization Search and Optimization Search, Optimization and Game-Playing The goal is to find one or more optimal or sub-optimal solutions in a given search space. We can either be interested in finding any one solution

More information

Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No.

Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No. Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No. # 20 Concurrency Control Part -1 Foundations for concurrency

More information

1 Tree Search (12 points)

1 Tree Search (12 points) 1 Tree Search (12 points) Consider the tree shown below. The numbers on the arcs are the arc lengths. Assume that the nodes are expanded in alphabetical order when no other order is specified by the search,

More information

Does Representation Matter in the Planning Competition?

Does Representation Matter in the Planning Competition? Does Representation Matter in the Planning Competition? Patricia J. Riddle Department of Computer Science University of Auckland Auckland, New Zealand (pat@cs.auckland.ac.nz) Robert C. Holte Computing

More information

Chapter S:II. II. Search Space Representation

Chapter S:II. II. Search Space Representation Chapter S:II II. Search Space Representation Systematic Search Encoding of Problems State-Space Representation Problem-Reduction Representation Choosing a Representation S:II-1 Search Space Representation

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology Madras.

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology Madras. Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology Madras Lecture No # 06 Simplex Algorithm Initialization and Iteration (Refer Slide

More information

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989 University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

More information

Chapter 10 Motion Planning

Chapter 10 Motion Planning Chapter 10 Motion Planning Part 3 10.3 Real Time Global Motion Planning 1 Outline 10.3 Real Time Global Motion Planning 10.3.1 Introduction 10.3.2 Depth Limited Approaches 10.3.3 Anytime Approaches 10.3.4

More information

Branch & Bound (B&B) and Constraint Satisfaction Problems (CSPs)

Branch & Bound (B&B) and Constraint Satisfaction Problems (CSPs) Branch & Bound (B&B) and Constraint Satisfaction Problems (CSPs) Alan Mackworth UBC CS 322 CSP 1 January 25, 2013 P&M textbook 3.7.4 & 4.0-4.2 Lecture Overview Recap Branch & Bound Wrap up of search module

More information

A CSP Search Algorithm with Reduced Branching Factor

A CSP Search Algorithm with Reduced Branching Factor A CSP Search Algorithm with Reduced Branching Factor Igor Razgon and Amnon Meisels Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel {irazgon,am}@cs.bgu.ac.il

More information

Module 6. Knowledge Representation and Logic (First Order Logic) Version 2 CSE IIT, Kharagpur

Module 6. Knowledge Representation and Logic (First Order Logic) Version 2 CSE IIT, Kharagpur Module 6 Knowledge Representation and Logic (First Order Logic) Lesson 15 Inference in FOL - I 6.2.8 Resolution We have introduced the inference rule Modus Ponens. Now we introduce another inference rule

More information

Example: Map coloring

Example: Map coloring Today s s lecture Local Search Lecture 7: Search - 6 Heuristic Repair CSP and 3-SAT Solving CSPs using Systematic Search. Victor Lesser CMPSCI 683 Fall 2004 The relationship between problem structure and

More information

Intelligent Systems. Planning. Copyright 2010 Dieter Fensel and Ioan Toma

Intelligent Systems. Planning. Copyright 2010 Dieter Fensel and Ioan Toma Intelligent Systems Planning Copyright 2010 Dieter Fensel and Ioan Toma 1 Where are we? # Title 1 Introduction 2 Propositional Logic 3 Predicate Logic 4 Reasoning 5 Search Methods 6 CommonKADS 7 Problem-Solving

More information

4 Search Problem formulation (23 points)

4 Search Problem formulation (23 points) 4 Search Problem formulation (23 points) Consider a Mars rover that has to drive around the surface, collect rock samples, and return to the lander. We want to construct a plan for its exploration. It

More information

COMPILER DESIGN. For COMPUTER SCIENCE

COMPILER DESIGN. For COMPUTER SCIENCE COMPILER DESIGN For COMPUTER SCIENCE . COMPILER DESIGN SYLLABUS Lexical analysis, parsing, syntax-directed translation. Runtime environments. Intermediate code generation. ANALYSIS OF GATE PAPERS Exam

More information

Exact Algorithms Lecture 7: FPT Hardness and the ETH

Exact Algorithms Lecture 7: FPT Hardness and the ETH Exact Algorithms Lecture 7: FPT Hardness and the ETH February 12, 2016 Lecturer: Michael Lampis 1 Reminder: FPT algorithms Definition 1. A parameterized problem is a function from (χ, k) {0, 1} N to {0,

More information

Notes on Non-Chronologic Backtracking, Implication Graphs, and Learning

Notes on Non-Chronologic Backtracking, Implication Graphs, and Learning Notes on Non-Chronologic Backtracking, Implication Graphs, and Learning Alan J. Hu for CpSc 5 Univ. of British Columbia 00 February 9 These are supplementary notes on these aspects of a modern DPLL-style

More information

A System for Bidirectional Robotic Pathfinding

A System for Bidirectional Robotic Pathfinding A System for Bidirectional Robotic Pathfinding Tesca K. Fitzgerald Department of Computer Science, Portland State University PO Box 751 Portland, OR 97207 USA tesca@cs.pdx.edu TR 12-02 November 2012 Abstract

More information

Commitment Least you haven't decided where to go shopping. Or...suppose You can get milk at the convenience store, at the dairy, or at the supermarket

Commitment Least you haven't decided where to go shopping. Or...suppose You can get milk at the convenience store, at the dairy, or at the supermarket Planning as Search-based Problem Solving? Imagine a supermarket shopping scenario using search-based problem solving: Goal: buy milk and bananas Operator: buy Heuristic function: does = milk

More information

Summary: Issues / Open Questions:

Summary: Issues / Open Questions: Summary: The paper introduces Transitional Locking II (TL2), a Software Transactional Memory (STM) algorithm, which tries to overcomes most of the safety and performance issues of former STM implementations.

More information

Chapter 3: Search. c D. Poole, A. Mackworth 2010, W. Menzel 2015 Artificial Intelligence, Chapter 3, Page 1

Chapter 3: Search. c D. Poole, A. Mackworth 2010, W. Menzel 2015 Artificial Intelligence, Chapter 3, Page 1 Chapter 3: Search c D. Poole, A. Mackworth 2010, W. Menzel 2015 Artificial Intelligence, Chapter 3, Page 1 Searching Often we are not given an algorithm to solve a problem, but only a specification of

More information

Artificial Intelligence II

Artificial Intelligence II Artificial Intelligence II 2013/2014 - Prof: Daniele Nardi, Joachim Hertzberg Exercitation 3 - Roberto Capobianco Planning: STRIPS, Partial Order Plans, Planning Graphs 1 STRIPS (Recap-1) Start situation;

More information

Failure Driven Dynamic Search Control for Partial Order Planners: An Explanation based approach

Failure Driven Dynamic Search Control for Partial Order Planners: An Explanation based approach Failure Driven Dynamic Search Control for Partial Order Planners: An Explanation based approach Subbarao Kambhampati 1 and Suresh Katukam and Yong Qu Department of Computer Science and Engineering, Arizona

More information

The Cheapest Way to Obtain Solution by Graph-Search Algorithms

The Cheapest Way to Obtain Solution by Graph-Search Algorithms Acta Polytechnica Hungarica Vol. 14, No. 6, 2017 The Cheapest Way to Obtain Solution by Graph-Search Algorithms Benedek Nagy Eastern Mediterranean University, Faculty of Arts and Sciences, Department Mathematics,

More information

Learning Techniques for Pseudo-Boolean Solving and Optimization

Learning Techniques for Pseudo-Boolean Solving and Optimization Learning Techniques for Pseudo-Boolean Solving and Optimization José Faustino Fragoso Fremenin dos Santos September 29, 2008 Abstract The extension of conflict-based learning from Propositional Satisfiability

More information

Seach algorithms The travelling salesman problem The Towers of Hanoi Playing games. Comp24412: Symbolic AI. Lecture 4: Search. Ian Pratt-Hartmann

Seach algorithms The travelling salesman problem The Towers of Hanoi Playing games. Comp24412: Symbolic AI. Lecture 4: Search. Ian Pratt-Hartmann Comp24412: Symbolic AI Lecture 4: Search Ian Pratt-Hartmann Room KB2.38: email: ipratt@cs.man.ac.uk 2016 17 Outline Seach algorithms The travelling salesman problem The Towers of Hanoi Playing games Typical

More information

cis32-ai lecture # 22 wed-26-apr-2006 Partial Order Planning Partially ordered plans Representation

cis32-ai lecture # 22 wed-26-apr-2006 Partial Order Planning Partially ordered plans Representation cis32-ai lecture # 22 wed-26-apr-2006 Partial Order Planning today s topics: partial-order planning decision-theoretic planning The answer to the problem we ended the last lecture with is to use partial

More information

register allocation saves energy register allocation reduces memory accesses.

register allocation saves energy register allocation reduces memory accesses. Lesson 10 Register Allocation Full Compiler Structure Embedded systems need highly optimized code. This part of the course will focus on Back end code generation. Back end: generation of assembly instructions

More information

Data Mining Part 5. Prediction

Data Mining Part 5. Prediction Data Mining Part 5. Prediction 5.4. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Using IF-THEN Rules for Classification Rule Extraction from a Decision Tree 1R Algorithm Sequential Covering Algorithms

More information

information is saved on a history stack, and Reverse, which runs back through a previous conservative execution and undoes its eect. We extend Forth's

information is saved on a history stack, and Reverse, which runs back through a previous conservative execution and undoes its eect. We extend Forth's A Virtual Machine Architecture for Constraint Based Programming Bill Stoddart October 25, 2000 Abstract We present a Forth style virtual machine architecture designed to provide for constriant based programming.

More information

BASIC PLAN-GENERATING SYSTEMS

BASIC PLAN-GENERATING SYSTEMS CHAPTER 7 BASIC PLAN-GENERATING SYSTEMS In chapters 5 and 6 we saw that a wide class of deduction tasks could be solved by commutative production systems. For many other problems of interest in AI, however,

More information

LECTURE 4: PRACTICAL REASONING AGENTS. An Introduction to Multiagent Systems CIS 716.5, Spring 2010

LECTURE 4: PRACTICAL REASONING AGENTS. An Introduction to Multiagent Systems CIS 716.5, Spring 2010 LECTURE 4: PRACTICAL REASONING AGENTS CIS 716.5, Spring 2010 What is Practical Reasoning? Practical reasoning is reasoning directed towards actions the process of figuring out what to do: Practical reasoning

More information

A Correct Algorithm for Efficient Planning with Preprocessed Domain Axioms

A Correct Algorithm for Efficient Planning with Preprocessed Domain Axioms In Bramer, M., Preece, A.& Coenen, F. (eds.), Research and Development in Intelligent Systems XVII (Proc. of ES-2000), pp.363-374. Springer-Verlag, 2001. A Correct Algorithm for Efficient Planning with

More information

Incompatibility Dimensions and Integration of Atomic Commit Protocols

Incompatibility Dimensions and Integration of Atomic Commit Protocols Preprint Incompatibility Dimensions and Integration of Atomic Protocols, Yousef J. Al-Houmaily, International Arab Journal of Information Technology, Vol. 5, No. 4, pp. 381-392, October 2008. Incompatibility

More information

Ordering Problem Subgoals

Ordering Problem Subgoals Ordering Problem Subgoals Jie Cheng and Keki B. Irani Artificial Intelligence Laboratory Department of Electrical Engineering and Computer Science The University of Michigan, Ann Arbor, MI 48109-2122,

More information

Local Search for CSPs

Local Search for CSPs Local Search for CSPs Alan Mackworth UBC CS CSP February, 0 Textbook. Lecture Overview Domain splitting: recap, more details & pseudocode Local Search Time-permitting: Stochastic Local Search (start) Searching

More information

Announcements. CS 188: Artificial Intelligence Spring Production Scheduling. Today. Backtracking Search Review. Production Scheduling

Announcements. CS 188: Artificial Intelligence Spring Production Scheduling. Today. Backtracking Search Review. Production Scheduling CS 188: Artificial Intelligence Spring 2009 Lecture : Constraint Satisfaction 2/3/2009 Announcements Project 1 (Search) is due tomorrow Come to office hours if you re stuck Today at 1pm (Nick) and 3pm

More information

Generic Types and their Use in Improving the Quality of Search Heuristics

Generic Types and their Use in Improving the Quality of Search Heuristics Generic Types and their Use in Improving the Quality of Search Heuristics Andrew Coles and Amanda Smith Department of Computer and Information Sciences, University of Strathclyde, 26 Richmond Street, Glasgow,

More information

Data Mining. 3.3 Rule-Based Classification. Fall Instructor: Dr. Masoud Yaghini. Rule-Based Classification
