Milestone State Formulation Methods
Hyungoo Han
Department of Computer Science & Engineering, Hankuk University of Foreign Studies, 89 Wangsan-ri, Mohyeon, Cheoin-gu, Yongin-si, Gyeonggi-do, South Korea

Abstract- An intelligent robot generates a plan to achieve a goal in a problem domain. A plan is a sequence of robot actions that accomplishes a given mission when executed successfully. However, in the real world, a robot may encounter unexpected situations and be unable to execute its actions. In such situations a plan repairing method is required for the robot to accomplish its given mission. The two basic procedures for handling such situations are generating a new plan and repairing the existing plan. The re-planning procedure can lose time generating a new plan while discarding the existing one. The repair procedure must allocate a large storage area to preserve every expected state so that unexpected changes can be adjusted back to normal states. Plan repair with milestone states is an alternative procedure that copes with such situations while retaining the advantages of the other two procedures. This paper proposes progressive and regressive methods of formulating milestone states. A method of assigning weighting values to the conditions that compose a milestone state is also proposed. The plan repair task employs the weighting values as its job priority. The regressive method formulates less complex milestone states and leads the conditions of a milestone state to take pertinent weighting values, yielding an efficient way to repair a plan.

Keywords: planning, intelligent agents, plan repair, milestone states, weighting values, artificial intelligence.

I. INTRODUCTION

Intelligent robots, like humans, generate a plan to achieve a goal in a problem domain before they execute it [1,2,3]. A plan is a sequence of robot actions that accomplishes a given goal when executed successfully. The planning process is a state transformation process from an initial state to a goal state.
I have slightly modified the definition of actions of the Stanford Research Institute Problem Solver (STRIPS) [4]. Each action consists of four components: a precondition formula, a set of positive post-conditions, a set of negative post-conditions, and a set of still-conditions. The precondition formula is a conjunction of prerequisite conditions for an action to be triggered. Both the positive and negative post-condition sets are created as the result of firing an action; the positive set consists of conditions that become true or newly appear, and the negative set consists of conditions that become false or are deleted. The success of a plan is based on the consecutive and successful executions of the individual actions of a robot. However, in the real world a robot may be confronted with unexpected situations - termed error states in this paper - that may occur due to device failures or malfunctions, inconsistent sensing data, or unanticipated environmental changes [3]. Planning for all possible error states is almost impossible; a robot should be able to fix error states whilst executing the plan [3,5]. The two basic approaches for handling error states are generating a new plan and repairing the existing plan [3,6,7]. The re-planning approach discards the current plan and regenerates a plan from the current state when an error state occurs [3,7]. This procedure builds a new plan that transforms the error state to the goal state; the plan repairing process is simple. However, the re-planning approach can lose time, since it builds a new plan while discarding the existing one. The second approach is regionally repairing a plan [3,8]. To regionally repair a plan when an error state is encountered, a robot must know the exact description of the expected normal state that would have been produced if the error state had not occurred.
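The modified STRIPS-style action representation described above can be sketched in a few lines of Python. This is a minimal illustration under my own encoding assumptions (states as sets of ground conditions, the precondition formula as a set read conjunctively); the condition and action names are illustrative and not taken from the paper's implementation:

```python
# Sketch of the modified STRIPS action representation (an illustration,
# not the paper's implementation): a state is a set of ground conditions,
# and the precondition formula is a set read as a conjunction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset       # conjunction of prerequisite conditions
    postcond_pos: frozenset  # conditions made true (new) by firing the action
    postcond_neg: frozenset  # conditions made false (deleted) by the action
    # The still-conditions are not stored with the action: they are the
    # conditions of the current state that the action leaves untouched.

def applicable(state: frozenset, a: Action) -> bool:
    """An action may fire only when its whole precondition formula holds."""
    return a.precond <= state

def fire(state: frozenset, a: Action) -> frozenset:
    """Successor state: still-conditions plus positive post-conditions,
    with the negative post-conditions removed."""
    if not applicable(state, a):
        raise ValueError(f"precondition of {a.name} violated: error state")
    return (state - a.postcond_neg) | a.postcond_pos

# Illustrative blocks-world action: unstack box B from box A.
unstack_b_a = Action(
    name="Unstack(B,A)",
    precond=frozenset({"hempty", "on(b,a)", "clear(b)"}),
    postcond_pos=frozenset({"hold(b)", "clear(a)"}),
    postcond_neg=frozenset({"hempty", "on(b,a)", "clear(b)"}),
)
s0 = frozenset({"hempty", "on(b,a)", "clear(b)", "on(a,fl)"})
s1 = fire(s0, unstack_b_a)
```

In these terms, an error state is simply a state in which applicable returns False for the next action of the plan.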
The regional repair task is the process of building a partial plan that transforms the error state to the expected normal state, without discarding the current plan. Even though this method has the advantage of reusing the existing plan, it incurs a large cost to store the expected normal states. To avoid storing all the expected states, a robot may keep track of every state transformation and produce each expected normal state internally whenever it triggers an action. However, this process will confuse a robot in a real-time plan execution environment, because tracking state transformations to produce expected states is not simple; it is time consuming. Plan repair with milestone states is another procedure [6]. This procedure has the benefits of the previous two procedures. The idea is not to store all the expected normal states, as the regional plan repair procedure does, but to select and store an appropriate number of expected normal states as milestone states. With this procedure, a robot chooses the milestone state that appears behind and nearest to the current error state and builds a partial plan to transform the error state to
the chosen milestone state, discarding only the actions between the current state and the milestone state.

This paper proposes progressive and regressive methods of formulating milestone states for the last procedure. A method for assigning weighting values to the conditions that compose a milestone state is also proposed. The plan repair process employs the weighting values as its job priority. The regressive method formulates less complex milestone states and leads the conditions of a milestone state to take more pertinent weighting values, yielding an effective and efficient plan repair procedure.

II. RELATED WORK

The three procedures for handling unexpected environmental changes were stated in section I. This section details plan repair with milestone states. In general, the regional plan repair procedure is more efficient for fixing an error state than the re-planning procedure, although theoretically the former is not always better than the latter [8,9]. The re-planning procedure wastes the existing plan and consumes more time to generate a new plan, while the regional plan repair procedure must store all the expected normal states. In contrast, the procedure with milestone states reuses the existing plan as much as possible and stores as few of the expected normal states as possible.

Fig. 1 depicts the expected normal states and the selected milestone states of a plan. The circles, regardless of their size, denote the expected normal states. Executing the plan transforms the initial state S_1 to the goal state S_n in a given problem domain, when all the actions of the plan are triggered consecutively and successfully. Action a_i is triggered in state S_i and produces S_{i+1}. That is, when a robot recognizes that every precondition of a_i is sound as expected in S_i, it triggers a_i in S_i, transforming S_i to S_{i+1}. In Fig. 1, every 6th state is selected as a milestone state for plan repair. The big circles marked M, including the goal state S_n, are milestone states and will be stored in the robot.

Fig. 1. A plan space and milestone states

Suppose a robot finds that it cannot trigger action a_k in the current state S_k, i.e., S_k is an error state; it will choose the first following milestone state M_{i+1} and generate a partial plan that transforms the error state S_k to M_{i+1}. Under this situation, with the re-planning procedure the robot would throw away the entire plan and generate a new plan that transforms S_k to the goal state S_n. Note that when a_k is in the forepart of the plan, the robot has to consume much effort to generate a new, lengthy plan. With the regional plan repair procedure, the robot should know the perfect description of the expected normal state, as mentioned in the previous section.

Determination of the appropriate number of milestone states to repair error states is not within the scope of this paper. Note that when no milestone state is selected, the repair procedure with milestone states becomes the re-planning procedure, and when all the states are selected as milestone states, it becomes the regional plan repair procedure. If a robot selects more milestone states, it must allocate more storage; if it selects fewer, it has to discard more actions and spend more time generating a partial plan. Therefore, it is not a simple task to determine how many milestone states should be selected to repair error states; this may depend on different factors in problem domains.

III. TWO METHODS TO FORMULATE MILESTONE STATES

III-1. Problem domain and a blocks world

I employed the blocks world in Fig. 2 as a problem domain to describe how the two methods formulate milestone states. It is also used to compare the efficiency of their roles in repairing error states and to explain the mechanism of assigning weighting values to conditions in the problem domain. In the figure, a plan with 28 actions is generated from the given initial and goal state descriptions. Box B and box C are depicted with dotted lines, because the conditions related to the two boxes are irrelevant to the goal state, and the goal state description in the figure does not contain any conditions relevant to them.

Fig. 2. Blocks world problem domain

Some constraints are imposed in the problem domain. Two different actions, Pick and Unstack, are used to hold a box in the hand of the robot. Unstack(X,Y) removes X from Y when Y is a box and both the robot and Y are on the same object. Conversely, Pick(X,Y) picks up X from Y when both the robot and X are on Y. Stack(X,Y) and Put(X,Y) are the actions that place X on Y: Stack(X,Y) stacks X on Y when the robot and Y are on the same object, and Put(X,Y) places X on Y when the robot is on Y. The robot must empty its hand to hold a box or to climb onto or down from a box. Only one box can be placed on the shelf; to reach a box on the shelf, the robot must stack box A on box D and get on the pile of the two boxes. The robot cannot reach a box on a pile of multiple boxes. The robot, box A, and one other box, or the robot and two boxes other than box A, can be placed on box D simultaneously. The robot and a box, or two boxes other than box D, can simultaneously be on box A. There is always room on the floor for the robot and boxes.

III-2. State space modeling of a plan

Since the execution of a plan is concerned with firing actions to change the states of a problem domain, a conceptual model for state transition T is defined as a 4-tuple system T = (S, A, C, τ) [10]. S is a finite set of domain states, A is a finite set of robot actions, C is a finite set of conditions that comprise the domain states, and τ: S × A × C → S is a state transition function, represented as τ(s_i, a_i, precond(a_i)) → s_{i+1}. When the preconditions of an action a_i, denoted precond(a_i), are satisfied in the state s_i, τ triggers a_i in s_i and causes the state transition from s_i to s_{i+1}. The new state s_{i+1} is determined by postcond(a_i), postcond⁻(a_i), and stillcond(a_i).
Postcond(a_i) contains the conditions that become satisfied in compliance with action a_i. Postcond⁻(a_i) contains the conditions that are deleted or negated by action a_i. Stillcond(a_i) contains the conditions that are members of state s_i and irrelevant to the execution of action a_i. The stillcond of an action holds conditions created by previously executed actions and may contain preconditions of following actions. The length of a plan is the number of actions that compose it. When all the actions of a plan are executed successfully, the number of produced states exceeds the plan length by one. Both the progressive and the regressive methods produce the same number of states. A milestone state is one of the domain states produced by applying function τ to the actions of a plan, and an appropriate number of domain states are selected as milestone states. Determining that appropriate number may be a distinct research topic of the plan repair procedure with milestone states. This paper therefore proposes the methods for building the pool of domain states from which the milestone states are selected.

III-3. Progressive method

The progressive method transforms domain states forward consecutively. A progressive transition function τ_f to produce states forward is defined as τ_f: S × A × C → F, where F is a finite set of domain states produced by applying τ_f to the actions of A and is a subset of S. The progressive transition function τ_f is defined as:

τ_f(f_i, a_i, precond(a_i)) → f_{i+1}, where
f_{i+1} = (postcond(a_i) − postcond⁻(a_i)) ∪ stillcond(a_i),
f_{i+1} = f_{i+1} − {c_i ∈ f_{i+1} | (c_i ∈ f_{i+1}) ∧ (¬c_i ∈ f_{i+1})}.

Note that f_i and f_{i+1} are elements of F, f_1 is the initial state, and f_{n+1} is the goal state, produced by τ_f(f_n, a_n, precond(a_n)) when the length of the plan is n. The last expression states that a condition and its negation cannot reside in a state simultaneously.
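Under a set-of-conditions encoding of states, the progressive transition function τ_f can be sketched as follows. This is my own illustration, not the paper's code; negation is modelled with a "not " prefix so that the consistency filter can be expressed, and the condition names are illustrative:

```python
# Sketch of the progressive transition function tau_f (an assumption,
# not the paper's code): f_{i+1} = (postcond - postcond_neg) U stillcond,
# where stillcond is the part of f_i that the action leaves untouched,
# followed by a filter that removes any condition whose negation also
# appears ("not " prefix models negation).

def negate(c: str) -> str:
    return c[4:] if c.startswith("not ") else "not " + c

def tau_f(f_i, precond, postcond_pos, postcond_neg):
    if not precond <= f_i:
        raise ValueError("precondition not satisfied: error state")
    stillcond = f_i - postcond_pos - postcond_neg  # conditions the action ignores
    f_next = (postcond_pos - postcond_neg) | stillcond
    # a condition and its negation cannot reside in a state simultaneously
    return {c for c in f_next if negate(c) not in f_next}

# Forward simulation of a plan yields the pool f_1 .. f_{n+1} from which
# milestone states are selected.
f1 = {"hempty", "on(b,a)", "clear(b)", "on(a,fl)"}
f2 = tau_f(f1,
           precond={"hempty", "on(b,a)", "clear(b)"},
           postcond_pos={"hold(b)", "clear(a)"},
           postcond_neg={"hempty", "on(b,a)", "clear(b)"})
```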
The stillcond of an action consists of both the irrelevant conditions to the action and the conditions unaffected by the execution of the following actions, up to the next milestone state. In some cases, a stillcond of an action may have conditions that remain unchanged by the following actions up to the goal state. These unaffected conditions naturally exist in the milestone states that are formulated in compliance with the progressive method. The unaffected conditions will appear more in the milestone states as the plan execution proceeds, since it is possible that some of the postcond or postcond - of an action can become the stillcond of following actions. Furthermore, these phenomena will be amplified, when a given problem domain is complex and the domain states become more complex. Some unaffected conditions may not be preconditions of the following actions or unnecessary conditions for a robot to accomplish its mission. Meanwhile, the plan repair task must generate a complete partial plan that satisfies all the conditions of the milestone state chosen to fix an error state, regardless of their necessity. Therefore, a robot may make additional effort to generate and execute a partial plan, with the milestone states formulated by the progressive method. It will be hard to expect an efficient way to repair a plan. Table 1 shows example states produced by the two methods. Both methods will produce 29 states with the example blocks world domain in section III-1. III-4. Regressive method The regressive method transforms domain states backward consecutively. A regressive transition function τ b to produce states backward is defined as τ b : S A C B. B is a finite set of domain states produced
by applying τ_b to the actions of A, and is a subset of S. When the plan length is n, b_1 and b_{n+1} are elements of B and are the initial state and the final state, respectively, as for the progressive transition function in the previous section.

TABLE 1. Example states (f_i: progressive method, b_i: regressive method)

f_1  = on(r,fl) hempty on(a,fl) on(b,a) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,a) clear(e)
b_1  = on(r,fl) hempty on(a,fl) on(b,a) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,a) clear(e)
f_2  = on(r,fl) on(a,fl) clear(a) hold(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,a) clear(e)
b_2  = on(r,fl) on(a,fl) clear(a) hold(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,a) clear(e)
f_7  = on(r,fl) hempty on(a,d) clear(a) on(b,fl) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
b_7  = on(r,fl) hempty on(a,d) clear(a) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
f_8  = on(r,d) hempty on(a,d) clear(a) on(b,fl) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
b_8  = on(r,d) hempty on(a,d) clear(a) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
f_28 = on(r,fl) hold(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) clear(d) on(e,sf) clear(e)
b_28 = on(r,fl) hold(a) on(d,fl) clear(d) on(e,sf) clear(e)
f_29 = on(r,fl) hempty on(a,fl) clear(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) clear(d) on(e,sf) clear(e)
b_29 = on(r,fl) hempty on(a,fl) clear(a) on(d,fl) clear(d) on(e,sf) clear(e)

The regressive transition function τ_b is defined as:

τ_b(b_i, a_{i-1}, precond(a_{i-1})) → b_{i-1}, where
b_{i-1} = precond(a_{i-1}) ∪ {c_i ∈ b_i | c_i ∉ (postcond(a_{i-1}) ∪ postcond⁻(a_{i-1}))},
b_{i-1} = b_{i-1} − {c_i ∈ b_{i-1} | (c_i ∈ b_{i-1}) ∧ (¬c_i ∈ b_{i-1})}.

b_{i-1} must contain precond(a_{i-1}), the set of prerequisite conditions of a_{i-1}, since action a_{i-1} is to be triggered in state b_{i-1} to produce state b_i. State b_{i-1} must not contain the postcond(a_{i-1}) or the postcond⁻(a_{i-1}) that are in state b_i, because the execution of a_{i-1} creates these conditions and produces state b_i. The last expression states that a condition and its negation cannot reside in a state simultaneously.
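A corresponding sketch of the regressive transition function τ_b, under the same assumed set-of-conditions encoding (again my own illustration with illustrative names, not the paper's implementation):

```python
# Sketch of the regressive transition function tau_b (an assumption, not
# the paper's code): b_{i-1} = precond(a_{i-1}) U the conditions of b_i
# that a_{i-1} neither adds nor deletes, with the same negation filter
# ("not " prefix models negation).

def negate(c: str) -> str:
    return c[4:] if c.startswith("not ") else "not " + c

def tau_b(b_i, precond, postcond_pos, postcond_neg):
    # conditions of b_i not produced or deleted by a_{i-1} are carried back
    carried = {c for c in b_i if c not in (postcond_pos | postcond_neg)}
    b_prev = set(precond) | carried
    # a condition and its negation cannot reside in a state simultaneously
    return {c for c in b_prev if negate(c) not in b_prev}

# Regressing the goal state through the last action reconstructs the state
# just before it; iterating backwards yields the pool b_{n+1} .. b_1.
goal = {"hempty", "on(b,a)", "clear(b)", "on(a,fl)"}
b_prev = tau_b(goal,
               precond={"hold(b)", "clear(a)", "on(a,fl)"},
               postcond_pos={"hempty", "on(b,a)", "clear(b)"},
               postcond_neg={"hold(b)", "clear(a)"})
```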
The stillconds of actions will not contain unnecessary conditions, unlike with the progressive method; since the regressive function produces states backwards, they instead contain the conditions that compose the goal state or the preconditions of following actions. Therefore, the unnecessary conditions also disappear from the milestone states selected from the states produced by the regressive method. The plan repair task with this method will be much lighter and simpler than the task with the progressive method. The lengths of the partial plans to repair the error state in Fig. 3 reveal the efficiency of the regressive method.

III-5. An example for repairing an error state

Fig. 3 shows an example of an error state and a selected milestone state. The robot is about to trigger the 8th action, Geton(D,A), of the plan in Fig. 2 and has encountered an error state, as in Fig. 3. Since the precondition on(r,d) of the action is not true in the error state, the robot cannot trigger the 8th action. The robot will choose the milestone state in Fig. 3 and generate a partial plan that will transform the error state to the milestone state. Two different forms of the milestone state, Mf_2 by the progressive method and Mb_2 by the regressive method, are described below the picture of the milestone state, taken from Table 3 in section IV-1. The partial plan for Mf_2 has more actions to repair the error state than the partial plan for Mb_2.

Fig. 3. An error state and plan repair

III-6. Weighting values of milestone state conditions

A weighting value is assigned to each condition composing a milestone state to provide a guide to fixing an error state with the milestone states.
The task will be able to fix error states efficiently, since the order of fixing the conditions for the plan repair task can be determined by the weighting values. The principle of assigning weighting values to conditions is that, for a condition appearing in two consecutive milestone states simultaneously, the occurrence in the latter milestone state receives an additional weighting value. Note that the initial state is employed as a milestone state to pair with the first milestone state. The conditions appearing in two consecutive milestone states simultaneously may not be preconditions of the actions between the two milestone states, but may be preconditions of the actions following the latter milestone state or conditions that compose the goal state. These conditions are usually created by actions executed before the first of the two consecutive milestone states. The continuously appearing conditions may be preserved during the execution of the actions between the two consecutive milestone states and have higher weighting values than the other conditions of the rear milestone state, since the unnecessary conditions are removed from milestone states formulated by the
regressive method. The following expressions assign weighting values to conditions, where W is the weighting value of a condition, RC is a condition related to the robot, and c is a condition of a milestone state M:

∀c_i ∈ M_i, (c_i ∈ M_{i+1} ∧ c_i ∉ RC) → W(c_i ∈ M_{i+1}) + 2
∀c_i ∈ M_i, (c_i ∈ M_{i+1} ∧ c_i ∈ RC) → W(c_i ∈ M_{i+1}) + 1
∀c_i ∈ M_{i+1}, c_i ∉ M_i → W(c_i ∈ M_{i+1}) = 0

The weighting value of an RC is increased by 1, not 2, as the robot is the main agent handling the domain conditions and has to handle the conditions that are not related to itself prior to handling the RCs. When a new condition appears in the rear milestone state, it is given a zero weighting value.

IV. COMPARISON OF THE TWO METHODS

IV-1. Complexity comparisons

In Table 2, the length of the plan is the number of actions of the plan in Fig. 2, the number of states is the number of states produced by the simulated execution of the plan, and the number of conditions is the total number of conditions of the 29 states. Note that the 29 states are produced by the execution of the 28 actions in the plan. The number of conditions and the average number of state conditions in the table show that the regressive method produces states that are less complex than those of the progressive method. That is, fewer conditions are to be repaired when a plan repair task is needed.

TABLE 2. Complexity of states
Method             | Length of plan | Number of states | Number of conditions | Average
Progressive method | 28             | 29               | -                    | -
Regressive method  | 28             | 29               | -                    | -
(The condition counts and averages did not survive this transcription.)

Table 3 shows the two types of milestone states selected from the 29 states produced by the progressive and regressive transition functions with the plan in Fig. 2. In this paper, every 4th state, including the goal state, is chosen as a milestone state from the 29 states. The goal state is the last milestone state; there are 8 milestone states.
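The weighting rules of section III-6 can be sketched as follows. This encodes one reading of the expressions above (weights assigned per pair of consecutive milestone states, so every condition of the rear state receives 0, 1, or 2, matching the values reported for Table 4); is_robot_related is a hypothetical stand-in for membership in RC:

```python
# Sketch of one reading of the weighting rules (an assumption): carried-over
# conditions not related to the robot get 2, carried-over robot-related
# conditions get 1, and newly appearing conditions get 0.

def is_robot_related(c: str) -> bool:
    # robot position, empty hand, and holding a box concern the robot itself
    # (a hypothetical encoding of RC, not taken from the paper)
    return c.startswith(("on(r,", "hempty", "hold("))

def assign_weights(m_front: set, m_rear: set) -> dict:
    """Weight each condition of the rear milestone state m_rear relative
    to the front milestone state m_front."""
    weights = {}
    for c in m_rear:
        if c in m_front:
            weights[c] = 1 if is_robot_related(c) else 2
        else:
            weights[c] = 0  # condition newly appearing in the rear state
    return weights

# The plan repair task then fixes conditions in descending weight order.
w = assign_weights({"on(e,sf)", "hempty"}, {"on(e,sf)", "hempty", "clear(a)"})
```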
In Table 3, Mf_i and Mb_i are the names of the milestone states formulated by the progressive and the regressive methods respectively, where i is the index of a milestone state, an integer between 1 and 8. There must be a method to pinpoint a milestone state when an error state is encountered. The integer i becomes the index of the chosen milestone when it satisfies the following expression, where k is the index of the action encountering an error state and p is the constant used to select milestone states:

i = min{ i | k < i·p + 1 }

From the problem domain, if action 6 has encountered an error state, i becomes 2 by the expression min{ i | 6 < i·4 + 1 }, and the second milestone state is chosen for plan repair with milestone states.

TABLE 3. Milestone states

Mf_1 = on(r,fl) on(a,fl) clear(a) on(b,fl) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) hold(e)
Mb_1 = on(r,fl) on(a,fl) clear(a) on(c,sf) clear(c) on(d,fl) clear(d) hold(e)
Mf_2 = on(r,d) hempty on(a,d) clear(a) on(b,fl) clear(b) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
Mb_2 = on(r,d) hempty on(a,d) clear(a) on(c,sf) clear(c) on(d,fl) clear(d) on(e,fl) clear(e)
Mf_3 = on(r,d) hempty on(a,d) clear(a) on(b,fl) clear(b) on(c,a) clear(c) on(d,fl) clear(d) on(e,fl) clear(e) clear(sf)
Mb_3 = on(r,d) hempty on(a,d) clear(a) on(c,a) clear(c) on(d,fl) clear(d) on(e,fl) clear(e) clear(sf)
Mf_4 = on(r,fl) on(a,d) clear(a) on(b,fl) clear(b) hold(c) on(d,fl) clear(d) on(e,fl) clear(e) clear(sf)
Mb_4 = on(r,fl) on(a,d) clear(a) hold(c) on(d,fl) clear(d) on(e,fl) clear(e) clear(sf)
Mf_5 = on(r,d) hempty on(a,d) clear(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) on(e,d) clear(e) clear(sf)
Mb_5 = on(r,d) hempty on(a,d) clear(a) on(d,fl) clear(d) on(e,d) clear(e) clear(sf)
Mf_6 = on(r,a) on(a,d) clear(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) clear(d) hold(e) clear(sf)
Mb_6 = on(r,a) on(a,d) clear(a) on(d,fl) clear(d) hold(e) clear(sf)
Mf_7 = on(r,fl) hold(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) clear(d) on(e,sf) clear(e)
Mb_7 = on(r,fl) hold(a) on(d,fl) clear(d) on(e,sf) clear(e)
Mf_8 = on(r,fl) hempty on(a,fl) clear(a) on(b,fl) clear(b) on(c,fl) clear(c) on(d,fl) clear(d) on(e,sf) clear(e)
Mb_8 = on(r,fl) hempty on(a,fl) clear(a) on(d,fl) clear(d) on(e,sf) clear(e)

IV-2. Weighting value comparisons

The weighting values of the milestone states formulated by the progressive and the regressive methods are assigned in the same way as in section III-6. Table 4 shows the weighting values assigned to the conditions of the milestone states in Table 3. The conditions listed are the domain conditions that appear across all 29 states. In the table, each milestone state is composed only of conditions with weighting values 0, 1, or 2. The highlighted part of the table shows that in the progressive method, high weighting values are assigned to the unnecessary conditions related to box B and box C. In contrast, in the regressive method, those conditions have low weighting values or do not appear in the milestone states. This means that the unnecessary conditions disappear from the list of conditions to fix for an error state. The error state repair task will try to compose a partial plan for the conditions of a chosen milestone state in the order of their weighting values. Therefore, the regressive method will guide the error state repair task to be more efficient than the progressive method.

TABLE 4. Weighting values of conditions
Rows: the domain conditions on(r,fl), on(r,a), on(r,d), hempty, hold(a), hold(b), hold(c), hold(e), on(a,fl), on(a,d), clear(a), on(b,fl), on(b,a), clear(b), on(c,fl), on(c,a), on(c,d), on(c,sf), clear(c), on(d,fl), clear(d), on(e,fl), on(e,a), on(e,d), on(e,sf), clear(e), clear(sf). Columns: the milestone states of the progressive and the regressive methods. (The per-state weighting values did not survive this transcription.)

V. CONCLUSION

This paper proposes the progressive and regressive methods of formulating milestone states and a method of assigning weighting values to the conditions that compose a milestone state. The progressive method forces a milestone state to contain conditions unnecessary for repairing error states, while the regressive method does not. Therefore, the regressive method usually formulates smaller and less complex milestone states for fixing error states. When the repair list of conditions becomes longer, the partial plan to repair the conditions on the list also becomes larger, and a robot has to spend more time generating and executing it. Unlike the progressive method, the regressive method causes the unnecessary conditions to take low weighting values or to disappear from the milestone states. The task of fixing error states employs the weighting values as its job priority. Therefore, the regressive method of formulating milestone states provides an efficient way to repair plans. The proposed method is unsuited to path selection plans. A study on topics such as the similarity between milestone states, the degree of proximity of locations and domain states, and heuristic approaches to repairing error states will deserve a place in future plan repairing methods with milestone states.

REFERENCES

[1] Yongtae Do et al., Artificial Intelligence: Concepts and Applications, 3rd Edition, SciTech, Seoul, Korea, 2009.
[2] Malik Ghallab, Dana Nau, Paolo Traverso, Automated Planning: Theory and Practice, Morgan Kaufmann Publishers, New York, 2004.
[3] Hyungoo Han, Kai Chang, William Day, A Comparison of Failure Handling Approaches for Planning Systems: Replanning vs. Recovery, Journal of Applied Intelligence, Vol. 3, Kluwer Academic Publishers, 1993.
[4] Richard E. Fikes, Nils J. Nilsson, STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving, Artificial Intelligence, 2, 1971.
[5] Jianming Guo, Liang Liu, A Study of Improvement of D* Algorithms for Mobile Robot Path Planning in Partial Unknown Environments, Kybernetes, Vol. 39, Issue 6, Emerald Group Publishing Limited, 2010.
[6] Hyungoo Han, Plan Repair with Milestone States, Institute of Information Industrial Engineering, Hankuk University of Foreign Studies, Vol. 13, 2009.
[7] Bernhard Nebel, Jana Koehler, Plan Reuse versus Plan Generation: A Theoretical and Empirical Analysis, Artificial Intelligence, 76, 1995.
[8] Roman van der Krogt, Mathijs de Weerdt, Plan Repair as an Extension of Planning, ICAPS, 2005.
[9] C. A. Broverman and W. B. Croft, Reasoning about Exceptions during Plan Execution Monitoring, Proc. Natl. Conf. Artificial Intelligence, Seattle, WA, 1987.
[10] T. Dean and M. Wellman, Planning and Control, Morgan Kaufmann, 1991.
More informationSet 9: Planning Classical Planning Systems. ICS 271 Fall 2013
Set 9: Planning Classical Planning Systems ICS 271 Fall 2013 Outline: Planning Classical Planning: Situation calculus PDDL: Planning domain definition language STRIPS Planning Planning graphs Readings:
More informationPlanning (What to do next?) (What to do next?)
Planning (What to do next?) (What to do next?) (What to do next?) (What to do next?) (What to do next?) (What to do next?) CSC3203 - AI in Games 2 Level 12: Planning YOUR MISSION learn about how to create
More informationComponents of a Planning System
Planning Components of a Planning System In any general problem solving systems, elementary techniques to perform following functions are required Choose the best rule (based on heuristics) to be applied
More informationThe Dynamic Hungarian Algorithm for the Assignment Problem with Changing Costs
The Dynamic Hungarian Algorithm for the Assignment Problem with Changing Costs G. Ayorkor Mills-Tettey Anthony Stentz M. Bernardine Dias CMU-RI-TR-07-7 July 007 Robotics Institute Carnegie Mellon University
More informationA deterministic action is a partial function from states to states. It is partial because not every action can be carried out in every state
CmSc310 Artificial Intelligence Classical Planning 1. Introduction Planning is about how an agent achieves its goals. To achieve anything but the simplest goals, an agent must reason about its future.
More informationAn Action Model Learning Method for Planning in Incomplete STRIPS Domains
An Action Model Learning Method for Planning in Incomplete STRIPS Domains Romulo de O. Leite Volmir E. Wilhelm Departamento de Matemática Universidade Federal do Paraná Curitiba, Brasil romulool@yahoo.com.br,
More informationLet the dynamic table support the operations TABLE-INSERT and TABLE-DELETE It is convenient to use the load factor ( )
17.4 Dynamic tables Let us now study the problem of dynamically expanding and contracting a table We show that the amortized cost of insertion/ deletion is only (1) Though the actual cost of an operation
More informationScienceDirect. Plan Restructuring in Multi Agent Planning
Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 46 (2015 ) 396 401 International Conference on Information and Communication Technologies (ICICT 2014) Plan Restructuring
More information3 No-Wait Job Shops with Variable Processing Times
3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select
More informationArtificial Intelligence. Planning
Artificial Intelligence Planning Planning Planning agent Similar to previous problem solving agents Constructs plans that achieve its goals, then executes them Differs in way it represents and searches
More informationDefinition: A context-free grammar (CFG) is a 4- tuple. variables = nonterminals, terminals, rules = productions,,
CMPSCI 601: Recall From Last Time Lecture 5 Definition: A context-free grammar (CFG) is a 4- tuple, variables = nonterminals, terminals, rules = productions,,, are all finite. 1 ( ) $ Pumping Lemma for
More informationArtificial Intelligence
University of Cagliari M.Sc. degree in Electronic Engineering Artificial Intelligence Academic Year: 07/08 Instructor: Giorgio Fumera Exercises on search algorithms. A -litre and a -litre water jugs are
More informationA proof-producing CSP solver: A proof supplement
A proof-producing CSP solver: A proof supplement Report IE/IS-2010-02 Michael Veksler Ofer Strichman mveksler@tx.technion.ac.il ofers@ie.technion.ac.il Technion Institute of Technology April 12, 2010 Abstract
More informationBlocking Combinatorial Games
Blocking Combinatorial Games by Arthur Holshouser and Harold Reiter Arthur Holshouser 3600 Bullard St. Charlotte, NC, USA, 28208 Harold Reiter Department of Mathematics UNC Charlotte Charlotte, NC 28223
More informationChapter 3: Search. c D. Poole, A. Mackworth 2010, W. Menzel 2015 Artificial Intelligence, Chapter 3, Page 1
Chapter 3: Search c D. Poole, A. Mackworth 2010, W. Menzel 2015 Artificial Intelligence, Chapter 3, Page 1 Searching Often we are not given an algorithm to solve a problem, but only a specification of
More informationImplementation of metadata logging and power loss recovery for page-mapping FTL
LETTER IEICE Electronics Express, Vol.10, No.11, 1 6 Implementation of metadata logging and power loss recovery for page-mapping FTL Seung-Ho Lim a) 1 Hankuk University of Foreign Studies, 89 Wangsan-ri,
More informationPlanning Algorithms Properties Soundness
Chapter MK:VI III. Planning Agent Systems Deductive Reasoning Agents Planning Language Planning Algorithms State-Space Planning Plan-Space Planning HTN Planning Complexity of Planning Problems Extensions
More informationA New Method to Index and Query Sets
A New Method to Index and Query Sets Jorg Hoffmann and Jana Koehler Institute for Computer Science Albert Ludwigs University Am Flughafen 17 79110 Freiburg, Germany hoffmann koehler@informatik.uni-freiburg.de
More informationMultiagent Planning with Factored MDPs
Appeared in Advances in Neural Information Processing Systems NIPS-14, 2001. Multiagent Planning with Factored MDPs Carlos Guestrin Computer Science Dept Stanford University guestrin@cs.stanford.edu Daphne
More informationAND-OR GRAPHS APPLIED TO RUE RESOLUTION
AND-OR GRAPHS APPLIED TO RUE RESOLUTION Vincent J. Digricoli Dept. of Computer Science Fordham University Bronx, New York 104-58 James J, Lu, V. S. Subrahmanian Dept. of Computer Science Syracuse University-
More informationPrinciples of AI Planning. Principles of AI Planning. 7.1 How to obtain a heuristic. 7.2 Relaxed planning tasks. 7.1 How to obtain a heuristic
Principles of AI Planning June 8th, 2010 7. Planning as search: relaxed planning tasks Principles of AI Planning 7. Planning as search: relaxed planning tasks Malte Helmert and Bernhard Nebel 7.1 How to
More informationL-Bit to M-Bit Code Mapping To Avoid Long Consecutive Zeros in NRZ with Synchronization
L-Bit to M-Bit Code Mapping To Avoid Long Consecutive Zeros in NRZ with Synchronization Ruixing Li Shahram Latifi Yun Lun Ming Lun Abstract we investigate codes that map bits to m bits to achieve a set
More informationPlanning and Acting. CITS3001 Algorithms, Agents and Artificial Intelligence. 2018, Semester 2
Planning and Acting CITS3001 Algorithms, Agents and Artificial Intelligence Tim French School of Computer Science and Software Engineering The University of Western Australia 2018, Semester 2 Summary We
More informationClassical Planning Problems: Representation Languages
jonas.kvarnstrom@liu.se 2017 Classical Planning Problems: Representation Languages History: 1959 3 The language of Artificial Intelligence was/is logic First-order, second-order, modal, 1959: General
More informationLecture 20 : Trees DRAFT
CS/Math 240: Introduction to Discrete Mathematics 4/12/2011 Lecture 20 : Trees Instructor: Dieter van Melkebeek Scribe: Dalibor Zelený DRAFT Last time we discussed graphs. Today we continue this discussion,
More informationPlanning. Some material taken from D. Lin, J-C Latombe
RN, Chapter 11 Planning Some material taken from D. Lin, J-C Latombe 1 Logical Agents Reasoning [Ch 6] Propositional Logic [Ch 7] Predicate Calculus Representation [Ch 8] Inference [Ch 9] Implemented Systems
More informationTwo Comments on the Principle of Revealed Preference
Two Comments on the Principle of Revealed Preference Ariel Rubinstein School of Economics, Tel Aviv University and Department of Economics, New York University and Yuval Salant Graduate School of Business,
More informationAcknowledgements. Outline
Acknowledgements Heuristic Search for Planning Sheila McIlraith University of Toronto Fall 2010 Many of the slides used in today s lecture are modifications of slides developed by Malte Helmert, Bernhard
More informationFinding Rough Set Reducts with SAT
Finding Rough Set Reducts with SAT Richard Jensen 1, Qiang Shen 1 and Andrew Tuson 2 {rkj,qqs}@aber.ac.uk 1 Department of Computer Science, The University of Wales, Aberystwyth 2 Department of Computing,
More information3. Knowledge Representation, Reasoning, and Planning
3. Knowledge Representation, Reasoning, and Planning 3.1 Common Sense Knowledge 3.2 Knowledge Representation Networks 3.3 Reasoning Propositional Logic Predicate Logic: PROLOG 3.4 Planning Planning vs.
More information2 The Fractional Chromatic Gap
C 1 11 2 The Fractional Chromatic Gap As previously noted, for any finite graph. This result follows from the strong duality of linear programs. Since there is no such duality result for infinite linear
More informationcis32-ai lecture # 21 mon-24-apr-2006
cis32-ai lecture # 21 mon-24-apr-2006 today s topics: logic-based agents (see notes from last time) planning cis32-spring2006-sklar-lec21 1 What is Planning? Key problem facing agent is deciding what to
More informationFinding a winning strategy in variations of Kayles
Finding a winning strategy in variations of Kayles Simon Prins ICA-3582809 Utrecht University, The Netherlands July 15, 2015 Abstract Kayles is a two player game played on a graph. The game can be dened
More informationFormalizing the PRODIGY Planning Algorithm
Formalizing the PRODIGY Planning Algorithm Eugene Fink eugene@cs.cmu.edu http://www.cs.cmu.edu/~eugene Manuela Veloso veloso@cs.cmu.edu http://www.cs.cmu.edu/~mmv Computer Science Department, Carnegie
More informationComputing Applicability Conditions for Plans with Loops
Abacus Programs Translation Examples Computing Applicability Conditions for Plans with Loops Siddharth Srivastava Neil Immerman Shlomo Zilberstein Department of Computer Science, University of Massachusetts
More informationChapter S:II. II. Search Space Representation
Chapter S:II II. Search Space Representation Systematic Search Encoding of Problems State-Space Representation Problem-Reduction Representation Choosing a Representation S:II-1 Search Space Representation
More informationArtificial Intelligence
Artificial Intelligence CSC348 Unit 4: Reasoning, change and planning Syedur Rahman Lecturer, CSE Department North South University syedur.rahman@wolfson.oxon.org Artificial Intelligence: Lecture Notes
More informationConstraint Satisfaction Problems. Chapter 6
Constraint Satisfaction Problems Chapter 6 Constraint Satisfaction Problems A constraint satisfaction problem consists of three components, X, D, and C: X is a set of variables, {X 1,..., X n }. D is a
More information3. Knowledge Representation, Reasoning, and Planning
3. Knowledge Representation, Reasoning, and Planning 3.1 Common Sense Knowledge 3.2 Knowledge Representation Networks 3.3 Reasoning Propositional Logic Predicate Logic: PROLOG 3.4 Planning Introduction
More informationDomain-Dependent Heuristics and Tie-Breakers: Topics in Automated Planning
Domain-Dependent Heuristics and Tie-Breakers: Topics in Automated Planning Augusto B. Corrêa, André G. Pereira, Marcus Ritt 1 Instituto de Informática Universidade Federal do Rio Grande do Sul (UFRGS)
More informationHashing. Hashing Procedures
Hashing Hashing Procedures Let us denote the set of all possible key values (i.e., the universe of keys) used in a dictionary application by U. Suppose an application requires a dictionary in which elements
More informationConsistency and Set Intersection
Consistency and Set Intersection Yuanlin Zhang and Roland H.C. Yap National University of Singapore 3 Science Drive 2, Singapore {zhangyl,ryap}@comp.nus.edu.sg Abstract We propose a new framework to study
More informationA New Algorithm for Singleton Arc Consistency
A New Algorithm for Singleton Arc Consistency Roman Barták, Radek Erben Charles University, Institute for Theoretical Computer Science Malostranské nám. 2/25, 118 Praha 1, Czech Republic bartak@kti.mff.cuni.cz,
More informationComplexity Classes and Polynomial-time Reductions
COMPSCI 330: Design and Analysis of Algorithms April 19, 2016 Complexity Classes and Polynomial-time Reductions Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we introduce
More informationRobot Task Error Recovery Using Petri Nets Learned from Demonstration
Robot Task Error Recovery Using Petri Nets Learned from Demonstration Guoting (Jane) Chang and Dana Kulić Abstract The ability to recover from errors is necessary for robots to cope with unexpected situations
More information: Principles of Automated Reasoning and Decision Making Midterm
16.410-13: Principles of Automated Reasoning and Decision Making Midterm October 20 th, 2003 Name E-mail Note: Budget your time wisely. Some parts of this quiz could take you much longer than others. Move
More informationThe Resolution Algorithm
The Resolution Algorithm Introduction In this lecture we introduce the Resolution algorithm for solving instances of the NP-complete CNF- SAT decision problem. Although the algorithm does not run in polynomial
More informationAnytime Search in Dynamic Graphs
Anytime Search in Dynamic Graphs Maxim Likhachev a, Dave Ferguson b,c, Geoff Gordon c, Anthony Stentz c, and Sebastian Thrun d a Computer and Information Science, University of Pennsylvania, Philadelphia,
More informationMODERN automated manufacturing systems require. An Extended Event Graph With Negative Places and Tokens for Time Window Constraints
IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, VOL. 2, NO. 4, OCTOBER 2005 319 An Extended Event Graph With Negative Places and Tokens for Time Window Constraints Tae-Eog Lee and Seong-Ho Park
More informationSymbol Tables Symbol Table: In computer science, a symbol table is a data structure used by a language translator such as a compiler or interpreter, where each identifier in a program's source code is
More informationStable Trajectory Design for Highly Constrained Environments using Receding Horizon Control
Stable Trajectory Design for Highly Constrained Environments using Receding Horizon Control Yoshiaki Kuwata and Jonathan P. How Space Systems Laboratory Massachusetts Institute of Technology {kuwata,jhow}@mit.edu
More informationHeuristic Search for Planning
Heuristic Search for Planning Sheila McIlraith University of Toronto Fall 2010 S. McIlraith Heuristic Search for Planning 1 / 50 Acknowledgements Many of the slides used in today s lecture are modifications
More informationThe Encoding Complexity of Network Coding
The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network
More information12/30/2013 S. NALINI,AP/CSE
12/30/2013 S. NALINI,AP/CSE 1 UNIT I ITERATIVE AND RECURSIVE ALGORITHMS Iterative Algorithms: Measures of Progress and Loop Invariants-Paradigm Shift: Sequence of Actions versus Sequence of Assertions-
More informationx ji = s i, i N, (1.1)
Dual Ascent Methods. DUAL ASCENT In this chapter we focus on the minimum cost flow problem minimize subject to (i,j) A {j (i,j) A} a ij x ij x ij {j (j,i) A} (MCF) x ji = s i, i N, (.) b ij x ij c ij,
More informationRelational Databases
Relational Databases Jan Chomicki University at Buffalo Jan Chomicki () Relational databases 1 / 49 Plan of the course 1 Relational databases 2 Relational database design 3 Conceptual database design 4
More informationProcess Model Consistency Measurement
IOSR Journal of Computer Engineering (IOSRJCE) ISSN: 2278-0661, ISBN: 2278-8727Volume 7, Issue 6 (Nov. - Dec. 2012), PP 40-44 Process Model Consistency Measurement Sukanth Sistla CSE Department, JNTUniversity,
More informationMaterial handling and Transportation in Logistics. Paolo Detti Dipartimento di Ingegneria dell Informazione e Scienze Matematiche Università di Siena
Material handling and Transportation in Logistics Paolo Detti Dipartimento di Ingegneria dell Informazione e Scienze Matematiche Università di Siena Introduction to Graph Theory Graph Theory As Mathematical
More informationIs the statement sufficient? If both x and y are odd, is xy odd? 1) xy 2 < 0. Odds & Evens. Positives & Negatives. Answer: Yes, xy is odd
Is the statement sufficient? If both x and y are odd, is xy odd? Is x < 0? 1) xy 2 < 0 Positives & Negatives Answer: Yes, xy is odd Odd numbers can be represented as 2m + 1 or 2n + 1, where m and n are
More information9.5 Equivalence Relations
9.5 Equivalence Relations You know from your early study of fractions that each fraction has many equivalent forms. For example, 2, 2 4, 3 6, 2, 3 6, 5 30,... are all different ways to represent the same
More informationEECS 219C: Formal Methods Boolean Satisfiability Solving. Sanjit A. Seshia EECS, UC Berkeley
EECS 219C: Formal Methods Boolean Satisfiability Solving Sanjit A. Seshia EECS, UC Berkeley The Boolean Satisfiability Problem (SAT) Given: A Boolean formula F(x 1, x 2, x 3,, x n ) Can F evaluate to 1
More information2. Lecture notes on non-bipartite matching
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 15th, 013. Lecture notes on non-bipartite matching Given a graph G = (V, E), we are interested in finding
More informationHeuristic Search and Advanced Methods
Heuristic Search and Advanced Methods Computer Science cpsc322, Lecture 3 (Textbook Chpt 3.6 3.7) May, 15, 2012 CPSC 322, Lecture 3 Slide 1 Course Announcements Posted on WebCT Assignment1 (due on Thurs!)
More informationFormalization of Objectives of Grid Systems Resources Protection against Unauthorized Access
Nonlinear Phenomena in Complex Systems, vol. 17, no. 3 (2014), pp. 272-277 Formalization of Objectives of Grid Systems Resources Protection against Unauthorized Access M. O. Kalinin and A. S. Konoplev
More informationAuthors Abugchem, F. (Fathi); Short, M. (Michael); Xu, D. (Donglai)
TeesRep - Teesside's Research Repository A Note on the Suboptimality of Nonpreemptive Real-time Scheduling Item type Article Authors Abugchem, F. (Fathi); Short, M. (Michael); Xu, D. (Donglai) Citation
More informationDERIVING TOPOLOGICAL RELATIONSHIPS BETWEEN SIMPLE REGIONS WITH HOLES
DERIVING TOPOLOGICAL RELATIONSHIPS BETWEEN SIMPLE REGIONS WITH HOLES Mark McKenney, Reasey Praing, and Markus Schneider Department of Computer and Information Science & Engineering, University of Florida
More informationAn Efficient Selective-Repeat ARQ Scheme for Half-duplex Infrared Links under High Bit Error Rate Conditions
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE CCNC 26 proceedings. An Efficient Selective-Repeat ARQ Scheme for
More informationFormally-Proven Kosaraju s algorithm
Formally-Proven Kosaraju s algorithm Laurent Théry Laurent.Thery@sophia.inria.fr Abstract This notes explains how the Kosaraju s algorithm that computes the strong-connected components of a directed graph
More informationNon-Homogeneous Swarms vs. MDP s A Comparison of Path Finding Under Uncertainty
Non-Homogeneous Swarms vs. MDP s A Comparison of Path Finding Under Uncertainty Michael Comstock December 6, 2012 1 Introduction This paper presents a comparison of two different machine learning systems
More informationSpring 2007 Midterm Exam
15-381 Spring 2007 Midterm Exam Spring 2007 March 8 Name: Andrew ID: This is an open-book, open-notes examination. You have 80 minutes to complete this examination. Unless explicitly requested, we do not
More informationReview of Sets. Review. Philippe B. Laval. Current Semester. Kennesaw State University. Philippe B. Laval (KSU) Sets Current Semester 1 / 16
Review of Sets Review Philippe B. Laval Kennesaw State University Current Semester Philippe B. Laval (KSU) Sets Current Semester 1 / 16 Outline 1 Introduction 2 Definitions, Notations and Examples 3 Special
More informationA CAN-Based Architecture for Highly Reliable Communication Systems
A CAN-Based Architecture for Highly Reliable Communication Systems H. Hilmer Prof. Dr.-Ing. H.-D. Kochs Gerhard-Mercator-Universität Duisburg, Germany E. Dittmar ABB Network Control and Protection, Ladenburg,
More informationMC302 GRAPH THEORY SOLUTIONS TO HOMEWORK #1 9/19/13 68 points + 6 extra credit points
MC02 GRAPH THEORY SOLUTIONS TO HOMEWORK #1 9/19/1 68 points + 6 extra credit points 1. [CH] p. 1, #1... a. In each case, for the two graphs you say are isomorphic, justify it by labeling their vertices
More informationLecture: Iterative Search Methods
Lecture: Iterative Search Methods Overview Constructive Search is exponential. State-Space Search exhibits better performance on some problems. Research in understanding heuristic and iterative search
More informationCompiler Design. Register Allocation. Hwansoo Han
Compiler Design Register Allocation Hwansoo Han Big Picture of Code Generation Register allocation Decides which values will reside in registers Changes the storage mapping Concerns about placement of
More informationAn Information-Theoretic Approach to the Prepruning of Classification Rules
An Information-Theoretic Approach to the Prepruning of Classification Rules Max Bramer University of Portsmouth, Portsmouth, UK Abstract: Keywords: The automatic induction of classification rules from
More informationImplementing Some Foundations for a Computer-Grounded AI
Implementing Some Foundations for a Computer-Grounded AI Thomas Lin MIT 410 Memorial Drive Cambridge, MA 02139 tlin@mit.edu Abstract This project implemented some basic ideas about having AI programs that
More informationSearch. (Textbook Chpt ) Computer Science cpsc322, Lecture 2. May, 10, CPSC 322, Lecture 2 Slide 1
Search Computer Science cpsc322, Lecture 2 (Textbook Chpt 3.0-3.4) May, 10, 2012 CPSC 322, Lecture 2 Slide 1 Colored Cards You need to have 4 colored index cards Come and get them from me if you still
More informationA New Combinatorial Design of Coded Distributed Computing
A New Combinatorial Design of Coded Distributed Computing Nicholas Woolsey, Rong-Rong Chen, and Mingyue Ji Department of Electrical and Computer Engineering, University of Utah Salt Lake City, UT, USA
More informationGUI Design Principles
GUI Design Principles User Interfaces Are Hard to Design You are not the user Most software engineering is about communicating with other programmers UI is about communicating with users The user is always
More information