ScienceDirect, Procedia Computer Science 46 (2015) 396-401. doi:10.1016/j.procs.2015.02.036
International Conference on Information and Communication Technologies (ICICT 2014)

Plan Restructuring in Multi Agent Planning

Satyendra Singh Chouhan a,*, Rajdeep Niyogi a
a Computer Science and Engineering Department, IIT Roorkee, Roorkee-247667, INDIA
* Corresponding author. Tel.: +91-895-422-1599. E-mail address: satycdec@iitr.ernet.in

Abstract

In this paper we consider a multi-agent planning problem where, given a set of plans of the individual agents that may lead to conflicts during execution, the task is to find a conflict-free plan for the agents. For this we give a formal model of the multi-agent planning problem (MAPP) and a description of multi-agent path planning. We demonstrate this technique in a grid world domain. The experimental results show promise for our method.

(c) 2015 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the organizing committee of the International Conference on Information and Communication Technologies (ICICT 2014).

Keywords: Multi-agent planning; plan restructuring; grid world domain

1. Introduction

Planning is an important feature in the design of intelligent systems, and planning precedes acting. In multi-agent planning domains there are typically several agents that plan and act together, sharing resources, activities, and goals [1,2,3]. Multi-agent planning is a major issue for the distributed AI and multi-agent community.

In this paper we assume that a state is completely known, actions are deterministic, and the environment is static. Even in such a setting, however, plans may fail. For example, consider a two-agent robot domain [4] where both agents are in a hallway and want to move into the same room through a single narrow doorway through which only one agent can pass at a time. The agents can perform the action GO to go through the doorway, but this action will only succeed if the other agent is not trying

to go through the doorway at the same time. To achieve the individual goals of both agents, one of them has to wait until the other agent has passed through the door. Thus there must be coordination between their actions.

Multi-agent planning is an active research area and different multi-agent planning techniques have been suggested in the literature. Some authors [7,11] consider multi-agent planning as a process of finding a joint plan through coordination of the different agents. In some works [5,17,18] multi-agent plans are viewed as a combination of the individual plans of the different agents. Recent works [10,15] consider solving a multi-agent planning problem by dividing it into sub-problems that are assigned to multiple agents; these works use the inherent structure of planning domains to represent a problem as a multi-agent planning problem. Our work focuses on problems where multiple agents work concurrently to achieve their individual goals.

In Section 2 we describe the framework for the multi-agent planning problem. Section 3 describes the plan restructuring approach. Section 4 gives the implementation results. Section 5 describes related work, and we conclude in Section 6.

2. A Framework for the Multi-Agent Planning Problem

2.1 (Definition 1) Multi-agent planning domain [4]. A multi-agent planning domain D is a tuple {P, n, A_{i=1 to n}, R}, where P is a finite set of propositions and S ⊆ 2^P is the set of states; n is the number of agents; A_i is the finite set of actions of agent i; and R ⊆ S × A × S is a non-deterministic transition relation, where A = A_1 × A_2 × … × A_n. R satisfies the following condition: if (s, a, s') ∈ R and (s, b, s'') ∈ R then, for every agent i, there exists s''' ∈ S such that (s, (a_1, …, a_{i-1}, b_i, a_{i+1}, …, a_n), s''') ∈ R. In addition, let Act_i(s) ⊆ A_i be the set of actions of agent i that are executable in state s, i.e., Act_i(s) = {a_i ∈ A_i | (s, (…, a_i, …), s') ∈ R for some joint action containing a_i and some s' ∈ S}.

2.2 (Definition 2) Multi-agent planning problem [4]. For a multi-agent planning domain D = {P, n, A_{i=1 to n}, R}, a multi-agent planning problem P is a tuple {D, I, G_{i=1 to n}}, where I ⊆ S is the set of initial states and G_i ⊆ S is the set of goal states of agent i. When the agents execute their individual plans, conflicts may arise; thus we need to restructure the individual plans.

2.3 (Definition 3) A multi-agent path planning problem is a multi-agent planning problem where the states are positions or coordinates in a given environment. A path for a given plan consists of a sequence of states.

In a multi-agent environment, coordination is needed to achieve conflict-free paths (plans) for all agents. Conflict-free plans can be obtained by restructuring one or more of the individual plans. For example, consider a grid world domain of size 5x5. Some cells are obstacles and an agent cannot occupy these cells; each cell can be occupied by only one agent at any point of time. In a multi-agent system the knowledge of an agent is limited to local information: in the grid world domain, an agent knows information about at most its four adjacent cells. Every agent can perform actions from the action set {move_left, move_right, move_down, move_up, wait}. Each cell is assigned a unique number that determines the state of an agent.
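To make the model concrete, the following is a minimal sketch (in Java, the language the paper later uses for its implementation) of how the 5x5 grid world instance and the single-agent action set could be represented. The class, field, and method names are our own illustrative choices, not part of the paper; cells are numbered 1..25 in row-major order, which is consistent with the example plans used below.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

/** Illustrative sketch of the 5x5 grid-world instance (not the authors' code).
 *  States are cell numbers 1..25 in row-major order; a plan is represented
 *  directly by the sequence of states it induces. */
public class GridWorld {
    static final int SIZE = 5;
    enum Action { MOVE_LEFT, MOVE_RIGHT, MOVE_UP, MOVE_DOWN, WAIT }

    private final Set<Integer> obstacles;   // cells that no agent may occupy

    GridWorld(Set<Integer> obstacles) { this.obstacles = obstacles; }

    /** Deterministic successor of a single-agent action, or -1 if the move
     *  would leave the grid or enter an obstacle cell. */
    int successor(int cell, Action a) {
        int row = (cell - 1) / SIZE, col = (cell - 1) % SIZE;
        switch (a) {
            case MOVE_LEFT:  col--; break;
            case MOVE_RIGHT: col++; break;
            case MOVE_UP:    row--; break;
            case MOVE_DOWN:  row++; break;
            case WAIT:       break;
        }
        if (row < 0 || row >= SIZE || col < 0 || col >= SIZE) return -1;
        int next = row * SIZE + col + 1;
        return obstacles.contains(next) ? -1 : next;
    }

    public static void main(String[] args) {
        GridWorld g = new GridWorld(Set.of());            // toy instance, no obstacles
        List<Integer> planX = Arrays.asList(11, 12, 13, 14, 15);  // agent x moves right along a row
        System.out.println("pi_x = " + planX
                + ", successor(11, MOVE_RIGHT) = " + g.successor(11, Action.MOVE_RIGHT));
    }
}
```

Under this numbering, the Case 1 plans below correspond to agent x moving right along the third row and agent y moving down the third column.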
In the following we show two possible situations where optimal individual plans may lead to failure during execution.

A. (Case 1) Point collision. Let the initial and final states of agents x and y be (s11, s15) and (s3, s23) respectively. The optimal individual plans of agents x and y are π_x = {s11, s12, s13, s14, s15} and π_y = {s3, s8, s13, s18, s23}. In this case there is a conflict at state s13.

Both agents will try to occupy the same cell at the same time (see Fig. 2a).

[Fig. 1. Grid world domain]

B. (Case 2) Plan obstruction. Let the initial and final states of agents x and y be (s6, s15) and (s5, s18) respectively. The optimal individual plans of agents x and y are π_x = {s6, s11, s12, s13, s14, s15} and π_y = {s5, s10, s15, s14, s13, s18}. In this case both plans will fail due to the obstruction caused by the other plan (see Fig. 2b).

[Fig. 2. (a) Point collision; (b) Plan obstruction]

3. Plan Restructuring Approach

In this section we suggest a method of plan restructuring for a multi-agent path planning problem. Let p = a_1, a_2, …, a_k be a plan executed by agent 1 at the initial state s_0. This results in a sequence of states s_0, s_1, …, s_k; the initial state is also included in the sequence. Let p' = b_1, b_2, …, b_k be another plan executed by agent 2 at the initial state s'_0, and let the resulting sequence of states be s'_0, s'_1, …, s'_k. For the plans p, p' to be conflicting there must be a common point in the sequences at which the states are the same; formally, there exists j, 0 < j ≤ k, such that s_j = s'_j. At this point (s_j) in the sequence of, say, plan p, we change the original action that was executed at s_{j-1}. We can also choose the point s'_j in plan p' and make the change accordingly. As a result of this change the sequence changes, so we need to iteratively change the sequence (and thus the plan).

Algorithm Joint_planner(π_x, π_y)
Input: π_x, π_y      // individual plans of agents x and y
Output: π'_x, π'_y    // restructured plans
1. pos ← Conflict_check(π_x, π_y)
2. If pos = null then output "no restructuring required" and exit.
3. {π'_x, π'_y} ← Resolve(π_x, π_y, pos)
4. If {π'_x, π'_y} = null then output "no solution exists" and exit.
5. If there are still conflicts, i.e. Conflict_check(π'_x, π'_y) ≠ null, then go to step 3.
6. Output the restructured plans π'_x, π'_y.
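The pseudocode above can be read as the following runnable sketch. It is an illustration under our own simplifying assumptions, not the authors' implementation: plans are lists of cell numbers, Conflict_check only detects point collisions (a common state after the same number of actions), Resolve always restructures agent x's plan by inserting a wait action, and a simple iteration bound plays the role of the "no solution exists" exit.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Sketch of the Joint_planner loop; method names mirror the pseudocode,
 *  but the details are our own simplifying assumptions. */
public class JointPlanner {

    /** Point-collision check: first index j > 0 at which both plans occupy the
     *  same cell after the same number of actions; -1 means no conflict found. */
    static int conflictCheck(List<Integer> px, List<Integer> py) {
        int len = Math.min(px.size(), py.size());
        for (int j = 1; j < len; j++) {
            if (px.get(j).equals(py.get(j))) return j;
        }
        return -1;
    }

    /** Resolve a point collision at index pos by letting the agent wait one step:
     *  the state at pos-1 is repeated, i.e. a wait action is inserted. */
    static List<Integer> resolveByWait(List<Integer> plan, int pos) {
        List<Integer> restructured = new ArrayList<>(plan);
        restructured.add(pos, plan.get(pos - 1));
        return restructured;
    }

    /** Iteratively restructure until Conflict_check reports no conflict,
     *  or give up (return null) after a bounded number of attempts. */
    static List<List<Integer>> jointPlanner(List<Integer> px, List<Integer> py) {
        List<Integer> rx = new ArrayList<>(px);
        List<Integer> ry = new ArrayList<>(py);
        int pos, attempts = 0;
        while ((pos = conflictCheck(rx, ry)) != -1) {
            if (++attempts > px.size() + py.size()) return null;  // "no solution exists"
            rx = resolveByWait(rx, pos);  // always restructure x here; the paper chooses a plan randomly
        }
        return Arrays.asList(rx, ry);
    }
}
```

The plan-obstruction case would additionally require detecting a common sub-plan traversed in opposite directions and re-routing one agent, as explained below; it is omitted here to keep the sketch short.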

Explanation of the algorithm. There are two important methods, Conflict_check(π_x, π_y) and Resolve(π_x, π_y, pos). Conflict_check(π_x, π_y) detects the two types of conflicts, point collision and plan obstruction. A point collision is detected by comparing the plans: if there is a common state after the same number of actions, a collision will occur, and Conflict_check returns the index of that state in both plans. Plan obstruction is detected by finding a common sub-plan that the two agents traverse in opposite directions; Conflict_check returns the index of the state at which the obstruction occurs. The Resolve(π_x, π_y, pos) method randomly chooses one of the plans and restructures it from position (pos-1) so that the collision does not occur at the next state. If this is not possible, it selects the other plan to restructure.

For example, in the point collision case let the optimal individual plans of agents x and y be π_x = {s11, s12, s13, s14, s15} and π_y = {s3, s8, s13, s18, s23}. The agents forward these plans to the planner, which checks for possible conflicts and restructures the plans accordingly. In this case the conflict is at state s13. The planner restructures the plan of one of the agents (say x) to π'_x = {s11, s12, s12, s13, s14, s15} by assigning a wait action at s12. It then iteratively checks whether there are any more conflicts; if not, it forwards the plans to the agents. The solution plans are π'_x = {s11, s12, s12, s13, s14, s15} and π'_y = {s3, s8, s13, s18, s23}.

In the plan obstruction case, let the optimal individual plans of agents x and y be π_x = {s6, s11, s12, s13, s14, s15} and π_y = {s5, s10, s15, s14, s13, s18}. Here there is a conflict due to states s13 and s14, and the planner iteratively restructures one of the plans and re-checks for conflicts. In this situation the planner will restructure the plan of x, for example to {s6, s11, s12, s13, s8, s3, s4, s5, s10, s15}, and will again check whether there is any conflict between the restructured plans, e.g. π'_x = {s6, s11, s12, s13, s18, s13, s14, s15} and π'_y = {s5, s10, s15, s14, s13, s18}. If there is still a conflict, it selects one of the plans and tries to restructure it until no conflict remains. The solution would be π'_x = {s6, s11, s12, s13, s18, s13, s14, s15} and π'_y = {s5, s10, s15, s14, s13, s18}.

4. Implementation results

The purpose of our experiments is to show the need for plan restructuring in multi-agent planning in the grid world domain. We have implemented the above algorithm in Java. All experiments were performed on a 2.10 GHz Intel Pentium i3 machine with 3 GB RAM running the Windows 7 operating system. We studied different cases of individual agent plans where plan interaction might cause failure of the individual plans, and we report the restructured plans generated by the planner together with the time taken to generate them. Table 1 shows the experimental results for the different cases. In Table 1, columns two and three show the individual plans of the agents, the fourth column shows whether there is a conflict between the individual plans, the fifth column shows the restructured plans after resolving the conflict, and the sixth column shows the time taken by the simulator program for conflict resolution.
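For instance, driving the hypothetical JointPlanner sketch given earlier with the point-collision plans of the first case in Table 1 reproduces the restructured plan reported there (a wait inserted at s12). The demo class below is again only illustrative and assumes the JointPlanner class from that sketch is on the classpath.

```java
import java.util.Arrays;
import java.util.List;

public class JointPlannerDemo {
    public static void main(String[] args) {
        // Point-collision example: pi_x = {s11,...,s15}, pi_y = {s3, s8, s13, s18, s23}
        List<List<Integer>> result = JointPlanner.jointPlanner(
                Arrays.asList(11, 12, 13, 14, 15),
                Arrays.asList(3, 8, 13, 18, 23));
        // Expected: pi_x' = [11, 12, 12, 13, 14, 15], pi_y' = [3, 8, 13, 18, 23]
        System.out.println("pi_x' = " + result.get(0));
        System.out.println("pi_y' = " + result.get(1));
    }
}
```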
5. Related Work

In paper [8] the authors propose a multi-agent planning model and an algorithm based on plan reuse. The task is to assign a subset of the public goals to each agent; each agent then iteratively solves its problem by receiving plans from the other agents. In paper [12] the authors compare two strategies for obtaining a stable plan for an agent: plan stability can be achieved by re-planning (planning from scratch) or by plan repair (modifying an existing plan). In paper [6] the authors propose an approach to multi-agent (MA) plan repair (MA-REPAIR), based on multi-agent planning (MA-STRIPS) [10]. MA-STRIPS is an extension of the classical STRIPS-based planning approach to multi-agent planning for teamwork and coordination. In the MA-REPAIR approach, the multi-agent team computes a team plan using a fully decentralized MA-STRIPS planning algorithm and subsequently executes the plan while monitoring for possible failures of plan execution. Upon such a failure, the team stops execution and invokes a plan repair algorithm that fixes the failed joint plan so as to reach a joint goal state from the state in which the failure occurred [13]. When agents work in dynamic environments, some states and goals of the agents may be modified; in such situations the existing plans should be modified and re-planning is required [14].

In paper [9] a general model of single-agent re-planning is presented. The paper describes three re-planning paradigms that are distinguished by the constraints they must satisfy during the re-planning process: re-planning as restart, re-planning to reduce computation, and re-planning for multi-agent scenarios. Some work has also been done in the area of continual planning [16]; this technique interleaves planning with plan execution and execution monitoring.

6. Conclusion and Future Work

In this paper we have presented a plan restructuring approach for multi-agent path planning and illustrated the approach on a grid world domain. The results are satisfactory. As future work we wish to apply our algorithm to robot path planning and to study the applicability of the approach to other, more complex domains.

Table 1. Individual plans of the agents and the restructured plans

Sr. No. | Agent x's plan | Agent y's plan | Conflict | Restructured plans | Time (ms)
1 | π_x = {s11, s12, s13, s14, s15} | π_y = {s3, s8, s13, s18, s23} | Yes | π'_x = {s11, s12, s12, s13, s14, s15}; π'_y = {s3, s8, s13, s18, s23} | 0.437
2 | π_x = {s11, s16, s21, s22, s23, s24} | π_y = {s3, s8, s13, s18, s23} | Yes | π'_x = {s11, s16, s21, s22, s23, s24}; π'_y = {s3, s8, s13, s18, s18, s23} | 0.396
3 | π_x = {s6, s11, s12, s13, s14, s15} | π_y = {s5, s10, s15, s14, s13, s18} | Yes | π'_x = {s6, s11, s12, s13, s18, s13, s14, s15}; π'_y = {s5, s10, s15, s14, s13, s18} | 0.584
4 | π_x = {s2, s3, s8, s13, s18, s23} | π_y = {s21, s22, s23, s18, s13, s14} | Yes | π'_x = {s2, s3, s8, s13, s12, s13, s18, s23}; π'_y = {s21, s22, s23, s18, s13, s14} | 0.421
5 | π_x = {s16, s21, s22, s23, s24} | π_y = {s14, s13, s18, s23, s24} | Yes | π'_x = {s16, s21, s22, s22, s23, s24}; π'_y = {s14, s13, s18, s23, s24} | 0.365
6 | π_x = {s8, s13, s14, s15, s20} | π_y = {s15, s14, s13, s18, s23} | Yes | π'_x = {s8, s13, s1, s13, s14, s15, s20}; π'_y = {s15, s14, s13, s18, s23} | 0.788
7 | π_x = {s10, s5, s4, s3, s8} | π_y = {s3, s4, s5, s10} | Yes | π'_x = {s10, s5, s4, s3, s8}; π'_y = {s3, s2, s2, s2, s4, s5, s10} | 1.02
8 | π_x = {s20, s15, s10, s5, s10} | π_y = {s5, s10, s15, s14, s13} | Yes | π'_x = {s20, s20, s20, s15, s10, s5, s10}; π'_y = {s5, s10, s15, s14, s13} | 0.738
9 | π_x = {s4, s3, s8, s13, s18} | π_y = {s11, s12, s13, s8, s3, s2} | Yes | π'_x = {s4, s3, s8, s13, s18}; π'_y = {s11, s12, s13, s14, s13, s8, s3, s2} | 0.780
10 | π_x = {s24, s23, s22, s21, s16, s11} | π_y = {s11, s16, s21, s22, s23, s24} | Yes | π'_x = {s24, s23, s18, s18, s18, s23, s22, s21, s16, s11}; π'_y = {s11, s16, s21, s22, s23, s24} | 1.037

References
1. Ghallab M, Nau D, Traverso P. Automated Planning: Theory and Practice. San Francisco: Morgan Kaufmann; 2004.
2. de Weerdt M, Clement B. Introduction to planning in multi-agent systems. Multiagent and Grid Systems; 2009; 345-355.
3. Durfee EH. Distributed problem solving and planning. In: Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Springer; 1999; 121-164.
4. Bowling M, Jensen R, Veloso M. A formalization of equilibria for multi agent planning. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI); 2003; 1460-1462.

5. Larbi RB, Konieczny S, Marquis P. Extending classical planning to the multi-agent case: A game-theoretic approach. In: ECSQARU, Lecture Notes in Computer Science 4724. Springer; 2007; 731-742.
6. Komenda A, Novák P. Decentralized multi-agent plan repair in dynamic environments. In: Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS); 2012; 1239-1240.
7. Nissim R, Brafman RI, Domshlak C. A general fully distributed multi-agent planning algorithm. In: Proceedings of Autonomous Agents and Multi-Agent Systems (AAMAS); 2010; 1323-1330.
8. Borrajo D. Plan sharing for multi-agent planning. In: Proceedings of the ICAPS Distributed and Multi-Agent Planning workshop (DMAP); 2013; 57-65.
9. Talamadupula K, Smith DE, Cushing W, Kambhampati S. A theory of intra-agent replanning. In: Proceedings of the ICAPS Distributed and Multi-Agent Planning workshop (DMAP); 2013; 48-56.
10. Brafman RI, Domshlak C. From one to many: Planning for loosely coupled multi-agent systems. In: Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS); 2008; 28-35.
11. Brenner M. Planning for multi-agent environments: From individual perceptions to coordinated execution. In: ICAPS Workshop on Multiagent Planning and Scheduling; 2005; 80-88.
12. Fox M, Gerevini A, Long D, Serina I. Plan stability: Replanning versus plan repair. In: Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling (ICAPS); 2006; 212-221.
13. Komenda A, Novák P, Pěchouček M. How to repair multi-agent plans: Experimental approach. In: Proceedings of the ICAPS Distributed and Multi-Agent Planning workshop (DMAP); 2013; 66-74.
14. Cushing W, Kambhampati S. Replanning: A new perspective. In: Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS); 2005.
15. Nissim R, Brafman RI. Cost-optimal planning by self-interested agents. In: Proceedings of the ICAPS Distributed and Multi-Agent Planning workshop (DMAP); 2013; 1-7.
16. Brenner M, Nebel B. Continual planning and acting in dynamic multiagent environments. Autonomous Agents and Multi-Agent Systems; 2009; 297-331.
17. Durfee EH, Lesser VR. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Transactions on Systems, Man, and Cybernetics; 1991; 21(5); 1167-1183.
18. Boutilier C, Brafman RI. Partial-order planning with concurrent interacting actions. Journal of Artificial Intelligence Research; 2001; 14; 105-136.