Distributed Planning in Hierarchical Factored MDPs


Carlos Guestrin, Computer Science Dept., Stanford University
Geoffrey Gordon, Computer Science Dept., Carnegie Mellon University

We present a principled and efficient planning algorithm for collaborative multiagent dynamical systems. All computation, during both the planning and the execution phases, is distributed among the agents; each agent only needs to model and plan for a small part of the system. Each of these local subsystems is small, but once they are combined they can represent an exponentially larger problem. The subsystems are connected through a subsystem hierarchy. Coordination and communication between the agents is not imposed, but derived directly from the structure of this hierarchy. A globally consistent plan is achieved by a message passing algorithm, where messages correspond to natural local reward functions and are computed by local linear programs; another message passing algorithm allows us to execute the resulting policy. When two portions of the hierarchy share the same structure, our algorithm can reuse plans and messages to speed up computation.

1 Introduction

Many interesting planning problems have a very large number of states and actions, described as the cross product of a small number of state variables and action variables. Even very fast exact algorithms cannot solve such large planning problems in a reasonable amount of time. Fortunately, in many such cases we can group the variables in these planning problems into subsystems that interact in a simple manner. As argued by Herbert Simon [20] in "The Architecture of Complexity," many complex systems have a nearly decomposable, hierarchical structure, with the subsystems interacting only weakly between themselves. In this paper, we represent planning problems using a hierarchical decomposition into local subsystems. (In multiagent planning problems, each agent will usually implement one or more local subsystems.)
Although each subsystem is small, once these subsystems are combined we can represent an exponentially larger problem. The advantage of constructing such a grouping is that we can hope to plan for each subsystem separately. Coordinating many local planners to form a successful global plan requires careful attention to communication between planners: if two local plans make contradictory assumptions, global success is unlikely. In this paper, we describe an algorithm for coordinating many local Markov decision process (MDP) planners to form a global plan. The algorithm is relatively simple: at each stage, we solve a small number of small linear programs to determine messages that each planner will pass to its neighbors; based on these messages, the local planners revise their plans and send new messages, until the plans stop changing. The messages have an appealing interpretation: they are rewards or penalties for taking actions that benefit or hurt neighboring subsystems, and statistics about plans that the agents are considering. Our hierarchical decomposition offers another significant feature: reuse. When two subsystems are equivalent (i.e., are instances of the same class), plans devised for one subsystem can be reused in the other. Furthermore, on many occasions larger parts of the system may be equivalent. In these cases, we can reuse not only plans but also messages. The individual local planners can run any planning algorithm they desire, so long as they can extract a particular set of state visitation frequencies from their plans. Of course, suboptimal planning from local planners will tend to reduce the quality of the global plan. Our algorithm does not necessarily converge to an optimal plan. However, it is guaranteed to be the same as the plan produced by a particular centralized planning algorithm (approximate linear programming, as in [19], with a particular basis).

2 Related work

Many researchers have examined the idea of dividing a planning problem into simpler pieces in order to solve it faster.
There are two common ways to split a problem into simpler pieces, which we will call serial decomposition and parallel decomposition. Our planning algorithm is significant because it handles both serial and parallel decompositions, and it provides more opportunities for abstraction than other parallel-decomposition planners. Additionally, it is fully distributed: at no time is there a global combination step requiring knowledge of all subproblems simultaneously. No previous algorithm combines these qualities.

In a serial decomposition, one subproblem is active at any given time. The overall state consists of an indicator of which subproblem is active, along with that subproblem's state. Subproblems interact at their borders, which are the states where we can enter or leave them. For example, imagine a robot navigating in a building with multiple rooms connected by doorways: fixing the value of the doorway states decouples the rooms from each other and lets us solve each room separately. In this type of decomposition, the combined state space is the union of the subproblem state spaces, and so the total size of all of the subproblems is approximately equal to the size of the combined problem. Serial decomposition planners in the literature include Kushner and Chen's algorithm [12] and Dean and Lin's algorithm [6], as well as a variety of hierarchical planning algorithms. Kushner and Chen were the first to apply Dantzig-Wolfe decomposition to Markov decision processes, while Dean and Lin combined decomposition with state abstraction. Hierarchical planning algorithms include MAXQ [7], hierarchies of abstract machines [16], and planning with macro-operators [22, 9].

By contrast, in a parallel decomposition, multiple subproblems can be active at the same time, and the combined state space is the cross product of the subproblem state spaces. The size of the combined problem is therefore exponential, rather than linear, in the number of subproblems, which means that a parallel decomposition can potentially save us much more computation than a serial one. For an example of a parallel decomposition, suppose there are multiple robots in our building, interacting through a common resource constraint such as limited fuel, or through a common goal such as lifting a box which is too heavy for one robot to lift alone. A subproblem of this task might be to plan a path for one robot, using only a compact summary of the plans for the other robots. Parallel decomposition planners in the literature include Singh and Cohn's [21] and Meuleau et al.'s [15] algorithms. Singh and Cohn's planner builds the combined state space explicitly, using subproblem solutions to initialize the global search.
So, while it may require fewer planning iterations than naive global planning, it is limited by having to enumerate an exponentially large set. Meuleau et al.'s planner is designed for parallel decompositions in which the only coupling is through global resource constraints. More complicated interactions, such as conjunctive goals or shared state variables, are beyond its scope. Our planner works by representing a planning problem as a linear program [14], substituting in a compact approximate representation of the solution [19], and transforming and decomposing the LP so that it can be solved by a distributed network of planners. One of this paper's contributions is the method for transformation and decomposition. Our transformation is based on the factored planning algorithm of Guestrin, Koller, and Parr [8]. Their algorithm uses a central planner, but allows distributed execution of plans. We extend that result to allow planning to be distributed as well, while still guaranteeing that we reach the same solution. That means that our algorithm allows for truly decentralized multiagent planning and execution: each agent can run its own local planner and compute messages locally, and doesn't need to know the global state of the world or the actions of unrelated agents. The transformation produces a sparse linear program, to which we apply a nested Benders decomposition [1] (or, dually, a Dantzig-Wolfe decomposition [3]). As mentioned above, other authors have used this decomposition for planning, but our method is the first to handle parallel decompositions of planning problems. Another contribution of our new hierarchical representation and planning algorithm over the algorithm of Guestrin et al. [8] is reuse. As we describe in Sec. 9, our approach can reuse plans and messages for parts of the hierarchy that share the same structure.
3 Markov Decision Processes

The Markov decision process (MDP) framework formalizes the problem where agents are acting in a dynamic environment and are jointly trying to maximize their expected long-term return. An MDP is defined as a 4-tuple (S, A, R, P) where: S is a finite set of states; A is a set of actions; R is a reward function such that R(s, a) represents the reward obtained in state s after taking action a; and P is a Markovian transition model where P(s' | s, a) represents the probability of going from state s to state s' with action a. We will write R_a to mean the vector of rewards associated with action a (with one entry for each state), and we will write P_a to mean the transition matrix associated with action a (with one entry for each pair of source and destination states). We assume that the MDP has an infinite horizon and that future rewards are discounted exponentially with a discount factor gamma in [0, 1). In general, the state space is more compactly defined in terms of state variables, i.e., {X_1, ..., X_n}. Similarly, the action can be decomposed into action variables {A_1, ..., A_g}.

The optimal value function V* is defined so that V*(s) is the maximal value achievable by any policy starting from state s. More precisely, V* is the solution to (1) below. A stationary policy is a mapping pi from states to actions, where pi(s) is the action to be taken at state s. For any value function V, we can define the policy obtained by one-step lookahead:

    Greedy(V)(s) = argmax_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ],

where the maximization is taken componentwise. The greedy policy for the optimal value function V* is the optimal policy pi*. There are several algorithms for computing the optimal policy (see [17] for a review). One is to solve the Bellman linear program for the value function V, so that V(s) represents the value of state s. Pick any fixed, strictly positive state relevance weights alpha; without loss of generality, assume sum_s alpha(s) = 1. Then the Bellman LP is:

    minimize    alpha . V
    subject to  V >= R_a + gamma * P_a V.                       (1)

Throughout this paper, inequalities between vectors hold componentwise; e.g., u >= v means u(s) >= v(s) for all s. Also, free index variables are by convention universally quantified; e.g., the constraint in (1) holds for all a.
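The Bellman LP (1) can be made concrete on a tiny MDP. The sketch below uses invented numbers (two states, two actions) and assumes SciPy is available; it solves (1) with a generic LP solver and cross-checks the result against value iteration.

```python
# Sketch: solving the Bellman LP (1) for a tiny, made-up MDP.
# The constraint V >= R_a + gamma * P_a V is rewritten for linprog as
# -(I - gamma * P_a) V <= -R_a.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# Two states, two actions: R[a] is the reward vector, P[a] the transition matrix.
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
alpha = np.array([0.5, 0.5])          # state relevance weights, summing to 1

A_ub = np.vstack([-(np.eye(2) - gamma * P[a]) for a in (0, 1)])
b_ub = np.concatenate([-R[a] for a in (0, 1)])
res = linprog(c=alpha, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
V_lp = res.x

# Cross-check: value iteration converges to the same optimal value function.
V = np.zeros(2)
for _ in range(2000):
    V = np.max([R[a] + gamma * P[a] @ V for a in (0, 1)], axis=0)
```

With strictly positive weights alpha, any optimal LP solution equals V*, which is why the two computations agree.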

Figure 1: Smart engine control dynamics; e.g., the state variable O2-sensor depends on the action variables fuel-flow and air-flow.

The dual of the Bellman LP is:

    maximize    sum_a f_a . R_a
    subject to  sum_a (f_a - gamma * P_a^T f_a) = alpha,   f_a >= 0.        (2)

The vector f_a, called the flow for action a, can be interpreted as the expected number of times that action a will be executed in each state (discounted, so that future visits count less than present ones). So, the objective of (2) is to maximize total reward for all actions executed, and the constraints say that the number of times we enter each state must be the same as the number of times we leave it. State relevance weights tell us how often we start in each state.

4 Hierarchical multiagent factored MDPs

Most interesting MDPs have a very large number of states. For example, suppose that we want to build a smart engine control whose state at any time is described by the variables shown in Fig. 1. The number of distinct states in such an MDP is exponential in the number of state variables. Similarly, many interesting MDPs have a very large number of actions, described by a smaller number of action variables. Even very fast exact algorithms cannot solve such large MDPs in a reasonable amount of time. Fortunately, in many cases we can group the variables in a large MDP into subsystems that interact in a simple manner. The rounded boxes in Fig. 1 show one possible grouping; we might call the three subsystems fuel-injection, engine-control, and speed-control. These subsystems overlap with each other on some variables; e.g., fuel-injection and engine-control overlap on O2-sensor. We can consider O2-sensor to be an output of the fuel-injection system and an input to the engine-control system. The advantage of constructing such a grouping is that we can now plan for each subsystem separately.
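The flow interpretation of the dual LP (2) can be checked numerically. The sketch below reuses the same kind of invented two-state MDP and assumes SciPy; since alpha sums to 1, the discounted flows must sum to 1/(1 - gamma).

```python
# Sketch: the dual "flow" LP (2) on a made-up two-state MDP.
# maximize sum_a f_a . R_a  s.t.  sum_a (I - gamma * P_a^T) f_a = alpha, f_a >= 0.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
alpha = np.array([0.5, 0.5])

# Decision vector is [f_0; f_1]; linprog minimizes, so negate the rewards.
c = -np.concatenate([R[0], R[1]])
A_eq = np.hstack([np.eye(2) - gamma * P[a].T for a in (0, 1)])
res = linprog(c=c, A_eq=A_eq, b_eq=alpha)   # default bounds already give f >= 0
flows = res.x
total_discounted_time = flows.sum()          # should equal 1 / (1 - gamma)
```

Summing the flow-conservation constraints over all states shows why: each unit of starting probability contributes 1 + gamma + gamma^2 + ... = 1/(1 - gamma) discounted visits.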
Since there are many fewer variables in each subsystem than there are in the MDP as a whole, we can hope that it will be possible to construct a global plan by stitching together plans for the various subsystems. Of course, if we plan for each subsystem completely separately, there's no guarantee that the overall plan will be useful, so we may need to replan for each subsystem several times, taking into account the current plans for neighboring subsystems. In our engine example, we can examine the speed-control subsystem and compute what values of engine-power, brake, and transmission-gear would help us most in keeping actual-speed near desired-speed. While the speed-control system controls transmission-gear and brake directly, it must communicate with the engine-control system to influence the value of engine-power. If the desired value of engine-power turns out to be too expensive to maintain, the speed-control system may have to form a new plan that makes do with the available engine-power.

4.1 Subsystem tree MDPs

To formalize the above intuition, we will define a general class of MDPs built from interacting subsystems. In Sec. 5, we will give an algorithm for planning in such MDPs; this algorithm will run efficiently when the number of state variables in the scope of each subsystem is small. We will start by defining a basic subsystem. On its own, a basic subsystem is just an MDP with factored state and action spaces, but below we will describe how to combine several basic subsystems into a larger MDP by allowing them to share state and action variables. (In particular, an action variable in one subsystem can be a state variable in another; a subsystem's actions are just the variables which can be set by forces external to the subsystem.)

Definition 4.1 (Basic subsystem) A basic subsystem M is an MDP defined by a tuple of internal state variables, external state variables, a local reward function, and a local transition function. The internal state variables Int[M] and the external state variables Ext[M] are disjoint sets. The scope of M is Scope[M] = Int[M] ∪ Ext[M].
The local reward function maps assignments for Scope[M] to real-valued rewards, and the local transition function maps assignments for Scope[M] to a probability distribution over the assignments to the internal variables in the next time step. We have divided the scope of a subsystem into internal variables and external variables. Each subsystem knows the dynamics of its internal variables, and can therefore reason about value functions that depend on these variables. External variables are those which the subsystem doesn't know how to influence directly; a subsystem may form opinions about which values of the external variables would be helpful or harmful, but cannot perform Bellman backups for value functions that depend on external variables. We will write R_a for the vector of rewards associated with a setting a of the external variables; R_a has one component for each setting of the internal variables. Similarly, we will write P_a for the transition matrix associated with setting a; each column of P_a corresponds to a single setting of Scope[M] at time t, while each row corresponds to a setting of Int[M] at time t + 1.

Given several basic subsystems, we can collect them together into a subsystem tree. Right now we are not imposing any consistency conditions on the subsystems, but we will do so in a little while.

Definition 4.2 (Subsystem tree) A subsystem tree T is a set of local subsystem MDPs {M_1, ..., M_k}, together with a tree parent function Parent. Without loss of generality, take M_1 to be the root of the tree. We will write Parent(M_1) = ∅, and we will define Children(M_j) in the usual way. The internal variables of the tree are those variables which are internal to any M_j; the external variables are the variables which are in the scope of some subsystem but not internal to any subsystem. Finally, the common (separator) variables between a subsystem and its parent are denoted by SepSet[M_j] = Scope[M_j] ∩ Scope[Parent(M_j)].

There are two consistency conditions we need to enforce on a subsystem tree to make sure that it defines an MDP. The first says that every subsystem that references a variable X_i must be connected to every other subsystem referencing X_i, and the second says that neighboring subsystems have to agree on transition probabilities.

Definition 4.3 (Running intersection property) A subsystem tree satisfies the running intersection property if, whenever a variable X_i is in both Scope[M_j] and Scope[M_l], then X_i is in the scope of every subsystem on the path between M_j and M_l in the tree. Similarly, internal variables must satisfy the same condition.

Definition 4.4 (Consistent dynamics) A subsystem tree has consistent dynamics if pairs of subsystems agree on the dynamics of their joint variables. Specifically, the probability a subsystem assigns to a transition of the variables it shares with its parent, obtained by summing its local transition probabilities over all next-step assignments to its remaining internal variables consistent with the shared assignment, must equal the probability its parent assigns to the same transition.

A subsystem tree which satisfies the running intersection property and has consistent dynamics is called a consistent subsystem tree. For the rest of this paper, all subsystem trees will be assumed to be consistent. Given a consistent subsystem tree, we can construct a plain old MDP which defines the dynamics associated with this tree:

Lemma 4.5 (Equivalent MDP) From a consistent subsystem tree T we can construct a well-defined equivalent MDP M(T). The state variable set is the set of internal variables of T. The action variable set is the set of external variables of T. The reward function is the sum of all subsystem rewards, R = sum_j R_j. The transition probability for a given state and action is the product, over subsystems, of each subsystem's local transition probability, where each factor is evaluated on the restriction of the global state and action to that subsystem's scope and on the restriction of the next state to that subsystem's internal variables.

In our engine example, the rounded boxes in Fig. 1 are the scopes of the three subsystems.
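The running intersection property of Definition 4.3 is easy to check mechanically. The sketch below uses our own encoding (a parent map plus per-subsystem scope sets; names are illustrative, loosely following Fig. 1) rather than anything prescribed by the paper.

```python
# Sketch: checking the running intersection property on a subsystem tree.
# A tree is {node: parent} with the root mapped to None; scope[node] is the
# set of variables that subsystem touches.
def path_to_root(node, parent):
    while node is not None:
        yield node
        node = parent[node]

def satisfies_running_intersection(parent, scope):
    nodes = list(parent)
    for i in nodes:
        for j in nodes:
            shared = scope[i] & scope[j]
            # The i-j path is the union of the two root-ward paths, cut off at
            # (and including) the lowest common ancestor.
            pi = list(path_to_root(i, parent))
            pj = list(path_to_root(j, parent))
            lca = next(n for n in pi if n in pj)
            path = pi[:pi.index(lca) + 1] + pj[:pj.index(lca)]
            if any(not shared <= scope[n] for n in path):
                return False
    return True

# Engine-like example: three subsystems sharing boundary variables.
parent = {"engine-control": None, "fuel-injection": "engine-control",
          "speed-control": "engine-control"}
scope = {"fuel-injection": {"fuel-flow", "air-flow", "O2-sensor"},
         "engine-control": {"O2-sensor", "engine-temp", "RPM", "engine-power"},
         "speed-control": {"engine-power", "brake", "gear", "actual-speed"}}
ok = satisfies_running_intersection(parent, scope)
```

Here `ok` is true: the only shared variables (O2-sensor, engine-power) each lie on the scope of every subsystem along the connecting path.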
The O2-sensor variable is an external variable for the engine-control system, and engine-power is an external variable for the speed-control system; all other variables of each subsystem are internal. (Note: the way we have described this example, each variable is internal to only one subsystem; this is not necessary, so long as we maintain the running intersection property.)

Figure 2: Multiagent factored MDP with basis functions h_1 and h_2 (left), represented as a hierarchical subsystem tree (right).

4.2 Hierarchical subsystem trees

In our engine example, we might decide to add several more variables describing the internal state of the engine. In this case, the engine-control subsystem would become noticeably more complicated than the other two subsystems in our decomposition. Since our decomposition algorithm below allows us to use any method we want for solving the subsystem MDPs, we could use another subsystem decomposition to split engine-control into sub-subsystems. For example, we might decide that we should break it down into a distributor subsystem, a powertrain subsystem, and four cylinder subsystems. This recursive decomposition would result in a tree of subsystem trees. We call such a tree of trees a hierarchical subsystem tree. Hierarchical subsystem trees are important for two reasons. First, they are important for knowledge representation: it is easier to represent and reason about hierarchies of subsystems than flat collections. Second, they are important for reuse: if the same subsystem appears several times in our model, we can reuse plans from one copy to speed up planning for other copies (see Sec. 9 for details).

4.3 Relationship to factored MDPs

Factored MDPs [2] allow the representation of large structured MDPs by using a dynamic Bayesian network [5] as the transition model. Guestrin et al. [8] used factored MDPs for multiagent planning.
They presented a planning algorithm which approximates the value function of a factored MDP with factored linear value functions [10]. These value functions are a weighted linear combination of basis functions, where each basis function is restricted to depend only on a small subset of state variables. The relationship between factored MDPs and the hierarchical decomposition described in this paper is analogous to the one between standard Bayesian networks and object-oriented Bayesian networks [11]. In terms of representational power, hierarchical multiagent factored MDPs are equivalent to factored MDPs with factored linear value functions. That is, a factored MDP associated with some choice of basis functions can be easily transformed into a subsystem tree with a particular choice of subsystems, and vice versa. (Footnote 1: For some basis function choices, the transformation from factored MDPs to subsystem trees may also imply an approximate solution of the basic subsystem MDPs.) This transformation involves the backprojection of the basis functions [10] and the triangulation of the resulting dependency graph into a clique tree [13]. For example, consider the factored MDP on the left of Fig. 2, with basis functions h_1 and h_2 represented as diamonds in the next time step. The figure on the right represents an equivalent subsystem tree, where each local dynamical system is represented by a small part of the global DBN. However, the hierarchical model offers some advantages: First, we can specify a simple, completely distributed planning algorithm for this model (Sec. 5). Second, the hierarchical model allows us to reuse plans generated for two equivalent subsystems. Third, the knowledge engineering task is simplified, as systems can be built from a library of subsystems. Finally, even in collaborative multiagent settings, each agent may not want to reveal private information; e.g., in a factory, upper management may not want to reveal the salary of one section manager to another. Using our hierarchical approach, each subsystem could be associated with an agent, and each agent would only have access to its local model and reward function.

5 Solving hierarchical factored MDPs

We need to address some problems before we can solve the Bellman LP for an MDP represented as a subsystem tree: the LP has exponentially many variables and constraints, and it has no separation between subsystems. This section describes our solutions to these problems.

5.1 Approximating the Bellman LP

Consider the MDP obtained from a subsystem tree according to Lemma 4.5. This MDP's state and action spaces are exponentially large, with one state for each assignment to {X_1, ..., X_n} and one action for each assignment to {A_1, ..., A_g}; so optimal planning is intractable.
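To see the standard LP remedy for this blow-up concretely: restricting the value function to a linear combination of basis functions turns the Bellman LP into a small LP over basis weights. The sketch below builds a random, purely illustrative 4-state MDP from two binary variables and uses per-variable indicator bases; any feasible point of this LP, and hence its optimum, upper-bounds the true value function componentwise. SciPy is assumed.

```python
# Sketch: Bellman LP over basis weights w, with V approximated as H w.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2            # states = assignments to two binary variables
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a][s] is a row
R = rng.random((n_actions, n_states))
alpha = np.full(n_states, 0.25)

# Basis: one indicator per assignment of each variable.
X1 = np.array([0, 0, 1, 1]); X2 = np.array([0, 1, 0, 1])
H = np.stack([X1 == 0, X1 == 1, X2 == 0, X2 == 1], axis=1).astype(float)

# Constraints H w >= R_a + gamma * P_a H w, rewritten for linprog.
A_ub = np.vstack([-(H - gamma * P[a] @ H) for a in range(n_actions)])
b_ub = np.concatenate([-R[a] for a in range(n_actions)])
res = linprog(c=H.T @ alpha, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * H.shape[1])
V_approx = H @ res.x

# True value function for comparison, via value iteration.
V = np.zeros(n_states)
for _ in range(2000):
    V = np.max([R[a] + gamma * P[a] @ V for a in range(n_actions)], axis=0)
```

The LP now has one variable per basis function rather than one per global state, which is the saving the next subsection exploits.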
We use the common approach of restricting attention to value functions that are compactly represented as a linear combination of basis functions. We will write w for the weights of our linear combination and H for the matrix whose columns are the basis functions; so our representation of the value function is V = Hw. As proposed by Schweitzer and Seidmann [19], we can substitute our approximate value function representation into the Bellman LP (1):

    minimize    alpha . Hw
    subject to  Hw >= R_a + gamma * P_a H w.                       (3)

There is in general no guarantee on the quality of the resulting approximation, but recent work of de Farias and Van Roy [4] provides some analysis of the error relative to that of the best possible approximation in the subspace, and some guidance as to selecting the state relevance weights alpha so as to improve the quality of the approximation. We will choose the basis to reflect the structure of the subsystem tree: we will allow ourselves complete flexibility to represent the value of each subsystem, but we will approximate the global value by the sum of the subsystem value functions. (Footnote 2: We are not restricting each subsystem value function to be the value function which would be optimal if that subsystem were isolated.) If a subsystem is itself a subsystem tree, we will further approximate the global value function by recursively decomposing its value into a sum of its sub-subsystem value functions; but for simplicity of notation, we will assume that the tree has been flattened, so that the M_j are all basic subsystem MDPs. More specifically, let H_j be the basis for M_j within H. In other words, let the i-th column of H_j be an indicator function over assignments to Int[M_j] which is 1 when Int[M_j] is set to its i-th possible setting and 0 otherwise. Substituting this approximation into (3) yields:

    minimize    sum_j alpha_j . H_j w_j
    subject to  sum_j H_j w_j >= sum_j (R_j,a + gamma * P_j,a H_j w_j).        (4)

5.2 Factoring the approximate LP

The substitution (4) dramatically reduces the number of free variables in our linear program: instead of one variable for each global state, we now have one variable for each assignment to Int[M_j], for each M_j. The number of constraints, however, remains the same: one for each global state-action pair. Fortunately, Guestrin et al. [8] proposed an algorithm that reduces the number of constraints in a factored MDP by a method analogous to variable elimination. Their algorithm introduces an extra variable (called a message variable) for each possible assignment to each separating set SepSet[M_j]. To simplify notation, we will sometimes write extra arguments to these message variables; for instance, if x is an assignment to Scope[M_j], it determines the value of SepSet[M_j], and so we can write z_j(x). We will also introduce dummy variables to represent the total influence of all of the messages on subsystem M_j under action a; each such dummy variable is a vector with one component for each assignment to Int[M_j]. With this notation, the Guestrin et al. algorithm reduces (4) to the following LP: minimize sum_j alpha_j . H_j w_j subject to, for each subsystem M_j and each assignment to Scope[M_j],

    H_j w_j >= R_j,a + gamma * P_j,a H_j w_j - z_j + sum_{l in C_j} z_l.        (5)

(The set C_j contains all l such that M_l is in Children(M_j).) This LP has many fewer constraints than (4): there is one inequality for each assignment to each subsystem's scope, instead of one for each global state-action pair. If we are willing to assume a centralized planner with knowledge of the details of every subsystem, we are now done: we can just construct (5) and hand it to a linear-program-solving package. (Footnote 3: Actually, we need one more assumption: the state relevance weights must factor along subsystem lines, so that we can compute each alpha_j efficiently. Equivalently, we can pick subsystem weight vectors that satisfy a condition analogous to consistent dynamics, then define alpha accordingly.) In order to achieve distributed

Figure 3: Message passing algorithm from the point of view of subsystem M_j: it maintains a set of local flows with corresponding local values. M_j receives the current reward message from its parent. Additionally, each child subsystem M_l sends a message to M_j containing the flows over their separator variables, along with the corresponding total values of the subtree rooted at M_l. M_j combines this information to compute the new reward messages for each child, along with the flow message and total value sent to its parent.

planning, though, we need to break (5) into pieces which refer only to a single subsystem and its neighbors. (In fact, even if we are relying on a central planner, we may still want to break (5) into subsystems, to give ourselves the flexibility to solve each subsystem with a different algorithm.)

5.3 Reward sharing

If we fix all message variables, the LP in (5) splits into many small pieces. An individual piece looks like:

    minimize    alpha_j . V_j
    subject to  V_j >= R'_j,a + gamma * P_j,a V_j,        (6)

where the adjusted rewards R'_j,a = R_j,a - z_j + sum_{l in C_j} z_l are now constants. By comparing (6) to the standard Bellman LP (1), we can see that it represents an MDP with rewards R'_j,a and transition probabilities P_j,a. We will call this MDP the stand-alone MDP for subsystem M_j. Viewing (6) as an MDP allows us to interpret the message variables as a vector of adjustments to the reward for subsystem M_j. These adjustments depend on the separating sets between M_j and its neighbors in the subsystem tree. By looking at the definition of R'_j,a in (5), we can see how a good setting for the message variables encourages cooperative behavior. The message variable z_l(y) reduces the reward of M_l in states where y is true, but increases the reward of M_l's parent by the same amount. So, if M_l benefits from being in states where y is true, we can increase z_l(y) to cause M_l's parent to seek out those states. Conversely, if those states hurt M_l's parent, we can decrease z_l(y) to encourage avoiding them. This interpretation is an important feature of our algorithm.
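The effect of a reward message on a stand-alone MDP (6) can be sketched on a toy subsystem. All numbers below are illustrative, and the sign convention is simplified to a single adjustment vector added to the local rewards (a positive entry acts as a payment for reaching the corresponding separator assignment); the point is only that the message flips the subsystem's greedy choice.

```python
# Sketch: a reward message z shifts a subsystem's stand-alone plan toward
# states its neighbors care about.  Toy two-state subsystem, invented numbers.
import numpy as np

gamma = 0.9
# State 1 is the separator assignment the parent wants; locally, staying in
# state 0 pays slightly more.
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 0.9])}   # reward vector per action
P = {0: np.array([[1.0, 0.0], [1.0, 0.0]]),              # action 0 drives to state 0
     1: np.array([[0.0, 1.0], [0.0, 1.0]])}              # action 1 drives to state 1

def greedy_action(R, P, z):
    """Solve the stand-alone MDP with reward adjustment z; return action at state 0."""
    Radj = {a: R[a] + z for a in R}
    V = np.zeros(2)
    for _ in range(1000):
        V = np.max([Radj[a] + gamma * P[a] @ V for a in R], axis=0)
    Q = [Radj[a][0] + gamma * P[a][0] @ V for a in R]
    return int(np.argmax(Q))

plan_without_message = greedy_action(R, P, np.zeros(2))          # prefers state 0
plan_with_message = greedy_action(R, P, np.array([0.0, 0.5]))    # pays 0.5 for state 1
```

Without the message the subsystem settles into state 0; once the message values state 1 enough, the stand-alone plan switches.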
We have taken a complex planning problem, which may have many strong couplings between subsystems, and defined a small number of message variables which allow us to reduce the global coordination problem to the problem of finding an appropriate reward-sharing plan. Our algorithm is also linked to reward shaping [18]. In reinforcement learning, it is common to add fictitious shaping rewards to the system to speed up learning. The purpose of the reward messages in our approach is to encourage coordination rather than fast learning. Nonetheless, the reward messages do shape the subsystems' policies to conform to a globally consistent strategy.

Figure 4: Planning algorithm for subsystem trees.
1. Initialization: for all subsystems and for all separating sets touching each subsystem, initialize the message variables and the stored policy statistics.
2. For each subsystem M_j, if any relevant message changed in the last iteration: solve the reward message LP (11) to find new values for the message variables of each separating set between M_j and its children. If the reward message LP was bounded, use its value and dual variables to add a new entry to the subtree reward vector according to (9); also add a new row to the subtree flow matrix according to (10).
3. For every M_j, if M_j depends on a reward message which changed in step 2, solve its stand-alone MDP (6). Add a new entry to the local reward vector according to (7). For every separating set which touches M_j, add a new row to the local flow matrix as in Eq. (8).
4. If any message has changed, go to 2.

5.4 Algorithm description

In this subsection, we will describe our algorithm for finding a good reward-sharing plan. The algorithm is guaranteed to converge in a finite number of iterations; at termination, we will have found an exact solution to (5). Our algorithm maintains several sets of variables at each subsystem in the tree. These variables represent messages passed between neighboring subsystems in the tree. All messages are about one of two topics: rewards or expected frequencies (flows). Flow messages pass up the tree from child to parent, generating reward messages in their wake.
These reward messages cause neighboring subsystems to replan, which in turn generates more flow messages. Fig. 3 illustrates the messages exchanged between subsystems. Reward messages allow our algorithm to encourage cooperative behavior, by rewarding actions which help neighboring subsystems and punishing actions which hurt them. Flow messages tell subsystems about the policies their neighbors are considering; they let subsystems know what assignments their neighbors have figured out how to achieve (and at what cost). The first set of message variables is the most recent reward message received by each subsystem from its parent. The second set consists of the most recent reward-sharing plan sent to each subsystem's children. The remaining sets of variables keep track of important statistics about the policies found so far. The algorithm uses these statistics to help generate new policies for various parts of the subsystem tree. We keep track of policies both for individual subsystems and for groups of subsystems. For each individual subsystem M_j, we keep a vector r_j whose i-th component is the local expected discounted reward of the i-th policy we computed for M_j. We also keep matrices F_{j,S} for every separating set S that touches M_j. The i-th row of F_{j,S} tells us how often the i-th policy sets the variables of S to each of their possible values. If f is a feasible flow for M_j, composed of one vector f_a for each action a, then its component in r_j is

    r_j[i] = sum_a f_a . R_j,a.        (7)

This is the reward for f, excluding contributions from message variables. The corresponding row of F_{j,S} has one element for every assignment y to the variables of S. This element is:

    F_{j,S}[i, y] = sum_a sum_{x consistent with y} f_a(x).        (8)

This is the marginalization of f to the variables of S. Consider now the subtree rooted at M_j. For this subtree, we keep a vector of subtree rewards and a matrix of frequencies. A new policy for a subtree is a mixture of policies for its children and its root; we can represent such a mixed policy with some vectors of mixture weights. We will need one vector of mixture weights for each child (call them mu_l for l in C_j) and one for M_j itself (call it lambda). Each element of lambda and the mu_l is a weight for one of the previous policies we have computed. Given a set of mixture weights, we compute a new entry for the subtree reward vector in terms of r_j and the trees rooted at M_j's children:

    lambda . r_j + sum_{l in C_j} mu_l . (subtree rewards of M_l).        (9)

This is the expected discounted reward for our mixed policy, excluding contributions from message variables. We can also compute the corresponding new row of the subtree flow matrix: it is the mixture, with weights lambda, of the rows of F_{j,S}:

    lambda^T F_{j,S}.        (10)

This is how often our mixed policy sets the variables of S to each of their possible values. Our algorithm alternates between two ways of generating new policies. The simpler way is to solve a stand-alone MDP for one of the subsystems; this will generate a new policy if we have changed any of the related reward message variables. The second way is to solve a reward message linear program; this LP updates some reward message variables, and also produces a set of mixture weights for use in equations (9) and (10). There is a reward message LP for the subtree rooted at any non-leaf subsystem.
Letting the index i run over the stored policies, we can write the reward message LP over the new reward messages and one bound variable per child; its constraints come from the stored policy values and frequency rows (11). The solution of this LP tells us the value of the new reward messages to be sent to the subsystem's children. To obtain mixture weights, we can look at the dual of (11), whose variables are exactly the mixture weights (12). These mixture weights are used to generate the message to be sent to the subsystem's parent.

Fig. 4 brings all of these ideas together into a message-passing algorithm which propagates information through our subsystem tree. The following theorem guarantees the correctness of our algorithm; for a proof see Sec. 7.

Theorem 5.1 (Convergence and correctness) Let T be a subsystem tree. The distributed algorithm in Fig. 4 converges in a finite number of iterations to a solution of the global linear program (4) for T.

While Fig. 4 describes a specific order in which agents update their policies, other orders will work as well. In fact, Thm. 5.1 still holds so long as every agent eventually responds to every message sent to it.

6 An example

Before we justify our algorithm, we will work through a simple example. This example demonstrates how to determine which messages agents need to pass to each other, as well as how to interpret those messages.

6.1 A simple MDP

Our example MDP has 2 binary state variables and 2 binary action variables. The first state variable follows the first action variable, while the second state variable depends on the first state variable and the second action variable. The per-step reward and the discount factor create a tension between avoiding an immediate penalty and enabling a larger reward later. The exact value function for this MDP can be written out explicitly over the four states. We will decompose our MDP into 2 subsystems: one whose internal variable is the first state variable and whose external variable is the first action, and one whose internal variable is the second state variable and whose external variables are the first state variable and the second action. This decomposition cannot represent all possible value functions: it is restricted to a family with only three independent parameters. However, the exact value function is a member of this family, so our decomposition algorithm will find it.
6.2 The LP and its decomposition

With the above definitions, we can write out the full linear program for the example. There are 16 constraints in the full LP (4 states × 4 actions). We can reduce that to 10 by introducing two new variables, one per subsystem, each representing the minimum of the part of a constraint that depends only on that subsystem's variables. The rewritten LP has the constraint matrix shown in Fig. 5.

Figure 5: Constraint matrix for the example MDP after variable elimination. Columns correspond to the LP variables.

As indicated, the matrix has a simple block structure: two variables appear in all constraints; two appear only in the subproblem above the horizontal line; and two appear only in the subproblem below the horizontal line. This block structure is what allows our algorithm to plan separately for the two subproblems.

6.3 Execution of the decomposition algorithm

The decomposition algorithm starts out with no reward sharing. So the first subsystem always picks the action that lets it capture a reward of 10 on each step. Similarly, the second subsystem sees no benefit to visiting its state, so it heads away from it to avoid a penalty of 3 per step. Each of these two policies results in a new constraint for our message LP: each constraint records the expected reward a policy achieves and how often it sets the shared variable. Adding the new constraints and re-solving the message LP tells us that the two subsystems disagree about how often the shared variable should equal 1, and suggests putting a premium on setting it for one subsystem and the corresponding penalty on using it for the other. (The message LP is unbounded at this point, so the size of the premium/penalty is arbitrary so long as it is large.) As we would expect, the two MDPs react to this change in rewards by changing their policies: in step 2, the second subsystem decides it will set the shared variable as often as possible, and the first decides it will use it as often as possible. With the constraints derived from these new policies, the message LP decides that it will give the second subsystem a reward of 9 for setting the shared variable, and charge the first the corresponding penalty.
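A small MDP of this flavor can be sanity-checked with a few lines of exact value iteration. The dynamics and reward below are illustrative assumptions in the spirit of the example (one variable enables the other's reward at a per-step cost), not the paper's exact numbers:

```python
# Exact value iteration on a toy two-variable MDP: x' follows action a, and
# y' can only become 1 when x is already 1. All numbers are assumed for
# illustration, not taken from the paper's example.
gamma = 0.9
states  = [(x, y) for x in (0, 1) for y in (0, 1)]
actions = [(a, b) for a in (0, 1) for b in (0, 1)]

def step(s, act):
    """Deterministic toy dynamics: x' = a, y' = x AND b."""
    x, y = s
    a, b = act
    return (a, x & b)

def reward(s):
    x, y = s
    return 10.0 * y - 3.0 * x   # y earns 10 per step; keeping x = 1 costs 3

V = {s: 0.0 for s in states}
for _ in range(600):                 # contraction: error shrinks by gamma each pass
    V = {s: max(reward(s) + gamma * V[step(s, act)] for act in actions)
         for s in states}

print({s: round(v, 4) for s, v in V.items()})
```

Even in this toy version, the optimal policy pays the per-step penalty of 3 to hold the enabling variable at 1, because that unlocks the reward of 10 thereafter; this is exactly the tension the decomposition algorithm resolves via reward messages.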
With this new reward structure, the two MDPs can now compute what will turn out to be their final policies: the second subsystem will set the shared variable as often as possible, despite the one-step penalty of 3, thereby allowing the first subsystem to achieve a reward of 10 on each step. Summing the resulting value functions for the two subproblems gives the true globally-optimal value function for the overall MDP, and further steps of our algorithm do not change this result.

7 Algorithm justification

We can derive the algorithm in Fig. 4 by performing a sequence of nested Benders decompositions on the linear program (5). This section reviews Benders decomposition, then outlines how to apply it to (5) to produce our algorithm. Since Benders decomposition is correct and finitely convergent, this section is a proof sketch for Thm. 5.1.

7.1 Benders decomposition

Benders decomposition [1] solves LPs of the form

    min_{x,y}  c·x + d·y   subject to  A x + B y ≥ b,  y ≥ 0    (13)

by repeatedly solving subproblems in which the value of x is fixed. It is useful mainly when the subproblems are easier to solve than the problem as a whole, perhaps because B is block diagonal or has some other special structure. (Other types of constraints besides ≥ are also possible.) We will call x the master variable and y the slave variable. If we fix x, we get the subproblem:

    min_y  d·y   subject to  B y ≥ b − A x,  y ≥ 0    (14)

Writing Q(x) for the optimal value of this subproblem, we reduce (13) to min_x c·x + Q(x). The dual of (14) is

    max_λ  λ·(b − A x)   subject to  λ B ≤ d,  λ ≥ 0    (15)

Note that the feasible region of (15) is independent of x. If we have a feasible solution λ to (15), it provides a lower bound on the subproblem value by plugging into the objective of (15): Q(x) ≥ λ·(b − A x). If we have several feasible solutions λ1, λ2, ..., each one provides a lower bound on Q(x). So we can approximate the reduced version of (13) with

    min_{x,θ}  c·x + θ   subject to  θ ≥ λi·(b − A x) for each i    (16)

The Benders algorithm repeatedly solves (16) to get a new value of x, then plugs that value of x into (15) and solves for a new λ. The process is guaranteed to converge in finitely many iterations.
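The loop just described can be made concrete on a one-dimensional toy instance. All numbers below are assumptions chosen so the slave LP has a closed form; a real implementation would call an LP solver for both the master and the slave:

```python
# Benders decomposition on a toy problem:
#   min_x  c*x + Q(x)  for 0 <= x <= U,  where the slave LP is
#   Q(x) = min_y { d*y : y >= h - T*x, y >= 0 } = d * max(0, h - T*x).
c, d, h, T, U = 1.0, 3.0, 4.0, 2.0, 5.0

def subproblem(x):
    """Solve the slave LP for fixed x; return its value and an optimal dual."""
    slack = h - T * x
    pi = d if slack > 0 else 0.0      # dual of the single constraint
    return d * max(0.0, slack), pi

cuts, x = [], 0.0
for _ in range(20):
    val, pi = subproblem(x)
    ub = c * x + val                  # a feasible solution: upper bound
    cuts.append(pi)                   # new cut: theta >= pi * (h - T*x)
    # Master: min c*x + theta  s.t.  theta >= pi_i * (h - T*x) for each cut.
    # For this toy instance the optimum sits at a bound or the cut breakpoint.
    candidates = {0.0, U} | ({h / T} if any(p > 0 for p in cuts) else set())
    obj = lambda xv: c * xv + max(p * (h - T * xv) for p in cuts)
    x = min(candidates, key=obj)
    lb = obj(x)                       # master relaxation: lower bound
    if ub - lb < 1e-9:
        break

print(x, ub)
```

The run shows the characteristic Benders behavior: the master's lower bound and the slave's upper bound close in on each other, and the loop stops after finitely many cuts.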
7.2 Decomposing the factored LP

We can pick any set of message variables to start our Benders decomposition. Suppose we pick the reward message variables for all subsystems which are children of the root. These variables will be master variables, and all remaining LP variables will be slaves. Fixing these message variables separates the root from its children. So the Benders subproblem will split into several separate pieces, which we can solve independently. One piece will be just the standalone MDP for the root subsystem, and each other piece will contain a whole subsystem tree rooted at one of the root's children.

Using this decomposition, the master problem becomes: minimize, over the reward message variables, the sum of the root's objective and one objective per child subtree, subject to the constraints received from each subproblem (17). Here the first term is the objective of the root's standalone MDP, and the remaining terms are the objectives of the LPs for the subtrees rooted at each of the root's children.

First consider the standalone MDP for the root. The dual of its Bellman LP (6) is a maximization over flows in which the reward messages, fixed by the master's choice, appear only in the objective. Thus any policy for this subsystem will yield flows which are feasible for any setting of the reward messages. These flows will in turn yield a constraint for the LP in (17):

    θ ≥ (value of the policy) + (flows) · (reward messages)    (18)

The first part of the constraint is the value of the policy, which we store as an entry of the reward vector in Sec. 5. The second part is the product of the flows (which we store as a row of the frequency matrix) with the reward messages.

Now let us turn our attention to the LP for the subtree rooted at a child. By taking the dual of this LP, we will obtain a constraint of the same form as the one in Eq. (18). However, the two terms will correspond to the value of the whole subtree and the flows of the whole subtree, respectively. Fortunately, we can compute these terms with another Benders decomposition that separates the child from its children. This recursive decomposition gives us the subtree reward vectors and frequency matrices defined in Sec. 5. Note that we do not need the complete set of flows, but only the marginals over the variables in the separating set; so we can compute them locally by Eq. (10). The proof is concluded by induction.

8 Hierarchical action selection

Once the planning algorithm has converged to a local value function for each subsystem, we need a method for selecting the greedy action associated with the global value function. We might try to compute the best action by enumerating all actions and comparing them. Unfortunately, our action space is exponentially large, making this approach infeasible. However, we can exploit the subsystem tree structure to select an action efficiently [8].
Recall that we are interested in finding the greedy action, which maximizes the Q function. Our value function is decomposed as the sum of local value functions over subsystems. This decomposition also implies a decomposition of the Q function into a sum of local Q functions, one per subsystem, each depending only on that subsystem's internal and external variables. Note that some of the external variables will be internal to some other subsystem, while others correspond to actual action choices. More specifically, for each subsystem, divide its variables into those which are internal to some subsystem in the tree (state variables) and those which are external to all subsystems (action variables).

At each time step, each subsystem observes the current value of its state variables. (All of these variables are either internal or external to the subsystem, so a subsystem never needs to observe variables outside its scope.) The subsystem then instantiates the state-variables part of its local Q function, generating a new local function which depends only on local action variables.

The subsystems must now combine their local functions to decide which action is globally greedy, i.e., which action maximizes the sum of the local functions. They can do so by making two passes over the subsystem tree, one upwards and one downwards. In the upwards pass, each subsystem computes a conditional strategy for each assignment to the action variables it shares with its parent; the value of this strategy is computed recursively, by maximizing over the subsystem's own actions the sum of its local function and the values reported by its children. In the downwards pass, each subsystem chooses an action given the choices already made by its ancestors. The cost of this action selection algorithm is linear in the number of subsystems and in the number of actions in each subsystem. Thus we can efficiently compute the greedy policy associated with our compact value function.
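The two-pass scheme can be sketched on a minimal tree with one root and one child; the local Q-tables below are invented for illustration:

```python
# Toy greedy action selection on a two-subsystem tree. Local Q-values (after
# instantiating the observed state variables) are hypothetical numbers.
Q_root  = {0: 1.0, 1: 4.0}                                       # Q_root(a_root)
Q_child = {(0, 0): 2.0, (0, 1): 5.0, (1, 0): 3.0, (1, 1): 0.0}   # Q_child(a_root, a_child)

# Upward pass: for each parent action, the child reports the best value it
# can achieve and remembers the conditional choice that achieves it.
msg, best_response = {}, {}
for a_p in (0, 1):
    vals = {a_c: Q_child[(a_p, a_c)] for a_c in (0, 1)}
    best_response[a_p] = max(vals, key=vals.get)
    msg[a_p] = vals[best_response[a_p]]

# Root maximizes its own value plus the child's conditional message.
a_root = max((0, 1), key=lambda a: Q_root[a] + msg[a])

# Downward pass: the child instantiates its stored conditional choice.
a_child = best_response[a_root]
print(a_root, a_child, Q_root[a_root] + Q_child[(a_root, a_child)])
```

Notice that the root never enumerates joint actions: it sees only the child's message, one value per shared action, which is what keeps the cost linear in the number of subsystems.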
In addition to the conceptual advantages of representing all cylinders the same way, our algorithm can gain computational advantages by reusing both plans and messages in multiple parts of the subsystem tree. We can view a subsystem tree (Definition 4.1) as a class or template. Then, when designing a factorization for a new problem, we can instantiate this class in multiple positions in our subsystem tree. We can also form complex classes out of simpler ones; instantiating a complex class then inserts a whole subtree into our tree (and also indicates how subsystems are grouped to form a hierarchical tree).

Now suppose that we have found a new policy for a subsystem. Our algorithm uses this policy to compute a set of dual variables as in (2), then marginalizes them onto each of the subsystem's separating sets (8) to generate constraints in reward message LPs. These same dual variables are feasible for any other subsystem of the same class.
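This class-based reuse amounts to caching policy duals keyed by subsystem class, so that every instance of the class can draw on them as extra constraints. A minimal sketch, with invented class names and dual vectors:

```python
# Illustrative sketch: policy duals recorded per subsystem class. Any stored
# dual is feasible for every instance of that class, so each one yields a
# reusable constraint for that instance's reward message LPs.
from collections import defaultdict

dual_cache = defaultdict(list)        # class name -> stored dual vectors

def record_policy_duals(cls_name, duals):
    dual_cache[cls_name].append(duals)

def reusable_constraints(cls_name):
    return list(dual_cache[cls_name])

# Two policies found for one "cylinder" instance are now available to all.
record_policy_duals("cylinder", [0.2, 0.8])
record_policy_duals("cylinder", [0.5, 0.5])
print(len(reusable_constraints("cylinder")))
```
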

Therefore we can reuse the dual variables by marginalizing them onto the other subsystem's separating sets as well, generating extra constraints. Furthermore, we can record the dual variables in the class definition, and whenever a new subsystem tree uses another instance of the class we can save computation by reusing them again.

Finally, if two whole subtrees are equivalent, we can reuse the subtree policy messages from our algorithm. More precisely, two subtrees are equivalent if their roots are of the same class and their children are equivalent. Sets of equivalent subtrees contain sets of same-class subsystems, and so policies from subsystems in one subtree can be reused in the others, as described above. In addition, mixed policies for the whole subtree can be reused, since they will be feasible for one subtree iff they are feasible for the other. That means that whenever we add a row to one subtree's reward vector and frequency matrix (equations (9) and (10)), we can add the same row to the other's, yielding further computational savings.

10 Conclusions

In this paper we presented a principled and practical planning algorithm for collaborative multiagent problems. We represent such problems using a hierarchical decomposition into local subsystems. Although each subsystem is small, once these subsystems are combined we can represent an exponentially larger problem. Our planning algorithm can exploit this hierarchical structure for computational efficiency, avoiding an exponential blowup. Furthermore, this algorithm can be implemented in a distributed fashion, where each agent only needs to solve local planning problems over its own subsystem. The global plan is computed by a message passing algorithm, where messages are calculated by local LPs. Our representation and algorithm are suitable for heterogeneous systems, where subsystem MDPs are represented in different forms or solved by different algorithms. For example, one subsystem MDP could be solved by policy iteration, while another could be tackled with a library of heuristic policies.
Furthermore, some subsystem MDPs could have known models, while others could be solved by reinforcement learning techniques. Our planning algorithm is guaranteed to converge to the same solution as the centralized approach of Guestrin et al. [8], who experimentally tested the quality of their algorithm's policies on some benchmark problems. They concluded that the policies attained near-optimal performance on these problems and were significantly better than those produced by some other methods. Our distributed algorithm converges to the same policies; so we would expect to see the same positive results, but with planning speedups from reuse and without the need for centralized planning. We believe that hierarchical multiagent factored MDPs will facilitate the modeling of practical systems, while our distributed planning approach will make them applicable to the control of very large stochastic dynamical systems.

Acknowledgments

We are very grateful to D. Koller, R. Parr, and M. Rosencrantz for many useful discussions. This work was supported by the DoD MURI program administered by the Office of Naval Research; by an Air Force contract under DARPA's TASK program; and by AFRL contract F C0219, DARPA's MICA program. C. Guestrin was also supported by a Siebel Scholarship.

References

[1] J. R. Birge. Solution methods for stochastic dynamic linear programs. Tech. Rep. SOL 80-29, Stanford Univ., 1980.
[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11, 1999.
[3] G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.
[4] D. P. de Farias and B. Van Roy. Approximate dynamic programming via linear programming. In NIPS, 2001.
[5] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Comp. Intel., 5(3), 1989.
[6] T. Dean and S. Lin. Decomposition techniques for planning in stochastic domains. In IJCAI, 1995.
[7] T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, 2000.
[8] C. Guestrin, D. Koller, and R. Parr. Multiagent planning with factored MDPs. In NIPS, 2001.
[9] M. Hauskrecht, N. Meuleau, L. Kaelbling, T. Dean, and C. Boutilier. Hierarchical solution of Markov decision processes using macro-actions. In UAI, 1998.
[10] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In IJCAI, 1999.
[11] D. Koller and A. Pfeffer. Object-oriented Bayesian networks. In UAI, 1997.
[12] H. J. Kushner and C. H. Chen. Decomposition of systems governed by Markov chains. IEEE Transactions on Automatic Control, 19(5), 1974.
[13] S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Stat. Soc. B, 50(2), 1988.
[14] A. S. Manne. Linear programming and sequential decisions. Management Science, 6(3), 1960.
[15] N. Meuleau, M. Hauskrecht, K. Kim, L. Peshkin, L. Pack Kaelbling, T. Dean, and C. Boutilier. Solving very large weakly-coupled Markov decision processes. In AAAI, 1998.
[16] R. Parr and S. Russell. Reinforcement learning with hierarchies of machines. In NIPS, 1998.
[17] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York, 1994.
[18] J. Randløv and P. Alstrøm. Learning to drive a bicycle using reinforcement learning and shaping. In ICML, 1998.
[19] P. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision processes. J. of Mathematical Analysis and Applications, 110, 1985.
[20] H. A. Simon. The Sciences of the Artificial. MIT Press, Cambridge, Massachusetts, second edition, 1981.
[21] S. Singh and D. Cohn. How to dynamically merge Markov decision processes. In NIPS, 1998.
[22] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artif. Intelligence, 112, 1999.


More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mit.eu 6.854J / 18.415J Avance Algorithms Fall 2008 For inormation about citing these materials or our Terms o Use, visit: http://ocw.mit.eu/terms. 18.415/6.854 Avance Algorithms

More information

An Algorithm for Building an Enterprise Network Topology Using Widespread Data Sources

An Algorithm for Building an Enterprise Network Topology Using Widespread Data Sources An Algorithm for Builing an Enterprise Network Topology Using Wiesprea Data Sources Anton Anreev, Iurii Bogoiavlenskii Petrozavosk State University Petrozavosk, Russia {anreev, ybgv}@cs.petrsu.ru Abstract

More information

Loop Scheduling and Partitions for Hiding Memory Latencies

Loop Scheduling and Partitions for Hiding Memory Latencies Loop Scheuling an Partitions for Hiing Memory Latencies Fei Chen Ewin Hsing-Mean Sha Dept. of Computer Science an Engineering University of Notre Dame Notre Dame, IN 46556 Email: fchen,esha @cse.n.eu Tel:

More information

Considering bounds for approximation of 2 M to 3 N

Considering bounds for approximation of 2 M to 3 N Consiering bouns for approximation of to (version. Abstract: Estimating bouns of best approximations of to is iscusse. In the first part I evelop a powerseries, which shoul give practicable limits for

More information

Modifying ROC Curves to Incorporate Predicted Probabilities

Modifying ROC Curves to Incorporate Predicted Probabilities Moifying ROC Curves to Incorporate Preicte Probabilities Cèsar Ferri DSIC, Universitat Politècnica e València Peter Flach Department of Computer Science, University of Bristol José Hernánez-Orallo DSIC,

More information

2-connected graphs with small 2-connected dominating sets

2-connected graphs with small 2-connected dominating sets 2-connecte graphs with small 2-connecte ominating sets Yair Caro, Raphael Yuster 1 Department of Mathematics, University of Haifa at Oranim, Tivon 36006, Israel Abstract Let G be a 2-connecte graph. A

More information

Polygon Simplification by Minimizing Convex Corners

Polygon Simplification by Minimizing Convex Corners Polygon Simplification by Minimizing Convex Corners Yeganeh Bahoo 1, Stephane Durocher 1, J. Mark Keil 2, Saee Mehrabi 3, Sahar Mehrpour 1, an Debajyoti Monal 1 1 Department of Computer Science, University

More information

Table-based division by small integer constants

Table-based division by small integer constants Table-base ivision by small integer constants Florent e Dinechin, Laurent-Stéphane Diier LIP, Université e Lyon (ENS-Lyon/CNRS/INRIA/UCBL) 46, allée Italie, 69364 Lyon Ceex 07 Florent.e.Dinechin@ens-lyon.fr

More information

Throughput Characterization of Node-based Scheduling in Multihop Wireless Networks: A Novel Application of the Gallai-Edmonds Structure Theorem

Throughput Characterization of Node-based Scheduling in Multihop Wireless Networks: A Novel Application of the Gallai-Edmonds Structure Theorem Throughput Characterization of Noe-base Scheuling in Multihop Wireless Networks: A Novel Application of the Gallai-Emons Structure Theorem Bo Ji an Yu Sang Dept. of Computer an Information Sciences Temple

More information

Multilevel Linear Dimensionality Reduction using Hypergraphs for Data Analysis

Multilevel Linear Dimensionality Reduction using Hypergraphs for Data Analysis Multilevel Linear Dimensionality Reuction using Hypergraphs for Data Analysis Haw-ren Fang Department of Computer Science an Engineering University of Minnesota; Minneapolis, MN 55455 hrfang@csumneu ABSTRACT

More information

A Plane Tracker for AEC-automation Applications

A Plane Tracker for AEC-automation Applications A Plane Tracker for AEC-automation Applications Chen Feng *, an Vineet R. Kamat Department of Civil an Environmental Engineering, University of Michigan, Ann Arbor, USA * Corresponing author (cforrest@umich.eu)

More information

Distributed Decomposition Over Hyperspherical Domains

Distributed Decomposition Over Hyperspherical Domains Distribute Decomposition Over Hyperspherical Domains Aron Ahmaia 1, Davi Keyes 1, Davi Melville 2, Alan Rosenbluth 2, Kehan Tian 2 1 Department of Applie Physics an Applie Mathematics, Columbia University,

More information

Message Transport With The User Datagram Protocol

Message Transport With The User Datagram Protocol Message Transport With The User Datagram Protocol User Datagram Protocol (UDP) Use During startup For VoIP an some vieo applications Accounts for less than 10% of Internet traffic Blocke by some ISPs Computer

More information

An Adaptive Routing Algorithm for Communication Networks using Back Pressure Technique

An Adaptive Routing Algorithm for Communication Networks using Back Pressure Technique International OPEN ACCESS Journal Of Moern Engineering Research (IJMER) An Aaptive Routing Algorithm for Communication Networks using Back Pressure Technique Khasimpeera Mohamme 1, K. Kalpana 2 1 M. Tech

More information

Improving Performance of Sparse Matrix-Vector Multiplication

Improving Performance of Sparse Matrix-Vector Multiplication Improving Performance of Sparse Matrix-Vector Multiplication Ali Pınar Michael T. Heath Department of Computer Science an Center of Simulation of Avance Rockets University of Illinois at Urbana-Champaign

More information

Fast Fractal Image Compression using PSO Based Optimization Techniques

Fast Fractal Image Compression using PSO Based Optimization Techniques Fast Fractal Compression using PSO Base Optimization Techniques A.Krishnamoorthy Visiting faculty Department Of ECE University College of Engineering panruti rishpci89@gmail.com S.Buvaneswari Visiting

More information

Research Article Inviscid Uniform Shear Flow past a Smooth Concave Body

Research Article Inviscid Uniform Shear Flow past a Smooth Concave Body International Engineering Mathematics Volume 04, Article ID 46593, 7 pages http://x.oi.org/0.55/04/46593 Research Article Invisci Uniform Shear Flow past a Smooth Concave Boy Abullah Mura Department of

More information

Pairwise alignment using shortest path algorithms, Gunnar Klau, November 29, 2005, 11:

Pairwise alignment using shortest path algorithms, Gunnar Klau, November 29, 2005, 11: airwise alignment using shortest path algorithms, Gunnar Klau, November 9,, : 3 3 airwise alignment using shortest path algorithms e will iscuss: it graph Dijkstra s algorithm algorithm (GDU) 3. References

More information

Software Reliability Modeling and Cost Estimation Incorporating Testing-Effort and Efficiency

Software Reliability Modeling and Cost Estimation Incorporating Testing-Effort and Efficiency Software Reliability Moeling an Cost Estimation Incorporating esting-effort an Efficiency Chin-Yu Huang, Jung-Hua Lo, Sy-Yen Kuo, an Michael R. Lyu -+ Department of Electrical Engineering Computer Science

More information

Implicit and Explicit Functions

Implicit and Explicit Functions 60_005.q //0 :5 PM Page SECTION.5 Implicit Differentiation Section.5 EXPLORATION Graphing an Implicit Equation How coul ou use a graphing utilit to sketch the graph of the equation? Here are two possible

More information

Politehnica University of Timisoara Mobile Computing, Sensors Network and Embedded Systems Laboratory. Testing Techniques

Politehnica University of Timisoara Mobile Computing, Sensors Network and Embedded Systems Laboratory. Testing Techniques Politehnica University of Timisoara Mobile Computing, Sensors Network an Embee Systems Laboratory ing Techniques What is testing? ing is the process of emonstrating that errors are not present. The purpose

More information

Chapter 5 Proposed models for reconstituting/ adapting three stereoscopes

Chapter 5 Proposed models for reconstituting/ adapting three stereoscopes Chapter 5 Propose moels for reconstituting/ aapting three stereoscopes - 89 - 5. Propose moels for reconstituting/aapting three stereoscopes This chapter offers three contributions in the Stereoscopy area,

More information

Analytical approximation of transient joint queue-length distributions of a finite capacity queueing network

Analytical approximation of transient joint queue-length distributions of a finite capacity queueing network Analytical approximation of transient joint queue-length istributions of a finite capacity queueing network Gunnar Flötterö Carolina Osorio March 29, 2013 Problem statements This work contributes to the

More information

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 31, NO. 4, APRIL

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 31, NO. 4, APRIL IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 1, NO. 4, APRIL 01 74 Towar Efficient Distribute Algorithms for In-Network Binary Operator Tree Placement in Wireless Sensor Networks Zongqing Lu,

More information

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides Threshol Base Data Aggregation Algorithm To Detect Rainfall Inuce Lanslies Maneesha V. Ramesh P. V. Ushakumari Department of Computer Science Department of Mathematics Amrita School of Engineering Amrita

More information

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE БСУ Международна конференция - 2 THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE Evgeniya Nikolova, Veselina Jecheva Burgas Free University Abstract:

More information

Just-In-Time Software Pipelining

Just-In-Time Software Pipelining Just-In-Time Software Pipelining Hongbo Rong Hyunchul Park Youfeng Wu Cheng Wang Programming Systems Lab Intel Labs, Santa Clara What is software pipelining? A loop optimization exposing instruction-level

More information

APPLYING GENETIC ALGORITHM IN QUERY IMPROVEMENT PROBLEM. Abdelmgeid A. Aly

APPLYING GENETIC ALGORITHM IN QUERY IMPROVEMENT PROBLEM. Abdelmgeid A. Aly International Journal "Information Technologies an Knowlege" Vol. / 2007 309 [Project MINERVAEUROPE] Project MINERVAEUROPE: Ministerial Network for Valorising Activities in igitalisation -

More information

ACE: And/Or-parallel Copying-based Execution of Logic Programs

ACE: And/Or-parallel Copying-based Execution of Logic Programs ACE: An/Or-parallel Copying-base Execution of Logic Programs Gopal GuptaJ Manuel Hermenegilo* Enrico PontelliJ an Vítor Santos Costa' Abstract In this paper we present a novel execution moel for parallel

More information

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways Ben, Jogs, An Wiggles for Railroa Tracks an Vehicle Guie Ways Louis T. Klauer Jr., PhD, PE. Work Soft 833 Galer Dr. Newtown Square, PA 19073 lklauer@wsof.com Preprint, June 4, 00 Copyright 00 by Louis

More information

15-780: MarkovDecisionProcesses

15-780: MarkovDecisionProcesses 15-780: MarkovDecisionProcesses J. Zico Kolter Feburary 29, 2016 1 Outline Introduction Formal definition Value iteration Policy iteration Linear programming for MDPs 2 1988 Judea Pearl publishes Probabilistic

More information

Variable Independence and Resolution Paths for Quantified Boolean Formulas

Variable Independence and Resolution Paths for Quantified Boolean Formulas Variable Inepenence an Resolution Paths for Quantifie Boolean Formulas Allen Van Geler http://www.cse.ucsc.eu/ avg University of California, Santa Cruz Abstract. Variable inepenence in quantifie boolean

More information

Try It. Implicit and Explicit Functions. Video. Exploration A. Differentiating with Respect to x

Try It. Implicit and Explicit Functions. Video. Exploration A. Differentiating with Respect to x SECTION 5 Implicit Differentiation Section 5 Implicit Differentiation Distinguish between functions written in implicit form an eplicit form Use implicit ifferentiation to fin the erivative of a function

More information

Math 131. Implicit Differentiation Larson Section 2.5

Math 131. Implicit Differentiation Larson Section 2.5 Math 131. Implicit Differentiation Larson Section.5 So far we have ealt with ifferentiating explicitly efine functions, that is, we are given the expression efining the function, such as f(x) = 5 x. However,

More information

Algorithm for Intermodal Optimal Multidestination Tour with Dynamic Travel Times

Algorithm for Intermodal Optimal Multidestination Tour with Dynamic Travel Times Algorithm for Intermoal Optimal Multiestination Tour with Dynamic Travel Times Neema Nassir, Alireza Khani, Mark Hickman, an Hyunsoo Noh This paper presents an efficient algorithm that fins the intermoal

More information

Random Clustering for Multiple Sampling Units to Speed Up Run-time Sample Generation

Random Clustering for Multiple Sampling Units to Speed Up Run-time Sample Generation DEIM Forum 2018 I4-4 Abstract Ranom Clustering for Multiple Sampling Units to Spee Up Run-time Sample Generation uzuru OKAJIMA an Koichi MARUAMA NEC Solution Innovators, Lt. 1-18-7 Shinkiba, Koto-ku, Tokyo,

More information

Discrete Markov Image Modeling and Inference on the Quadtree

Discrete Markov Image Modeling and Inference on the Quadtree 390 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 9, NO. 3, MARCH 2000 Discrete Markov Image Moeling an Inference on the Quatree Jean-Marc Laferté, Patrick Pérez, an Fabrice Heitz Abstract Noncasual Markov

More information

Figure 1: Schematic of an SEM [source: ]

Figure 1: Schematic of an SEM [source:   ] EECI Course: -9 May 1 by R. Sanfelice Hybri Control Systems Eelco van Horssen E.P.v.Horssen@tue.nl Project: Scanning Electron Microscopy Introuction In Scanning Electron Microscopy (SEM) a (bunle) beam

More information

Tight Wavelet Frame Decomposition and Its Application in Image Processing

Tight Wavelet Frame Decomposition and Its Application in Image Processing ITB J. Sci. Vol. 40 A, No., 008, 151-165 151 Tight Wavelet Frame Decomposition an Its Application in Image Processing Mahmu Yunus 1, & Henra Gunawan 1 1 Analysis an Geometry Group, FMIPA ITB, Banung Department

More information

WLAN Indoor Positioning Based on Euclidean Distances and Fuzzy Logic

WLAN Indoor Positioning Based on Euclidean Distances and Fuzzy Logic WLAN Inoor Positioning Base on Eucliean Distances an Fuzzy Logic Anreas TEUBER, Bern EISSFELLER Institute of Geoesy an Navigation, University FAF, Munich, Germany, e-mail: (anreas.teuber, bern.eissfeller)@unibw.e

More information

Additional Divide and Conquer Algorithms. Skipping from chapter 4: Quicksort Binary Search Binary Tree Traversal Matrix Multiplication

Additional Divide and Conquer Algorithms. Skipping from chapter 4: Quicksort Binary Search Binary Tree Traversal Matrix Multiplication Aitional Divie an Conquer Algorithms Skipping from chapter 4: Quicksort Binary Search Binary Tree Traversal Matrix Multiplication Divie an Conquer Closest Pair Let s revisit the closest pair problem. Last

More information

Image compression predicated on recurrent iterated function systems

Image compression predicated on recurrent iterated function systems 2n International Conference on Mathematics & Statistics 16-19 June, 2008, Athens, Greece Image compression preicate on recurrent iterate function systems Chol-Hui Yun *, Metzler W. a an Barski M. a * Faculty

More information

A Convex Clustering-based Regularizer for Image Segmentation

A Convex Clustering-based Regularizer for Image Segmentation Vision, Moeling, an Visualization (2015) D. Bommes, T. Ritschel an T. Schultz (Es.) A Convex Clustering-base Regularizer for Image Segmentation Benjamin Hell (TU Braunschweig), Marcus Magnor (TU Braunschweig)

More information

Using Vector and Raster-Based Techniques in Categorical Map Generalization

Using Vector and Raster-Based Techniques in Categorical Map Generalization Thir ICA Workshop on Progress in Automate Map Generalization, Ottawa, 12-14 August 1999 1 Using Vector an Raster-Base Techniques in Categorical Map Generalization Beat Peter an Robert Weibel Department

More information

4.2 Implicit Differentiation

4.2 Implicit Differentiation 6 Chapter 4 More Derivatives 4. Implicit Differentiation What ou will learn about... Implicitl Define Functions Lenses, Tangents, an Normal Lines Derivatives of Higher Orer Rational Powers of Differentiable

More information

filtering LETTER An Improved Neighbor Selection Algorithm in Collaborative Taek-Hun KIM a), Student Member and Sung-Bong YANG b), Nonmember

filtering LETTER An Improved Neighbor Selection Algorithm in Collaborative Taek-Hun KIM a), Student Member and Sung-Bong YANG b), Nonmember 107 IEICE TRANS INF & SYST, VOLE88 D, NO5 MAY 005 LETTER An Improve Neighbor Selection Algorithm in Collaborative Filtering Taek-Hun KIM a), Stuent Member an Sung-Bong YANG b), Nonmember SUMMARY Nowaays,

More information

Socially-optimal ISP-aware P2P Content Distribution via a Primal-Dual Approach

Socially-optimal ISP-aware P2P Content Distribution via a Primal-Dual Approach Socially-optimal ISP-aware P2P Content Distribution via a Primal-Dual Approach Jian Zhao, Chuan Wu The University of Hong Kong {jzhao,cwu}@cs.hku.hk Abstract Peer-to-peer (P2P) technology is popularly

More information

On the Placement of Internet Taps in Wireless Neighborhood Networks

On the Placement of Internet Taps in Wireless Neighborhood Networks 1 On the Placement of Internet Taps in Wireless Neighborhoo Networks Lili Qiu, Ranveer Chanra, Kamal Jain, Mohamma Mahian Abstract Recently there has emerge a novel application of wireless technology that

More information

Animated Surface Pasting

Animated Surface Pasting Animate Surface Pasting Clara Tsang an Stephen Mann Computing Science Department University of Waterloo 200 University Ave W. Waterloo, Ontario Canaa N2L 3G1 e-mail: clftsang@cgl.uwaterloo.ca, smann@cgl.uwaterloo.ca

More information

Interior Permanent Magnet Synchronous Motor (IPMSM) Adaptive Genetic Parameter Estimation

Interior Permanent Magnet Synchronous Motor (IPMSM) Adaptive Genetic Parameter Estimation Interior Permanent Magnet Synchronous Motor (IPMSM) Aaptive Genetic Parameter Estimation Java Rezaie, Mehi Gholami, Reza Firouzi, Tohi Alizaeh, Karim Salashoor Abstract - Interior permanent magnet synchronous

More information

AnyTraffic Labeled Routing

AnyTraffic Labeled Routing AnyTraffic Labele Routing Dimitri Papaimitriou 1, Pero Peroso 2, Davie Careglio 2 1 Alcatel-Lucent Bell, Antwerp, Belgium Email: imitri.papaimitriou@alcatel-lucent.com 2 Universitat Politècnica e Catalunya,

More information

Solutions to Tutorial 1 (Week 8)

Solutions to Tutorial 1 (Week 8) The University of Syney School of Mathematics an Statistics Solutions to Tutorial 1 (Week 8) MATH2069/2969: Discrete Mathematics an Graph Theory Semester 1, 2018 1. In each part, etermine whether the two

More information

Lab work #8. Congestion control

Lab work #8. Congestion control TEORÍA DE REDES DE TELECOMUNICACIONES Grao en Ingeniería Telemática Grao en Ingeniería en Sistemas e Telecomunicación Curso 2015-2016 Lab work #8. Congestion control (1 session) Author: Pablo Pavón Mariño

More information