DJAO: A Communication-Constrained DCOP algorithm that combines features of ADOPT and Action-GDL


Yoonheui Kim and Victor Lesser
University of Massachusetts Amherst, MA 01003, USA
{ykim,lesser}@cs.umass.edu

Copyright (c) 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

In this paper we propose a novel DCOP algorithm, called DJAO, that is able to efficiently find a solution with low communication overhead; this algorithm can be used for optimal and bounded approximate solutions by appropriately setting the error bounds. Our approach builds on the distributed junction trees used in Action-GDL to represent independence relations among variables. We construct an AND/OR search space based on these junction trees. This new type of search space results in higher degrees for each node, consequently yielding a more efficient search graph in distributed settings. DJAO uses a branch-and-bound search algorithm to find solutions distributedly within this search graph. We introduce heuristics to compute the upper and lower bound estimates that the search starts with, which are integral to our approach for reducing communication overhead. We empirically evaluate our approach in various settings.

Introduction

In this paper, we formulate a new algorithm for distributed constraint optimization problems (DCOPs), called DJAO, which works on a distributed junction tree (Paskin, Guestrin, and McFadden 2005). DJAO operates in two phases. In the first phase, heuristic upper and lower bounds for variable value configurations are created using a bottom-up propagation scheme similar in character to Action-GDL (Vinyals, Rodriguez-Aguilar, and Cerquides 2011), except that instead of transmitting values for all configurations, we transmit only the filtered upper and lower bounds of configuration values. In the second phase, the algorithm uses these heuristics to conduct an ADOPT-like (Modi et al. 2005; Yeoh, Felner, and Koenig 2008) search on an AND/OR search graph built from the junction tree, which we call the AND/OR search junction graph, to find a solution with the desired precision. This two-phase strategy reduces overall communication significantly.

AND/OR search tree and context-minimal AND/OR search graph

The AND/OR search space (Marinescu and Dechter 2005) was introduced to exploit the independencies encoded by the graphical model upon which DCOP algorithms such as Action-GDL and Max-Sum (Farinelli et al. 2008) are constructed. An AND/OR search tree is a search space with additive AND nodes, whose subtrees denote disjoint search spaces under different variables, in addition to the OR nodes of traditional search trees, whose subtrees denote disjoint search spaces under different values of a variable. AND nodes decompose the search space in their subtrees under the Generalized Distributive Law framework (Aji and McEliece 2000). This reduces the size of the DCOP search space from O(exp(n)) to O(n exp(m)), where m is the depth of the pseudo-tree (Koller and Friedman 2009) and n is the number of variables. DCOP algorithms such as ADOPT (Modi et al. 2005) and BnB-ADOPT (Yeoh, Felner, and Koenig 2008) can be viewed as distributed search algorithms on this AND/OR search space.

Definition 1 (AND/OR search tree) Given a DCOP instance P, its primal graph G and a pseudo-tree T of G, the associated AND/OR search tree S_T(P) has alternating levels of OR nodes and AND nodes. The OR nodes are labeled X_i and correspond to variables. The AND nodes are labeled ⟨X_i, a⟩ and correspond to value assignments in the domains of the variables. The root of the AND/OR search tree is an OR node, labeled with the root of T.
The children of an OR node X_i are AND nodes labeled with the assignments ⟨X_i, a⟩ that are consistent along the path from the root. The children of an AND node ⟨X_i, a⟩ are OR nodes labeled with the children of variable X_i in T. The path of a node n ∈ S_T, denoted Path_{S_T}(n), is the path from the root of S_T to n, and corresponds to a partial value assignment to all variables along the path. An example of an AND/OR search tree is given in Fig. 1a. Because AND nodes decompose the problem into separate subproblems, variables in different subtrees of an AND node n are considered independently given the value assignment along the path to n. The arcs in S_T are annotated by appropriate labels of the cost functions.

Definition 2 (label) The label l(X_i, ⟨X_i, a⟩) is defined as the sum of all the cost function values for which variable X_i is contained in their scope and whose scope is fully assigned along the path from the root to n.

Definition 3 (value) The value v(n) of a node n ∈ S_T is defined recursively as follows: (i) if n = ⟨X_i, a⟩ is a terminal node, then v(n) = l(X_i, ⟨X_i, a⟩); (ii) if n = ⟨X_i, a⟩ is an internal node, then v(n) = l(X_i, ⟨X_i, a⟩) + Σ_{n' ∈ succ(n)} v(n'); (iii) if n = X_i is an internal node, then v(n) = max_{n' ∈ succ(n)} v(n'), where succ(n) are the children of n in S_T.

In (Dechter and Mateescu 2007), the AND/OR search graph shown in Fig. 1c was introduced to reduce the size of the search tree of Fig. 1a by merging nodes that root identical subtrees. A context-based merge operation is defined as either (i) merging two OR nodes that share the same variable assignments on those of their ancestors that have connections in G to these nodes or their descendants, or (ii) merging two AND nodes that share the assignments on these nodes and on those of their ancestors that have connections in G to these nodes' descendants.

Example For the primal graph G in Fig. 1b and the search tree in Fig. 1a for G, the AND nodes for D that share the assignment of C can be merged, as D is connected to C in G. In contrast, the AND nodes for C cannot be merged, as the assignments for A and B should also match. Nodes at the lowest level can be merged if they share the same assignments, as these nodes do not have any descendants.

Figure 1: (a) an AND/OR search tree, (b) a primal graph, (c) an AND/OR search graph, (d) a junction tree.

Definition 4 (context-minimal AND/OR graph) The AND/OR search graph of G that is closed under the context-based merge operator is called a context-minimal AND/OR search graph.

Junction Tree and DCOP

A distributed constraint optimization problem (DCOP) instance P = ⟨A, X, D, F⟩ is formally defined by the following parameters: a set of agents A; a set of variables X = {X_1, ..., X_r}, where each variable has a finite domain in D (of maximum size N) of possible values that it can be assigned; and a set of constraint functions F = {F_1, ..., F_k}, where each constraint function F_j : X_j → R takes as input any setting of the variables X_j ⊆ X and provides a real-valued utility. In a DCOP, we assume that each variable x_i is owned by an agent a_i ∈ A and that an agent only knows about the constraint functions in which it is involved. A DCOP can be represented using a constraint network, in which there is a node corresponding to each variable x_i and an edge (hyper-edge) for each constraint F_j that connects all variables involved in F_j. The objective in a DCOP is to find the complete variable configuration x that maximizes Σ_{F_j ∈ F} F_j(x_j).

The dual constraint graph (Koller and Friedman 2009) is a transformation of a non-binary network into a special type of binary network. It contains cliques (or c-variables), whose domains range over all possible value combinations permitted by the corresponding constraint functions, and shared variables in any two adjacent cliques must take the same values. A junction tree (or join tree) (Cowell et al. 2007) T is a subgraph of the dual graph that is a tree and satisfies the condition that the cliques associated with a variable x form a connected subset of T. A junction tree is represented as a tuple ⟨X, C, S, F⟩, where X is a set of variables; C is a set of cliques, each clique C_i being a subset of variables C_i ⊆ X; S is a set of separators, where each separator is an arc between two adjacent cliques containing their intersection; and F is a set of potentials, where each potential in F is assigned to a clique in C. A distributed junction tree (Paskin, Guestrin, and McFadden 2005) decomposes a DCOP into a series of subproblems, some of which can be solved in parallel. The subproblem represented by a clique C_i can be solved independently given the local constraint functions f_i ∈ F and the values from its neighbors on the separators s_i ∈ S. The separators S specify which values will be used by the neighboring cliques in order to compute the solutions of their local subproblems.
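To make these definitions concrete, the following minimal sketch shows one possible way to represent a clique of a distributed junction tree together with its assigned constraint functions. The class and field names are illustrative assumptions, not the paper's implementation, and the numeric values are arbitrary toy utilities.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A constraint maps a full assignment of its scope to a real-valued utility.
@dataclass
class Constraint:
    scope: Tuple[str, ...]                   # e.g. ("A", "B")
    table: Dict[Tuple[int, ...], float]      # utility for each value combination

@dataclass
class Clique:
    name: str
    variables: Tuple[str, ...]               # C_i, a subset of X
    potentials: List[Constraint]             # constraint functions assigned to this clique
    children: List["Clique"] = field(default_factory=list)
    separator: Tuple[str, ...] = ()          # S_i: intersection with the parent clique

def clique_value(clique: Clique, assignment: Dict[str, int]) -> float:
    """Sum of the clique's local constraint values under a full assignment of C_i."""
    total = 0.0
    for f in clique.potentials:
        key = tuple(assignment[v] for v in f.scope)
        total += f.table[key]
    return total

# Example: two binary variables with a single constraint, grouped into one clique.
f_ab = Constraint(("A", "B"), {(0, 0): 4, (0, 1): 5, (1, 0): 1, (1, 1): 0})
root = Clique("AB", ("A", "B"), [f_ab])
print(clique_value(root, {"A": 0, "B": 1}))   # -> 5.0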
DJAO(k)

First Phase: Heuristics Generation

Preprocessing techniques that supply the search with heuristic values have successfully been used to enhance both centralized and decentralized search methods (Ali, Koenig, and Tambe 2005; Wallace 1996). In this section we describe a scheme for generating the initial heuristic estimates h_U and h_L used in DJAO(k), based on a new function filtering technique that we call Soft Filtering. Pujol-Gonzalez et al. (2011) and Brito and Meseguer (2010) used the Function Filtering technique (Brito and Meseguer 2010) on DCOPs to prune variable configurations of local nodes that do not yield the optimal solution. We use the soft function filtering technique to generate heuristics that keep the tuples that could potentially yield the optimal solution while summarizing the rest with upper and lower bounds. Unlike the heuristics in (Ali, Koenig, and Tambe 2005; Wallace 1996), which are generated by solving problems of lower complexity than the original, DJAO solves the original problem and focuses on reducing communication by filtering tuples that are unlikely to be part of the optimal solution.

The Soft Filtering technique used in DJAO summarizes constraint functions to reduce the communication required to transmit them. The key difference from Function Filtering is that Soft Filtering provides summarized lower and upper bounds for the filtered configurations. Let the variable configurations S in message m be divided into two sets, the filtered configurations S_F and the non-filtered configurations S_NF. Let U be the upper bounds of the values of the variable configurations and L the lower bounds. The values U_m and L_m in the message are filtered as follows:

U_m(v) = U(v) if v ∈ S_NF,  and  U_m(v) = max_{v' ∈ S_F} U(v') if v ∈ S_F;

L_m is defined similarly, with max replaced by min. The filtered configurations are summarized as a single filtered tuple with one upper and one lower bound, thereby reducing the number of items in each message from 2|S| to 2|S_NF| + 2. The optimal strategy is guaranteed to remain in the search space, as no solution is completely dropped. This summarization builds the basis for the next phase, in which an ADOPT-like search finds a solution within the desired accuracy. Among the many ways to select which items to filter, we select the items in the bottom (100d)% of the function range.
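As a concrete illustration of this filtering rule, the sketch below (a non-authoritative reading, with invented message contents) splits a message's configurations by a threshold at the bottom (100d)% of the function range, always keeps the best configuration, and summarizes the filtered set by a single pair of bounds.

from typing import Dict, Optional, Tuple

Assignment = Tuple[int, ...]

def soft_filter(U: Dict[Assignment, float],
                L: Dict[Assignment, float],
                d: float):
    """Return the kept bounds plus one (upper, lower) summary for the
    filtered configurations, mirroring U_m and L_m above."""
    lo, hi = min(U.values()), max(U.values())
    threshold = lo + d * (hi - lo)           # bottom (100*d)% of the function range
    best = max(U, key=U.get)                 # never filter the best configuration
    S_F = [v for v in U if U[v] <= threshold and v != best]
    S_NF = [v for v in U if v not in S_F]
    kept_U = {v: U[v] for v in S_NF}
    kept_L = {v: L[v] for v in S_NF}
    summary: Optional[Tuple[float, float]] = None
    if S_F:
        summary = (max(U[v] for v in S_F), min(L[v] for v in S_F))
    return kept_U, kept_L, summary

# Toy message over one binary separator variable (values are made up):
U = {(0,): 8.0, (1,): 5.0}
L = {(0,): 8.0, (1,): 4.0}
print(soft_filter(U, L, d=0.8))   # keeps (0,), summarizes (1,) as (5.0, 4.0)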
Second Phase: Search on the AND/OR junction graph

In the pseudo-tree based search graph, functions are evaluated only when their scope is fully assigned along the path. The search backtracks to evaluate different variable assignments, which at each level range over the domain of a single variable. This complete decentralization of value selection in the distributed setting results in the exponential number of messages of ADOPT. Instead, we introduce an AND/OR search graph on a junction tree, in which each level is associated with a clique of the DCOP's junction tree (see Fig. 2), consequently yielding a more compact search graph with a smaller number of nodes. This search graph is a context-minimal AND/OR search graph upon construction.

Figure 2: an AND/OR search graph based on a junction tree.

Definition 5 (AND/OR search junction graph) Given a DCOP instance P and its junction tree T, the associated AND/OR search junction graph S_T(P) has alternating levels of OR nodes and AND nodes. The OR nodes are labeled ⟨C_i : S_i, a, N_i⟩, where a is an assignment to the variables in the separator S_i, whose values are propagated from the ancestors, and N_i are the newly appearing variables in clique C_i. These nodes correspond to the cliques with a partial assignment. The AND nodes are labeled ⟨S_ij, b⟩ and correspond to value assignments in the domain of the separator between clique C_i and its child C_j. The root of the AND/OR search junction graph is an OR node, labeled with the root of T. The children of an AND node ⟨S_ij, b⟩ are OR nodes labeled ⟨C_j : S_j, b, N_j⟩ with the same assignment on the variables in the separators S_ij and S_j.

Example Consider the graphical model in Fig. 1b describing a graph coloring problem over the domains {0, 1}. An AND/OR search graph based on a possible pseudo-tree is given in Fig. 1c, and the AND/OR search junction graph for the junction tree of Fig. 1d is given in Fig. 2. Observe that the evaluation of the function label l({A, B}, a) occurs at the expansion of nodes at level 3 in Fig. 1c instead of at level 1 in Fig. 2. In Fig. 1c, the search is unguided until the third expansion. This also elongates the backtrack path, leading to an increase in the number of nodes visited to evaluate a single function.

Theorem 1 Given a DCOP instance P and a junction tree T, its AND/OR search junction graph is sound and complete: it contains all and only the solutions of P.

The solution space of the AND/OR search junction graph is identical to that of the junction tree, thus it contains all and only solutions. Consequently, any search algorithm that traverses the AND/OR search junction graph in a depth-first manner is guaranteed to have a time complexity equal to that of Action-GDL (Vinyals, Rodriguez-Aguilar, and Cerquides 2011) on the same junction tree, which is exponential in the tree width.

Theorem 2 The AND/OR search junction graph has exactly the same size as the total complexity of the junction tree, since no subtree is redundant. The depth of the graph does not exceed the number of agents.

The search result for its subtree is stored at each node; therefore, no identical subtree is explored twice, and a value assignment of a cost function is never repeated.

The arcs in S_T are annotated by appropriate labels of the cost functions. The nodes in S_T are associated with a value accumulating the result of the computation of the subtree below. The labels on the arcs are determined by values from neighboring nodes, unlike those defined in Def. 3.

Definition 6 (label) The label l(⟨C_i : S_i, a, N_i⟩, ⟨S_ij, b⟩) of the arc from the OR node ⟨C_i : S_i, a, N_i⟩ to the AND node ⟨S_ij, b⟩ is defined as the sum of the cost function values contained in clique C_i whose scope is fully assigned by the values from the parent node and its child node.

The value v(n) of a node n ∈ S_T(P) is computed in the same way as in Def. 3. Likewise, the value of each node can be recursively computed from the leaves to the root. We can show the following:

Proposition 1 Given an AND/OR search junction graph S_T(P) of a DCOP instance, the value function v(n) is the maximum-cost solution to the subproblem rooted at n, subject to the current variable instantiation along the path from the root to n. If n is the root of S_T, then v(n) is the maximum-cost solution to P.

We prove optimality by verifying that the values of the nodes are identical to the values produced during the execution of Action-GDL. The value of an AND node is identical to the value of the corresponding assignments in the messages from the corresponding clique of Action-GDL, given the context along the search path to these nodes. The valuation of OR nodes is the combination of the local utility functions and the values of their child nodes, and corresponds to the maximum achievable value of the variable assignments given the variable assignments along the search path.
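For intuition, the recursion behind Definitions 3 and 6 can be written in a few lines. The node representation below is an illustrative assumption (AND nodes carry the arc label from their parent); it is a sketch, not the paper's data structure.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                            # "AND" (separator assignment) or "OR" (clique node)
    label: float = 0.0                   # arc label from the parent, used at AND nodes
    children: List["Node"] = field(default_factory=list)
    leaf_value: Optional[float] = None   # set for terminal nodes

def value(n: Node) -> float:
    """v(n): AND nodes add the arc label and the values of all children
    (problem decomposition); OR nodes take the best child (value choice)."""
    if n.leaf_value is not None:
        return n.leaf_value
    if n.kind == "AND":
        return n.label + sum(value(c) for c in n.children)
    return max(value(c) for c in n.children)

# Tiny hand-built graph with invented labels:
leaf1, leaf2 = Node("OR", leaf_value=2.0), Node("OR", leaf_value=3.0)
and1 = Node("AND", label=1.0, children=[leaf1])
and2 = Node("AND", label=0.0, children=[leaf2])
root = Node("OR", children=[and1, and2])
print(value(root))   # max(1+2, 0+3) = 3.0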

Proposition 2 The AND/OR search junction graph S_T(P) is context-minimal upon construction.

The separators S_i and S_ij contain all and only the variables that build a context for each node, and a single node is created for each value assignment of the separator; thus the graph is context-minimal.

Search in distributed settings

Each agent in the system distributedly conducts its share of the search over the nodes of S_T(P) that it owns. Agents are responsible for the valuation of the nodes they own and for path determination.

Definition 7 (Agent ownership) Each node in S_T(P) is owned by an agent. Agent A_i owns all nodes associated with its own clique C_i: the OR nodes ⟨C_i : S_i, a, N_i⟩ and the AND child nodes of these are assigned to A_i.

For example, suppose the three cliques in Fig. 1d are owned by agents A_1, A_2 and A_3 respectively. The OR nodes of A_3's clique, one for each value assignment of its separator, together with their AND child nodes, are assigned to A_3. A search step between nodes belonging to different agents incurs communication. When an agent A_j chooses to expand a child node ⟨C_i : S_i, a, N_i⟩, A_j transmits the partial assignment a to the agent A_i who owns the child nodes. Updated function values are sent back to A_j when the search backs up. Search paths that incur communication are displayed as dotted lines in Fig. 2.

DJAO on the AND/OR search junction graph

If each node n ∈ S_T(P) is assigned a heuristic lower-bound estimate L(n) and a heuristic upper-bound estimate U(n), then we can calculate lower and upper bound estimates of assignments, and the dominated search space can be pruned.

Bounds on Partial Solutions

Similarly to (Marinescu and Dechter 2005), a partially expanded search graph, denoted PSG, contains the root node and has a frontier containing all the nodes that were generated but not yet expanded. Each expansion of a leaf node of the PSG updates the lower and upper bound estimates on the AND/OR search graph. An active partial subtree APT(n), rooted at a node n and ending at a tip node t, contains the path between n and t and all children of the nodes on that path. A dynamic heuristic function of a node n relative to the current PSG, given the initial heuristic functions h_U and h_L, can be computed as follows.

Definition 8 (Dynamic Lower and Upper Bound) Given an active partial tree APT(n), the dynamic heuristic estimates of the upper and lower bound functions, U(n) and L(n), are defined recursively as follows: (i) if APT(n) consists of the single node n and n has been evaluated, then U(n) = v(n) = L(n); otherwise, if n is a single unevaluated node in APT(n), then U(n) = h_U(n) and L(n) = h_L(n); (ii) if n = ⟨S_ij, b⟩ is an AND node having children m_1, ..., m_k and label = l(⟨C_i : S_i, a, N_i⟩, ⟨S_ij, b⟩), then U(n) = min(h_U(n), label + Σ_{i=1..k} U(m_i)) and L(n) = max(h_L(n), label + Σ_{i=1..k} L(m_i)); (iii) if n = ⟨C_i : S_i, a, N_i⟩ is an OR node having an AND child m, then U(n) = min(h_U(n), U(m)) and L(n) = max(h_L(n), L(m)).
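Definition 8 can be transcribed almost directly. The sketch below assumes each frontier node carries its initial heuristic bounds, that evaluated leaves expose their exact value, and that an OR node's bound follows its best child; it is a sketch under those assumptions, not the authors' code.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SNode:
    kind: str                       # "AND" or "OR"
    hU: float                       # initial heuristic upper bound h_U(n)
    hL: float                       # initial heuristic lower bound h_L(n)
    label: float = 0.0              # arc label (used at AND nodes)
    children: List["SNode"] = field(default_factory=list)   # children inside the APT
    exact: Optional[float] = None   # v(n) if the node has been fully evaluated

def dyn_bounds(n: SNode):
    """Return (U(n), L(n)) for the active partial tree rooted at n (Definition 8)."""
    if n.exact is not None:                       # (i) evaluated node: U = v = L
        return n.exact, n.exact
    if not n.children:                            # (i) unexpanded frontier node
        return n.hU, n.hL
    Us, Ls = zip(*(dyn_bounds(c) for c in n.children))
    if n.kind == "AND":                           # (ii) label plus sum over children
        return min(n.hU, n.label + sum(Us)), max(n.hL, n.label + sum(Ls))
    return min(n.hU, max(Us)), max(n.hL, max(Ls))  # (iii) OR node follows its best child

# Toy frontier: an AND node with one unexpanded OR child (invented bounds).
frontier = SNode("AND", hU=9.0, hL=2.0, label=1.0,
                 children=[SNode("OR", hU=5.0, hL=1.0)])
print(dyn_bounds(frontier))   # (min(9, 1+5), max(2, 1+1)) = (6.0, 2.0)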
Theorem 3 L(n) is a lower bound on the optimal solution to the subproblem rooted at n, namely L(n) ≤ v(n), and also, by definition, L(n) ≥ h_L(n). Similarly, U(n) ≥ v(n) and U(n) ≤ h_U(n).

Proof: We prove this by induction, assuming the correctness of the heuristics, i.e., v(n) ≤ h_U(n) and v(n) ≥ h_L(n). Basis: at the leaf nodes of the AND/OR junction search graph it is trivial that v(n) = U(n) = L(n), as v(n) is computed using the local constraints and does not involve any heuristic function. Induction step: at any AND node having children m_1, ..., m_k, v(n) = label + Σ_i v(m_i) ≤ label + Σ_i U(m_i), since v(m_i) ≤ U(m_i); hence v(n) ≤ min(h_U(n), label + Σ_i U(m_i)) = U(n), using v(n) ≤ h_U(n). At any OR node having the best child m = argmax_i v(m_i), v(n) = v(m) ≤ U(m), so v(n) ≤ min(h_U(n), U(m)) = U(n). L(n) ≤ v(n) can be proved similarly. Therefore, L(n) ≤ v(n) ≤ U(n). Moreover, U(n) and L(n) provide tighter bounds than the initial heuristic functions.

Proposition 3 (Pruning rule) For any node n and its sibling m, if U(n) < L(m) or U(n) = L(n), then the subtree below n can be pruned.

DJAO(k)

We now set up a DJAO search on S_T(P), whose nodes are assigned to agents. Starting from the root agent, and given the initial heuristic upper and lower bound functions h_U and h_L, the objective is to find a solution that satisfies the termination condition while pruning dominated candidate solutions. DJAO agents use three types of messages: VALUE, COST, and TERMINATE. At the start, the root agent expands the nodes below its OR node, selects the best branch in the subtree, and sends VALUE messages containing the variable values on the chosen branch to its child nodes. Upon receipt of a VALUE message, an agent evaluates whether the back-up condition is satisfied for the value assignment b given in the message. If the back-up condition is satisfied, the agent backs up with updated values by sending a COST message to its parent. Otherwise, it expands the nodes compatible with b, selects the best branch, and sends a VALUE message to that child node.
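A sketch of the three message payloads and of the Proposition 3 test, using assumed field names (the paper does not fix a concrete wire format):

from dataclasses import dataclass
from typing import Dict

@dataclass
class ValueMsg:        # parent -> child: chosen branch and allotted slack k
    branch: Dict[str, int]     # partial assignment b on the chosen AND node
    k: float

@dataclass
class CostMsg:         # child -> parent: updated bounds for the expanded node
    node_id: str
    upper: float
    lower: float

@dataclass
class TerminateMsg:    # root -> everyone, propagated down the tree
    pass

def can_prune(U_n: float, L_n: float, best_sibling_L: float) -> bool:
    """Proposition 3: prune the subtree below n if a sibling's lower bound
    already dominates n's upper bound, or if n's own bounds have met."""
    return U_n < best_sibling_L or U_n == L_n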

Algorithm 1: DJAO(k) (1)

procedure Init()
    wait ← 0                      // number of awaited messages
    k_i, k_c ← 0                  // own and child's k value
    m_b ← nil                     // node in par(A_i) to backtrack to
    m*, m** ← nil                 // nodes with maximum and second-maximum U
    n_c ← nil                     // node context-compatible with m_b

procedure RootRun()
    Init(); UpdateM()
    if CheckTermination() then
        Send(TERMINATE) to each c ∈ succ(A_i); terminate
    end
    k_i ← U(m*) − max(U(m**) − k, L(m*))
    wait ← |succ(A_i)|
    Send(VALUE, m*, k_i)
    loop forever
        while message queue is not empty do
            pop msg off message queue; When Received(msg)
            if CheckTermination() then
                Send(TERMINATE) to each c ∈ succ(A_i); terminate
            else if wait == 0 then
                UpdateM()
                k_i ← U(m*) − max(U(m**) − k, L(m*))
                Send(VALUE, m*, k_i) to c ∈ succ(A_i)
        end

procedure Run()
    Init()
    loop forever
        while message queue is not empty do
            pop msg off message queue; When Received(msg)
            if DecideBackup() and wait == 0 then
                Send(COST, m_b, U(m_b), L(m_b)) to par(A_i)
            else if wait == 0 then
                UpdateM()
                Send(VALUE, m*, k_c) to c ∈ succ(A_i)
        end

Algorithm 2: DJAO(k) (2)

procedure UpdateM()
    m*  ← argmax_{m ∈ succ(n_r)} U(m)
    m** ← argmax_{m ∈ succ(n_r) \ m*} U(m)

procedure When Received(COST, m, v_U, v_L)
    wait ← wait − 1; U(m) ← v_U; L(m) ← v_L
    U'(n) ← U(n), where n = par(m)
    U(n) ← max(U(n), U(m)); L(n) ← max(L(n), L(m))

procedure When Received(VALUE, m, k)
    k_i ← k; m_b ← m; wait ← |succ(A_i)|
    n_c ← context_compatible(m)

procedure DecideBackup()
    if U(n_c) ≤ U'(n_c) − k_i then return true
    else
        k_c ← (k_i − (U(n) − U'(n))) / |succ(A_i)|
        return false
    end

procedure When Received(TERMINATE)
    Send(TERMINATE) to each c ∈ succ(A_i); terminate

procedure CheckTermination()
    if U(n_r) == L(n_r) then return true
    else if U(m) ≤ L(m*) for all m ∈ succ(n_r) \ m* then return true
    return false

Upon receipt of the COST messages containing the updated lower and upper bounds of the chosen expanded nodes from all child nodes, an agent recalculates the lower and upper bounds of its node. It then re-evaluates the back-up condition for the received VALUE message. If the back-up condition is not satisfied, the question of which branch to select is re-examined and the agent sends another VALUE message to its children. These steps are repeated until the termination condition of Prop. 4 holds for the root agent. It then sends a TERMINATE message to each of its children and terminates. Upon receipt of a TERMINATE message, each agent does the same.

Proposition 4 Given an OR node n and AND nodes m_1, ..., m_k at the root agent, DJAO(k) terminates if U and L satisfy the condition U(n) = L(n), or there exists i such that U(m_j) ≤ L(m_i) for all j ≠ i.

Each agent stores the lower and upper bounds of the expanded nodes and updates these values upon each COST message arrival. The memory requirement for each agent does not exceed O(n^d), where n is the size of the variable domain and d is the induced width of the junction tree.

Among the many different search strategies that determine the back-up condition for solving COPs and DCOPs, best-first search and depth-first branch-and-bound search have been the most studied (Marinescu and Dechter 2005; 2007; Modi et al. 2005; Yeoh, Felner, and Koenig 2008). Best-first search always follows the best item found so far; in the distributed setting, whenever there is an update, agents propagate it to all ancestors whose best items may change. On the other hand, depth-first search retains the current path until it is certainly dominated or the true value v(n) of the node is found.
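Both strategies plug into the same predicates of Algorithms 1 and 2. The following rough Python transcription of those predicates is a sketch under assumed data structures (dictionaries of bounds keyed by node ids), not the authors' implementation:

from typing import Dict, List, Optional, Tuple

def update_m(U: Dict[str, float], children: List[str]) -> Tuple[str, Optional[str]]:
    """UpdateM: best and second-best children of the root's node by upper bound."""
    ranked = sorted(children, key=lambda m: U[m], reverse=True)
    return ranked[0], (ranked[1] if len(ranked) > 1 else None)

def check_termination(U_root: float, L_root: float,
                      U: Dict[str, float], L: Dict[str, float],
                      children: List[str], m_best: str) -> bool:
    """Proposition 4: stop when the root's bounds meet, or when the best
    child's lower bound dominates every other child's upper bound."""
    if U_root == L_root:
        return True
    return all(U[m] <= L[m_best] for m in children if m != m_best)

def root_slack(U, L, m_best, m_second, k: float) -> float:
    """k_i attached to a VALUE message: how far the chosen branch's upper
    bound may fall before the root should reconsider its choice."""
    floor = L[m_best]
    if m_second is not None:
        floor = max(U[m_second] - k, floor)
    return U[m_best] - floor

def decide_backup(U_now: float, U_recorded: float, k_i: float) -> bool:
    """Child-side backup test: report upward once the expanded node's upper
    bound has fallen at least k_i below the value recorded on receipt."""
    return U_now <= U_recorded - k_i

# Example with invented bounds:
U = {"b0": 7.0, "b1": 4.0}; L = {"b0": 5.0, "b1": 3.0}
m1, m2 = update_m(U, ["b0", "b1"])
print(m1, m2, root_slack(U, L, m1, m2, k=1.0))   # b0 b1 7 - max(4-1, 5) = 2.0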
In (Gutierrez, Meseguer, and Yeoh 2011), ADOPT(k) provides a trade-off between these two extremes: the search keeps the current path until the distance between the best solution on the current path and the best solution found so far becomes greater than a given constant k. Similarly, we developed DJAO(k), which subsumes both the depth-first and the best-first search strategy on the AND/OR search junction graph. It performs depth-first search when k = ∞, best-first search when k = ε, and a hybrid when ε < k < ∞, where k bounds the distance between the best solution U(m*) and the next best solution U(m**) found so far. Unlike ADOPT(k), in which each agent determines k using the best solutions of the subproblems provided by its node's subtree, each agent in DJAO instead uses a measurement that considers a global perspective of the entire problem on the current best solution. The search backtracks when U(m*) ≤ max(U(m**) − k, L(m*)), which occurs as soon as the best solution is dominated by the second best under the best-first strategy (k = ε), and when the true value of m* is found (thus U(m*) = L(m*)) under the depth-first strategy (k = ∞).

Algorithms 1 and 2 show the pseudocode of DJAO(k), where a_i is a generic agent, par(a_i) its parent agent, succ(a_i) its set of child agents, par(n) the parent node of node n in the search graph, succ(n) the set of node n's child nodes, and n_r the node of the root agent. The root agent runs RootRun(), which contains the search initiation, whereas all other agents run Run(). The pseudocode uses a predicate context_compatible(m) to select a node whose variable value assignment matches that of node m.

Example of DJAO

On the junction tree in Fig. 1d, let there be three agents, A_1, A_2 and A_3, one for each of its three cliques. The constraint function f(A, B) is assigned to A_1's clique, f(A, C) and f(B, C) to A_2's clique, and f(C, D) to A_3's clique, with cost tables over the binary domains:

f(A, B): 4 5 ...    f(A, C): 2 3 ...    f(B, C): 4 2 ...    f(C, D): ...

Phase 1: The first phase starts with agent A_2 computing its local potential b_1 by merging f(A, C) and f(B, C) in its clique; A_3 does the same for its own clique.

b_1(A, B, C): 6 5 ...

Each agent that does not own the root node generates a filtered message once it has received the messages from all of its child agents (the agents that own the child nodes). The messages from A_2 and A_3 to A_1 in Action-GDL would be as follows.

M_{A3->A1}: ... 4 ...        M_{A2->A1}: ... 6 ...

Filtered messages are created and sent with the filtering rate l = 0.8; FS denotes the filtered set of variable configurations. For the message M_{A2->A1}, the function range is 8 − 4 = 4, so the items with an upper bound no greater than 4 + 4 · 0.8 are filtered, except for the single best configuration. The messages in DJAO are:

M_{A3->A1}: best configuration h_L = 4, h_U = 4; FS: ...        M_{A2->A1}: best configuration h_L = 8, h_U = 8; FS: h_L = 4, h_U = 6

Once the messages are received, A_1 calculates its potential b as the total sum of the received messages and its local functions:

b(A, B): best configuration L = 5, U = 7; ...

Phase 2: A_1 checks the termination condition on the candidate solution with the highest upper bound. Since that candidate's lower bound does not yet dominate its upper bound, the search starts. A_1 computes the distance k_0 between the maximum and the second-maximum upper bounds. The search backtracks when the upper bound decreases by min(k, k_0) or more. The gap between the upper and lower bounds originates only from M_{A2->A1}; therefore, A_1 sends a VALUE message carrying its chosen configuration of (A, B) and min(k, k_0) to A_2. A_2 receives this VALUE message and, as it is a leaf node, it immediately replies with a COST message reporting L = 5 and U = 5 for that configuration. Upon receipt of the COST message, the root agent updates its potential:

b(A, B): best configuration L = 6, U = 7; ...

Since the lower bound of the chosen configuration now dominates the upper bounds of all other configurations, the termination condition is satisfied and the search terminates.
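The role of the parameter k in this procedure can be isolated in a few lines; the test below is the backtrack condition stated earlier, and the two extremes are illustrated with made-up bound values.

import math

def should_backtrack(U_best: float, U_second: float, L_best: float, k: float) -> bool:
    """Backtrack when U(m*) <= max(U(m**) - k, L(m*))."""
    return U_best <= max(U_second - k, L_best)

# Depth-first extreme: k = infinity, so the test only fires when U(m*) = L(m*),
# i.e. when the true value of the current branch is known.
print(should_backtrack(U_best=7.0, U_second=6.0, L_best=5.0, k=math.inf))   # False
print(should_backtrack(U_best=5.0, U_second=6.0, L_best=5.0, k=math.inf))   # True

# Best-first extreme: k close to 0, so the test fires as soon as the current
# branch is no longer strictly the best.
eps = 1e-9
print(should_backtrack(U_best=5.5, U_second=6.0, L_best=4.0, k=eps))        # True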
Approximate DJAO(k)

An approximate version of the algorithm can be obtained by relaxing the constraint on the gap between the upper and lower bounds, similarly to search-based DCOP algorithms (Modi et al. 2005; Yeoh, Sun, and Koenig 2009). Approximate DJAO with an error bound e terminates when the solution contains an error of no more than e, i.e., the value of the found solution n is no worse than v(n*) − e, where n* is the optimal solution. The corresponding termination condition is L(n) ≥ U(n) − e, or there exists i such that U(m_j) ≤ L(m_i) + e for all j ≠ i. Likewise, the search backtracks when U(m*) ≤ max(U(m**) − k, L(m*) + e).
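These relaxed tests are one-line changes to the exact ones; a sketch, with e denoting the allowed error:

def approx_terminated(U_root: float, L_root: float,
                      U: dict, L: dict, children: list, m_best, e: float) -> bool:
    """Relaxed Proposition 4: stop once the root's bound gap is within e, or the
    best child's lower bound plus e dominates every other child's upper bound."""
    if U_root - L_root <= e:
        return True
    return all(U[m] <= L[m_best] + e for m in children if m != m_best)

def approx_should_backtrack(U_best: float, U_second: float,
                            L_best: float, k: float, e: float) -> bool:
    """Relaxed backtrack test: U(m*) <= max(U(m**) - k, L(m*) + e)."""
    return U_best <= max(U_second - k, L_best + e)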

Empirical Evaluation

In this section we evaluate the performance of DJAO search. For each experiment, we report the communication cost, the NCCCs (non-concurrent constraint checks), and, for approximate solutions, the solution quality with respect to the optimal solution. We used a DJAO that sends VALUE messages to at most 25 nodes when there are ties. We evaluate and compare our approach with Action-GDL and with ADOPT(k), choosing a k that was reasonable among the values we tried. Communication costs are measured as the number of bytes sent during execution and as the message count of both UTIL and VALUE messages (a single variable value is 4 bytes, and a cost is 8 bytes).

Table 1: Performance of Optimal DJAO(k) on Random Binary DCOP Instances. For each number of constraints c, the table reports Total Bytes, NCCCs and message counts for DJAO(k = ε), DJAO with three larger values of k, Action-GDL and ADOPT(k).

Figure 3: Performance of Approximate DJAO(k): (a) communication savings with respect to GDL, (b) computation cost (constraint checks with respect to GDL), (c) solution quality, and (d) message count with respect to GDL, for filtering rates l = 0.8, 0.7 and 0.6, plotted against accuracy loss.

Table 2: Sensor Network Instances. For each of four instances, the table reports Total Bytes, NCCCs and message counts for DJAO(k = ε), DJAO with larger k, Action-GDL and ADOPT(k).

For the approximate results, we show the true utility of the found solution, which is often much higher than the estimated lower bound. We experimented on random binary DCOP instances whose function costs are randomly generated over a fixed range. We varied the number of constraint functions c over 20, 25 and 30 and averaged our results over the instances generated for each value of c. The total amount of communication is largely determined by the structure of the junction tree; thus, five junction trees were constructed for each problem instance. The initial heuristics were generated using soft filtering where the bottom 70% is filtered. DJAO with k = ε (i.e., conducting best-first search) consistently performed well in these experiments. Tab. 2 shows the results on sensor network instances from a public repository (Yin 2008), with a heuristic that filters 90% (l = 0.9). Tabs. 1 and 2 show that DJAO requires less communication than both ADOPT(k)² and Action-GDL. Compared to the random DCOP instances, the function ranges in the sensor network instances are wider, and a large set of clearly dominated variable configurations results in significant communication savings for DJAO. DJAO with high k values performed better in terms of communication on these lower-connectivity graphs than on the random DCOP instances. Lastly, in an experiment on the random binary DCOP instances, we measured each method's behaviour as the solution quality guarantee changes, evaluating instances at quality-loss settings up to 80% with heuristics that filter 80% (l = 0.8) and 70% (l = 0.7). The results in Fig. 3 show that DJAO gains significant savings in communication as the error bound increases. With the 80% heuristic, it transmits many times less information than Action-GDL at the largest loss setting, while using more computation (NCCCs) and only a small number of messages per agent.

Conclusions

We addressed the problem of solving DCOPs exactly and with precise approximation bounds by developing a new distributed algorithm called DJAO. There are three novel ideas in DJAO. First, it uses an AND/OR junction graph representation, which builds a basis for efficient search in distributed settings. The second is a two-phase search strategy that combines characteristics of ADOPT and Action-GDL.
The third is a soft filtering technique that significantly reduces communication without losing any accuracy.

² Results from (Gutierrez, Meseguer, and Yeoh 2011).

References

Aji, S., and McEliece, R. 2000. The generalized distributive law. IEEE Transactions on Information Theory 46(2).

Ali, S.; Koenig, S.; and Tambe, M. 2005. Preprocessing techniques for accelerating the DCOP algorithm ADOPT. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '05. New York, NY, USA: ACM.

Brito, I., and Meseguer, P. 2010. Improving DPOP with function filtering. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '10. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

Cowell, R. G.; Dawid, A. P.; Lauritzen, S. L.; and Spiegelhalter, D. J. 2007. Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks. Springer Publishing Company, Incorporated, 1st edition.

Dechter, R., and Mateescu, R. 2007. AND/OR search spaces for graphical models. Artif. Intell. 171(2-3):73-106.

Farinelli, A.; Rogers, A.; Petcu, A.; and Jennings, N. R. 2008. Decentralised coordination of low-power embedded devices using the max-sum algorithm. In AAMAS '08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

Gutierrez, P.; Meseguer, P.; and Yeoh, W. 2011. Generalizing ADOPT and BnB-ADOPT. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI '11. AAAI Press.

Koller, D., and Friedman, N. 2009. Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press.

Marinescu, R., and Dechter, R. 2005. AND/OR branch-and-bound for graphical models. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, IJCAI '05. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Marinescu, R., and Dechter, R. 2007. Best-first AND/OR search for graphical models. In Proceedings of the 22nd National Conference on Artificial Intelligence, AAAI '07. AAAI Press.

Modi, P. J.; Shen, W.-M.; Tambe, M.; and Yokoo, M. 2005. ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artif. Intell. 161(1-2):149-180.

Paskin, M.; Guestrin, C.; and McFadden, J. 2005. A robust architecture for distributed inference in sensor networks. In Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, IPSN '05. Piscataway, NJ, USA: IEEE Press.

Pujol-Gonzalez, M.; Cerquides, J.; Meseguer, P.; and Rodriguez-Aguilar, J. A. 2011. Communication-constrained DCOPs: Message approximation in GDL with function filtering. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '11. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

Vinyals, M.; Rodriguez-Aguilar, J. A.; and Cerquides, J. 2011. Constructing a unifying theory of dynamic programming DCOP algorithms via the generalized distributive law. Autonomous Agents and Multi-Agent Systems 22(3).

Wallace, R. J. 1996. Enhancements of branch and bound methods for the maximal constraint satisfaction problem. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, AAAI '96. AAAI Press.

Yeoh, W.; Felner, A.; and Koenig, S. 2008. BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS '08. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.

Yeoh, W.; Sun, X.; and Koenig, S. 2009. Trading off solution quality for faster computation in DCOP search algorithms. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI '09. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

Yin, Z. 2008. USC DCOP repository.


More information

Multi-Variable Agent Decomposition for DCOPs

Multi-Variable Agent Decomposition for DCOPs Multi-Variable Agent Decomposition for DCOPs Ferdinando Fioretto Department of Computer Science New Mexico State University Las Cruces, NM 88003, U.S. ffiorett@cs.nmsu.edu William Yeoh Department of Computer

More information

Search and Optimization

Search and Optimization Search and Optimization Search, Optimization and Game-Playing The goal is to find one or more optimal or sub-optimal solutions in a given search space. We can either be interested in finding any one solution

More information

Pushing Forward Marginal MAP with Best-First Search

Pushing Forward Marginal MAP with Best-First Search Pushing Forward Marginal MAP with Best-First Search Radu Marinescu IBM Research Ireland radu.marinescu@ie.ibm.com Rina Dechter and Alexander Ihler University of California, Irvine Irvine, CA 92697, USA

More information

The Structure of Bull-Free Perfect Graphs

The Structure of Bull-Free Perfect Graphs The Structure of Bull-Free Perfect Graphs Maria Chudnovsky and Irena Penev Columbia University, New York, NY 10027 USA May 18, 2012 Abstract The bull is a graph consisting of a triangle and two vertex-disjoint

More information

V Advanced Data Structures

V Advanced Data Structures V Advanced Data Structures B-Trees Fibonacci Heaps 18 B-Trees B-trees are similar to RBTs, but they are better at minimizing disk I/O operations Many database systems use B-trees, or variants of them,

More information

Decomposable Constraints

Decomposable Constraints Decomposable Constraints Ian Gent 1, Kostas Stergiou 2, and Toby Walsh 3 1 University of St Andrews, St Andrews, Scotland. ipg@dcs.st-and.ac.uk 2 University of Strathclyde, Glasgow, Scotland. ks@cs.strath.ac.uk

More information

Class2: Constraint Networks Rina Dechter

Class2: Constraint Networks Rina Dechter Algorithms for Reasoning with graphical models Class2: Constraint Networks Rina Dechter dechter1: chapters 2-3, Dechter2: Constraint book: chapters 2 and 4 Text Books Road Map Graphical models Constraint

More information

Solving Multi-Objective Distributed Constraint Optimization Problems. CLEMENT Maxime. Doctor of Philosophy

Solving Multi-Objective Distributed Constraint Optimization Problems. CLEMENT Maxime. Doctor of Philosophy Solving Multi-Objective Distributed Constraint Optimization Problems CLEMENT Maxime Doctor of Philosophy Department of Informatics School of Multidisciplinary Sciences SOKENDAI (The Graduate University

More information

Solving NP-hard Problems on Special Instances

Solving NP-hard Problems on Special Instances Solving NP-hard Problems on Special Instances Solve it in poly- time I can t You can assume the input is xxxxx No Problem, here is a poly-time algorithm 1 Solving NP-hard Problems on Special Instances

More information

Chapter 11: Graphs and Trees. March 23, 2008

Chapter 11: Graphs and Trees. March 23, 2008 Chapter 11: Graphs and Trees March 23, 2008 Outline 1 11.1 Graphs: An Introduction 2 11.2 Paths and Circuits 3 11.3 Matrix Representations of Graphs 4 11.5 Trees Graphs: Basic Definitions Informally, a

More information

Decreasing a key FIB-HEAP-DECREASE-KEY(,, ) 3.. NIL. 2. error new key is greater than current key 6. CASCADING-CUT(, )

Decreasing a key FIB-HEAP-DECREASE-KEY(,, ) 3.. NIL. 2. error new key is greater than current key 6. CASCADING-CUT(, ) Decreasing a key FIB-HEAP-DECREASE-KEY(,, ) 1. if >. 2. error new key is greater than current key 3.. 4.. 5. if NIL and.

More information

Stochastic Anytime Search for Bounding Marginal MAP

Stochastic Anytime Search for Bounding Marginal MAP Stochastic Anytime Search for Bounding Marginal MAP Radu Marinescu 1, Rina Dechter 2, Alexander Ihler 2 1 IBM Research 2 University of California, Irvine radu.marinescu@ie.ibm.com, dechter@ics.uci.edu,

More information

Diversity Coloring for Distributed Storage in Mobile Networks

Diversity Coloring for Distributed Storage in Mobile Networks Diversity Coloring for Distributed Storage in Mobile Networks Anxiao (Andrew) Jiang and Jehoshua Bruck California Institute of Technology Abstract: Storing multiple copies of files is crucial for ensuring

More information

Impact of Problem Centralization in Distributed Constraint Optimization Algorithms

Impact of Problem Centralization in Distributed Constraint Optimization Algorithms Impact of Problem Centralization in Distributed Constraint Optimization Algorithms John Davin and Pragnesh Jay Modi Computer Science Department Carnegie Mellon University Pittsburgh PA 15213 {jdavin, pmodi}@cs.cmu.edu

More information

Solutions to Midterm 2 - Monday, July 11th, 2009

Solutions to Midterm 2 - Monday, July 11th, 2009 Solutions to Midterm - Monday, July 11th, 009 CPSC30, Summer009. Instructor: Dr. Lior Malka. (liorma@cs.ubc.ca) 1. Dynamic programming. Let A be a set of n integers A 1,..., A n such that 1 A i n for each

More information

Constraint Solving by Composition

Constraint Solving by Composition Constraint Solving by Composition Student: Zhijun Zhang Supervisor: Susan L. Epstein The Graduate Center of the City University of New York, Computer Science Department 365 Fifth Avenue, New York, NY 10016-4309,

More information

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Fundamenta Informaticae 56 (2003) 105 120 105 IOS Press A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Jesper Jansson Department of Computer Science Lund University, Box 118 SE-221

More information

Complementary Graph Coloring

Complementary Graph Coloring International Journal of Computer (IJC) ISSN 2307-4523 (Print & Online) Global Society of Scientific Research and Researchers http://ijcjournal.org/ Complementary Graph Coloring Mohamed Al-Ibrahim a*,

More information

Data Structures and Algorithms

Data Structures and Algorithms Data Structures and Algorithms Trees Sidra Malik sidra.malik@ciitlahore.edu.pk Tree? In computer science, a tree is an abstract model of a hierarchical structure A tree is a finite set of one or more nodes

More information

Binary Decision Diagrams

Binary Decision Diagrams Logic and roof Hilary 2016 James Worrell Binary Decision Diagrams A propositional formula is determined up to logical equivalence by its truth table. If the formula has n variables then its truth table

More information

Constraint Satisfaction. AI Slides (5e) c Lin

Constraint Satisfaction. AI Slides (5e) c Lin Constraint Satisfaction 4 AI Slides (5e) c Lin Zuoquan@PKU 2003-2018 4 1 4 Constraint Satisfaction 4.1 Constraint satisfaction problems 4.2 Backtracking search 4.3 Constraint propagation 4.4 Local search

More information

Acyclic Network. Tree Based Clustering. Tree Decomposition Methods

Acyclic Network. Tree Based Clustering. Tree Decomposition Methods Summary s Join Tree Importance of s Solving Topological structure defines key features for a wide class of problems CSP: Inference in acyclic network is extremely efficient (polynomial) Idea: remove cycles

More information

A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements

A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements ABSTRACT James Atlas Computer and Information Sciences University of Delaware Newark, DE 19716 atlas@cis.udel.edu

More information

March 20/2003 Jayakanth Srinivasan,

March 20/2003 Jayakanth Srinivasan, Definition : A simple graph G = (V, E) consists of V, a nonempty set of vertices, and E, a set of unordered pairs of distinct elements of V called edges. Definition : In a multigraph G = (V, E) two or

More information

An Effective Upperbound on Treewidth Using Partial Fill-in of Separators

An Effective Upperbound on Treewidth Using Partial Fill-in of Separators An Effective Upperbound on Treewidth Using Partial Fill-in of Separators Boi Faltings Martin Charles Golumbic June 28, 2009 Abstract Partitioning a graph using graph separators, and particularly clique

More information

Algorithms Dr. Haim Levkowitz

Algorithms Dr. Haim Levkowitz 91.503 Algorithms Dr. Haim Levkowitz Fall 2007 Lecture 4 Tuesday, 25 Sep 2007 Design Patterns for Optimization Problems Greedy Algorithms 1 Greedy Algorithms 2 What is Greedy Algorithm? Similar to dynamic

More information

Heuristic Search and Advanced Methods

Heuristic Search and Advanced Methods Heuristic Search and Advanced Methods Computer Science cpsc322, Lecture 3 (Textbook Chpt 3.6 3.7) May, 15, 2012 CPSC 322, Lecture 3 Slide 1 Course Announcements Posted on WebCT Assignment1 (due on Thurs!)

More information

Dynamic heuristics for branch and bound search on tree-decomposition of Weighted CSPs

Dynamic heuristics for branch and bound search on tree-decomposition of Weighted CSPs Dynamic heuristics for branch and bound search on tree-decomposition of Weighted CSPs Philippe Jégou, Samba Ndojh Ndiaye, and Cyril Terrioux LSIS - UMR CNRS 6168 Université Paul Cézanne (Aix-Marseille

More information

Graphical Models. Pradeep Ravikumar Department of Computer Science The University of Texas at Austin

Graphical Models. Pradeep Ravikumar Department of Computer Science The University of Texas at Austin Graphical Models Pradeep Ravikumar Department of Computer Science The University of Texas at Austin Useful References Graphical models, exponential families, and variational inference. M. J. Wainwright

More information

Summary. Acyclic Networks Join Tree Clustering. Tree Decomposition Methods. Acyclic Network. Tree Based Clustering. Tree Decomposition.

Summary. Acyclic Networks Join Tree Clustering. Tree Decomposition Methods. Acyclic Network. Tree Based Clustering. Tree Decomposition. Summary s Join Tree Importance of s Solving Topological structure denes key features for a wide class of problems CSP: Inference in acyclic network is extremely ecient (polynomial) Idea: remove cycles

More information