On the Time Complexity of Bucket Elimination Algorithms

Javier Larrosa
Information and Computer Science
University of California at Irvine, USA

January 23, 2001

Abstract

In this short note, we prove the time complexity of full-bucket and mini-bucket elimination [1, 2]. Previous papers, when discussing the complexity of these algorithms, emphasized only the exponential contribution and did not carefully consider the remaining contributions to the cost. In this note, we address this gap and provide a non-trivial bound on the non-exponential contribution to the complexity of both algorithms. We demonstrate the result for the Additive Combinatorial Optimization Problem case.

1 Introduction

1.1 Additive Combinatorial Optimization Problem

An additive combinatorial optimization problem (ACOP) is defined by a tuple (X, D, F) where X = {x1, ..., xn} is a set of n variables identified by their index; D = {D1, ..., Dn} is a collection of finite domains, Di being the set of possible values for variable xi; and F = {f1, ..., fr} is a set of r cost functions. A cost function fi is a real-valued function defined over a set of variables var(fi), called its scope. It assigns a cost to each tuple over var(fi). The size of a function's scope is called its arity. In the sequel, t denotes a tuple (an assignment of domain values to a set of variables) and var(t) denotes the set of variables assigned by t. For notational simplicity, when we evaluate a tuple t on a cost function fi such that var(fi) ⊆ var(t), we write fi(t) when we really mean fi(t'), where t' is the subtuple of t containing the assignments to the variables in var(fi).

The sum of the cost functions defines the problem's objective function, and the task is to minimize the objective function:

    min_{t ∈ D1 × ... × Dn} { Σ_{i=1}^{r} fi(t) }
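To make the definitions concrete, here is a minimal brute-force sketch under a hypothetical table encoding in which each cost function is a (scope, table) pair; the names `evaluate` and `brute_force_min` are illustrative and not from the note.

```python
from itertools import product

# Hypothetical encoding of an ACOP instance: each cost function is a pair
# (scope, table), where `scope` is a tuple of variable names and `table`
# maps one value tuple per scope variable to a real-valued cost.

def evaluate(functions, assignment):
    """Objective function: sum of all cost functions on a full assignment."""
    total = 0.0
    for scope, table in functions:
        # Project the full assignment onto the function's scope
        # (the subtuple t' mentioned in the text).
        total += table[tuple(assignment[x] for x in scope)]
    return total

def brute_force_min(variables, domains, functions):
    """Minimize the objective by enumerating D1 x ... x Dn (exponential)."""
    best = float("inf")
    for values in product(*(domains[x] for x in variables)):
        assignment = dict(zip(variables, values))
        best = min(best, evaluate(functions, assignment))
    return best
```

Since solving ACOP is NP-hard, this enumeration is only a baseline to check the elimination algorithms against on tiny instances.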

It is well known that solving ACOP is NP-hard. We assume that querying a cost function takes time O(1).

1.2 Operations on Functions

Sum of functions: Let fi and fj be two arbitrary functions. Their sum fi + fj is a new function defined over the scope var(fi) ∪ var(fj) by

    (fi + fj)(t) = fi(t) + fj(t)

Variable elimination by minimization: Let fj be an arbitrary function and xi a variable in its scope. The elimination of xi from fj, denoted min_{xi}{fj}, is a new function with scope var(fj) − {xi} that assigns to each tuple the minimum-cost extension to xi. Formally, for t a tuple over var(fj) − {xi},

    (min_{xi}{fj})(t) = min_{a ∈ Di} { fj(t, a) }

where fj(t, a) denotes the evaluation of fj on the extension of the tuple t with the assignment of value a to variable xi.

Lemma 1.1 Let g1, ..., gk be a set of functions. The complexity of computing Σ_{j=1}^{k} gj is time O(k · exp(q)), where q is the arity of the resulting function (i.e., q = |var(g1) ∪ ... ∪ var(gk)|).

Proof: Computing Σ_{j=1}^{k} gj requires the computation of O(exp(q)) values (one per entry of the new function). Computing each individual value requires querying every original function, which has complexity O(k). Therefore, the total cost is O(k · exp(q)).

Lemma 1.2 Let f be a function of arity k and V a subset of its scope. The complexity of computing min_V{f} is time exponential in the arity of f, O(exp(k)).

Proof: Let s denote the size of V. The scope of min_V{f} is var(f) − V and its arity is k − s. Consequently, computing min_V{f} requires the computation of O(exp(k − s)) values. Computing each individual value requires taking the minimum over O(exp(s)) extensions. Therefore, the total cost is O(exp(k − s) · exp(s)), which is O(exp(k)).

Theorem 1.1 Let F = {f1, ..., fk} be a set of functions and let V be a subset of the combined scope of F. The complexity of computing min_V{ Σ_{j=1}^{k} fj } is O(k · exp(q)), where q is the size of the combined scope of F.

Proof: Immediate from the two previous lemmas.
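The two operations can be sketched over the same hypothetical (scope, table) encoding; `sum_functions` and `eliminate` are illustrative names, and the complexities in the comments refer to the lemmas above.

```python
from itertools import product

def sum_functions(fs, domains):
    """f1 + ... + fk: one table over the union of the scopes.
    O(exp(q)) entries, each needing O(k) queries (Lemma 1.1)."""
    # Scope union, preserving first-occurrence order of the variables.
    scope = tuple(dict.fromkeys(x for s, _ in fs for x in s))
    table = {}
    for values in product(*(domains[x] for x in scope)):
        t = dict(zip(scope, values))
        table[values] = sum(tab[tuple(t[x] for x in s)] for s, tab in fs)
    return scope, table

def eliminate(f, elim_vars):
    """min_V f: minimize out the variables in V by scanning the
    O(exp(k)) entries of f once (Lemma 1.2)."""
    scope, table = f
    keep = tuple(x for x in scope if x not in elim_vars)
    out = {}
    for values, cost in table.items():
        key = tuple(v for x, v in zip(scope, values) if x not in elim_vars)
        out[key] = min(cost, out.get(key, float("inf")))
    return keep, out
```

Composing the two, `eliminate(sum_functions(fs, domains), V)`, is exactly the min_V{Σ fj} operation of Theorem 1.1.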
2 Time Complexity of Bucket Elimination

In this section we prove that the complexity of Bucket Elimination (BE) [2] along a variable ordering o is O(r · exp(w*(o))), where r is the number of cost functions in the problem and w*(o) is the induced width of the problem along the ordering o. We show this by associating the algorithm with a tree whose leaves are the original cost functions of the problem and whose internal nodes are sequences of computations performed by the algorithm. The cost of BE is then the sum of the costs over the nodes of the tree. We show that the cost of the computations at an arbitrary node is bounded by O(exp(w*(o))), and that the number of internal nodes of the tree is bounded by 2r. Therefore, the total complexity of BE is O(r · exp(w*(o))).

BE can be seen as a sequence of sums of functions and variable eliminations. Therefore, given a problem instance and a variable ordering, we can write the BE execution as an algebraic formula of the form (|V| ≥ 0, k > 0)

    min_V { g1 + ... + gk }

where each g is either an original cost function of the problem or, recursively, a formula of the same form. It is important to note that each original function appears only once in the formula.

Example 2.1 Consider a problem instance defined over five variables {x1, ..., x5} and four functions {f1(x1, x3), f2(x2, x3), f3(x3, x4), f4(x2, x5)}. The execution trace of bucket elimination along the lexicographical variable ordering is:

    bucket5: f4(x2, x5)
    bucket4: f3(x3, x4)
    bucket3: f2(x2, x3), f1(x1, x3)

    λ5(x2)     := min_{x5}{ f4(x2, x5) }
    λ4(x3)     := min_{x4}{ f3(x3, x4) }
    λ3(x1, x2) := min_{x3}{ f2(x2, x3) + f1(x1, x3) + λ4(x3) }
    λ2(x1)     := min_{x2}{ λ5(x2) + λ3(x1, x2) }
    λ1()       := min_{x1}{ λ2(x1) }

    Result: λ1()

Therefore, we can expand the computation of λ1() and express the whole execution as

    min_{x1, x2} { min_{x5}{ f4(x2, x5) } + min_{x3}{ min_{x4}{ f3(x3, x4) } + f2(x2, x3) + f1(x1, x3) } }

We associate a tree, called the computation tree (CT), with this algebraic formula. The original functions (the arguments of the formula) are the leaves, and the computations (either sums of functions or variable eliminations) are the internal nodes. Figure 1.a depicts the computation tree associated with Example 2.1. CTs are not directly useful in our proof, because their number of nodes is not proportional to the number of original functions.
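The bucket-processing loop illustrated in Example 2.1 can be sketched as follows, again over the hypothetical (scope, table) encoding; `process_bucket` and `bucket_elimination` are illustrative names, and this is a sketch of the scheme of [2], not a reference implementation.

```python
from itertools import product

def process_bucket(fs, x, domains):
    # One bucket step: sum the bucket's functions, then eliminate the
    # bucket's variable x -- the min_V { g1 + ... + gk } of the formula.
    scope = tuple(dict.fromkeys(v for s, _ in fs for v in s))
    keep = tuple(v for v in scope if v != x)
    out = {}
    for values in product(*(domains[v] for v in scope)):
        t = dict(zip(scope, values))
        cost = sum(tab[tuple(t[v] for v in s)] for s, tab in fs)
        key = tuple(t[v] for v in keep)
        out[key] = min(cost, out.get(key, float("inf")))
    return keep, out

def bucket_elimination(order, domains, functions):
    """Process buckets from the last variable in `order` to the first;
    each lambda message drops into the bucket of its latest remaining
    variable. Overall time O(r * exp(w*(o)))."""
    buckets = {x: [] for x in order}
    for f in functions:
        buckets[max(f[0], key=order.index)].append(f)
    result = 0.0
    for x in reversed(order):
        if not buckets[x]:
            continue
        scope, table = process_bucket(buckets[x], x, domains)
        if scope:
            buckets[max(scope, key=order.index)].append((scope, table))
        else:
            result += table[()]  # fully eliminated: a constant component
    return result
```

The sequence of `process_bucket` calls is exactly the sequence of internal-node computations of the tree introduced next.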
This is because CTs may contain internal linear paths, i.e., paths between an internal node v and one of its ancestors w, (v, v1, ..., vk, w), such that all nodes in the path except v have exactly one child. For instance, the path between the lowest internal node and the root in Figure 1.a is an internal linear path. To overcome this problem, we define the compact computation tree (CCT) as the tree obtained from the CT by merging each internal linear path into a single node representing the corresponding sequence of computations. Figure 1.b depicts the compact computation tree of Example 2.1.

[Figure 1: (a) the computation tree and (b) the compact computation tree of Example 2.1; the leaves are f4(x2, x5), f3(x3, x4), f2(x2, x3), f1(x1, x3).]

In the CCT, the leaves are the original cost functions and the internal nodes are sequences of computations. The sequence of computations of an internal node has the form

    min_V { Σ_{j=1}^{k} gj }   (k > 0)

where each gj is either an original function or the result of the computation of a child. It is important to note that the number of nodes in the CCT is bounded by 3r and the number of internal nodes is bounded by 2r. The reason is that there are exactly r leaves, and the compaction of internal linear paths guarantees that only parents of leaves may have a single child; therefore there are at most 2r internal nodes.

It is clear that the CCT provides a trace of the execution of bucket elimination. Therefore, the complexity of BE is the complexity of the set of computations depicted in the CCT.

Theorem 2.1 The complexity of bucket elimination along a variable ordering o is time O(r · exp(w*(o))), where r is the number of cost functions and w*(o) is the induced width of the problem along ordering o.

Proof: The computation performed at an internal node v of the CCT is the sum of ch(v) functions (where ch(v) denotes the number of children of node v; ch(v) > 0 for internal nodes) and the subsequent elimination of q variables (q ≥ 0) from the resulting function. The combined arity of the ch(v) functions (namely, the functions arriving at node v) is bounded by w*(o) + 1. Therefore, using

Theorem 1.1 we can bound the complexity of the computation at node v by O(ch(v) · exp(w*(o) + 1)). The total cost is the sum of the costs over the internal nodes, O((Σ_v ch(v)) · exp(w*(o) + 1)). The sum of the numbers of children over internal nodes equals the number of nodes in the tree minus one, which we showed is bounded by 3r. Consequently, the total cost is bounded by O(3r · exp(w*(o) + 1)), which is equal to O(r · exp(w*(o))).

3 Time Complexity of Mini-bucket Elimination

Mini-bucket elimination (MB) [1] can also be expressed as a sequence of sums of functions and variable eliminations. Therefore, given a problem instance and a variable ordering, we can also express the MB execution as an algebraic formula. As before, it is important to note that each function appears only once in the formula.

Example 3.1 The execution trace of (2)-mini-bucket elimination on the problem of Example 2.1, along the lexicographical variable ordering, is:

    bucket5: f4(x2, x5)
    bucket4: f3(x3, x4)
    bucket3: f2(x2, x3), f1(x1, x3)

    λ5(x2)      := min_{x5}{ f4(x2, x5) }
    λ4(x3)      := min_{x4}{ f3(x3, x4) }
    λ(3,1)(x2)  := min_{x3}{ f2(x2, x3) + λ4(x3) }
    λ(3,2)(x1)  := min_{x3}{ f1(x1, x3) }
    λ2()        := min_{x2}{ λ5(x2) + λ(3,1)(x2) }
    λ1()        := min_{x1}{ λ(3,2)(x1) }

    Result: λ2() + λ1()

Therefore, we can expand the computation and express the whole execution as

    min_{x2}{ min_{x5}{ f4(x2, x5) } + min_{x3}{ min_{x4}{ f3(x3, x4) } + f2(x2, x3) } } + min_{x1, x3}{ f1(x1, x3) }

We can also represent an MB execution by means of a computation tree, which can be compacted into a compact computation tree (CCT). The computation tree of the example is depicted in Figure 2.a and the compact computation tree in Figure 2.b. If BE and MB are executed on the same problem with the same variable ordering, the CCT of MB is likely to have more nodes (with less computation in each). It is important to note that, although it may have more nodes, the total number of nodes of the CCT associated with an MB execution is still bounded by 3r.

As in the BE case, the CCT has exactly r leaves and the compaction guarantees that only parents of leaves may have a single child. Therefore, the number of internal nodes is bounded by 2r and the total number of nodes is bounded by 3r. The complexity of MB is the sum of the complexities of the internal nodes of the CCT, which yields the following result.

Theorem 3.1 The complexity of mini-bucket elimination with accuracy parameter i along a variable ordering o is time O(r · exp(i)), where r is the number of cost functions.
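A minimal sketch of i-bounded mini-bucket elimination, under the same hypothetical (scope, table) encoding; the greedy first-fit partitioning of each bucket below is one possible heuristic, not necessarily the one used in [1]. The returned value is a lower bound on the true minimum.

```python
from itertools import product

def _sum_and_eliminate(fs, x, domains):
    # Sum the tables in fs over the union of their scopes, then minimize
    # out variable x (the min_V { sum } step of a mini-bucket).
    scope = tuple(dict.fromkeys(v for s, _ in fs for v in s))
    keep = tuple(v for v in scope if v != x)
    out = {}
    for values in product(*(domains[v] for v in scope)):
        t = dict(zip(scope, values))
        cost = sum(tab[tuple(t[v] for v in s)] for s, tab in fs)
        key = tuple(t[v] for v in keep)
        out[key] = min(cost, out.get(key, float("inf")))
    return keep, out

def mini_bucket(order, domains, functions, i):
    """i-bounded MB: each bucket is greedily split into mini-buckets whose
    combined scope has at most i variables, so each node costs O(exp(i));
    the sum of the surviving constants is a lower bound on the minimum."""
    buckets = {x: [] for x in order}
    for f in functions:
        buckets[max(f[0], key=order.index)].append(f)
    bound = 0.0
    for x in reversed(order):
        minis = []
        for f in buckets[x]:  # first-fit partition into mini-buckets
            placed = False
            for m in minis:
                combined = set(f[0]) | {v for g in m for v in g[0]}
                if len(combined) <= i:
                    m.append(f)
                    placed = True
                    break
            if not placed:
                minis.append([f])
        for m in minis:
            scope, table = _sum_and_eliminate(m, x, domains)
            if scope:
                buckets[max(scope, key=order.index)].append((scope, table))
            else:
                bound += table[()]
    return bound
```

With i at least the combined bucket arity, every bucket fits in one mini-bucket and the sketch coincides with full bucket elimination; with a smaller i it may under-estimate, which is the accuracy trade-off Theorem 3.1 prices at O(r · exp(i)).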

[Figure 2: (a) the computation tree and (b) the compact computation tree of Example 3.1; the leaves are f4(x2, x5), f3(x3, x4), f2(x2, x3), f1(x1, x3).]

Proof: The computation performed at an internal node v of the CCT is the sum of ch(v) functions and the subsequent elimination of q variables (q ≥ 0) from the resulting function. By definition of (i)-MB, the combined arity of the ch(v) functions arriving at node v is bounded by i. Therefore, using Theorem 1.1, we can bound the complexity of the computation at node v by O(ch(v) · exp(i)). The sum of the numbers of children over internal nodes equals the number of nodes in the tree minus one, which is bounded by 3r. Consequently, the total cost is bounded by O(3r · exp(i)), which is asymptotically equivalent to O(r · exp(i)).

References

[1] R. Dechter. Mini-Buckets: A General Scheme for Generating Approximations in Automated Reasoning. In Proceedings of IJCAI'97, pp. 1297-1303, 1997.

[2] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113:41-85, 1999.