All-to-All Communication

Network Algorithms: All-to-All Communication. Thanks to Thomas Locher from ABB Research for the basis of these slides! Stefan Schmid @ T-Labs, 2011

Repetition
- Tree: a connected graph without cycles.
- Spanning subgraph: a subgraph that spans all vertices of a graph.
- Minimum spanning tree (MST): a spanning tree with the least total weight among all spanning trees of a weighted, connected graph.

Definitions. [figure: first example] Broadcast tree? Yes. Spanning tree? No.

Definitions. [figure: second example] Broadcast tree? No. Spanning tree? No, but a spanning subgraph!

Definitions. Broadcast from v. [figure: third example] BFS tree from v? No.

Definitions. Broadcast from v. [figure: fourth example] BFS tree from v? Yes.

Definitions. Broadcast from v. [figure: weighted example] MST? No. Note: BFS can also be defined with respect to weights (shortest-path spanning tree)!

System Model
- The network is a clique G = (V, E, w), where w(e) denotes the weight of edge e ∈ E and |V| = n.
- Edge weights are unique (not critical, symmetries can be broken by IDs) and can be represented with O(log n) bits.
- Each node has a distinct ID of O(log n) bits.
- Each node knows all the edges it is incident to and their weights.
- Each node knows about all the other nodes.
- The synchronous communication model is used.
Results so far?

Definitions. Many algorithms run in phases: in each phase, a subset of the |V| - 1 MST edges is chosen; the MST «grows over time»! We say that nodes that are directly or indirectly connected by the edges chosen so far belong to the same fragment (or «cluster»). Blue edge: the lightest edge out of a fragment. The minimum weight outgoing edge (MWOE) of a node is the edge with the lowest weight incident to that node and leading to another fragment. This edge is a blue edge candidate!
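
To make the MWOE definition concrete, here is a minimal Python sketch (my own illustration, not part of the original slides; the function name and data layout are assumptions) of how a node v could determine its blue edge candidate, given a weight function w and the fragment label of every node:

```python
def mwoe_candidate(v, nodes, fragment, w):
    """Return the lightest edge (w(v,u), v, u) from v into another fragment,
    or None if every other node is already in v's fragment."""
    best = None
    for u in nodes:
        if u == v or fragment[u] == fragment[v]:
            continue  # skip v itself and nodes inside the same fragment
        edge = (w(v, u), v, u)
        if best is None or edge < best:
            best = edge
    return best

# Tiny usage example on a 4-node clique with unique weights.
nodes = [0, 1, 2, 3]
fragment = {0: "A", 1: "A", 2: "B", 3: "B"}
weights = {(0, 1): 1, (0, 2): 4, (0, 3): 6, (1, 2): 2, (1, 3): 5, (2, 3): 3}
w = lambda a, b: weights[(min(a, b), max(a, b))]
print(mwoe_candidate(0, nodes, fragment, w))  # (4, 0, 2): node 0's lightest edge leaving fragment A
```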

The GHS Algorithm 15

Repetition. Time complexity: O(n log n), where n is the number of nodes. Message complexity: O(m log n), where m is the number of edges: at most O(1) messages cross each edge per phase. Can we do better on special networks? E.g., the clique?

MST on Clique Can we do better on special networks? E.g., the clique? 17

MST on Clique. Yes we can:
1. Send all weights to a «leader» (or even to all other nodes)
2. Compute the solution locally (Prim/Kruskal)
3. Broadcast the result
Complexity? Time O(1), messages O(n), so optimal!
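
The local computation in step 2 is just a sequential MST algorithm. As a hedged illustration (my own sketch, not part of the slides), this is roughly what the leader could run on the collected weights, here Kruskal with a small union-find:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with u, v in range(n). Returns the MST edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # lightest edge between two fragments: safe by the blue edge rule
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Example: the leader has collected all edge weights of a 4-node clique.
edges = [(1, 0, 1), (4, 0, 2), (6, 0, 3), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 2), (3, 2, 3)]
```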

Bounded Message Size. Not very scalable! Messages are huge: they contain n-1 weights. Message size is now limited to O(log n) bits. (Assume all variables in the network, e.g., weights and identifiers, are of size O(log n).) The simple algorithm required messages of size O(n log n) bits, so n rounds would be needed just to transmit one such message over a single link. Note: locality is no longer the problem, but the communication!

Bounded Message Size. Not very scalable! Messages are huge: they contain n-1 weights. Message size is limited to O(log n) bits. So how can we do it in time less than O(n) with bounded message size? What about the GHS algorithm? Guess its possible performance!

A Simple Algorithm. Phase k, code for node v in fragment F. Input: the set of chosen edges that defines the node fragments.
1. Compute the MWOE of v (its «blue edge candidate»)
2. Send the MWOE to all nodes in the same fragment
3. Receive the messages from the other nodes
4. If the own MWOE is the lightest in the fragment, then broadcast it to all other nodes in the clique and add this edge. This way, all nodes always know which fragments exist and which ones currently merge!
5. Receive the other broadcast messages and add those edges as well
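
As a hedged, purely sequential simulation of one such phase (my own sketch; it mimics the message pattern on one machine and is not the distributed code from the slides):

```python
def simple_phase(nodes, fragment, w):
    """Simulate one phase: every fragment determines its lightest outgoing edge
    (its blue edge), all blue edges are 'broadcast' and added, fragments merge."""
    # Steps 1-4: lightest outgoing edge of each fragment (min over the members' MWOEs).
    blue = {}
    for v in nodes:
        for u in nodes:
            if u != v and fragment[u] != fragment[v]:
                e = (w(v, u), min(v, u), max(v, u))
                f = fragment[v]
                if f not in blue or e < blue[f]:
                    blue[f] = e
    # Step 5: everyone adds all announced blue edges and merges the touched fragments.
    added = set(blue.values())
    for _, u, v in sorted(added):
        fu, fv = fragment[u], fragment[v]
        for x in nodes:                      # relabel fragment fv as fu
            if fragment[x] == fv:
                fragment[x] = fu
    return added, fragment

nodes = [0, 1, 2, 3]
fragment = {v: v for v in nodes}
weights = {(0, 1): 1, (0, 2): 4, (0, 3): 6, (1, 2): 2, (1, 3): 5, (2, 3): 3}
w = lambda a, b: weights[(min(a, b), max(a, b))]
print(simple_phase(nodes, fragment, w))  # all four nodes end up in one fragment here
```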

A Simple Algorithm. Example: how will the algorithm proceed? [figure: weighted example clique]

A Simple Algorithm. Example, Round 1: [figure] Each single-node fragment broadcasts its lightest incident edge to all other nodes (= all other fragments). Here, the lightest incident edge of each individual node is a blue edge! Add the edges and update the fragments.

A Simple Algorithm. Example, Round 2: [figure] Send the MWOE to all nodes in the same fragment, then broadcast the fragment's lightest outgoing edge to all nodes in the remaining fragments.

A Simple Algorithm. Example, Round 2 (continued): [figure] Add the edges and update the fragments. Why is this correct? What is the runtime?

A Simple Algorithm. Example, Round 2 (continued): Why correct? By the blue edge rule, the MST will emerge; no message is too large: one weight per edge and time step. What is the runtime? Since the minimum fragment size doubles in each round, the algorithm computes the MST in O(log n) rounds! Can we do better? Note: we did not exploit the fact that we can send different messages to different neighbors, but this could be exploited to speed things up!

Fast MST Algorithm: General Idea To reduce the number of rounds, fragments have to grow faster! In our simple algorithm, we used the MWOE / blue edge of each fragment to merge clusters. With this approach, the minimum fragment size doubled in each phase. Idea how to speed it up? What if we could use the k lightest outgoing edges of each fragment, where k is the fragment size? 27

Impact of Fragment Growth. What if we could use the k lightest outgoing edges of each fragment, where k is the fragment size? A fragment of size k merges with k fragments of size (at least) k, so the next fragment has quadratic size k·k (in contrast to 2·k so far). So: 1 = 2^0, 2 = 2^1, 4 = 2^2, 16 = 2^4, 256 = 2^8, ... In general, after i steps the fragment size is about 2^(2^i), in contrast to 2^i so far. How fast is this? n = 2^(2^i) implies log n = 2^i, which implies log log n = i.

Impact of Fragment Growth. Let b_k denote the minimum fragment size after phase k. For our simple algorithm it holds that b_0 := 1 and b_{k+1} ≥ 2·b_k, thus b_k ≥ 2^k and k ∈ O(log n).

Impact of Fragment Growth. We will derive an algorithm for which it holds that b_{k+1} ≥ b_k·(b_k + 1). Thus the fragment sizes grow quadratically instead of merely doubling. In order to achieve such a growth rate, information has to be spread faster! We will use a simple trick for that...

A Trick to Avoid Link Congestion. Link capacity is bounded: we cannot send much information over a single link... [figure: node a wants to send <x>, <y>, <z> to node b over one link]

A Trick to Avoid Link Congestion. However, much information can be sent from different nodes to a particular node v_0! A node can simply send parts of the information that it wants to transmit to a specific node to some other nodes. These nodes can then send all parts to the specific node in one step! This can be used to share the workload!
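
A minimal sketch of this relay pattern (my own illustration; relay_schedule and its parameters are hypothetical): node v hands one value to each of several helpers in round 1, and each helper forwards its value to v_0 in round 2, so no link ever carries more than one O(log n)-bit value per round:

```python
def relay_schedule(v, v0, values, helpers):
    """Two-round schedule delivering len(values) small values from v to v0,
    with at most one value per link per round."""
    assert len(helpers) >= len(values)
    round1 = [(v, h, x) for h, x in zip(helpers, values)]   # v spreads the values
    round2 = [(h, v0, x) for h, x in zip(helpers, values)]  # helpers forward them to v0
    return round1, round2

r1, r2 = relay_schedule(v=7, v0=0, values=["x", "y", "z"], helpers=[3, 4, 5])
print(r1)  # [(7, 3, 'x'), (7, 4, 'y'), (7, 5, 'z')]
print(r2)  # [(3, 0, 'x'), (4, 0, 'y'), (5, 0, 'z')]
```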

Fast MST Algorithm: General Idea. Our new algorithm will execute the following steps in each phase. Let b be the minimum fragment size (the actual sizes can be larger!):
1. Each fragment computes the b lightest edges e_1, ..., e_b to other, distinct fragments
2. Assign at most one of those lightest edges to each member of the fragment («responsible» for this edge)!
[figure: a fragment of size 5 with minimum size b = 4 and its outgoing edges e_1, ..., e_4]

Fast MST Algorithm: General Idea.
3. Each node with an edge <v, u, w({v,u})> assigned to it sends <v, u, w({v,u})> to a specific (global!) node v_0
4. Node v_0 computes the lightest edges that can be safely added to the spanning tree (no cycle)
Steps 2 and 3 together are exactly our trick! [figure: the fragments' guardians sending their edges to v_0, with b = 4]

Fast MST Algorithm: General Idea.
5. Node v_0 sends a message to a node if its assigned edge is added to the spanning tree
6. Each node that received such a message broadcasts it to all other nodes (all nodes have to know about all added edges and the new fragments!)
Our trick again! [figure: v_0 notifying the guardians, who broadcast, with b = 4]

Fast MST Algorithm: General Idea This way, more edges can be added in one phase! However, how does it really work? There are a few obvious problems... 36

Fast MST Algorithm: Problem 1 First problem: How to compute the b lightest outgoing edges of a specific fragment? Not so difficult: procedure Cheap_Out (idea?) 37

Fast MST Algorithm: Problem 2. Second problem: How can the designated node v_0 know which edges can be added safely? Let's illustrate this problem with an example graph!

Fragment Cycle Detection. In our example: |V| = n = 12, b = 2 (minimum fragment size). [figure: fragments with outgoing edges weighted 1-8, plus an edge e] Which are the lightest b = 2 outgoing edges of the fragments? All edges except for edge e! So this is the picture the designated node v_0 has after receiving the b = 2 lightest outgoing edges of each fragment: v_0 does not know about the edge e! It is only the 3rd lightest outgoing edge of both adjacent fragments!

Fragment Cycle Detection. v_0 can construct a logical graph: its nodes are the fragments and its edges are the received b = 2 lightest outgoing edges. [figure: the example graph and the corresponding logical graph on the fragments, with edge weights 1-8; edge e is missing]

Fragment Cycle Detection. Based on the knowledge of the b = 2 lightest outgoing edges, v_0 can locally merge nodes of the logical graph into super-fragments. It can certainly take the MWOE / blue edge of each component! So build super-fragments by connecting any pair with the lightest connecting edge: the edges with weights 1, 2, 3 and 4 are safe. But can we continue adding edges in ascending order of weight, as long as no cycle is created? [figure: the logical graph after adding edges 1, 2, 3, 4, forming two super-fragments]

Fragment Cycle Detection: Problem. Can we add the edge with weight 6? It is cycle-free in the logical graph! No! The edge is not part of the MST; it is not a blue edge between the components! [figure: the two super-fragments and the edge with weight 6] The problem is that in both super-fragments at least one of the fragments has already used up all of its b outgoing edges. The (b+1)-th outgoing edge might be lighter than other edges, but it does not appear in the logical graph! So, when is it safe to add an edge?

Fast MST Algorithm: The Algorithm Step by Step. Let's put everything together and solve the open problems! Initially, each node is itself a fragment of size 1 and no edges are selected. The algorithm consists of 6 steps. Each step can be performed in constant time (one communication round). All 6 steps together build one phase of the algorithm, thus the time complexity of one phase is O(1). A specific node in each fragment F, e.g. the node with the smallest ID, is considered the leader l(F) of the fragment F (since our algorithm ensures that all nodes know the fragments, they also know the leaders).

Fast MST Algorithm: The Algorithm Step by Step. Step 1
a) Each node v computes the minimum-weight edge e(v,F) that connects v to any node of some other fragment F, for all other fragments F
b) Each node v sends e(v,F) to the leader l(F), for all fragments other than its own fragment
[figure: node v sending e(v,F_1) and e(v,F_2) to the leaders l(F_1) and l(F_2)]

Fast MST Algorithm: The Algorithm Step by Step. Step 2
a) Each leader v of a fragment F (i.e. l(F) = v) computes the lightest edge between F and every other fragment
b) Each leader v performs procedure Cheap_Out: it selects the b lightest outgoing edges and appoints them to its nodes
If edge e is appointed to v, then v is called e's guardian g(e) (this assignment is needed for our load-balancing trick!)
[figure: leader l(F) appointing guardians g(e_1), ..., g(e_4) for the edges e_1, ..., e_4]

Fast MST Algorithm: The Algorithm Step by Step. Procedure Cheap_Out. Code for the leader of fragment F. Input: the lightest edge e(F,F') for every other fragment F'
1. Sort the input edges in increasing order of weight
2. Define b = min{|F|, # of other fragments} (the size of the own fragment, as long as there are at least that many other fragments. Why not more? We cannot communicate more information!)
3. Choose the first b edges of the sorted list
4. Appoint the node with the i-th largest ID as the guardian of the i-th edge, for i = 1, ..., b
5. Send a message about each edge to the node it is appointed to
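
A hedged Python sketch of Cheap_Out as described above (my own reconstruction; the data layout is an assumption):

```python
def cheap_out(fragment_members, outgoing_edges):
    """Sketch of Cheap_Out for a fragment leader.
    outgoing_edges: list of (weight, u, v), the lightest edge to each other fragment.
    Returns a mapping guardian node -> appointed edge."""
    b = min(len(fragment_members), len(outgoing_edges))
    chosen = sorted(outgoing_edges)[:b]                 # the b lightest outgoing edges
    guardians = sorted(fragment_members, reverse=True)  # i-th largest ID guards the i-th edge
    return {guardians[i]: chosen[i] for i in range(b)}

# Fragment {2, 5, 9} with (hypothetical) lightest edges to three other fragments.
print(cheap_out({2, 5, 9}, [(7, 2, 11), (3, 5, 12), (4, 9, 13)]))
# -> {9: (3, 5, 12), 5: (4, 9, 13), 2: (7, 2, 11)}
```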

Fast MST Algorithm: The Algorithm Step by Step. Step 3: All nodes that are guardians of a specific edge send a message to the designated node v_0, e.g. the node with the smallest ID in the whole graph. Now v_0 knows the b lightest outgoing edges of each fragment! [figure: guardians in two fragments sending their edges to v_0]

Fast MST Algorithm: The Algorithm Step by Step. Step 4
a) v_0 locally performs procedure Const_Frags, which computes the edges to be added
b) For every added edge e, v_0 sends a message to its guardian g(e)
[figure: v_0 replying to the guardians]

Fast MST Algorithm: The Algorithm Step by Step. How does Const_Frags work? We have seen: a problem occurs when all b outgoing edges of a fragment are used up! More precisely, a problem occurs only if there is at least one such exhausted fragment in each of the two super-fragments that are supposed to be merged (otherwise we would see the connecting edge). [figure: a fragment that has used up all of its edges]

Fast MST Algorithm: The Algorithm Step by Step. How does Const_Frags work? We call a super-fragment that contains a fragment which has used up all of its b edges finished (= not safe). If an edge is the lightest outgoing edge of a super-fragment that is not finished, then it is still safe to add it, no matter whether the other super-fragment is finished, since we are sure that there is no better edge to connect the unfinished super-fragment to other fragments. [figure: two finished super-fragments]

Fast MST Algorithm: The Algorithm Step by Step. Procedure Const_Frags. Code for the designated node v_0. Input: the b lightest outgoing edges of each fragment
1. Construct the logical graph
2. Sort the input edges in increasing order of weight
3. Go through the list, starting with the lightest edge: if the edge can be added without creating a cycle, then add it, else drop it

Fast MST Algorithm: The Algorithm Step by Step. Procedure Const_Frags (continued). If two (super-)fragments are merged, then the new super-fragment is declared finished if the edge is the heaviest edge of a fragment in either of the two super-fragments, or if either of the two super-fragments is already finished. If the edge is dropped (i.e., both fragments already belong to the same super-fragment), then the super-fragment is declared finished if the edge is the heaviest edge of either of the two fragments. Note: once a super-fragment is declared finished, it remains finished until the end of the phase. Final step: all edges between finished super-fragments are deleted (before looking at the next lightest edge).
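
Putting the cycle test and the finished rules together, here is a hedged sketch of Const_Frags (my own reconstruction of the slides' rules, using a union-find over the fragments; it is not the authors' code, and all names are illustrative):

```python
def const_frags(edge_lists):
    """edge_lists: fragment id -> its b lightest outgoing edges as (weight, frag_u, frag_v),
    each edge given in one canonical representation so duplicates collapse.
    Returns the edges added to the spanning forest in this phase."""
    parent = {f: f for f in edge_lists}       # union-find over fragments = super-fragments
    finished = {f: False for f in edge_lists}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]
            f = parent[f]
        return f

    # Weight of the heaviest (last) reported edge of each fragment; reaching it
    # means that fragment has used up all of its b edges.
    heaviest = {f: max(edges)[0] for f, edges in edge_lists.items()}
    all_edges = sorted(set(e for edges in edge_lists.values() for e in edges))

    added = []
    for w, fu, fv in all_edges:
        su, sv = find(fu), find(fv)
        exhausts = w == heaviest[fu] or w == heaviest[fv]  # last edge of fu or fv?
        if su == sv:
            if exhausts:                 # dropped edge may certify "finished"
                finished[su] = True
            continue
        if finished[su] and finished[sv]:
            continue                     # edge between two finished super-fragments: skip
        parent[su] = sv                  # merge the two super-fragments
        finished[sv] = exhausts or finished[su] or finished[sv]
        added.append((w, fu, fv))
    return added

# Minimal usage: two fragments A and B, b = 1, one connecting edge of weight 5.
print(const_frags({"A": [(5, "A", "B")], "B": [(5, "A", "B")]}))  # [(5, 'A', 'B')]
```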

Fast MST Algorithm: The Algorithm Step by Step. Step 5: All nodes that received a message from v_0 broadcast their edge to all other nodes. Step 6: Each node adds all received edges and computes the new fragments (complete view!). If the number of fragments is still greater than 1, then the next phase starts.

Fast MST Algorithm: The Algorithm Step by Step. The entire algorithm for node v in fragment F:
1. Compute the minimum-weight edge e(v,F') that connects v to fragment F' and send it to l(F'), for all fragments F' ≠ F
2. if v = l(F): compute the lightest edge between F and every other fragment; perform Cheap_Out
3. if v = g(e) for some edge e: send <e> to v_0
4. if v = v_0: perform Const_Frags; send a message to g(e) for each added edge e
5. if v received a message from v_0: broadcast it
6. Add all received edges and compute the new fragments

Analysis: Correctness. It suffices to show that whenever an edge is added, it is part of the MST. We only have to analyze Const_Frags! Proof [Sketch]: We only have to show that every added edge is the lightest outgoing edge of some super-fragment. This is always the right choice!

Analysis: Correctness. Assume edge e is used to merge super-fragments SC_1 and SC_2. W.l.o.g., assume that SC_1 is not finished and that e is one of the b lightest outgoing edges of its fragment. We will now show that e is the MWOE of SC_1! Assume that there is a lighter outgoing edge e' (w(e') < w(e)), incident to a fragment C of SC_1, that connects super-fragment SC_1 to super-fragment SC_3. [figure: super-fragments SC_1 (containing fragment C), SC_2 and SC_3, with edges e and e']

IV. Analysis: Correctness. Case 1: e' is among the b lightest outgoing edges of its fragment C. Since w(e') < w(e), e' must have been considered before e, thus either SC_1 and SC_3 have already been merged, or e' was dropped because SC_1 = SC_3. Either way, e' cannot be an outgoing edge when the algorithm adds e. Contradiction! [figure: SC_1, SC_2, SC_3 with edges e and e']

IV. Analysis: Correctness. Case 2: e' is not among the b lightest outgoing edges of its fragment C. Case 2.1: There is an edge e'' among the b lightest outgoing edges of fragment C leading to the same fragment C' as e'. It follows that w(e'') < w(e'). Since SC_1 ≠ SC_3, e'' has not been considered yet (otherwise the two super-fragments would have been merged, or the edge would have been dropped), thus w(e) < w(e''). Hence we have w(e) < w(e'). Contradiction! [figure: SC_1, SC_2, SC_3 with edges e and e']

IV. Analysis: Correctness. Case 2: e' is not among the b lightest outgoing edges of its fragment C. Case 2.2: None of the b lightest outgoing edges of C lead to C'. Thus, all b of these outgoing edges have lower weights than e', in particular also the heaviest of them, e'', i.e., w(e'') < w(e') < w(e). Hence, edge e'' must have been inspected already. Since it is the heaviest (last) edge of some fragment, SC_1 must by now be finished. Contradiction! [figure: SC_1, SC_2, SC_3 with edges e and e']

Analysis: Time Complexity. Each phase requires O(1) rounds, but how many phases are required until termination? Reminder: b_k denotes the minimum fragment size in phase k. Lemma 2: It holds that b_{k+1} ≥ b_k·(b_k + 1).

Analysis: Time Complexity. Proof [Sketch]: We prove a stronger claim: whenever a super-fragment is declared finished in phase k+1, it contains at least b_k + 1 fragments. Each fragment has (at least) b_k outgoing edges in phase k+1, since b_k is the minimum fragment size after phase k. [figure: fragment C with outgoing edges 1, 2, ..., b_k]

Analysis: Time Complexity. Case 1: The super-fragment is declared finished after one of its fragments has used up all of its b_k outgoing edges. Let C be this fragment, and call those edges 1, 2, ..., b_k, leading to the fragments C_1, C_2, ..., C_{b_k}. If the inspection of such an edge does not result in a merge, then the two fragments already belong to the same super-fragment! If there is a merge, then they belong to the same super-fragment afterwards. [figure: fragment C with edges 1, 2, ..., b_k]

Analysis: Time Complexity. Thus, at the end, the super-fragment contains at least C, C_1, C_2, ..., C_{b_k}, i.e., at least b_k + 1 fragments. Case 2: The super-fragment is declared finished after merging with an already finished super-fragment. By an inductive argument, the finished super-fragment must already contain at least b_k + 1 fragments, since one of its fragments has used up all of its b_k edges. [figure: fragment C with edges 1, 2, ..., b_k]

Analysis: Time Complexity. Theorem 1: The time complexity is O(log log n) rounds. Proof: According to Lemma 2, it holds that b_{k+1} ≥ b_k·(b_k + 1). Furthermore, we have b_0 := 1. Hence it follows that b_k ≥ 2^(2^(k-1)) for every k ≥ 1. Since b_k ≤ n, it follows that k ≤ log(log n) + 1. Since each phase requires O(1) rounds, the time complexity is O(log log n).
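
For completeness, the induction that is only stated on the slide can be written out; a short sketch (consistent with Lemma 2 and b_0 = 1):

```latex
% Solving the recurrence b_{k+1} >= b_k(b_k + 1) with b_0 = 1.
\begin{align*}
  b_1 &\ge b_0(b_0 + 1) = 2 = 2^{2^0},\\
  b_{k+1} &\ge b_k(b_k + 1) \ge b_k^2 \ge \bigl(2^{2^{k-1}}\bigr)^2 = 2^{2^k},\\
  b_k \le n &\;\Rightarrow\; 2^{2^{k-1}} \le n
             \;\Rightarrow\; 2^{k-1} \le \log n
             \;\Rightarrow\; k \le \log\log n + 1.
\end{align*}
```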

Analysis: Message Complexity. Theorem 2: The message complexity is O(n^2 log n) (measured in bits!). The proof is simple: count the messages exchanged in Steps 1, 3, 4, and 5. We will not do this here. Adler et al. showed that the minimum number of bits required to solve the MST problem in this model is Ω(n^2 log n). Thus, this algorithm is asymptotically optimal!

Overview I. Introduction II. Previous Results III. Fast MST Algorithm IV. Analysis V. Summary Results & Conclusions VI. Extensive Example 66

Summary: Conclusions "An obvious question we leave open is whether the algorithm can be improved, or whether there is an inherent lower bound of Ω(log log n) on the number of communication rounds required to construct an MST in this model." 67

Extensive Example: The Graph. [figure: the example clique; only the edges with weights 1-15 are drawn, all other edges are heavier!!! Node v_0 is marked.]

Extensive Example: Phase 1. [figure: every node sends its MWOE to v_0] 1. Not necessary 2. Not necessary 3. Send the MWOE to v_0 4. Const_Frags!

Extensive Example: Phase 1, Const_Frags. 1. Construct the logical graph. [figure: logical graph of the singleton fragments]

Extensive Example: Phase 1, Const_Frags. 2. Add the edges. [figure: the added edges in the logical graph]

Extensive Example: Phase 1. [figure: v_0 sends each added edge e to its guardian g(e); only some messages are displayed] 1. Not necessary 2. Not necessary 3. Send the MWOE to v_0 4. Const_Frags! 5. Send e to g(e)

Extensive Example: Phase 1. [figure: the guardians broadcast the added edges; only some messages are displayed] 1. Not necessary 2. Not necessary 3. Send the MWOE to v_0 4. Const_Frags! 5. Send e to g(e) 6. Broadcast e and update the fragments

Extensive Example: Phase 2. [figure: nodes send their edges to the leaders; only some messages are displayed] 1. Compute e(v,F') and send it to l(F')

Extensive Example: Phase 2. [figure: the leaders appoint guardians; only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians

Extensive Example: Phase 2. [figure: the guardians send their edges to v_0] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags!

Extensive Example: Phase 2, Const_Frags. 1. Construct the logical graph. [figure: logical graph of the phase-2 fragments]

Extensive Example: Phase 2, Const_Frags. 2. Add the edges. Edges between finished super-fragments must not be added! [figure: two finished super-fragments]

Extensive Example: Phase 2. [figure: v_0 sends each added edge e to its guardian g(e); only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags! 5. Send e to g(e)

Extensive Example: Phase 2. [figure: the guardians broadcast the added edges; only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags! 5. Send e to g(e) 6. Broadcast e and update the fragments

Extensive Example: Phase 3. [figure: nodes send their edges to the leaders; only some messages are displayed] 1. Compute e(v,F') and send it to l(F')

Extensive Example: Phase 3. [figure: the leaders appoint guardians; only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians

Extensive Example: Phase 3. [figure: the guardians send their edges to v_0; only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags!

Extensive Example: Phase 3, Const_Frags. 1. Construct the logical graph. [figure: logical graph containing the edge with weight 11]

Extensive Example: Phase 3, Const_Frags. 2. Add the edge. [figure: the edge with weight 11 is added; the super-fragment is finished]

Extensive Example: Phase 3. [figure: v_0 sends the added edge to its guardian] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags! 5. Send e to g(e)

Extensive Example: Phase 3. [figure: the guardians broadcast the added edge; only some messages are displayed] 1. Compute e(v,F') and send it to l(F') 2. Select the b = 2 lightest outgoing edges and appoint guardians 3. Send the appointed edge to v_0 4. Const_Frags! 5. Send e to g(e) 6. Broadcast e and update the fragments

Extensive Example: After Phase 3. Done! [figure: the resulting minimum spanning tree]

References
- M. Adler, W. Dittrich, B. Juurlink, M. Kutylowski, and I. Rieping. Communication Optimal Parallel Minimum Spanning Tree Algorithms. In Proc. 10th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 27-36, 1998.
- Z. Lotker, B. Patt-Shamir, and D. Peleg. Distributed MST for Constant Diameter Graphs. In Proc. 20th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 63-71, 2001.
- Z. Lotker, E. Pavlov, B. Patt-Shamir, and D. Peleg. MST Construction in O(log log n) Communication Rounds. In Proc. 15th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 94-100, 2003.
- D. Peleg and V. Rubinovich. Near Tight Lower Bound on the Time Complexity of Distributed MST Construction. SIAM J. Comput., 30:1427-1442, 2000.