GIAN Course on Distributed Network Algorithms. Spanning Tree Constructions


GIAN Course on Distributed Network Algorithms Spanning Tree Constructions Stefan Schmid @ T-Labs, 2011

Spanning Trees Attractive infrastructure: a sparse subgraph ("loop-free backbone") connecting all nodes. E.g., cheap flooding. Spanning Tree: cycle-free subgraph spanning all nodes. 2

Spanning Trees Is this a spanning tree?

Spanning Trees No, not cycle-free! 4

Spanning Trees Is this a spanning tree?

Spanning Trees No, not spanning this node! 6

Spanning Trees Is this a spanning tree?

Spanning Trees No, disconnected: spanning forest, not spanning tree! 8

Spanning Trees Is this a spanning tree?

Spanning Trees Yes, a spanning tree 10

Applications Efficient Broadcast and Aggregation: used in Ethernet networks to avoid Layer-2 forwarding loops (Spanning Tree Protocol); in ad-hoc networks: an efficient backbone to broadcast and aggregate data using a linear number of transmissions. Algebraic Gossip: disseminating multiple messages in a large communication network; random communication pattern with neighbors; gossip based on local interactions. 11

Applications ConvergeCast: a fundamental network service. Efficient with spanning trees! Efficient Broadcast and Aggregation: used in Ethernet networks to avoid Layer-2 forwarding loops (Spanning Tree Protocol); in ad-hoc networks: an efficient backbone to broadcast and aggregate data using a linear number of transmissions. Algebraic Gossip: disseminating multiple messages in a large communication network; random communication pattern with neighbors; gossip based on local interactions. 12

Types of Spanning Trees BFS: a.k.a. shortest-distance spanning tree (may also be weighted); the spanning tree includes shortest paths from a given root to all nodes; interesting e.g. for fast broadcast. MST: minimum link-cost spanning tree; interesting, e.g., for least routing cost or energy cost. How to compute a spanning tree in the LOCAL model? 13

Benefits from Spanning Tree: Fundamental ConvergeCast 14

Benefits from Spanning Tree: Fundamental ConvergeCast Combination of broadcast from root to leaves and information aggregation from leaves to root! 15

Benefits from Spanning Tree: Fundamental ConvergeCast Want to know average temperature! 16

Benefits from Spanning Tree: Fundamental ConvergeCast Want to know average temperature! Broadcast along spanning tree: O(n) rather than O(m) messages! Broadcast Round 1 17

Benefits from Spanning Tree: Fundamental ConvergeCast Want to know average temperature! Broadcast along spanning tree: O(n) rather than O(m) messages! Broadcast Round 2 18

Benefits from Spanning Tree: Fundamental ConvergeCast Want to know average temperature! Broadcast along spanning tree: O(n) rather than O(m) messages! Broadcast Round 3 19

Benefits from Spanning Tree: Fundamental ConvergeCast Want to know average temperature! Broadcast along spanning tree: O(n) rather than O(m) messages! O(n) messages for broadcast! 20

Benefits from Spanning Tree: Fundamental ConvergeCast But how to aggregate information now in O(n) messages?! 21

Benefits from Spanning Tree: Fundamental ConvergeCast But how to aggregate information now in O(n) messages?! Aggregate from leaves toward root, with in-network processing! 22

Benefits from Spanning Tree: Fundamental ConvergeCast Temp! Temp! Temp! Temp! Temp! Aggregate Round 1 23

Benefits from Spanning Tree: Fundamental ConvergeCast Leaves send Temp! Inner nodes wait for their children. Aggregate Round 1 24

Benefits from Spanning Tree: Fundamental ConvergeCast Inner nodes should not send yet: wait for the children, to avoid multiple messages over the same spanning tree link! Aggregate Round 1 25

Benefits from Spanning Tree: Fundamental ConvergeCast agg Aggregate agg Round 2 26

Benefits from Spanning Tree: Fundamental ConvergeCast agg Aggregate Round 3 27

Benefits from Spanning Tree: Fundamental ConvergeCast agg Aggregate Round 4 28

Benefits from Spanning Tree: Fundamental ConvergeCast agg Finished! 29

Benefits from Spanning Tree: Fundamental ConvergeCast agg How good is this algorithm? Can we do ConvergeCast with fewer messages? Finished! 30

Benefits from Spanning Tree: Fundamental ConvergeCast agg How good is this algorithm? Can we do ConvergeCast with fewer messages? Finished! Let's talk about lower bounds! 31
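The broadcast-then-aggregate scheme above can be sketched in a few lines of Python (a minimal centralized simulation; the tree, the temperature values, and the function name are illustrative, not from the lecture):

```python
# Sketch of ConvergeCast on a rooted spanning tree: aggregate (sum, count)
# pairs from the leaves toward the root with in-network processing, so the
# root learns the average using one message per tree link, i.e. O(n) total.

# Spanning tree as parent pointers; node 0 is the root.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
temp = {0: 20.0, 1: 22.0, 2: 18.0, 3: 21.0, 4: 19.0, 5: 20.0}

children = {v: [] for v in parent}
for v, p in parent.items():
    if p is not None:
        children[p].append(v)

def aggregate(v):
    """A node waits for all its children, then sends a single (sum, count)
    message up: no spanning tree link carries more than one message."""
    s, c = temp[v], 1
    for child in children[v]:
        cs, cc = aggregate(child)
        s += cs
        c += cc
    return s, c

total, count = aggregate(0)
print("average temperature:", total / count)  # root aggregates n readings with n-1 messages
```

The recursion mirrors the "wait for children" rule from the slides: a node only reports upward once its whole subtree has been aggregated.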

Recall: Local Algorithm Send...... receive...... compute. 32

Let us introduce some definitions Distance, Radius, Diameter: The distance between two nodes is the number of hops. The radius of a node is its max distance to any other node. The radius of a graph is the minimum radius of any node. The diameter of a graph is the max distance between any two nodes. Relationship between R and D? 33

Let us introduce some definitions In general: R ≤ D ≤ 2R. Upper bound: the max distance cannot be longer than going via any given node (2x the radius). Lower bound: contradicts the definition of the radius. In the complete graph, for all nodes: R = D. On the line, for the border nodes: 2R = D. 34

Let us introduce some definitions In general: R ≤ D ≤ 2R. Upper bound: the max distance cannot be longer than going via any given node (2x the radius). Lower bound: contradicts the definition of the radius. In the complete graph, for all nodes: R = D. On the line, for the border nodes: 2R = D. Therefore, in the following, we will express all asymptotic results in D (not R). 35
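The definitions of distance, radius, and diameter, and the bound R ≤ D ≤ 2R, can be checked on a small example via BFS (graph and names are illustrative, not from the lecture):

```python
# Compute hop distances by BFS, then radius R and diameter D of the graph,
# and verify R <= D <= 2R. The example graph is a line of 5 nodes, where
# the border nodes realize D = 2R.
from collections import deque

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a line

def distances(src):
    """Hop distance from src to every node (plain BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Radius of a node: its max distance to any other node.
radius_of = {v: max(distances(v).values()) for v in graph}
R = min(radius_of.values())   # radius of the graph: minimum over all nodes
D = max(radius_of.values())   # diameter: max distance between any two nodes
print(R, D)                   # on the line of 5 nodes: R = 2, D = 4 = 2R
assert R <= D <= 2 * R
```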

Relevance: Radius People enjoy identifying nodes of small radius in a graph! E.g., Erdös number, Kevin Bacon number, joint Erdös-Bacon number, etc. 36

Lower Bounds for Broadcast Message complexity? Time complexity? 37

Lower Bounds for Broadcast Message complexity? Each node must receive the message: so at least n-1 messages. Time complexity? The radius of the source: each node needs to receive the message. 38

Lower Bounds for Broadcast Message complexity? Each node must receive the message: so at least n-1 messages. Time complexity? The radius of the source: each node needs to receive the message. How to achieve this? 39

Lower Bounds for Broadcast Message complexity? Each node must receive the message: so at least n-1 messages. Time complexity? The radius of the source: each node needs to receive the message. How to achieve this? Compute a breadth-first spanning tree! But how? Next! 40

Idea: Compute BFS using Flooding in LOCAL Model! 41

Idea: Compute BFS using Flooding in LOCAL Model! Send to all neighbors! Round 1 42

Idea: Compute BFS using Flooding in LOCAL Model! Choose parent for spanning tree! Round 1 Invariant: parent has shorter distance to root: loop-free! 43

Idea: Compute BFS using Flooding in LOCAL Model! Send to all neighbors! Round 2 44

Idea: Compute BFS using Flooding in LOCAL Model! Choose a parent: if multiple messages arrive at the same time, take an arbitrary parent with shorter distance! Round 2 Invariant: parent has shorter distance to root: loop-free! 45

Idea: Compute BFS using Flooding in LOCAL Model! Round 3 46

Idea: Compute BFS using Flooding in LOCAL Model! Round 3 Invariant: parent has shorter distance to root: loop-free! 47

Idea: Compute BFS using Flooding in LOCAL Model! Round 4 48

Idea: Compute BFS using Flooding in LOCAL Model! BFS! Invariant: parent has shorter distance to root: loop-free! 49

Idea: Compute BFS using Flooding in LOCAL Model! BFS! But careful! We assumed that messages propagate in a synchronous manner! What if not? 50
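The synchronous flooding idea can be simulated round by round (a sketch in Python; the graph is illustrative, not from the slides):

```python
# Round-by-round simulation of BFS flooding in the synchronous LOCAL model.
# In round r the message reaches exactly the nodes at distance r from the
# root, and each newly reached node picks the sender as its parent.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
root = 0

parent = {root: None}
frontier = [root]
while frontier:
    next_frontier = []
    for u in frontier:
        for v in graph[u]:
            if v not in parent:          # first message wins: v chooses u
                parent[v] = u            # invariant: parent is closer to root
                next_frontier.append(v)
    frontier = next_frontier

print(parent)  # a BFS spanning tree after (radius of the root) rounds
```

Note that the "first message wins" rule is only correct here because rounds are synchronous; the next slides show why it fails in an asynchronous setting.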

Bad example Careful: in an asynchronous environment, a node should not make the first successful sender its parent! 51

Bad example How to overcome this? Two options: Dijkstra and Bellman-Ford. Careful: in an asynchronous environment, a node should not make the first successful sender its parent! 52

Distributed BFS: Dijkstra Flavor Essentially a synchronizer for the LOCAL model! Idea: overcome asynchronous problem by proceeding in phases! 53

Distributed BFS: Dijkstra Flavor Phase 1 Explore 1-neighborhood only: set Round-Trip-Time to 1. (Round 1) Idea: overcome asynchronous problem by proceeding in phases! 54

Distributed BFS: Dijkstra Flavor Phase 1 (Round 2) Idea: overcome asynchronous problem by proceeding in phases! 55

Distributed BFS: Dijkstra Flavor Start Phase 2! (Propagate along the existing spanning tree!) Phase 2 (Round 1) Idea: overcome asynchronous problem by proceeding in phases! 56

Distributed BFS: Dijkstra Flavor Start Phase 2! I am at distance 1 from root! Phase 2 (Round 2) Idea: overcome asynchronous problem by proceeding in phases! 57

Distributed BFS: Dijkstra Flavor join! Phase 2 (Round 3) no! no! join! join! Idea: overcome asynchronous problem by proceeding in phases! Choose parent with smaller distance! 58

Distributed BFS: Dijkstra Flavor Start Phase 3! I am at distance 2 from root! Phase 3 Idea: overcome asynchronous problem by proceeding in phases! 59

General Scheme In each phase: expand spanning tree by one hop: explore neighborhood! Phase i Phase i+1 60

General Scheme Start i Start i+1 Phase i Phase i+1 61

General Scheme For efficiency: can propagate start i messages along the pre-established spanning tree! At the edge of the tree we need to try all links. Start i Start i+1 Phase i Phase i+1 62

Distributed BFS: Dijkstra Flavor join i join i Same for responses! (Aggregated along existing BFS) Phase i Phase i+1 63

Distributed BFS: Dijkstra Flavor Time Complexity? Phase i Phase i+1 64

Distributed BFS: Dijkstra Flavor Time Complexity? O(D) phases, each taking time O(D): O(D^2), where D is the radius of the root. Phase i Phase i+1 65

Distributed BFS: Dijkstra Flavor Message Complexity? Phase i Phase i+1 66

Distributed BFS: Dijkstra Flavor Message Complexity? start and join propagation inside the spanning tree: O(n) per phase: O(nD) in total. Plus: test each edge once: join, ACK/NAK at each edge: O(m) in total. Phase i Phase i+1 67

Distributed BFS: Dijkstra Flavor Message Complexity? start and join propagation inside the spanning tree: O(n) per phase: O(nD) in total. Plus: test each edge once: join, ACK/NAK at each edge: O(m) in total. O(nD+m) Phase i Phase i+1 68

Distributed BFS: Dijkstra Flavor Dijkstra: find the next closest node ("on the border") to the root. Dijkstra Style Divide the execution into phases. In phase p, the nodes at distance p from the root are detected. Let T_p be the tree of phase p. T_1 is the root plus all its direct neighbors. Repeat (until no new nodes are discovered): 1. Root starts phase p by broadcasting start p within T_p 2. A leaf u of T_p (= node discovered only in the last phase) sends join p+1 to all quiet neighbors v (u has not talked to v yet) 3. A node v hearing join for the first time sends back ACK: it becomes a leaf of tree T_{p+1}; otherwise v replies NACK (needed since async!) 4. The leaves of T_p collect all answers and start an Echo Algorithm towards the root 5. Root initiates the next phase 69
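The phased construction can be sketched as follows (a centralized Python simulation; the graph is illustrative, and the start/join/ACK message passing is collapsed into direct bookkeeping):

```python
# Sketch of the Dijkstra-style distributed BFS: in phase p, the leaves of
# the current tree probe their quiet neighbors, and exactly the nodes at
# distance p join. Each phase additionally costs O(D) time for the start
# wave down and the echo wave back up inside the tree (not modeled here).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
root = 0

parent = {root: None}
leaves = [root]                  # nodes discovered in the last phase
phase = 0
while leaves:
    phase += 1
    new_leaves = []
    for u in leaves:             # leaves send join to quiet neighbors
        for v in graph[u]:
            if v not in parent:  # v hears join for the first time: ACK
                parent[v] = u
                new_leaves.append(v)
            # otherwise v would reply NACK (needed in the async setting)
    leaves = new_leaves          # echo answers to the root, next phase

print(phase - 1, parent)         # number of productive phases = radius of the root
```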

Distributed BFS: Bellman-Ford Flavor 70

Distributed BFS: Bellman-Ford Flavor Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! 71

Distributed BFS: Bellman-Ford Flavor (figure: root labeled 0) Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! Initialize: root distance 0, other nodes initialized to ∞. 72

Distributed BFS: Bellman-Ford Flavor (figure: root labeled 0) Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! Start: root sends a "distance 1" packet to its neighbors. 73

Distributed BFS: Bellman-Ford Flavor (figure: distance labels) Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! Repeat: whenever a node receives a new packet: check whether it gives a new minimal distance (if so, change parent), and propagate!

Distributed BFS: Bellman-Ford Flavor (figure: distance labels) Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! A packet may race through the network: asynchronous! Repeat: whenever a node receives a new packet: check whether it gives a new minimal distance (if so, change parent), and propagate! 75

Distributed BFS: Bellman-Ford Flavor (figure: distance labels) Idea: Don't go through these time-consuming phases; just blast out messages, but with distance information! A packet may race through the network: asynchronous! But sooner or later, the node will learn the shorter distance! Repeat: whenever a node receives a new packet: check whether it gives a new minimal distance (if so, change parent), and propagate! 76

Distributed BFS: Bellman-Ford Flavor Bellman-Ford: compute shortest distances by flooding on all paths; best predecessor = parent in tree. Bellman-Ford Style Each node u stores d_u, the distance from u to the root. Initially, d_root = 0 and all other distances are ∞. Root starts the algorithm by sending 1 to all neighbors. 1. If a node u receives a message y with y < d_u: d_u := y; send y+1 to all other neighbors 77
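The Bellman-Ford-style rule can be simulated with an adversarial message queue standing in for asynchrony (a Python sketch; the graph and the random delivery order are illustrative):

```python
# Sketch of the Bellman-Ford-style distributed BFS: each node keeps its
# best known distance d_u and re-floods y+1 whenever it improves. Messages
# are delivered in arbitrary order to mimic an asynchronous network.
import math
import random

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
root = 0

d = {v: math.inf for v in graph}
d[root] = 0
parent = {root: None}

# In-flight messages: (sender, receiver, claimed distance for receiver).
msgs = [(root, v, 1) for v in graph[root]]
random.seed(1)
while msgs:
    random.shuffle(msgs)              # deliver in arbitrary (async) order
    u, v, y = msgs.pop()
    if y < d[v]:                      # new minimal distance: adopt, re-flood
        d[v] = y
        parent[v] = u
        msgs += [(v, w, y + 1) for w in graph[v] if w != u]

print(d)  # hop distances from the root: {0: 0, 1: 1, 2: 1, 3: 2}
```

Whatever the delivery order, every node eventually settles on its true hop distance; the order only affects how many intermediate "improvement" messages are sent, which is exactly the O(mn) effect analyzed on the next slide.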

Analysis How is this defined?! Assuming a unit upper bound on the per-link delay! Time Complexity? Message Complexity? 78

Analysis Time Complexity? O(D) where D is the diameter of the graph: the worst-case propagation time is simply the diameter. By induction: by time d, every node at distance d has received the value d. Clearly true for d=0 and d=1. A node at distance d has a neighbor at distance d-1 that received d-1 on time by the induction hypothesis; it will send d in the next time slot... Message Complexity? O(mn) where m is the number of edges and n the number of nodes. Because: a node can reduce its distance at most n-1 times (recall: asynchronous!). Each of these times it sends an update message to all its neighbors. 79

Bellman-Ford with Many Messages (figure: example network with initial distance estimates d=1 through d=5) 80

Bellman-Ford with Many Messages (figure: distance estimates improve) Everyone has a new best distance and informs its neighbors! 81

Bellman-Ford with Many Messages (figure: distance estimates improve again) Everyone has a new best distance and informs its neighbors! 82

Discussion Which algorithm is better? Dijkstra has the better message complexity, Bellman-Ford the better time complexity. Can we do better? Yes, but not in this course... Remark: Asynchronous algorithms can be made synchronous, e.g., by a central controller or, better, local synchronizers. 83

How to compute an MST? MST: spanning tree with edges of minimal total weight. (figure: weighted example graph)

Idea: Exploit Basic Fact of MST: Blue Edges Blue Edge Let T be an MST and T' a subgraph of T. Edge e=(u,v) is an outgoing edge if u ∈ T' and v ∉ T'. The outgoing edge of minimal weight is called the blue edge. Lemma If T is the MST and T' a subgraph of T, then the blue edge of T' is also part of T. It holds: the lightest edge across a cut must be part of the MST! By contradiction: otherwise we get a cheaper spanning tree by swapping the two cut edges! (figure: example with edge weights 1, 3, 5)
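The blue-edge lemma can be checked on a small example by computing the MST with Kruskal's algorithm and comparing it against the minimum-weight outgoing edge of a fragment (a sketch; the graph and weights are illustrative):

```python
# Check of the blue-edge lemma: for a fragment T' of the MST, the
# minimum-weight outgoing edge ("blue edge") is itself an MST edge.
edges = [(1, 'a', 'b'), (3, 'b', 'c'), (5, 'a', 'c'), (2, 'c', 'd'), (4, 'b', 'd')]

def kruskal(edges):
    """Standard Kruskal: scan edges by weight, join distinct components."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    mst = set()
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((w, u, v))
    return mst

mst = kruskal(edges)
fragment = {'a', 'b'}                 # a subtree T' of the MST
outgoing = [e for e in edges if (e[1] in fragment) != (e[2] in fragment)]
blue = min(outgoing)                  # minimum-weight outgoing edge
print(blue, blue in mst)              # the blue edge is an MST edge
```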

Gallager-Humblet-Spira 86

Gallager-Humblet-Spira Basic idea: Grow components in parallel and merge them at the blue edge! Using Convergecast. 87

Gallager-Humblet-Spira Assume some components have already emerged. Basic idea: Grow components in parallel and merge them at the blue edge! Using Convergecast. (figure: components T1, T2, T3 with their blue edges) Each component has only one blue edge (its cheapest outgoing edge): loops are impossible, so the blue edges can be taken in parallel! 88

Gallager-Humblet-Spira Basic idea: Grow components in parallel and merge them at the blue edge, using Convergecast! (figure: components T1, T2, T3, each with a leader) Idea: a leader in each component is responsible for finding the blue edge via convergecast! Inside each component: maintain a spanning tree directed towards the leader! 89

Gallager-Humblet-Spira: High-level View Phase 1 Phase 2 Phase 3 Idea: components grow in parallel and merge in a loop-free manner! 90

Gallager-Humblet-Spira: High-level View After round i, the minimal component has size at least 2^i: it doubles in each round! So at most log(n) phases in total! Phase 1 Phase 2 Phase 3 Idea: components grow in parallel and merge in a loop-free manner! 91

Gallager-Humblet-Spira: High-level View But how to determine the blue edge quickly and re-elect a new leader in the merged, larger component? Keep a spanning tree in each component! There we can do efficient convergecast. Minimal fragment size in round i? 2^i. Total number of phases? At most log(n). Phase 1 Phase 2 Phase 3 92

Example: Agree on a New Root How to merge T and T' across (u,v)? (figure: trees T and T' with their roots; edge weights 1, 3, 7, 10) 93

Example: Agree on a New Root How to merge T and T' across (u,v)? (figure: blue edges of T and T') Invariant: a directed and rooted spanning tree in each component! Links point towards the root. 94

Example: Agree on a New Root How to merge T and T' across (u,v)? Step 1: invert the path from root1 to u and from root2 to v. 95

Example: Agree on a New Root How to merge T and T' across (u,v)? Step 1: invert the path from root1 to u and from root2 to v. Step 2: send a merge request across the blue edge (u,v). Here (u,v) is the blue edge only for T, so there is one message! Step 3: as a consequence of the link reversal, v becomes the new overall root! 96

Example: Agree on a New Root How to merge T and T' across (u,v)? Rooted again! Step 1: invert the path from root1 to u and from root2 to v. Step 2: send a merge request across the blue edge (u,v). Here (u,v) is the blue edge only for T, so there is one message! Step 3: as a consequence of the link reversal, v becomes the new overall root! 97

Example: Agree on a New Root How to merge T and T' across (u,v)? Rooted again! What if (u,v) was the blue edge for both T and T'? Just tie-break on who becomes root, u or v, e.g., based on IDs! Step 1: invert the path from root1 to u and from root2 to v. Step 2: send a merge request across the blue edge (u,v). Step 3: as a consequence of the link reversal, v becomes the new overall root! 98

Distributed Kruskal Idea: Grow components by learning the blue edge! But do many fragments in parallel! Gallager-Humblet-Spira Initially, each node is the root of its own fragment. Repeat (until all nodes are in the same fragment): 1. Nodes learn the fragment IDs of their neighbors 2. The root of each fragment finds the blue edge (u,v) by convergecast 3. The root sends a message to u (inverting the parent-child relations) 4. If v also sent a merge request over (u,v), u or v becomes the new root depending on the smaller ID (making the trees directed) 5. The new root informs its fragment about the new root (convergecast on the MST of the fragment): new fragment ID 99
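The fragment-merging loop can be sketched centrally (a Python sketch of the Boruvka-style idea underlying Gallager-Humblet-Spira; the convergecasts and leader bookkeeping are collapsed into direct minima, the graph is illustrative, and edge weights are assumed distinct as in GHS):

```python
# Centralized sketch of GHS-style fragment merging: in each phase, every
# fragment finds its blue edge (cheapest outgoing edge), and all blue
# edges are added in parallel. Distinct weights guarantee no cycles.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
nodes = {0, 1, 2, 3}

fragment = {v: v for v in nodes}      # root pointer of each node's fragment

def find(v):
    while fragment[v] != v:
        v = fragment[v]
    return v

mst = set()
while len({find(v) for v in nodes}) > 1:
    # Each fragment determines its blue edge (done by convergecast in GHS).
    blue = {}
    for w, u, v in edges:
        fu, fv = find(u), find(v)
        if fu != fv:
            for f in (fu, fv):
                if f not in blue or (w, u, v) < blue[f]:
                    blue[f] = (w, u, v)
    # Merge along all blue edges in parallel.
    for w, u, v in set(blue.values()):
        fu, fv = find(u), find(v)
        if fu != fv:
            fragment[fu] = fv
            mst.add((w, u, v))

print(sorted(mst))  # [(1, 0, 1), (2, 1, 2), (4, 2, 3)]
```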

Analysis Time Complexity? Message Complexity? Each phase mainly consists of two convergecasts, so O(D) time and O(n) messages per phase? 100

Analysis Time Complexity? O(n log n) where n is the graph size. The size of the smallest fragment at least doubles in each phase, so there are log n phases. Each phase mainly consists of two convergecasts, but the spanning tree is not a BFS tree, so a convergecast may take up to n hops: O(n) time per phase. Message Complexity? O(m log n) where m is the number of edges: at most O(1) messages on each edge per phase. In each phase, nodes need to learn the fragment IDs of all their neighbors: O(m) messages (again and again: the ID changes in each phase); the two convergecasts add O(n) messages per phase. Really needed? Yes, we can do better. 101

Analysis (continued) Note: this algorithm can also solve leader election! Leader = last surviving root! 102

Homework: License to Match 103

Literature for further reading: - Peleg's book End of lecture

Exercise 2: License to Match Only a subset of the nodes participates in the mission! Initially, a node only knows whether it itself participates. Need to build pairs. Communication along the tree: use each link only once! (NOT OK: link used twice.)

Exercise 2: License to Match Idea: Convergecast! Pair up from the leaves upward. At most one unmatched node per subtree: either match it with myself or propagate it upward (links are used at most once!). If the total number of active nodes is even, there is a solution.

Exercise 2: License to Match

Exercise 2: License to Match Idea: Convergecast! 1. Pair up from the leaves upward. 2. At most one unmatched node per subtree: either match it with myself or propagate it upward (links are used at most once!). 3. If the total number of active nodes is even, there is a solution.
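The convergecast pairing can be sketched as follows (a Python sketch of one possible solution; the tree, the active set, and the function name are illustrative, not the official exercise solution):

```python
# Sketch of pairing by convergecast: each subtree reports at most one
# unmatched active node upward, so every tree link carries at most one
# message. An intermediate node pairs up whatever is pending, including
# itself if it is active.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
active = {1, 3, 4, 5}                    # even number of participants

pairs = []

def up(v):
    """Return the (at most one) unmatched active node in v's subtree."""
    pending = [u for c in children[v] if (u := up(c)) is not None]
    if v in active:
        pending.append(v)
    while len(pending) >= 2:             # pair up inside this subtree
        pairs.append((pending.pop(), pending.pop()))
    return pending[0] if pending else None

leftover = up(0)
print(pairs, leftover)                   # leftover is None iff |active| is even
```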

Exercise 2: License to Match Analysis Runtime is in the order of the tree depth. The number of messages is linear in the tree size.