Adaptive Routing of QoS-Constrained Media Streams over Scalable Overlay Topologies

Adaptive Routing of QoS-Constrained Media Streams over Scalable Overlay Topologies Gerald Fry and Richard West Boston University Boston, MA 02215 {gfry,richwest}@cs.bu.edu

Introduction Internet growth has stimulated the development of real-time distributed applications, e.g., streaming media delivery, interactive distance learning, and webcasting (e.g., SHOUTcast). Peer-to-peer (P2P) systems are now popular for efficiently locating and retrieving data (e.g., mp3s): Gnutella, Freenet, Kazaa, Chord, CAN, Pastry. To date, there has been limited work on scalable delivery and processing of QoS-constrained data streams.

Objectives Scalable overlay networks: devise a logical network that can support many thousands of hosts, and minimize the average (logical) hop count between nodes. Efficient delivery of data streams: route arbitrary messages (e.g., video data packets) along the overlay topology, and reduce routing latency by considering physical proximity. How can the logical positions of hosts be adapted to reduce lateness with respect to deadlines?

Contributions Focus on scalable delivery of real-time media streams: analysis of k-ary n-cube graphs as structures for overlay topologies; comparison of overlay routing algorithms; dynamic host relocation in logical space based on QoS constraints. Applications: live video broadcasts, resource-intensive sensor streams, data-intensive scientific applications.

Introduction (4) Overview of this talk: definition and properties of k-ary n-cube graphs; optimization through M-region analysis; overlay routing policies; adaptive node relocation based on per-subscriber QoS constraints; concluding remarks and future work.

Definition of k-ary n-cube Graphs A k-ary n-cube graph is defined by two parameters: n = # dimensions, and k = radix (or base) in each dimension. Each node is associated with an identifier consisting of n base-k digits. Two nodes are connected by a single edge iff their identifiers have n-1 identical digits, and the remaining ith digits differ by exactly 1 (modulo k).
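As a sketch of the adjacency rule above, with identifiers as tuples of n base-k digits (the `neighbors` helper name is my own, not from the paper), the neighbors of a node are obtained by perturbing one digit by +/-1 modulo k:

```python
def neighbors(node, k):
    """All nodes adjacent to `node` in a k-ary n-cube: identifiers are
    tuples of n base-k digits, and an edge joins two identifiers that
    agree in n-1 digits and differ by exactly 1 (mod k) in the other."""
    result = set()
    for i in range(len(node)):
        for delta in (1, -1):
            other = list(node)
            other[i] = (other[i] + delta) % k
            result.add(tuple(other))
    return result
```

Note that for k = 2 the +1 and -1 moves coincide, so each node has n neighbors, while for k > 2 it has 2n, matching the degree properties on the next slide.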

Properties of k-ary n-cube Graphs M = k^n nodes in the graph. If k = 2, the degree of each node is n; if k > 2, the degree of each node is 2n. Worst-case hop count between nodes: n*floor(k/2). Average-case path length: A(k,n) = n(k^2/4)(1/k) = nk/4 (for even k). Optimal dimensionality: n = ln M minimizes A(k,n) subject to k^n = M.

Overlay Routing Example Overlay is modeled as an undirected k-ary n-cube graph. An edge in the overlay corresponds to a unicast path in the physical network. (Figure: physical view of hosts A-H attached to routers R1 and R2 with per-link costs, alongside the logical view of the same hosts at 3-bit node identifiers [000]-[111].)

Average Hop Count H(k,n): sum of the distances from any one node to every other node in a k-ary n-cube graph. Proof by induction on dimensionality, n. Base case: H(k,1) = k^2/4. Inductive step: H(k,n) = H(k,n-1)*k + k^(n-1)*(k^2/4). Thus, H(k,n) = k^n * n(k^2/4)(1/k). Average hop count between pairs of nodes: A(k,n) = H(k,n) / k^n = n(k^2/4)(1/k) = nk/4.
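The recurrence and closed form above can be checked numerically with a breadth-first search over a small cube (a sketch; `bfs_distances` is my own naming):

```python
from collections import deque

def bfs_distances(k, n):
    """Hop distance from node (0, ..., 0) to every node of the k-ary n-cube."""
    start = (0,) * n
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for i in range(n):              # move +/-1 (mod k) in each dimension
            for delta in (1, -1):
                nxt = list(node)
                nxt[i] = (nxt[i] + delta) % k
                nxt = tuple(nxt)
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
    return dist

# For the 4-ary 3-cube (even k): H(4,3) = 4^3 * 3*4/4 = 192 and A(4,3) = 3.
dist = bfs_distances(4, 3)
H = sum(dist.values())
A = H / 4 ** 3
```

The maximum value in `dist` also confirms the worst-case bound n*floor(k/2) = 6 for this instance.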

Worst-case Hop Count Each k-ary n-cube node is represented by a string of n digits in base k. Given two node identifiers A = a_1,a_2,...,a_n and B = b_1,b_2,...,b_n, the distance between the corresponding nodes is given by the sum, over each dimension i, of the circular distance min((a_i - b_i) mod k, (b_i - a_i) mod k). The maximum distance in one dimension is floor(k/2); thus, the maximum path length for n dimensions is n*floor(k/2).
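The per-digit circular distance above fits in one line (a minimal sketch; `cube_distance` is my own naming):

```python
def cube_distance(a, b, k):
    """Shortest hop count between two k-ary n-cube identifiers: the sum
    over dimensions of the circular distance between digits. Each term is
    at most floor(k/2), so the total is at most n*floor(k/2)."""
    return sum(min((x - y) % k, (y - x) % k) for x, y in zip(a, b))
```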

Logical versus Physical Hosts The mapping between physical and logical hosts is not necessarily one-to-one: M logical nodes versus m physical hosts. For routing, we must have m <= M; the destination identifier would be ambiguous otherwise. If m < M, some logical nodes are unassigned.

M-region Analysis Hosts joining / leaving the system change the value of m. The initial system is bootstrapped with an overlay that optimizes A(k,n). Let an M-region be the range of values of m for which a given (k,n) minimizes A(k,n).

Calculating M-regions

    Calculate_M-Region(int m) {
        i = 1; k = j = 2;
        while (M[i,j] < m) i++;      // Start with a hypercube
        n = i; maxm = M[i,j]; mina = A[i,j]; incj = 1;
        while (i > 0) {
            j += incj; i--;
            if ((A[i,j] <= mina) && (M[i,j] > maxm)) {
                incj = 1; maxm = M[i,j]; mina = A[i,j];
                n = i; k = j;
            } else
                incj = 0;
        }
        return k, n;
    }

Try to find the largest M such that m <= M and A(k,n) is minimized.
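The same search can be sketched by brute force in Python, under assumptions of my own: A(k,n) = n*floor(k^2/4)/k (the floor matches k^2/4 for even k and handles odd k), M = k^n, and ties broken toward the larger M per the "largest M" goal. `find_kn` is my naming, not the authors' code:

```python
def find_kn(m):
    """Return (k, n) minimizing the average hop count A(k, n) over
    k-ary n-cubes with M = k^n >= m nodes, preferring larger M on ties."""
    best = None
    for k in range(2, m + 1):
        n = 1
        while k ** n < m:            # smallest n with k^n >= m
            n += 1
        A = n * (k * k // 4) / k     # average hop count for this (k, n)
        key = (A, -(k ** n))         # minimize A, then maximize M
        if best is None or key < best[0]:
            best = (key, k, n)
    return best[1], best[2]
```

For m = 6500 this yields k = 3, n = 8 (M = 6561), agreeing with the example on the M-regions chart.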

M-regions (Figure: M-region boundaries, plotting M = k^n and the chosen values of k and n as m grows; e.g., m = 6500 falls in the region with k=3, n=8, M=6561.)

Overlay Routing Three routing policies are investigated: Ordered Dimensional Routing (ODR); Random Ordering of Dimensions (Random); and Proximity-based Greedy Routing (Greedy), which forwards a message to the neighbor along the lowest-cost logical edge that reduces the hop-distance to the destination. Experimental analysis is done via simulation: 5050 routers in the physical topology (transit-stub) and 65536 hosts.
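A minimal sketch of Ordered Dimensional Routing under the identifier scheme above (`odr_path` is my own naming; the Greedy policy would additionally pick among hop-reducing neighbors by physical link cost):

```python
def odr_path(src, dst, k):
    """Ordered Dimensional Routing: correct identifier digits one fixed
    dimension at a time, stepping +/-1 (mod k) in the shorter ring
    direction, so the path length equals the logical hop distance."""
    path = [src]
    cur = list(src)
    for i in range(len(src)):
        while cur[i] != dst[i]:
            forward = (dst[i] - cur[i]) % k
            backward = (cur[i] - dst[i]) % k
            cur[i] = (cur[i] + (1 if forward <= backward else -1)) % k
            path.append(tuple(cur))
    return path
```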

16D Hypercube versus 16-ary 4-cube (Figure: cumulative % of subscribers versus delay penalty relative to unicast, for ODR, Random, and Greedy routing on the 2x16 and 16x4 overlays; Greedy routing is up to 40% better.)

Adaptive Node Assignment Initially, hosts are assigned random node IDs. Publisher hosts announce the availability of channels, and super-nodes make this info available to peers. Hosts subscribing to published channels specify QoS constraints (e.g., latency bounds). Subscribers may be relocated in logical space to improve QoS by considering the physical proximities of publishers and subscribers.

Adaptive Node Assignment (2)

    Subscribe (Subscriber S, Publisher P, Depth d) {
        if (d == D) return;
        find a neighbor i of P such that i.cost(P) is maximal over all neighbors;
        if (S.cost(P) < i.cost(P))
            swap logical positions of i and S;
        else
            Subscribe (S, i, d+1);
    }

Swap S with a node i up to D logical hops from P.
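The pseudocode above might look like this in Python (a sketch under assumed helpers of my own naming: `neighbors(P)` lists P's logical neighbors, `cost(x, y)` is the measured unicast latency between hosts, and `position` maps hosts to logical node IDs):

```python
def subscribe(S, P, d, D, neighbors, cost, position):
    """Adaptive relocation sketch: within D logical hops of publisher P,
    swap subscriber S with the neighbor whose latency to P is worst,
    whenever S's own latency to P is lower; otherwise recurse outward."""
    if d == D:
        return
    i = max(neighbors(P), key=lambda x: cost(x, P))   # costliest neighbor of P
    if cost(S, P) < cost(i, P):
        position[S], position[i] = position[i], position[S]
    else:
        subscribe(S, i, d + 1, D, neighbors, cost, position)
```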

Simulation Results Randomly generated physical topology with 5050 routers. M = 65536 and the topology is a 16D hypercube. A randomly chosen publisher plus some number of subscribers with QoS (latency) constraints. The adaptive algorithm is used with D=1. Greedy routing is performed with and without adaptive node assignment.

Success Ratio vs Group Size (Figure: success ratio versus group size, 512 to 65536 subscribers, for the adaptive and non-adaptive methods.) Success if routing latency <= QoS constraint, c. Success ratio = (# successes) / (# subscribers). Adaptive node assignment shows up to 5% improvement, and can potentially be improved further.

Lateness versus Group Size (Figure: average normalized lateness versus group size, 512 to 65536 subscribers, for the adaptive and non-adaptive methods.) Normalized lateness = 0 if S.cost(P) <= c; otherwise (S.cost(P) - c)/c. The adaptive method can yield >20% latency reduction.
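The lateness metric above is a one-liner (a sketch; `normalized_lateness` is my own naming, with `latency` standing in for S.cost(P)):

```python
def normalized_lateness(latency, c):
    """0 when the routing latency meets the QoS constraint c; otherwise
    the fractional overshoot (latency - c) / c."""
    return 0.0 if latency <= c else (latency - c) / c
```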

Adaptive Node ID Assignment Initial results look encouraging. Improved performance is likely if adaptation considers nodes at greater depth, D, from publishers; the experiments only considered D=1. Adaptive node assignment attempts to minimize the maximum delay between publishers and subscribers.

Link Stress Previously, we aimed to reduce routing latencies. It is also important to consider physical link stress: the average number of times a message is forwarded over a given link to multicast info from publisher(s) to all subscribers.

Link Stress Simulation Results 16D hypercube overlay on a random physical network. A randomly chosen publisher plus varying groups of subscribers. Multicast trees are computed from the union of routing paths between the publisher and each subscriber. Measure average physical link stress: (# times a message is forwarded over a link) / (# unique links required to route the message to all subscribers).
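The stress metric can be sketched as a ratio over the physical links traversed by the per-subscriber paths (my own naming; `paths` holds, for each subscriber, the list of physical links its overlay route crosses, and counting per path over-counts shared prefixes, so this shows the metric's shape rather than the paper's exact computation):

```python
from collections import Counter

def average_link_stress(paths):
    """(# times a message is forwarded over any physical link) divided by
    (# unique physical links in the union of routing paths)."""
    traversals = Counter()
    for path in paths:
        for link in path:
            traversals[tuple(sorted(link))] += 1   # treat links as undirected
    return sum(traversals.values()) / len(traversals)
```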

Lateness versus Group Size (Figure: average normalized lateness versus group size, 512 to 32768 subscribers.) Variations in lateness (for pairs of columns) are due in part to the random locations of subscribers relative to the publisher.

Link Stress versus Group Size (Figure: average link stress versus group size, 512 to 32768 subscribers.) Greedy routing performs worse as group size increases; this appears to be due to greater intersection of physical links in the multicast tree (i.e., fewer unique physical links).

Conclusions Analysis of k-ary n-cube graphs as overlay topologies: minimal average hop count; M-region analysis determines optimal values for k and n. Greedy routing leverages physical proximity information and yields significantly lower delay penalties than existing approaches based on P2P routing. Adaptive node ID re-assignment for satisfying QoS constraints.

Future and Ongoing Work Further investigation into alternative adaptive algorithms: how does changing the overlay structure affect per-subscriber QoS constraints? Analysis of stability as hosts join and depart from the system. The goal is to build an adaptive distributed system with the QoS guarantees of NARADA and the scalability of systems such as Pastry/Scribe.