CRESCENDO
George S. Nomikos. Advisor: Dr. George Xylomenos

CRESCENDO
Implementation of Hierarchical Chord (Crescendo) according to the Canon paradigm, and evaluation, via simulation over realistic network topologies, of Crescendo's advantages over Chord.
George S. Nomikos
Advisor: Dr. George Xylomenos
MSc in Computer Science, Department of Informatics
Athens University of Economics and Business

Motivation
All current DHTs are flat, non-hierarchical structures:
- No single point of failure; homogeneity.
- Decentralization, scalability.
Why a hierarchical design? Hierarchies exist: the physical network is hierarchical.
- Fault isolation, security.
- Efficient caching and bandwidth usage.
- Adaptation to the underlying physical network.
- Hierarchical storage of content.
- Hierarchical access control.
Goal: inherit the best of both designs with hierarchical DHTs.

Motivation: Hierarchical DHTs
- Hierarchical DHTs maintain all the advantages of flat DHTs and add even more.
- No current simulator offers hierarchical DHT support, and no other module exists: this is the first and only hierarchical DHT implementation available.
- Implemented for the OMNeT++/OverSim simulation environment.

Chord
- A hash function (SHA-1) assigns each node and key an m-bit identifier.
- Nodes and keys are arranged on a circle.
- Each node:
  - has links to its previous node (predecessor) and its next node (successor);
  - maintains a routing table with up to m entries, called the finger table;
  - periodically runs a stabilization protocol to detect newly joined and failed nodes.
- The node responsible for a key is the successor of the key.
- Finger table entries have the form finger[k] = first node that succeeds (n + 2^(k-1)) mod 2^m, where 1 <= k <= m and n is the current node (see the sketch below).
- Example Chord ring: 10 nodes, 5 keys, m = 6 bits.
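
As a small worked example of the finger rule, this C++ sketch (a hypothetical helper, not part of the thesis code) computes the finger start identifiers for a node:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Start identifier of each finger for node n on a 2^m identifier circle:
// start[k] = (n + 2^(k-1)) mod 2^m, for 1 <= k <= m.
std::vector<uint64_t> fingerStarts(uint64_t n, unsigned m) {
    const uint64_t ringSize = 1ULL << m;
    std::vector<uint64_t> starts;
    for (unsigned k = 1; k <= m; ++k)
        starts.push_back((n + (1ULL << (k - 1))) % ringSize);
    return starts;
}

int main() {
    // Example ring from the slides: m = 6 bits, identifiers in [0, 64).
    for (uint64_t s : fingerStarts(8, 6))
        std::cout << s << ' ';  // prints: 9 10 12 16 24 40
    std::cout << '\n';
    // finger[k] is then the first live node whose id succeeds start[k].
}
```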

Canon
Networks are hierarchical, and the Canon paradigm mirrors this:
- Recursive structure, constructed bottom-up by merging smaller DHTs.
- Multi-level hierarchies, with a global ring at the top level containing all the smaller rings.
- Canon adapts to the underlying network hierarchy.

Crescendo
Crescendo is Hierarchical Chord built with the Canon paradigm. It merges multiple rings, and all original links are retained. Each node n in one ring creates a link to a node n' in another ring if and only if:
(a) n' is the closest node that is at least distance 2^k away, for some 0 <= k < m; and
(b) n' is closer to n than any node in n's own ring.
Node state: the Chord node state plus the level of the hierarchy. By (a) and (b), finger tables above the lowest level of the hierarchy contain very few links (see the sketch below).
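
A minimal C++ sketch of rules (a) and (b), assuming clockwise ring distance and in-memory sets of node identifiers; ringDist and mergeLinks are hypothetical helpers for illustration, not the OverSim implementation:

```cpp
#include <algorithm>
#include <cstdint>
#include <set>

// Clockwise distance from a to b on a 2^m identifier circle.
uint64_t ringDist(uint64_t a, uint64_t b, unsigned m) {
    const uint64_t ringSize = 1ULL << m;
    return (b + ringSize - a) % ringSize;
}

// Links node n creates into the other ring during a merge: for each k in
// [0, m), take the closest foreign node at least 2^k away (rule a), and
// keep it only if it is closer to n than any node of n's own ring (rule b).
std::set<uint64_t> mergeLinks(uint64_t n, unsigned m,
                              const std::set<uint64_t>& ownRing,
                              const std::set<uint64_t>& otherRing) {
    const uint64_t inf = 1ULL << m;

    // Distance from n to the nearest other node in its own ring (rule b).
    uint64_t nearestOwn = inf;
    for (uint64_t v : ownRing)
        if (v != n) nearestOwn = std::min(nearestOwn, ringDist(n, v, m));

    std::set<uint64_t> links;  // a set removes duplicate candidates
    for (unsigned k = 0; k < m; ++k) {
        uint64_t best = 0, bestDist = inf;
        for (uint64_t v : otherRing) {
            const uint64_t d = ringDist(n, v, m);
            if (d >= (1ULL << k) && d < bestDist) { bestDist = d; best = v; }
        }
        if (bestDist < nearestOwn)  // rule (b)
            links.insert(best);
    }
    return links;
}
```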

Crescendo Merging Procedure
Merging example for node 0 (Ring A) and node 8 (Ring B), where Ring A and Ring B are Chord rings with m = 4 bits and finger distances +1, +2, +4, +8:
- Node 0 links to node 2.
- Node 8 links to node 10 and node 12.
After the merging procedure there is one Crescendo ring containing both Ring A and Ring B.

Crescendo Finger Tables
The merging procedure, including full finger tables, compared with the alternative of running two Chord rings, one local and one global:
- Two Chord rings need 32 extra links.
- Crescendo needs only 12 extra links.
Crescendo thus uses almost 3 times fewer links than the local-plus-global Chord rings.

Crescendo: Two Crucial Properties
- Locality of intra-domain paths: when the node that starts the lookup and the destination node are in the same domain, the lookup never leaves that domain.
- Convergence of inter-domain paths: when different nodes in a domain A route to the same node in another domain, all the routes exit domain A through the same node, namely the closest predecessor of the target node's identifier within domain A.

Crescendo Key Responsibility
Crescendo changes key responsibility: the node responsible for a key is the key's predecessor, rather than its successor as in Chord (see the sketch below).
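
A minimal sketch of the two responsibility rules over a sorted, non-empty set of node identifiers; chordOwner and crescendoOwner are hypothetical helper names:

```cpp
#include <cstdint>
#include <iterator>
#include <set>

// Chord: the responsible node is the key's successor,
// i.e. the first node id >= key, wrapping to the smallest id.
uint64_t chordOwner(const std::set<uint64_t>& nodes, uint64_t key) {
    auto it = nodes.lower_bound(key);
    return (it != nodes.end()) ? *it : *nodes.begin();
}

// Crescendo: the responsible node is the key's predecessor,
// i.e. the last node id <= key, wrapping to the largest id.
uint64_t crescendoOwner(const std::set<uint64_t>& nodes, uint64_t key) {
    auto it = nodes.upper_bound(key);
    return (it != nodes.begin()) ? *std::prev(it) : *nodes.rbegin();
}
```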

Realistic Network Topologies
OMNeT++/OverSim provides physical network topologies with limitations:
- No Autonomous System identification numbers (ASIDs).
- No routing policy weights.
Crescendo requires a realistic physical network topology:
- ASIDs are required, so that nodes can calculate their relation to other nodes.
- Routing policy weights are required.
Solution: extend the BRITE topology generator export tool and OMNeT++ to support:
- Realistic network topologies based on the GT-ITM model.
- Autonomous System identification numbers (ASIDs).
- Routing policy weights.
These realistic network topologies can also be used standalone by other modules that require this level of realism.

Crescendo Implementation
- More than 7000 lines of code (including comments).
- Fully documented code: Doxygen documentation and NED documentation.
- Crescendo implemented as a new autonomous module.
- Realistic topologies based on the GT-ITM model.
- KBRTestApp extended to export analytical statistics.
- Supports a two-level hierarchy: the low level is the local AS; the high level is all ASes merged.
- OverSim architecture: the blue boxes mark the sections where new code was added.

Crescendo Implementation
Crescendo node join procedure (sketched in code below):
1. The node sends a join-local-ring request to the bootstrap node with the same ASID.
2. The node joins the local ring.
3. The node sends a join-global-ring request to its local predecessor.
4. The node joins the global ring, and the join procedure ends.
The predecessor is the node that forwards lookups to the next level of the hierarchy, so a request to join the next-level ring sent to any node other than the predecessor incurs extra hops, while sent to the predecessor it takes the minimum number of hops.
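
A compressed C++ sketch of this two-phase join; all names (Ring, sendJoin, localPred) are hypothetical and only illustrate the order of steps, not the actual OverSim module code:

```cpp
#include <cstdint>
#include <iostream>

enum class Ring { Local, Global };

struct Node {
    uint64_t id;
    int asid;                   // Autonomous System identification number
    Node* localPred = nullptr;  // predecessor in the local ring, set on join

    // Hypothetical message send; a real module would emit a join request.
    void sendJoin(Node& target, Ring r) {
        std::cout << "node " << id << " -> join "
                  << (r == Ring::Local ? "local" : "global")
                  << " ring via node " << target.id << '\n';
    }

    void join(Node& bootstrap) {
        // 1-2. Join the local ring via a bootstrap node with the same ASID.
        sendJoin(bootstrap, Ring::Local);
        localPred = &bootstrap;  // placeholder for the discovered predecessor

        // 3-4. Ask the local predecessor to insert us into the global ring:
        // the predecessor forwards lookups to the next hierarchy level, so
        // routing the join through it costs the minimum number of hops.
        sendJoin(*localPred, Ring::Global);
    }
};
```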

Crescendo Correctness
A simulation verifies that this implementation exhibits Crescendo's two crucial properties. Setup: 16-bit keys, 1000 nodes, 4 ASIDs; lookup for key 2390. [Figure: ASIDs and keys]

Crescendo Evaluation: Simulation Parameters
- Two topologies:
  - 112 ASIDs, 784 access routers, 28 backbone routers.
  - 196 ASIDs, 1372 access routers, 28 backbone routers.
- Two runs per topology with different seeds.
- Parameter: number of nodes in the overlay: 256, 512, 1024, 2048, 4096 (4500 for the multicast simulation).
- Link latency: transit-transit 100 ms, transit-stub 20 ms, stub-stub 5 ms.
- KBRTestApp extended for statistics: physical network hops (direct and overlay path), lookup latency, ASID path, overlay hops.
- More than 50 hours of simulation.

Crescendo Simulation Results: Average Lookup Latency
- Chord's flat design means lookups make many inter-domain hops before reaching the final destination.
- Crescendo adapts to the underlying network hierarchy.
- Crescendo's key responsibility rule often saves one hop compared to Chord.
- Crescendo benefits from intra-domain path locality and inter-domain path convergence.

Crescendo Simulation Results: Physical Network Routing Stretch
The same causes as for lookup latency apply (flat Chord design versus Crescendo's hierarchy adaptation, key responsibility, locality, and convergence); in addition, Crescendo takes more cheap intra-domain hops and fewer expensive inter-domain hops.

Crescendo Simulation Results: Average Overlay Network Hops
Same causes as above; additionally, when there are very few nodes in the network (e.g. one per AS), Crescendo exhibits Chord-like behavior.

Crescendo Simulation Results: Average Physical Network Hops
Same causes as above: Crescendo replaces expensive inter-domain hops with cheap intra-domain hops.

Crescendo Simulation Results: Efficient Bandwidth Usage During Multicast
Setup: 112-ASID network topology, 4500 participating nodes, 5,267,079 hops processed, all nodes looking up the same key.

Overlay Network | Inter-domain Hops | Intra-domain Hops
Chord           | 5,213,545         | 53,534
Crescendo       | 2,950,470         | 2,316,609

The same causes as above apply: Crescendo shifts most hops from expensive inter-domain links to cheap intra-domain links.

Conclusion and Future Work
Crescendo has many more advantages than Chord, without extra disadvantages:
- Locality: fault isolation, security, efficiency.
- Convergence: caching, bandwidth savings.
Future work:
- More simulations of multicast behavior over Crescendo.
- Comparison of Crescendo with other DHTs.
- Implementation of the hierarchical structure for other DHTs.

Thank You
George S. Nomikos
Mobile Multimedia Laboratory
Athens University of Economics and Business

Backup Slides

Distributed Hash Table (DHT)
- A hash table distributed among a set of nodes.
- The key space is partitioned by node ID; each node is responsible for a part of the key space.
- SHA-1 hash function, 160-bit identifiers.
- Flat overlay network.
- Basic operations (see the sketch below): Insert(key, value), and Lookup(key), which returns the stored value.
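
A minimal sketch of this interface; the class and method names are hypothetical, chosen only to mirror the two basic operations:

```cpp
#include <optional>
#include <string>

// Hypothetical DHT client interface matching the two basic operations.
class Dht {
public:
    virtual ~Dht() = default;

    // Insert(key, value): store the value at the node responsible for
    // hash(key) on the identifier circle.
    virtual void insert(const std::string& key, const std::string& value) = 0;

    // Lookup(key): route to the responsible node and return the stored
    // value, or nothing if the key is absent.
    virtual std::optional<std::string> lookup(const std::string& key) = 0;
};
```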

Chord
Finger table example for node 8, with lookup examples (see the sketch below):
- Simple lookup, following successor pointers: O(N) hops.
- Normal lookup, using finger tables: O(log N) hops.
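
A local, message-free rendering of the two lookup styles over a hypothetical in-memory ring; real Chord runs these steps as remote calls between nodes:

```cpp
#include <cstdint>
#include <vector>

// True if x lies in the half-open ring interval (a, b].
bool inInterval(uint64_t x, uint64_t a, uint64_t b) {
    return (a < b) ? (x > a && x <= b) : (x > a || x <= b);
}

struct ChordNode {
    uint64_t id;
    ChordNode* successor;
    std::vector<ChordNode*> finger;  // fingers per the slide's rule

    // Simple lookup: walk successor pointers one at a time, O(N) hops.
    ChordNode* simpleLookup(uint64_t key) {
        ChordNode* n = this;
        while (!inInterval(key, n->id, n->successor->id))
            n = n->successor;
        return n->successor;
    }

    // Normal lookup: jump via the closest preceding finger, O(log N) hops.
    ChordNode* lookup(uint64_t key) {
        ChordNode* n = this;
        while (!inInterval(key, n->id, n->successor->id)) {
            ChordNode* next = n->closestPrecedingFinger(key);
            if (next == n) break;  // no closer finger; successor owns key
            n = next;
        }
        return n->successor;
    }

    // Highest finger strictly between this node and the key.
    ChordNode* closestPrecedingFinger(uint64_t key) {
        for (auto it = finger.rbegin(); it != finger.rend(); ++it)
            if (*it && inInterval((*it)->id, id, key) && (*it)->id != key)
                return *it;
        return this;
    }
};
```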

Chord
Definition of variables for node n, using m-bit identifiers:

Notation    | Definition
finger[k]   | first node on the circle that succeeds (n + 2^(k-1)) mod 2^m, for 1 <= k <= m
successor   | the next node on the identifier circle; equals finger[1].node
predecessor | the previous node on the identifier circle

Node join procedure example: node 26 joins the system between nodes 21 and 32 (the arcs represent the successor relationship).
(a) Initial state: node 21 points to node 32. Node 26 appears and finds its successor, node 32.
(b) Node 26 points to its found successor, node 32.
(c) Node 26 copies from node 32 the keys it is now responsible for (those up to 26).
(d) The stabilize procedure updates the successor of node 21 to node 26.

Chord Stabilization Protocol
- Runs periodically at each node; handles newly joined and failed nodes.
- Procedure (see the sketch below): periodically ask your successor who its predecessor is. If that predecessor lies between you and your current successor, adopt it as your new successor. Then notify the (possibly new) successor so it can update itself, and repeat the procedure.
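
A local sketch of one stabilization step on a hypothetical in-memory ring; real Chord exchanges these as periodic messages between nodes:

```cpp
#include <cstdint>

// True if x lies in the open ring interval (a, b).
bool inOpenInterval(uint64_t x, uint64_t a, uint64_t b) {
    return (a < b) ? (x > a && x < b) : (x > a || x < b);
}

struct Node {
    uint64_t id;
    Node* successor;
    Node* predecessor;

    // Run periodically: fix the successor pointer, then notify the successor.
    void stabilize() {
        Node* p = successor->predecessor;  // ask successor for its predecessor
        if (p && inOpenInterval(p->id, id, successor->id))
            successor = p;                 // p sits between us: new successor
        successor->notify(this);           // tell the successor about ourselves
    }

    // Called by a node that believes it might be our predecessor.
    void notify(Node* n) {
        if (!predecessor || inOpenInterval(n->id, predecessor->id, id))
            predecessor = n;
    }
};
```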

Crescendo
Due to the previous two properties, Crescendo offers:
- Fault isolation.
- Security.
- Efficient caching and bandwidth usage.
- Adaptation to the underlying physical network.
- Hierarchical storage of content.
- Hierarchical access control.
- Less network stretch.
- Lower lookup latency.

Crescendo Implementation
Crescendo core structure [Figure: CrescendoModules structure]:
- Crescendo (simple module).
- CrescendoFingerTable (simple module).
- CrescendoSuccessorList (simple module).
- CrescendoModules (compound module).
- CrescendoMessages.
New code added to the following core OMNeT++ structures:
- ctopology: routing policy weight support.
New code added to the following core OverSim structures:
- INETUnderlay: routing policy weights, ASIDs, realistic network topologies.
- GlobalNodeList, BootstrapList: ASIDs, hierarchy support.
- Common API, BaseOverlay: ASIDs, hierarchy support.
- KBRTestApp: heavily extended to export analytical statistics.
- Bash and Java parsers to parse the simulation results.

Software Used
- OMNeT++ 4
- OverSim
- BRITE
- GT-ITM
- INET for OverSim
- Arch Linux
- Doxygen

Realistic Network Topologies