
Distributed Hash Tables (DHT)
Jukka K. Nurminen
*Adapted from slides provided by Stefan Götz and Klaus Wehrle (University of Tübingen)

The Architectures of 1st and 2nd Gen. P2P

Client-Server
1. Server is the central entity and only provider of service and content; the network is managed by the server
2. Server as the higher performance system
3. Clients as the lower performance system
Example: WWW

Peer-to-Peer
1. Resources are shared between the peers
2. Resources can be accessed directly from other peers
3. Peer is provider and requestor (servent concept)

Unstructured P2P
- Centralized P2P (1st Gen.)
  1. All features of Peer-to-Peer included
  2. Central entity is necessary to provide the service
  3. Central entity is some kind of index/group database
  Example: Napster
- Pure P2P (1st Gen.)
  1. All features of Peer-to-Peer included
  2. Any terminal entity can be removed without loss of functionality
  3. No central entities
  Examples: Gnutella 0.4, Freenet
- Hybrid P2P (2nd Gen.)
  1. All features of Peer-to-Peer included
  2. Any terminal entity can be removed without loss of functionality
  3. Dynamic central entities
  Examples: Gnutella 0.6, JXTA

Structured P2P
- DHT-based (2nd Gen.)
  1. All features of Peer-to-Peer included
  2. Any terminal entity can be removed without loss of functionality
  3. No central entities
  4. Connections in the overlay are fixed
  Examples: Chord, CAN

Hash Function & Hash Tables
[Figure: keys (John Smith, Lisa Smith, Sandra Dee, Sam Doe) are mapped by a hash function to bucket indexes, where the associated data, e.g., phone numbers, is stored]
- put("Lisa Smith", 45-5734)
- get("Lisa Smith") -> 45-5734
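
To make the put/get interface concrete, here is a minimal local hash table sketch in Python. The bucket count and the use of Python's built-in hash() are illustrative assumptions, and the stored phone-number string is just the example value from the slide.

```python
# Minimal local hash table: a hash function maps each key to a bucket index.
NUM_BUCKETS = 16                      # illustrative table size

def bucket_index(key: str) -> int:
    # Python's built-in hash() stands in for the slide's generic hash function.
    return hash(key) % NUM_BUCKETS

buckets = [[] for _ in range(NUM_BUCKETS)]

def put(key: str, value: str) -> None:
    bucket = buckets[bucket_index(key)]
    for i, (k, _) in enumerate(bucket):
        if k == key:                  # key already present: overwrite
            bucket[i] = (key, value)
            return
    bucket.append((key, value))

def get(key: str):
    for k, v in buckets[bucket_index(key)]:
        if k == key:
            return v
    return None                       # key not stored

put("Lisa Smith", "45-5734")
print(get("Lisa Smith"))              # -> 45-5734
```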

Addressing in Distributed Hash Tables
Step 1: Mapping of content/nodes into linear space
- Usually: 0, ..., 2^m - 1 >> number of objects to be stored
- Mapping of data and nodes into an address space (with a hash function), e.g., Hash(String) mod 2^m
- Association of parts of the address space to DHT nodes: each node manages the range of keys between its predecessor's position and its own, e.g., node Y with H(Node Y) = 3485 manages the keys up to 3485
- Often, the address space is viewed as a circle
[Figure: ring-shaped address space partitioned into ranges among nodes X, Y, ...; a data item D is stored by the node whose range contains H(D)]
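
A small sketch of this mapping step, assuming SHA-1 as the hash function and an illustrative 2^16 address space; the node names in the example are made up for the demonstration.

```python
import hashlib

M = 16                                 # illustrative: a 2^16 address space
SPACE = 2 ** M

def h(s: str) -> int:
    # Map any string (node name or data key) into the linear space 0 .. 2^m - 1.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % SPACE

def responsible_node(key_id: int, node_ids: list) -> int:
    # Each node manages the part of the address space ending at its own ID:
    # a key belongs to the first node ID at or after it, wrapping around.
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)               # wrap-around past the highest node

nodes = [h(f"node-{i}") for i in range(8)]   # hypothetical node names
key = h("my data")
print(f"key {key} is managed by node {responsible_node(key, nodes)}")
```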

Step 2: Routing to a Data Item
Routing to a K/V-pair:
- Start the lookup at an arbitrary node of the DHT
- Route towards the requested data item (key)
- Interface: put(key, value) and value = get(key)
- Key = H("my data"); Value = pointer to the location of the data, e.g., (ip, port)
[Figure: the initial (arbitrary) node forwards the query for Key = H("my data") around the ring until it reaches node 3485, which manages that key's range and stores the entry (key, (ip, port))]

Step 2: Routing to a Data Item
Getting the content:
- The K/V-pair is delivered to the requester
- The requester analyzes the K/V-tuple (and downloads the data from its actual location in case of indirect storage)
- In case of indirect storage: after learning the actual location, the data is requested there, e.g., Get_Data(ip, port)
[Figure: node 3485 sends (H("my data"), (ip, port)) to the requester, which then fetches the data from (ip, port)]
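
Roughly, the indirect-storage flow above could look like this; dht_get and fetch_from are hypothetical placeholders for the overlay lookup and the actual data transfer, not functions of any real library.

```python
from typing import Optional, Tuple

def dht_get(key_id: int) -> Optional[Tuple[str, int]]:
    # Hypothetical: route the query through the overlay and return the value
    # stored under key_id, here a pointer (ip, port) to the data's location.
    raise NotImplementedError("overlay routing goes here")

def fetch_from(ip: str, port: int) -> bytes:
    # Hypothetical: contact the node that actually stores the content.
    raise NotImplementedError("e.g., an HTTP or custom transfer request")

def get_content(key_id: int) -> Optional[bytes]:
    pointer = dht_get(key_id)          # step 1: resolve key -> (ip, port)
    if pointer is None:
        return None                    # key not present in the DHT
    ip, port = pointer
    return fetch_from(ip, port)        # step 2: download from actual location
```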

Chord

Chord: Topology
- Keys and IDs live on a ring, i.e., all arithmetic is modulo 2^160
- (key, value) pairs are managed by the clockwise next node: the successor
[Figure: small example ring with identifiers, nodes, and keys; each key is assigned to its successor node]
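
A minimal sketch of the successor rule on a small example ring; the 3-bit identifier space and the node positions 0, 1, and 3 are illustrative assumptions.

```python
M = 3                                  # illustrative: identifiers 0 .. 7
RING = 2 ** M
NODES = sorted([0, 1, 3])              # illustrative node positions

def successor(ident: int) -> int:
    # The successor of an identifier is the first node at or after it on the
    # ring, wrapping around (all arithmetic modulo 2^m).
    ident %= RING
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]                    # wrap-around

# Each (key, value) pair is managed by the key's successor node:
for key in (1, 2, 6):
    print(f"successor({key}) = {successor(key)}")
```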

Chord: Primitive Routing
Primitive routing:
- Forward the query for key x around the ring until successor(x) is found
- Return the result to the source of the query
Pros:
- Simple
- Little node state
Cons:
- Poor lookup efficiency: O(1/2 * N) hops on average (with N nodes)
- Node failure breaks the circle
[Figure: a query for a key hops node by node around the example ring]
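
A sketch of primitive routing, assuming each node knows only its immediate successor; the tiny three-node ring is again illustrative.

```python
M = 3
RING = 2 ** M

class Node:
    def __init__(self, ident: int):
        self.ident = ident
        self.successor = self          # set when the ring is built
        # no other routing state: that is what makes this routing "primitive"

def in_half(x: int, a: int, b: int) -> bool:
    # True if x lies in the half-open ring interval (a, b].
    return (a < x <= b) if a < b else (x > a or x <= b)

def primitive_lookup(start: Node, key: int):
    key %= RING
    n, hops = start, 0
    # Forward hop by hop until the key falls between n and its successor;
    # then successor(key) = n.successor. Average cost: O(1/2 * N) hops.
    while not in_half(key, n.ident, n.successor.ident):
        n, hops = n.successor, hops + 1
    return n.successor, hops

# Illustrative three-node ring 0 -> 1 -> 3 -> 0:
nodes = {i: Node(i) for i in (0, 1, 3)}
nodes[0].successor, nodes[1].successor, nodes[3].successor = nodes[1], nodes[3], nodes[0]
owner, hops = primitive_lookup(nodes[0], 6)
print(owner.ident, hops)               # -> 0 2 (query walked 0 -> 1 -> 3)
```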

Chord: Routing
Chord's routing table: the finger table
- Stores log(N) links per node
- Covers exponentially increasing distances: at node n, entry i points to successor(n + 2^i) (the i-th finger)
[Figure: example ring showing each node's finger table with its start, successor, and keys columns]
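
A sketch of how such a finger table could be filled, reusing the illustrative 3-bit ring and node positions from above.

```python
M = 3
RING = 2 ** M
NODES = sorted([0, 1, 3])              # illustrative node positions

def successor(ident: int) -> int:
    ident %= RING
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]

def finger_table(n: int):
    # Entry i starts at n + 2^i (mod 2^m) and points to successor(n + 2^i),
    # so the fingers cover exponentially increasing distances around the ring.
    return [((n + 2 ** i) % RING, successor(n + 2 ** i)) for i in range(M)]

for n in NODES:
    print(n, finger_table(n))
# e.g., node 0 -> [(1, 1), (2, 3), (4, 0)]
```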

Chord: Routing
Chord's routing algorithm:
- Each node n forwards the query for key k clockwise to the farthest finger preceding k
- Until n = predecessor(k) and successor(n) = successor(k)
- Return successor(n) to the source of the query
[Figure: example lookup routed across the ring using each node's fingers]
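
A simplified, simulated sketch of this finger-based lookup (not a faithful Chord implementation; ring size and node positions are again assumptions). Each step jumps to the farthest finger that still precedes the key, so the number of hops stays small.

```python
M = 3
RING = 2 ** M
NODES = sorted([0, 1, 3])                  # illustrative node positions

def in_half(x: int, a: int, b: int) -> bool:
    # x in the half-open ring interval (a, b]
    return (a < x <= b) if a < b else (x > a or x <= b)

def in_open(x: int, a: int, b: int) -> bool:
    # x in the open ring interval (a, b); a == b means "anywhere except a"
    return (a < x < b) if a < b else (x > a or x < b)

def successor(ident: int) -> int:
    ident %= RING
    for n in NODES:
        if n >= ident:
            return n
    return NODES[0]

def fingers(n: int):
    # i-th finger = successor(n + 2^i), as defined on the previous slide
    return [successor(n + 2 ** i) for i in range(M)]

def lookup(start: int, key: int):
    key %= RING
    n, hops = start, 0
    # Stop once key lies in (n, immediate successor of n]: n = predecessor(key).
    while not in_half(key, n, successor(n + 1)):
        for f in reversed(fingers(n)):     # farthest finger first ...
            if in_open(f, n, key):         # ... that still precedes the key
                n, hops = f, hops + 1
                break
        else:
            break                          # guard: no closer finger found
    return successor(n + 1), hops          # = successor(key)

print(lookup(1, 6))                        # -> (0, 1): node 1 forwards to node 3
```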

Comparison of Lookup Concepts

System                    | Per Node State | Communication Overhead | Fuzzy Queries* | No False Negatives | Robustness
Central Server            | O(N)           | O(1)                   | yes            | yes                | no
Flooding Search           | O(1)           | O(N^2)                 | yes            | no                 | yes
Distributed Hash Tables   | O(log N)       | O(log N)               | no             | yes                | yes

*Fuzzy queries: e.g., all names starting with the letter "j" (j*).