Decentralized Object Location In Dynamic Peer-to-Peer Distributed Systems


George Fletcher
Project 3, B649, Dr. Plale
July 16, 2003

1 Introduction

One of the key requirements for global-level scalability of distributed systems is decentralization of control. At this scale, systems with a few points of concentrated control, such as those based on the client/server model, suffer from poor performance and fault tolerance. A system with decentralized control and decision making, however, is more robust in a real-world setting where failures and dynamic system membership are commonplace. Recent developments in peer-to-peer (P2P) systems have attempted to address this key issue.

A P2P distributed system can be viewed as a virtual network overlay on top of an underlying network (e.g., the Internet). In such a system, each node can serve as a client when requesting services, as a server when providing services, and as a router when passing along messages within the system.

A new design paradigm for distributed systems inspired by P2P technology is forming under the notion of stability through statistics. In this paradigm, it is argued that fault tolerance and general system-level dynamism will result from redundancy and decentralized protocols and control data structures. In a recent essay [5], John Kubiatowicz of UC Berkeley gives a novel and insightful characterization of this approach, which he dubs Thermodynamic Systems Design, in analogy to statistical mechanics. In this view, introspective mechanisms for system stabilization, analogous to thermodynamic stabilization, can be thought of as mechanisms for active entropy reduction. We will return to this characterization in section 4.

1.1 Global Distributed Data Storage: P2P's Killer App

P2P systems originated in part as distributed file-sharing systems such as Morpheus, Napster, and Gnutella. Based on the real-world experiences and success of these systems, it became evident that global-scale P2P systems were not only viable, but also very desirable. It has since become clear that one of the killer apps for P2P systems is stable, secure, long-term global distributed file storage. Several P2P file systems currently in development are OceanStore, PAST, and the Chord File System.

In this report we consider the core functionality of P2P file systems: decentralized object location. To survive in a dynamic P2P setting where individual nodes may come and go, an object location protocol must have the following features:

- Decentralized control: the data structures and control protocols must not be centralized at a few special nodes.

- Stabilization mechanisms for dynamic network structure: the protocol must deal with dynamic node joins, node departures/failures, and system stabilization.

In particular, we will consider three of the most prominent protocols for object location in P2P networks: CAN [7], Chord [8], and Tapestry [3].

1.2 Outline

The report is organized as follows. The next section (2) discusses CAN (section 2.1), Chord (section 2.2), and Tapestry (section 2.3). After this, we present a comparison of these systems in section 3. Finally, we consider a broader perspective on Thermodynamic Systems Design and provide our closing remarks in section 4.

2 Decentralized Object Location Protocols

All P2P distributed file systems are built on top of an infrastructure that maps location-independent object identifiers to physical IP addresses. Every such file system has as its core functionality a hash function that performs the mapping object name -> location. Each of the following proposals is based on this assumption, namely that any successful P2P file distribution system will be built on top of a scalable indexing infrastructure.

Although a large body of theoretical research exists on the topic of decentralized object location (cf. [2]), much work remains to be done on extending this theory to actual dynamic environments. In fact, the systems discussed in this paper are just initial, tentative approximations to general solutions of the problem. As a note, the reader should keep in mind that these protocols assume that the applications built on top of them will be responsible for handling security, consistency, user-friendliness, etc.

2.1 CAN: Content-Addressable Network

The content-addressable network (CAN) proposal comes from a group working at the ICSI/AT&T Center for Internet Research in Berkeley [7]. This proposal suggests that the cleanest way of solving the decentralized object location problem is to build a wide-area hash table distributed over all nodes in the system, which they call a CAN. We describe their solution in this section.

2.1.1 System Model

The CAN group developed a novel and simple system model. The model is built around a d-dimensional Cartesian coordinate space on a d-torus. This virtual space is partitioned into n regions, one for each of the n nodes participating in the network (see Figure 1 (left)).[1] Placement of objects is determined by a uniform hash function that maps object keys to points in the virtual coordinate space. Each node is responsible for the objects mapped to its zone.

2.1.2 Object Location

Each node in the CAN participates in an overlay network that represents the virtual space. The overlay is essentially a simple graph, where each node is adjacent to the nodes whose zones neighbor its own in the virtual space. For example, in the overlay network representing the coordinate space in Figure 1 (left), node A has edges to nodes B and C, and node D has edges to nodes B, C, and E. Each node in the CAN maintains a routing table consisting of the actual IP addresses of its neighbors in the overlay.

[1] Note that the figure shows the system space as a plane. This is just for the sake of simplicity, since it is hard to visualize a high-dimensional space.
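The mapping from object keys to zone owners in section 2.1.1 can be sketched in a few lines. This is a toy model: the SHA-1 construction, the 2-d unit square, and the four-zone layout below are illustrative assumptions, not details of the CAN design itself.

```python
import hashlib

D = 2  # toy dimensionality; CAN works for any d

def key_to_point(key: str, d: int = D) -> tuple:
    """Deterministically hash an object key to a point in [0, 1)^d.
    SHA-1 is an arbitrary illustrative choice of uniform hash."""
    digest = hashlib.sha1(key.encode()).digest()
    return tuple(int.from_bytes(digest[4 * i:4 * i + 4], "big") / 2**32
                 for i in range(d))

# A hypothetical partition of the unit square into four axis-aligned zones:
# node -> ((x_lo, x_hi), (y_lo, y_hi))
ZONES = {
    "A": ((0.0, 0.5), (0.0, 0.5)),
    "B": ((0.0, 0.5), (0.5, 1.0)),
    "C": ((0.5, 1.0), (0.0, 0.5)),
    "D": ((0.5, 1.0), (0.5, 1.0)),
}

def owner(point, zones=ZONES):
    """The node responsible for an object is the one whose zone
    contains the hashed point."""
    for node, zone in zones.items():
        if all(lo <= c < hi for c, (lo, hi) in zip(point, zone)):
            return node

p = key_to_point("some-object")
print(owner(p))  # the same key always lands on the same node
```

Because the hash is deterministic and known to all nodes, any node can compute which zone is responsible for a key locally; routing (section 2.1.2) is only needed to reach it.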

Figure 1: CAN: (left) Example 2-d space with 5 nodes. (right) Example of routing. From [7].

Given an object to locate, a node hashes the object's key and then routes its query along an essentially straight-line path through the virtual coordinate space. This is achieved by routing the query to the neighbor in the direction of the point representing the hash value; this neighbor in turn routes the query to its neighbor in the proper direction, and so on, until the node responsible for the object receives the request. The response follows a path back to the originator of the message in a similar fashion. An example of routing a query from node 1 to point (x, y) is given in Figure 1 (right).

In a d-dimensional space with n nodes, each node maintains a routing table with 2d entries, and the average routing path length is (d/4)(n^(1/d)) hops. Thus, the routing table is of fixed size and the path length is O(n^(1/d)). Note that if we set d = (log n)/2, then the path length is O(log n). Straightforward improvements to this scheme are:

- increasing the dimensionality d of the virtual space to shorten path length;

- increasing the number of coordinate spaces to r realities, each with an associated hash function. The contents of the hash table are replicated in each reality. Each node lives in all r spaces, and location of an object can be performed concurrently along r paths, one in each reality. This leads to increased fault tolerance and shortened path lengths;

- better routing metrics that reflect the actual underlying network;

- using k hash functions to increase object availability. Each object has k copies in the CAN, and each query involves k messages. Care must be taken in this scheme to balance communication load with system needs.

2.1.3 Node Joins

To join an existing system, a node N_new randomly chooses a point P in the virtual space. The following procedure is then performed:

1. N_new must locate a node N_start already in the CAN. N_start then forwards N_new's request to join to the node responsible for P, N_P.

2. N_P then splits its zone in half, giving the region containing P to N_new.

3. Finally, all of N_P's neighbors, N_P itself, and N_new must each update their routing tables to reflect N_new's join.

Note that the joining of a node only affects the routing tables of O(d) nodes, which is independent of n, the number of nodes in the system.

2.1.4 Node Failures and System Stabilization

To leave a system gracefully, a node N hands over its region to one of its neighbors. If the node N_P which originally split to form N's zone is unavailable to take over the region, one of the other neighbors of N can temporarily maintain the region until N_P is again available.
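The straight-line routing of section 2.1.2 amounts to repeatedly forwarding to whichever neighbor is closest to the target point, with distances measured on the torus. A toy sketch, assuming a regular 4x4 grid of equal zones in two dimensions (real CAN zones are irregular rectangles):

```python
import math

G = 4  # toy 4x4 grid of nodes on the unit 2-torus; a node is its (i, j) cell

def torus_dist(p, q):
    """Euclidean distance on the unit torus (coordinates wrap at 1)."""
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                         for a, b in zip(p, q)))

def cell_of(point):
    """The cell (node) owning a point."""
    return (int(point[0] * G) % G, int(point[1] * G) % G)

def center(cell):
    return ((cell[0] + 0.5) / G, (cell[1] + 0.5) / G)

def route(src, target):
    """Greedily forward toward the cell owning `target`; returns the path."""
    path, cur, dest = [src], src, cell_of(target)
    while cur != dest:
        neighbors = [((cur[0] + dx) % G, (cur[1] + dy) % G)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cur = min(neighbors, key=lambda c: torus_dist(center(c), target))
        path.append(cur)
    return path

# Wrap-around makes (3, 0) a single hop from (0, 0):
print(route((0, 0), (0.9, 0.1)))  # [(0, 0), (3, 0)]
```

With n = G^d nodes, the hop count in this toy grows as O(n^(1/d)), matching the analysis above.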

To detect and recover from node failures, each node periodically sends an update message to each of its neighbors. This message contains the sender's routing table and zone coordinates. If a node N fails to update its neighbors for some set time-out period, then its neighbors proceed as follows:

1. Each neighbor sets a timer proportional to the volume of its own zone. When this timer expires, it continues to the next step.

2. A TAKEOVER message is sent to each of N's neighbors. This message contains the volume of the sender's zone.

3. Upon receipt of a TAKEOVER message from node N_S, a node N_R compares its volume to N_S's volume. If N_S's volume is larger than N_R's, then N_R responds with a TAKEOVER message of its own; otherwise it opts out of the takeover of N's region.

4. Finally, the neighbor with the smallest volume assumes responsibility for N's region.

Further system stabilization techniques are discussed in [7].

2.2 Chord

Chord is another simple and quite elegant solution to decentralized object location, developed by a group at MIT. Much like the team that developed CAN, this group is driven by the belief that the core functionality of P2P systems is efficient location of objects [8]. In this section we describe their solution to this problem.

2.2.1 System Model

The Chord protocol uses consistent hashing to map node IP addresses and object keys to m-bit identifiers. Consistent hashing is guaranteed to uniformly distribute identifiers with high probability [4]. Node IDs are placed in order on a virtual identifier circle modulo 2^m, which the authors refer to as a Chord ring. An object with key ID k is stored at the first node on the ring whose ID is equal to or follows k in the ID space. This node is called the successor of k, and is the first node clockwise on the ring from k. Figure 2(a) gives an illustration of a Chord ring with m = 6 and ten nodes. In this system, an object with a key that maps to 26 would be placed at node 32, and an object that maps to 43 would be located at node 48. The finger table in the figure is described in the following section.

Figure 2: Chord: (a) Example of finger table for node 8. (b) Query path for key 54 starting at node 8. From [8].
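The placement rule and the lookup example of Figure 2 can be reproduced with a small sketch. The ten node IDs below are assumed from the figure; successor placement follows the definition above, and the finger-based forwarding anticipates the lookup rule described in section 2.2.2:

```python
M = 6                      # identifier bits, as in Figure 2
RING = 1 << M              # ID space size 2^m = 64
NODES = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]  # assumed from the figure

def successor(k):
    """First node whose ID equals or follows k on the ring."""
    k %= RING
    return min((n for n in NODES if n >= k), default=NODES[0])

def fingers(n):
    """Finger i points at successor((n + 2^(i-1)) mod 2^m)."""
    return [successor(n + (1 << (i - 1))) for i in range(1, M + 1)]

def between(x, a, b):
    """True if x lies strictly inside the ring interval (a, b)."""
    return a < x < b if a < b else x > a or x < b

def lookup(start, k):
    """Route a query for key k from `start`; returns (path, owner)."""
    path, n = [start], start
    while True:
        succ = successor(n + 1)            # this node's ring successor
        if between(k, n, succ) or k == succ:
            return path, succ              # k falls in (n, successor(n)]
        # forward to the finger most immediately preceding k
        preceding = [f for f in fingers(n) if between(f, n, k)]
        n = max(preceding, key=lambda f: (f - n) % RING) if preceding else succ
        path.append(n)

print(successor(26), successor(43))   # 32 48  (placements from the text)
print(lookup(8, 54))                  # ([8, 42, 51], 56), as in Figure 2(b)
```

The query visits nodes 8 -> 42 -> 51 and resolves at node 56, matching Figure 2(b); both the finger table and the path are of size O(log n).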

2.2.2 Object Location

Each node n in the ring maintains a finger table with up to m entries. Entry i (1 <= i <= m) in this table contains the ID and IP address of the successor of (n + 2^(i-1)) mod 2^m on the ring. In Figure 2(a), the finger table for node 8 is given.

To locate an object with ID i, a node N routes a query to the node N' in its finger table that most immediately precedes i on the ring. N' likewise routes the query using its own finger table, and this process continues until the query reaches the node responsible for maintaining i. An example of this process is given in Figure 2(b). Node 8 locates object 54 by first using its finger table to route a locate request to node 42, which in turn routes the request to node 51, which finally resolves the lookup at node 56 (the successor of ID 54 on the ring). In a system with n nodes, routing path length is O(log n) and finger table size is O(log n).

2.2.3 Node Joins

A working Chord ring requires only that each node know its successor; if all else fails, routing can resort to a linear search around the ring. When a new node wishes to join the ring, however, it is important that all nodes involved have their finger tables properly updated. To join an existing ring, a node n must contact some node n' already in the ring and perform the following:

1. Node n' uses its finger table to find the successor of n.

2. n sets its successor to the value returned by n'.

3. Each node in the system periodically performs stabilization; it is during this process that newly joined nodes are recognized. Stabilization basically involves each node checking that its successors and predecessors have not changed. If a newly joined node lies between two nodes, they each update their successor values to reflect the change.

4. Each node in the system is also responsible for maintaining its finger table by performing stabilization on each node in the table. This is also how a new node establishes its finger table.

2.2.4 Node Failures and System Stabilization

To leave an established ring, a node can give its keys to its successor and then inform its predecessor; the successor and predecessor nodes then update their pointers. To deal with node failures (which are assumed to be independent), each node keeps a list of s successors. If a node's successor fails, it advances through this list until a live node is found. During stabilization, failed nodes are recognized and removed from finger tables. Note that failure handling is cleaner here than in CAN, since most of the process is encapsulated in stabilization. Further discussion of the Chord protocol can be found in [8].

2.3 Tapestry

The Tapestry protocol was developed within the context of the OceanStore project at UC Berkeley, and is derived from the PRR protocol presented in [6]. PRR assumes that the set of participating nodes is static and known when initializing the system; Tapestry extends the PRR protocol to deal with a dynamic set of nodes. In this section, we discuss the Tapestry infrastructure [3].

2.3.1 System Model

The Tapestry system assumes that nodes and objects can be given globally unique identifiers. Nodes in the system form a routing-mesh overlay network: each node maintains a neighbor map of IP addresses for its neighbors, which is used to incrementally route messages toward their destination. The map consists of several levels, one per digit of the ID, and each level contains a number of entries equal to the base of the ID space. Level L of the map holds neighbors whose IDs match the node's own ID in the suffix up to the Lth position. For example, in base ten, a node with ID 90348 will have, in level 4, entry 2 of its neighbor map, the IP address of the closest neighbor whose ID ends in 2348. An example of a neighbor map is given in Figure 3 (right).

Working digit by digit, a query destined for the node with ID N = n_1 n_2 ... n_k is routed as follows. On the ith hop, a node receiving the query routes the message toward the neighbor in level i, entry n_(k-i+1). The node IDs visited are

***...*n_k -> **...*n_(k-1)n_k -> ... -> *n_2n_3...n_k -> n_1n_2...n_k

where the *'s represent wildcards. The routing mesh of neighbor maps can be thought of as a set of network spanning trees, one rooted at each node in the system [9]. An example of routing is given in Figure 3 (left).

2.3.2 Object Location

An object can reside at any node in the system. To make an object available, the node N storing the object routes a message toward the root node R for that object (i.e., the node whose ID most closely matches the object's ID). Each node along the routing path to R stores a pointer to N associated with the object's ID; this information is part of the data structures maintained at each node (see Figure 3 (right)). N repeats this process for several known redundant paths (e.g., by varying the last digit of R's ID).

Figure 3: Tapestry: (left) Example of routing from node 0325 to node 4598 in a Plaxton mesh. (right) Example of a single Tapestry node, with ID 0642. From [9].

To locate an object, a node in the system routes a query toward R. If a pointer to N is found along the way, the query is routed immediately to the object; otherwise, the pointer at R will route the query to N. The path length for a system of n nodes is O(log n), and the total table overhead is O(n log n).

2.3.3 Node Joins

To join an existing overlay network, a node must first contact a member of the system closest to it (e.g., in network latency). It then proceeds to build its neighbor map level by level, by routing messages to its own ID and performing optimizations on the maps of its neighbors. As the authors point out, this process is actually non-trivial and takes a fair amount of time to complete. After this is complete, the node must notify its neighbors of its existence, by working back level-wise from the root node R associated with its ID (using backpointers, to be discussed in the next section) as well as by sending HELLO messages to the neighbors in its newly formed map.

2.3.4 Node Failures and System Stabilization

Each node periodically sends heartbeat messages along backpointers maintained in the neighbor map to the nodes that route to it. If a node fails to receive a heartbeat message from a neighbor, it sets a timer to allow the neighbor a second chance to recover before assuming it has failed; during this time it reroutes to one of two backup neighbors maintained for each entry in the map. A potentially failed node is given this second chance because handling node failures is costly. If a node is determined to be dead after this second chance, one of the backup pointers is promoted to primary neighbor.

Two further introspective optimizations are made in the Tapestry system to maintain system stability. First, each node runs a refresher thread that periodically pings each neighbor in the map. If latency to a neighbor rises above a preset threshold, the two backup pointers for that entry are explored for possible promotion to primary status. This dynamic optimization is characteristic of the self-maintenance approach advocated by P2P system developers. The second optimization is a hotspot-monitoring algorithm for dynamically caching an object closer to the source of frequent queries for that object. This is achieved by keeping a frequency counter for objects in a node's object store. The data structures for this monitoring are illustrated in Figure 3 (right).
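The suffix routing of section 2.3.1 can be illustrated with a toy in which the ID space is fully populated, so the ideal next hop always exists (real Tapestry instead forwards to the closest actual node matching the longer suffix). The IDs 90348 and 0325 -> 4598 are the examples used above:

```python
def neighbor_suffix(node: str, level: int, digit: str) -> str:
    """ID suffix that the (level, digit) neighbor-map entry of `node`
    must match: `digit` followed by node's own last (level-1) digits."""
    return digit + node[-(level - 1):] if level > 1 else digit

def route(src: str, dst: str) -> list[str]:
    """Resolve one more trailing digit of dst per hop; a fully populated
    ID space is assumed, so each intermediate ID is taken to be a node."""
    k = len(src)
    path = [src]
    for hop in range(1, k + 1):
        nxt = src[:k - hop] + dst[k - hop:]
        if nxt != path[-1]:
            path.append(nxt)
    return path

print(neighbor_suffix("90348", 4, "2"))  # 2348, as in the example above
print(route("0325", "4598"))  # ['0325', '0328', '0398', '0598', '4598']
```

Each hop fixes one digit, so a k-digit ID is resolved in at most k hops, i.e., O(log n) for n nodes.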

3 Comparison of Protocols

Protocol                       # Messages for Object Location   Space
CAN (d dimensions, n nodes)    O(n^(1/d))                       O(1)
Chord (n nodes)                O(log n)                         O(log n)
Tapestry (n nodes)             O(log n)                         O(n log n)

The message complexity and storage overhead of object location for CAN, Chord, and Tapestry are given in the table above. Since the systems are essentially comparable in complexity and robustness (each can incorporate an arbitrarily high amount of redundancy), the real comparison lies in understanding how well each solves the problem it was designed to solve, i.e., object location and system adaptation under load in a dynamic environment.

CAN is perhaps the cleanest solution of the three. The virtual coordinate space is simple and easy to maintain; consequently, it can adapt easily to dynamic environments. To achieve optimal query path length, however, it is necessary to anticipate the size of the network in order to choose the best value for d, the dimensionality of the space. A major drawback of the system is that the overlay does not correspond well to the underlying network. A node is assigned a region in the space at random, and its neighbors may not be physically close to it. In this way, a message can potentially be routed across the Atlantic and back to reach an object that is stored on a geographically local node. The developers of CAN suggest some remedies to this problem in [7].

Chord is also a simple and clean solution. It too generally works well in a dynamic environment, although it is slightly less fault tolerant than CAN. Unfortunately, its simplicity leads to the same problem suffered by CAN: a Chord ring does not correspond to the underlying network. Likewise, the developers offer some tentative solutions to this problem in [8].

Tapestry is the least elegant and most complicated protocol of the group. The PRR protocol on which it is based was intended for static, administratively configured systems with stable nodes, and Tapestry is a somewhat hackish extension of PRR to a dynamic setting. It seems PRR was chosen because it takes into account the underlying network when building the routing mesh. Unfortunately, Tapestry cannot handle very dynamic systems such as sensor networks, due to the complexity of its algorithms (by the developers' own admission). Furthermore, node joins and failures affect a logarithmic number of nodes, in contrast to the constant number affected in CAN and Chord. These two weaknesses negate any performance gains due to locality of neighbors.

It seems that the most fruitful path for future work would be in developing heuristics for node joins and routing metrics in either CAN or Chord. These two systems seem to be the best choice since they were built from scratch with a dynamic and faulty environment in mind. Current joint work by members of the CAN, Chord, Tapestry, and Pastry (not discussed here) teams is exploring the common role these systems play in P2P systems [1].

4 Conclusion

In this paper we have discussed and compared three solutions to decentralized object location in dynamic P2P networks. We have seen that although their running-time and storage complexities are similar, each system design is quite unique, with different strengths and weaknesses. The primary conclusion that we can draw from examining these protocols is that the dynamic environments of P2P systems necessitate new and creative algorithms and design patterns. These large-scale networks point to new ways of understanding, discussing, and building distributed systems. In particular, I believe that the role of randomness and redundancy in system design, in analogy to the thermodynamic systems design mentioned in section 1, deserves future consideration. From statistical mechanics, we can perhaps draw some understanding of the latent order of large-scale networks and apply it to system design [5]. This, however, is an interesting topic for another paper.

References

[1] Dabek, F., et al. Towards a Common API for Structured Peer-to-Peer Overlays. In Proceedings of IPTPS '03.

[2] Gavoille, C. Routing in Distributed Networks: Overview and Open Problems. ACM SIGACT News, Distributed Computing Column 32, 1, March 2001, pp. 36-52.

[3] Hildrum, K., Kubiatowicz, J., Rao, S., Zhao, B. Distributed Object Location in a Dynamic Network. In Proceedings of ACM SPAA '02, August 2002.

[4] Karger, D., et al. Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, May 1997, pp. 654-663.

[5] Kubiatowicz, J. Extracting Guarantees from Chaos. Communications of the ACM, February 2003, pp. 33-38.

[6] Plaxton, C., Rajaraman, R., Richa, A. Accessing Nearby Copies of Replicated Objects in a Distributed Environment. Theory of Computing Systems, 32:241-280, 1999.

[7] Ratnasamy, S., et al. A Scalable Content-Addressable Network. In Proceedings of ACM SIGCOMM '01, August 2001, pp. 161-172.

[8] Stoica, I., et al. Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications. In Proceedings of ACM SIGCOMM '01, August 2001, pp. 149-160.

[9] Zhao, B., Kubiatowicz, J., Joseph, A. Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing. Tech. Report UCB/CSD-01-1141, UC Berkeley, Computer Science Division, April 2001.