A Distributed Algorithm for the Replica Placement Problem


A Distributed Algorithm for the Replica Placement Problem

Sharrukh Zaman, Student Member, IEEE, and Daniel Grosu, Senior Member, IEEE

Abstract: Caching and replication of popular data objects contribute significantly to the reduction of the network bandwidth usage and the overall access time to data. Our focus is to improve the efficiency of object replication within a given distributed replication group. Such a group consists of servers that dedicate a certain amount of memory for replicating objects requested by their clients. The content replication problem we are solving is defined as follows: given the request rates for the objects and the server capacities, find the replica allocation that minimizes the access time over all servers and objects. We design a distributed approximation algorithm that solves this problem and prove that it provides a 2-approximation solution. We also show that the communication and computational complexity of the algorithm is polynomial with respect to the number of servers, the number of objects, and the sum of the capacities of all servers. Finally, we perform simulation experiments to investigate the performance of our algorithm. The experiments show that our algorithm outperforms the best existing distributed algorithm that solves the replica placement problem.

Index Terms: Replication, distributed replication group, distributed algorithm, approximation algorithm.

1 INTRODUCTION

Replication of popular data objects at a server closer to the users can improve the access time for the users and reduce the network bandwidth usage as well. Replication of an object refers to maintaining a fixed copy of it for a specific time interval at a given server [1]. To use the server storage efficiently we need to replicate the objects that yield the best performance. Among the different models of object replication, we consider the distributed replication group model and study the problem of replica placement within such a group.

A distributed replication group consists of several servers, each dedicating some storage for replicas. A server has to serve requests from its own clients and also from the other servers in the group. When a server receives a request from a client, it immediately responds to the client if the object is in its local storage. Otherwise, the object is fetched from another server within the group at a higher access cost or, in the case no server within the group stores a replica of the object, from the origin server at an even higher cost. The origin server may be the actual source of that object or, if the servers are part of a hierarchical system, the parent replicator of these servers. The access cost is the highest when an object is accessed from the origin server. The purpose of the replication group is to achieve the minimum access cost over all users of the participating servers and over all objects considered for replication. Thus, the replica placement problem we are solving is defined as follows: given the request rates for the objects and the server capacities, find the replica allocation that minimizes the access time over all servers and objects. The replica placement must respect the constraint that each server employs a limited storage capacity for replication.
There are several approaches to solve this problem and hence different solutions exist in the literature. In the context of the Internet, a distributed solution is more acceptable than a centralized one. The replica placement problem we are considering here is a generalized version of the multiple knapsack problem in that it allows multiple copies of the objects to be placed in the bins, and in that the object profits vary with the bin and with the items already inserted in the bins. Since the multiple knapsack problem is NP-hard [2], it follows that the replica placement problem is NP-hard. We design an approximation algorithm that guarantees that the total access time is within a factor of two of the optimal. The algorithm runs in polynomial time and has a communication cost that is polynomial in the number of servers, the number of objects, and the total server capacity.

(The authors are with the Department of Computer Science, Wayne State University, 5143 Cass Avenue, Detroit, MI; sharrukh@wayne.edu, dgrosu@cs.wayne.edu.)

1.1 Related Work

The replica placement problem we are considering has some similarities with several other optimization problems, such as the generalized assignment problem [3], the multiple knapsack problem [2], the facility location problem [4], and the transportation problem [5]. The transportation problem was solved in [5] by extending the Auction Algorithm for linear network flow problems proposed in [6]. The closest work to ours is by Leff et al. [7], which presented the design of a family of distributed approximation algorithms for remote caching architectures and determined their performance by simulation. The model used in [7] assumes that servers have equal-sized caches, while our proposed model and algorithm eliminate this restriction. Although approximation algorithms are proposed, the authors of [7] do not provide and prove theoretical bounds on the approximation ratios of their algorithms. We provide a theoretical proof of the approximation ratio of our proposed algorithm. Laoutaris et al. [1] extended the model from [7] considering caches of different sizes and a setting where servers act

selfishly. They showed that the selfish behavior of the servers leads to a Nash equilibrium [8] and determined the price of anarchy induced by the selfish behavior. Although the servers can act selfishly, they have to communicate the replication decision in each iteration. Each server has to know the request rates for all objects from all servers in the initial phase. The servers have to go through multiple rounds to converge to the best possible solution. Our algorithm synchronizes the object placement decisions to achieve a solution close to the optimal. It achieves this performance without requiring more communication overhead than the algorithm presented in [1]. Some other papers (e.g., [9], [10]) also studied the game-theoretic aspects of the problem of caching and replication assuming selfish behavior of the servers. The problem of selfish caching was investigated in [11]. The object placement problem was also studied in [12], where approximation algorithms for object placement in networks modeled as general graphs were proposed. Khan and Ahmad [13] performed an extensive performance evaluation of several replication algorithms. Optimal placement of transparent en-route caches (TERCs) was studied in [14]. TERCs are caches placed along the paths from clients to servers. They work without requiring the clients or servers to be aware of them. Qiu et al. [15] studied different algorithms to place a maximum of k replicas of each object in a content distribution network, where k is determined beforehand and is given as an input to the algorithms. They showed by simulation that the greedy algorithm provides the closest solution to the optimum. The idea of utilizing neighbor caches to reduce the requests to parent proxies was explored in [16]. Centralized and distributed approximation algorithms for data caching in ad hoc networks were proposed in [17]. Kumar and Norris [18] proposed an improvement over the LRU algorithm by introducing a quasi-static portion of the cache. Rabinovich et al. [19] proposed protocols for cooperative caching among Internet Service Providers. They considered a scenario where servers cooperatively cache objects and the cost to access objects from the servers can be larger than the cost to fetch them directly from the Internet. Baev and Rajaraman [20] showed that the data placement problem in arbitrary networks is MAXSNP-hard for objects of uniform size. For objects of non-uniform size, they proved that no polynomial-time approximation scheme exists unless P = NP. They also designed a 20.5-approximation algorithm for the former problem. More recently, Baev et al. [21] presented a 10-approximation algorithm for the data placement problem. Moscibroda and Wattenhofer [4] developed a distributed approximation algorithm for the facility location problem. Data replication in grids is addressed in [22], [23]. Research has also been conducted in the area of multicast replication [24] and file system replication [25]. Other recent work on replication evaluated different architectures [26], used artificial intelligence techniques [27], and proposed replicating web services at the operating system level [28]. An implementation of a content delivery network (CDN) is presented in [29]. The CDN is implemented with users' computers and provides replication solutions to ensure content availability, low user latency, and fault tolerance. Complexity results for problems ranging from the knapsack problem to the generalized assignment problem (GAP) are given in [2].
The replica placement problem we are considering is a general case of the multiple knapsack problem, which is NP-hard [2].

1.2 Our Contribution

We design a distributed approximation algorithm that solves the replica placement problem. We show that the communication and computational complexity of the algorithm is polynomial in the number of servers, the number of objects, and the sum of the server capacities. The closest work to ours [7] proposed distributed approximation algorithms for the replica placement problem and investigated them by simulation, but no theoretical proofs of the approximation ratios were provided. We prove that our algorithm is a 2-approximation algorithm. We conducted extensive simulation experiments to compare the performance of our algorithm with that of the best distributed algorithm provided in [7]. In these experiments, our proposed algorithm performs better than the best existing distributed algorithm in more than 97.28% of the cases. We also compare the performance of our algorithm with that of a centralized algorithm based on A-Star search [30] that produces near-optimal solutions but suffers from excessive running time. Our algorithm exhibited only 1% degradation in performance compared to the centralized algorithm.

1.3 Organization

The rest of the paper is organized as follows. In Section 2, we describe the replica placement problem and the system model. In Section 3, we describe the proposed distributed approximation algorithm that solves the replica placement problem. In Section 4, we analyze the complexity and show the approximation guarantees of our algorithm. In Section 5, we analyze the performance of our algorithm by simulation. In Section 6, we conclude the paper and discuss future research directions.

2 REPLICA PLACEMENT PROBLEM

In this section, we formally define the replica placement problem we are solving. We use the system model described in [1], with different notation. We consider that the replication group is composed of m servers s_1, ..., s_m with capacities c_1, ..., c_m. There are n unit-sized objects o_1, ..., o_n that are to be placed in the server caches in order to achieve the minimum possible access cost over all objects. The access costs are determined by the location and the request rates of the objects. We assume that a server can access an object with a cost of t_l if it is stored in its own cache. The cost becomes t_r when it has to access another replicator's cache to fulfill its client's request. The highest access cost, t_s, is incurred if that particular object is not stored at any server in the group and it has to be accessed from the origin or source of that object. Obviously, t_l < t_r < t_s. The motivation behind choosing this model is that distributed replication groups are effective when there is a high degree of proximity among the servers [1]. An example is a replication group composed of servers belonging to different departments

and offices in a university. In such a setting, we can consider the access costs among the servers in the replication group to be equal and the distance to the origin server to be much larger than the distances among the servers in the replication group.

A server s_i knows the request rates r_ij, j = 1, ..., n, of its local users for all objects. We denote by r_i = (r_i1, r_i2, ..., r_in) the vector of request rates of the users at server s_i, and by r = (r_1, r_2, ..., r_m)^T the m × n matrix of request rates of all the objects at all servers. We denote by X the placement matrix, an m × n matrix whose entries are given by

X_ij = 1, if object o_j is replicated at server s_i, and X_ij = 0, otherwise,

for i = 1, ..., m and j = 1, ..., n. The system's goal is to minimize the access time at each server over all objects, that is:

min Σ_{i=1}^{m} [ Σ_{j: X_ij = 1} r_ij t_l + Σ_{j: X_ij = 0 and rc_j > 0} r_ij t_r + Σ_{j: rc_j = 0} r_ij t_s ]    (1)

subject to:

X_ij ∈ {0, 1}, i = 1, ..., m; j = 1, ..., n    (2)

Σ_{j=1}^{n} X_ij ≤ c_i, i = 1, ..., m    (3)

where rc_j = Σ_{i=1}^{m} X_ij is the replica count of object o_j. The first term of the objective function represents the access time corresponding to the objects that are stored locally at server s_i. The second term represents the access time corresponding to the objects that are not stored locally at server s_i but are cached at one of the servers belonging to the replication group. The third term represents the access time for the objects that are not cached at any of the servers. The first constraint says that an object o_j is either allocated or not allocated to server s_i. The second constraint, the capacity constraint, says that the number of objects allocated to server s_i must not exceed the capacity c_i of server s_i.

The above minimization problem can be translated into an equivalent maximization problem in which we maximize the overall gain in access time obtained by replicating the objects. The overall gain in access time is given by the difference between the total access time over all objects if no objects are replicated (Σ_{i=1}^{m} Σ_{j=1}^{n} r_ij t_s) and the total access time obtained by replication (given by the objective function in equation (1)). Thus, the equivalent maximization problem is as follows:

max Σ_{i=1}^{m} [ Σ_{j: X_ij = 1} r_ij (t_s - t_l) + Σ_{j: X_ij = 0 and rc_j > 0} r_ij (t_s - t_r) ]    (4)

subject to constraints (2) and (3). The first term of the objective function represents the gain obtained by caching the objects locally at server s_i, while the second term represents the gain obtained by caching the objects at other servers within the replication group.

To understand the design of our algorithm we rewrite the objective function in equation (4) as follows. Since (t_s - t_l) can be written as (t_r - t_l) + (t_s - t_r), we split the first term to obtain the following equivalent expression:

Σ_{j: X_ij = 1} r_ij (t_r - t_l) + Σ_{j: X_ij = 1} r_ij (t_s - t_r) + Σ_{j: X_ij = 0 and rc_j > 0} r_ij (t_s - t_r)    (5)

Since X_ij = 1 implies rc_j > 0, the union of the sets {j : X_ij = 1} and {j : X_ij = 0 and rc_j > 0} is the set {j : rc_j > 0}. Expression (5) is thus equivalent to

Σ_{j: X_ij = 1} r_ij (t_r - t_l) + Σ_{j: rc_j > 0} r_ij (t_s - t_r)

This leads to an equivalent maximization problem defined as:

max Σ_{i=1}^{m} ( Σ_{j: X_ij = 1} r_ij (t_r - t_l) + Σ_{j: rc_j > 0} r_ij (t_s - t_r) )    (6)

subject to constraints (2) and (3). The first term represents the additional gain obtained by replicating objects locally at server s_i, while the second term represents the gain obtained by replicating objects anywhere within the replication group.
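To make the objective concrete, the following is a minimal sketch (ours, not from the paper) that evaluates the overall gain of a placement matrix X according to equation (4); the function name and the NumPy representation are illustrative assumptions.

```python
import numpy as np

def total_gain(X, r, t_s, t_r, t_l):
    """Overall gain in access time of placement X, per equation (4).

    X is an m x n 0/1 matrix and r the m x n request-rate matrix."""
    X = X.astype(bool)
    rc = X.sum(axis=0)                    # replica count rc_j of every object
    local = (r * X).sum() * (t_s - t_l)   # objects served from the local cache
    # objects not cached locally but replicated somewhere in the group
    remote = (r * (~X & (rc > 0))).sum() * (t_s - t_r)
    # equivalently, per equation (6):
    #   (r * X).sum() * (t_r - t_l) + (r * (rc > 0)).sum() * (t_s - t_r)
    return local + remote
```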
In the next section, we design a distributed approximation algorithm that solves this problem. The algorithm decides the placement of objects based on the value of the total gain defined by the objective function above.

3 DISTRIBUTED REPLICA PLACEMENT ALGORITHM

3.1 Preliminaries

We propose a distributed approximation algorithm, called DGR (Distributed Greedy Replication), that solves the replica placement problem. The algorithm has as input five parameters: r, c, t_s, t_r, and t_l. The first parameter, r, is the matrix of request rates as defined in the previous section. The second parameter, c = (c_1, ..., c_m), is the m-vector of server capacities. The last three parameters are the access costs of the objects from the source, from a remote replica, and from a local replica, respectively. In order to describe the algorithm we define two additional quantities, the insertion gain and the eviction cost. The insertion gain for object o_j and server s_i is defined as follows:

ig_ij = p_j (t_s - t_r) + r_ij (t_r - t_l), if rc_j = 0
ig_ij = r_ij (t_r - t_l), if X_ij = 0 and rc_j > 0    (7)
ig_ij = 0, if X_ij = 1

where p_j = Σ_{i=1}^{m} r_ij is the popularity of object o_j. As can be seen from the definition, ig_ij represents the increase in overall gain the system would experience if it replicated object o_j in server s_i's cache. The insertion gain is highest for an object that does not have any replica in the group. It reduces to only the local gain of a server when the object is already replicated elsewhere. Otherwise, it is zero. The eviction cost of object o_j at server s_i is defined as:

ec_ij = 0, if X_ij = 0
ec_ij = r_ij (t_r - t_l), if X_ij = 1 and rc_j > 1    (8)
ec_ij = p_j (t_s - t_r) + r_ij (t_r - t_l), if X_ij = 1 and rc_j = 1
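Written out in code, equations (7) and (8) become simple case analyses. The sketch below is our illustration (plain Python, 0-based indices):

```python
def insertion_gain(i, j, X, r, p, rc, t_s, t_r, t_l):
    """ig_ij per equation (7)."""
    if rc[j] == 0:                 # no replica anywhere in the group
        return p[j] * (t_s - t_r) + r[i][j] * (t_r - t_l)
    if X[i][j] == 0:               # replicated elsewhere, but not at s_i
        return r[i][j] * (t_r - t_l)
    return 0                       # already replicated at s_i

def eviction_cost(i, j, X, r, p, rc, t_s, t_r, t_l):
    """ec_ij per equation (8); mirrors insertion_gain case by case."""
    if X[i][j] == 0:               # nothing to evict
        return 0
    if rc[j] > 1:                  # other replicas remain in the group
        return r[i][j] * (t_r - t_l)
    return p[j] * (t_s - t_r) + r[i][j] * (t_r - t_l)   # only replica
```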

The eviction cost ec_ij is the decrease in the system gain that would occur if object o_j were evicted from server s_i's cache. The eviction cost has the highest value for an object that has only one replica in the group, since evicting this object would cause all servers to access it from the origin. The insertion gain and the eviction cost are used to characterize each local decision of replicating or evicting an object at a server. In making these decisions the algorithm considers the effect of replicating the objects on the overall system gain.

3.2 The Proposed Algorithm

The proposed distributed approximation algorithm for replica placement is given in Algorithm 1. The algorithm is executed by each server within the replication group. It starts with an initialization phase (lines 2 to 7) in which the servers initialize their local variables and compute the popularity of each object. In order to compute the popularity p_j of each object o_j, all servers participate in a collective communication operation called all-reduce-sum [31]. This collective communication operation is defined by the communication primitive all-reduce-sum(r_i, p), which works as follows. Before executing the primitive, each server holds a vector r_i = (r_i1, ..., r_in) of size n, and as the result of the primitive's execution, each server holds a vector p = (p_1, ..., p_n) whose entries are given by p_j = Σ_{i=1}^{m} r_ij. Thus, all-reduce-sum computes the popularity of each object, and the result (the popularity vector) is made available at each server. In line 4, the algorithm initializes row i of the allocation matrix X to zero, which means that no objects are allocated. It also initializes the available capacity e_i to c_i, the capacity of server s_i. The insertion gain for each object is initialized to the maximum value, which corresponds to the case in which no replica exists in the replication group. The eviction cost and the replica count for each object are initialized to 0.

The second phase of the algorithm is the iterative phase, consisting of the while loop in lines 13 to 52. Before entering the loop, the global maximum insertion gain, ig_max, is computed through another collective communication operation called all-reduce-max(send_msg, recv_msg) (lines 8 to 11). The parameters are the send buffer and the receive buffer, respectively. Both are ordered lists of four variables (ig_max, i, j, j'), where ig_max is the maximum insertion gain, i and j are the indices of the server and object that give the maximum insertion gain, and j' is the object to be evicted, if necessary. To participate in this operation, each server s_i determines its highest insertion gain ig_max and the object o_j that gives this highest gain (lines 8-9). There is no object for eviction at this point, so j' = 0. We discuss j' in more detail later in this subsection. In line 9, each server s_i prepares the buffer send_msg with ig_max and the indices i, j, and 0 for j'. The primitive all-reduce-max returns the send_msg with the highest ig_max to each server through the output buffer recv_msg (line 10). After the all-reduce-max execution, each server s_i knows the global maximum insertion gain ig_max and the server and object that attain it. It also knows the index j' of the object to be evicted if needed. At this point the servers are ready to enter the main loop (line 13) of the algorithm.
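In an MPI-based deployment the two primitives map directly onto standard all-reduce collectives, which is also where the O(log m) factor in the communication analysis of Section 4 comes from. A hedged mpi4py sketch (assuming an MPI runtime is available; the placeholder values are illustrative):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
i = comm.Get_rank()                  # this server's index
n = 1000                             # illustrative number of objects
r_i = np.random.rand(n)              # this server's local request rates

# all-reduce-sum(r_i, p): every server obtains the popularity vector p
p = np.empty_like(r_i)
comm.Allreduce(r_i, p, op=MPI.SUM)

# all-reduce-max over (ig_max, i, j, j'): Python tuples compare
# lexicographically, so the bid with the largest ig_max wins everywhere
ig_max, j, j_evict = float(np.max(r_i)), int(np.argmax(r_i)), 0  # placeholder bid
recv_msg = comm.allreduce((ig_max, i, j, j_evict), op=MPI.MAX)
```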
Algorithm 1 DGR(r, c, t_s, t_r, t_l)
1: {Server s_i:}
2: {Initialization}
3: all-reduce-sum(r_i, p)
4: X_i ← 0; e_i ← c_i
5: for j := 1 to n do
6:   ig_ij ← r_ij (t_r - t_l) + p_j (t_s - t_r); ec_ij ← 0; rc_j ← 0
7: end for
8: ig_max ← max_k ig_ik; j ← argmax_k ig_ik
9: send_msg ← (ig_max, i, j, 0)
10: all-reduce-max(send_msg, recv_msg)
11: (ig_max, i', j, j') ← recv_msg
12: {i' is the server that has ig_max for object j; j' is the object to be evicted from server i' (0 if none)}
13: while ig_max > 0 do
14:   if i = i' then
15:     {this server has the maximum insertion gain}
16:     X_ij ← 1
17:     ec_ij ← ig_ij; ig_ij ← 0; e_i ← e_i - 1; rc_j ← rc_j + 1
18:     if j' ≠ 0 then
19:       X_ij' ← 0
20:       ig_ij' ← ec_ij'; ec_ij' ← 0
21:       e_i ← e_i + 1; rc_j' ← rc_j' - 1
22:     end if
23:   else
24:     {another server has the maximum insertion gain}
25:     rc_j ← rc_j + 1
26:     if X_ij = 0 then
27:       ig_ij ← r_ij (t_r - t_l)
28:     else
29:       ec_ij ← r_ij (t_r - t_l)
30:     end if
31:     if j' ≠ 0 then
32:       rc_j' ← rc_j' - 1
33:       if X_ij' = 1 and rc_j' = 1 then
34:         ec_ij' ← r_ij' (t_r - t_l) + p_j' (t_s - t_r)
35:       end if
36:     end if
37:   end if
38:   {prepare the next iteration}
39:   ig_max ← max_k ig_ik; j ← argmax_k ig_ik
40:   ec_min ← min_k (ec_ik : ec_ik > 0)
41:   j' ← argmin_k (ec_ik : ec_ik > 0)
42:   if e_i = 0 or c_i - e_i = n then
43:     if ig_max ≤ ec_min then
44:       ig_max ← 0; j' ← 0
45:     end if
46:   else
47:     j' ← 0
48:   end if
49:   send_msg ← (ig_max, i, j, j')
50:   all-reduce-max(send_msg, recv_msg)
51:   (ig_max, i', j, j') ← recv_msg
52: end while
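For readers who prefer runnable code, the following is a compact single-process simulation of Algorithm 1 (our sketch: the two all-reduce steps are replaced by a plain loop over the servers, and the sentinel j' = 0 is rendered as None):

```python
import numpy as np

def dgr(r, c, t_s, t_r, t_l):
    """Single-process simulation of Algorithm 1 (DGR); r is m x n."""
    m, n = r.shape
    p = r.sum(axis=0)                         # all-reduce-sum (line 3)
    X = np.zeros((m, n), dtype=int)
    e = np.array(c, dtype=int)                # available capacities (line 4)
    rc = np.zeros(n, dtype=int)
    ig = r * (t_r - t_l) + p * (t_s - t_r)    # lines 5-7: no replicas yet
    ec = np.zeros((m, n))

    def bid(i):                               # lines 38-48 for server s_i
        j = int(np.argmax(ig[i]))
        g = ig[i, j]
        pos = np.where(ec[i] > 0, ec[i], np.inf)
        jp = int(np.argmin(pos))              # candidate eviction (lines 40-41)
        if e[i] == 0 or c[i] - e[i] == n:     # an insert needs an eviction
            if g <= pos[jp]:
                return (0.0, i, j, None)      # report ineligibility (line 44)
            return (g, i, j, jp)
        return (g, i, j, None)                # free space: no eviction (line 47)

    while True:
        g, w, j, jp = max(bid(i) for i in range(m))   # all-reduce-max
        if g <= 0:                            # line 13
            break
        X[w, j] = 1; rc[j] += 1; e[w] -= 1    # lines 16-17 at the winner w
        ec[w, j] = ig[w, j]; ig[w, j] = 0.0
        for i in range(m):                    # lines 25-30 at the other servers
            if i != w:
                if X[i, j] == 0:
                    ig[i, j] = r[i, j] * (t_r - t_l)
                else:
                    ec[i, j] = r[i, j] * (t_r - t_l)
        if jp is not None:                    # lines 18-22: eviction at winner
            X[w, jp] = 0; rc[jp] -= 1; e[w] += 1
            ig[w, jp] = ec[w, jp]; ec[w, jp] = 0.0
            if rc[jp] == 1:                   # lines 31-36: a sole holder remains
                k = int(np.argmax(X[:, jp]))
                ec[k, jp] = r[k, jp] * (t_r - t_l) + p[jp] * (t_s - t_r)
    return X
```

Under the assumption t_s - t_r ≥ t_r - t_l introduced in Section 4.2, the loop terminates after at most 2C iterations, matching Lemmas 1 and 5 below.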

During each iteration, if server s_i has the maximum global gain for an object, it performs the allocation and, if necessary, the deallocation of objects (lines 14 to 22). If s_i does not have the maximum global gain, it only updates some local values to keep track of the changes that resulted from the allocation/deallocation of objects at other servers (lines 24 to 37). These updates are performed according to equations (7) and (8). Allocation (deallocation) of object o_j at server s_i is performed by setting the X_ij entry of the allocation matrix to 1 (0). The replica count rc_j and the available capacity e_i are incremented, respectively decremented, in the case of an allocation. The reverse is done for a deallocation. In the case of allocating an object, the ig value before the allocation becomes the new ec value for that object (this is according to equations (7) and (8)). For example, if before the allocation object o_j does not have any replica in the group (i.e., rc_j = 0), the value of ig_ij is equal to the first entry in equation (7). After the allocation, X_ij = 1 and rc_j = 1, so the value of ec_ij is equal to the third entry in equation (8). This holds true for all other cases and, therefore, we can assign ig_ij to ec_ij when we allocate o_j to s_i and do the reverse when we evict o_j from s_i. Obviously, the insertion gain becomes zero after an insertion and the eviction cost becomes zero after an eviction. An eviction happens if j' ≠ 0, in which case object o_j' is evicted from server s_i's cache (lines 18-22).

Lines 24 to 37 simply update rc, ig, and ec, since another server s_i' (i' ≠ i) performed the allocation and s_i needs to keep track of it: rc is incremented or decremented and equations (7) and (8) are used to update ig and ec. If another server replicates o_j, server s_i updates the values of its insertion gain and eviction cost for o_j (lines 26-30). If object o_j' was evicted from another server (i.e., j' ≠ 0), the replica count is decremented and the insertion gain and eviction cost corresponding to o_j' are updated. If object o_j' is now replicated only at s_i, then server s_i updates only the eviction cost ec_ij'. If the object is not replicated at any server in the group, then s_i updates only the insertion gain ig_ij'.

Then, each server participates in another all-reduce-max operation that determines the next candidate server and object(s). Each server prepares the send_msg as follows. The maximum insertion gain ig_max and j are determined as before. Server s_i also determines a candidate object o_j' for eviction. This is the object that has the minimum positive eviction cost at s_i. A server is eligible to be considered for an allocation only if one of the following holds: it has available capacity to store more objects, or it is full but some inserted object o_j' has an eviction cost less than the insertion gain of some uninserted object o_j. Otherwise, it reports its ineligibility by setting ig_max to 0. In lines 40 and 41, ec_min and j' are determined, and in lines 42-43, ec_min is compared with ig_max only when the available capacity e_i = 0. If both eligibility conditions fail, ig_max is set to zero. If e_i > 0, then server s_i has space for new objects and hence no eviction is necessary (j' = 0). The algorithm terminates when every server reports ig_max = 0.

4 ANALYSIS OF DGR

In this section, we analyze the complexity of the DGR algorithm and determine its approximation ratio.

4.1 Complexity

We analyze the computational and communication complexity of DGR.
To calculate the running time we determine the number of iterations of the main loop. We differentiate the iterations based on whether an eviction occurs (j' > 0) or not. An insertion iteration is one that does not involve an eviction. If an eviction takes place in an iteration, we call it a replacement iteration. It is clear from the algorithm description that each iteration falls into one of these two categories. We denote by C = Σ_{i=1}^{m} c_i the total capacity of the replication group. Finally, to represent the value of a variable after an iteration is executed, we use the notation variable^iteration. For example, the value of ig_ij after iteration t is completed is denoted by ig_ij^t; the initial value of this variable is ig_ij^0. This notation is used to show the state of the variables that change during the main loop iterations.

Lemma 1: The main loop of DGR requires at most C insertion iterations.

Proof: Each e_i is set to c_i in the initialization phase (line 4). Therefore, Σ_{i=1}^{m} e_i = C at the beginning. An insertion iteration decreases some e_i by one (line 17). No insertion iteration takes place once Σ_{i=1}^{m} e_i = 0, or earlier, if all objects are replicated. Also, a replacement iteration does not have any effect on any e_i. Hence, DGR requires at most C insertion iterations.

Lemma 2: For some object o_j and server s_i, ig_ij^t > ig_ij^{t-1} only if t is a replacement iteration that evicts o_j from s_i.

Proof: In the main loop of DGR, the insertion gain of an object is assigned a value in three places: lines 17, 20, and 27. Only line 17 or line 27 is executed in an insertion iteration. ig_ij becomes zero in line 17. Line 27 is executed if server s_i does not contain object o_j, and in that case ig_ij^{t-1} is at least r_ij (t_r - t_l). Hence, an insertion iteration cannot increase ig_ij. Line 20 is executed only in replacement iterations; there ig_ij' is increased from zero, since server s_i evicts object o_j'.

We next show that an object replica will not be evicted in a given iteration if, after the previous iteration, it is the only replica stored by the replication group.

Lemma 3: An object o_j will not be evicted in iteration t if rc_j^{t-1} = 1.

Proof: We prove this lemma by induction on the order of the objects' first replica insertions at any server within the group. Let us assume an ordering o_{j_1}, ..., o_{j_n} of the objects such that the first replica of object o_{j_1} is inserted before the first replica of object o_{j_2}, and so on. As the base case of the induction we show that object o_{j_1} will not be evicted when it has only one replica in the group. In the inductive step we prove that if each of the objects o_{j_1}, ..., o_{j_{k-1}} has at least one replica in the group and object o_{j_k} has only one replica, that replica will not be evicted.

Base case: Since object o_{j_1} is the first object replicated by the algorithm, the replication must occur in the first iteration. Let s_{i_1} be the server that replicates o_{j_1} in iteration 1. Therefore, the insertion gain of o_{j_1} at s_{i_1} has the highest possible value among all objects and servers. According to line 17 of DGR, ec_{i_1 j_1} is larger than any ig value after iteration 1. ec_{i_1 j_1} retains this value as long as the replica of object o_{j_1} at server s_{i_1} is the only

replica of object o_{j_1} in the group. Therefore, as long as o_{j_1}'s only replica remains stored at server s_{i_1}, this replica cannot be evicted.

Inductive step: Suppose that after some iteration, objects o_{j_1}, ..., o_{j_{k-1}} have one or more replicas in the group and object o_{j_k} has only one replica. Let us assume that server s_l holds the replica of object o_{j_k}. It is clear that objects o_{j_{k+1}}, ..., o_{j_n} cannot have higher insertion gains than object o_{j_k}, because otherwise they would have been replicated before o_{j_k}. Only the objects inserted earlier than o_{j_k} may have an increased insertion gain due to eviction from some server. Objects replicated at server s_l have insertion gains equal to zero, which can increase upon their eviction from s_l (Lemma 2). Obviously, these insertion gains will not be larger than object o_{j_k}'s eviction cost, since otherwise those objects would have been selected for eviction instead of o_{j_k}. If an object is evicted from some other server, its insertion gain at server s_l remains fixed as long as there exist other replicas of that object in the group after the eviction (equation (7)). If an object's only replica in the group is evicted from some other server in a replacement iteration, its insertion gain at server s_l can increase (equation (7)). Only in this case can the increased insertion gain exceed the eviction cost of o_{j_k} at server s_l. Therefore, the only replica of object o_{j_k} cannot be evicted as long as objects o_{j_1}, ..., o_{j_{k-1}} have at least one replica in the group.

Clearly, a replacement can occur only if some object's eviction cost decreases or some other object's insertion gain increases, or both. From Lemma 3, we see that the ways of increasing an insertion gain cannot by themselves be sufficient for evicting an object. We can conclude with the following corollary.

Corollary 1: A replacement cannot occur solely because the insertion gain of an object has increased. A decrease in the eviction cost of some object must occur for a replacement to take place.

Next, we show that only the first inserted replica of an object with multiple replicas is subject to a decrease in eviction cost, and thus only it can possibly be evicted.

Lemma 4: If object o_j is replicated at server s_i in iteration t, then ec_ij^{t'} < ec_ij^t is possible for some iteration t' > t only if rc_j^t = 1, as long as o_j has a replica at s_i.

Proof: Equation (8) shows the three possible values of ec_ij. It is zero when o_j is not replicated at s_i. If object o_j is replicated at s_i when there is no replica of o_j in the group, ec_ij takes the maximum value. This value can decrease if some other server replicates object o_j. On the other hand, when object o_j is replicated at server s_i and there already exist other replicas of o_j in the group, ec_ij is set to the second value shown in equation (8). This value does not decrease as long as the replica remains stored at server s_i. Therefore, as long as o_j remains replicated at server s_i, ec_ij can decrease later only if it was the first replica of object o_j in the group.

Lemma 5: The main loop of DGR requires at most C replacement iterations.

Proof: By Corollary 1, we only need to investigate the cases in which an eviction cost can decrease. Lemma 3 says that only objects with multiple replicas may be evicted. Lemma 4 states that only the first inserted replica of an object with multiple replicas is subject to a decrease in eviction cost, and thus only it can possibly be evicted. In the worst case DGR will allocate two copies each of C/2 objects.
Of these, C/2 replicas will be replaced by C/4 other objects, each having two replicas. This continues until one replica each of C objects exists in the system. Thus, the maximum number of replacement iterations is C/2 + C/4 + ... ≤ C.

Theorem 1: The running time of DGR is O(n + C log n).

Proof: From Lemmas 1 and 5, the maximum number of main loop iterations is 2C. Each iteration consists of constant-time operations, with the exception of the max and min operations in lines 39-41, which can be implemented using one max-heap and one min-heap. Thus, the running time of each iteration is O(log n). The initialization phase consists of an n-iteration loop and a build-heap operation of cost n. Therefore, the worst-case running time of DGR is n + 2C log n = O(n + C log n).

Theorem 2: The communication complexity of DGR is O((n + C) log m).

Proof: The standard primitives all-reduce-sum and all-reduce-max contribute the communication cost of DGR. They are essentially the same operation, except that they use different associative operators. Grama et al. [31] show that the communication cost of all-reduce operations is O(w log m), where w is the message size in words and m is the number of participating servers. Therefore, the communication complexity of all-reduce-sum during initialization is O(n log m), since each message r_i is of length n. The primitive all-reduce-max is called with constant-size messages and it is executed at most 2C times in the main loop. Thus, its communication complexity is O(C log m). We conclude that the total communication cost of DGR is O((n + C) log m).

Since we are comparing the performance of DGR with the performance obtained by the best distributed algorithm presented in [7], we also need to compare the two in terms of their complexity. The algorithm given in [7], which we refer to as LWY from the names of the authors, has the same communication overhead as DGR, that is, O((n + C) log m), and a running time in O(mn log n). Clearly, DGR has a better running time, since C ≤ mn in practice. The communication and computational complexity of the two algorithms are given in Table 1. We determine the actual differences in communication and running time of these two algorithms in Section 5.

TABLE 1
Complexity of DGR and LWY

Complexity      DGR               LWY
Communication   O((n + C) log m)  O((n + C) log m)
Computational   O(n + C log n)    O(mn log n)
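The O(log n) per-iteration cost in Theorem 1 assumes that the maximum insertion gain and the minimum positive eviction cost are maintained in heaps. One standard realization, sketched here with Python's heapq and lazy invalidation (our illustration, not the paper's implementation):

```python
import heapq

class LazyHeap:
    """Priority queue with O(log n) updates and lazy deletion of stale
    entries, suitable for the max/min selections in lines 39-41."""
    def __init__(self, sign=1):
        self.sign = sign              # sign = -1 turns the min-heap into a max-heap
        self.heap = []
        self.current = {}             # key -> latest value

    def update(self, key, value):
        self.current[key] = value
        heapq.heappush(self.heap, (self.sign * value, key))

    def discard(self, key):
        self.current.pop(key, None)   # invalidate without re-heapifying

    def top(self):
        while self.heap:
            v, key = self.heap[0]
            if self.current.get(key) == self.sign * v:
                return key, self.sign * v      # entry still valid
            heapq.heappop(self.heap)           # stale entry: drop it
        return None
```

Server s_i would keep a max-heap (sign = -1) over its ig_i entries and a min-heap over its positive ec_i entries; since only a constant number of entries change per iteration, each iteration costs O(log n), which is the bound used in Theorem 1.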

4.2 Approximation Ratio

In the following analysis, we assume that t_s - t_r ≥ t_r - t_l. This assumption reflects the benefit a server can expect from participating in a distributed replication group: a server benefits if the remote access time for an object is considerably lower than the access cost from the origin, and a server will access an object from another replicator only if that access cost is not unreasonably higher than the local access cost.

We show that OPT/DGR ≤ 2, where OPT and DGR denote the total gains of the optimal and the DGR allocations, respectively. An equivalent statement is (OPT - DGR)/DGR ≤ 1, or OPT - DGR ≤ DGR. To determine OPT - DGR we characterize the difference between the allocations by means of primitive operations, such as allocating or deallocating objects. We show that any DGR allocation can be converted into an optimal one by a finite set of such primitive operations. An operation increases, decreases, or does not affect the total system gain. We refer to an operation's effect on the system gain as the gain of the operation. Let OP be the set of operations that converts a particular DGR allocation into the optimal one. Therefore,

Σ_{op ∈ OP} G_op = OPT - DGR

where G_op is the gain of the operations that change DGR into OPT. It is sufficient to show that

Σ_{op ∈ OP} G_op ≤ DGR    (9)

to prove that DGR is a 2-approximation algorithm.

Definition 1: A discrepancy is the existence of a (server, object) pair (s_i, o_j) such that o_j is replicated at s_i in the DGR allocation but not in OPT, or vice versa. Resolving a discrepancy means performing the necessary insertions or evictions on the DGR allocation so that the discrepancy is eliminated.

Definition 2: We define a primitive operation to be one of the following:
(i) INSERT(j, i): inserts object o_j into the cache of server s_i.
(ii) EVICT(j, i): evicts object o_j from the cache of server s_i.
(iii) MOVE(j, i_1, i_2): equivalent to EVICT(j, i_1) followed by INSERT(j, i_2).

Note that INSERT and EVICT increase and decrease, respectively, the number of objects replicated at a server, while MOVE decreases the number of objects at one server and increases it at another.

Definition 3: A feasible set of operations is a finite set of primitive operations that: (i) can resolve a nonempty set of discrepancies; (ii) does not change the number of objects replicated at any server; and (iii) can increase the overall gain over the DGR allocation.

Corollary 2: A set consisting of a single primitive operation is not feasible. This corollary follows directly from Definition 3.

Lemma 6: A feasible set of operations can only be one of the following:
IME: a set composed of one INSERT(j_1, i_1) operation, k MOVE(j_{l+1}, i_l, i_{l+1}) operations, l = 1, ..., k, and one EVICT(j_{k+2}, i_{k+1}) operation.
MM: a set composed of k - 1 MOVE(j_l, i_l, i_{l+1}) operations, l = 1, ..., k - 1, and one MOVE(j_k, i_k, i_1) operation.

Proof: We prove this lemma by first showing that the IME and MM sets of operations satisfy the feasibility properties and then showing that other sets of operations either are not feasible or can be transformed into an IME or an MM set.

First, an INSERT can resolve a discrepancy in which an object is replicated in the optimal allocation but not in DGR. Similarly, an EVICT can resolve the opposite type of discrepancy, and a MOVE can resolve a set of two discrepancies. Being sets of these primitive operations, IME and MM both satisfy the first condition. IME satisfies the second feasibility condition as follows: the number of objects at s_{i_1} is increased by the INSERT operation, but the first MOVE decreases that number and replicates an object at another server, which is in turn balanced by the next MOVE, and so on. The number of objects incremented at s_{i_{k+1}} by the last MOVE is balanced by the EVICT operation. The same logic applies to prove that MM satisfies the second property.
The third condition of feasibility is that the set of operations should be able to increase the overall gain over the DGR allocation. We show that both IME and MM sets can increase the overall gain. An INSERT operation increases the overall gain in access time, since it allocates an object to a server, and an EVICT operation decreases the system gain. A MOVE operation has the effects of both an INSERT and an EVICT. The sets for which the total increase in gain surpasses the total decrease in gain increase the overall system gain. The same logic applies to MM, since MM contains only MOVE operations. This shows that an IME or an MM set of operations can increase the overall system gain. It is worth mentioning here that not all such sets increase the overall gain; we consider the ones that increase the gain over the DGR allocation in order to characterize the difference between the DGR and the optimal allocations.

Now we show that other types of sets of primitive operations are either not feasible or can be represented as an IME or an MM. A set with unequal numbers of INSERT and EVICT operations violates the second condition. A set with one INSERT and one EVICT satisfies the second condition if they operate on the same server, but it violates the third condition, since by the greedy choice of DGR the INSERT operation cannot gain more than the EVICT loses in this case. Similarly, we can show that a set of multiple pairs of INSERT and EVICT operations must follow the pattern of an IME or an MM set to maintain the last property, and thus it can be represented as an IME or an MM.

Lemma 7: Let an IME resolve a set of discrepancies and let A be the subset of the DGR allocations that are converted into optimal allocations in the process. Then Σ_{op ∈ IME} G_op ≤ Σ_{a ∈ A} G_a, where G_op is the gain of operation op and G_a is the gain of the DGR allocation a.

Proof: Let an IME be composed of the following operations: INSERT(j_1, i_1), MOVE(j_2, i_1, i_2), and EVICT(j_3, i_2). Then A = {(s_{i_1}, o_{j_2}), (s_{i_2}, o_{j_3})} and B = {(s_{i_1}, o_{j_1}), (s_{i_2}, o_{j_2})} are the subsets of the DGR and the optimal allocations that give the discrepancies resolved by this IME. Without loss of generality, we assume that all these allocations are single replicas of the respective objects in the group.

Therefore,

Σ_{a ∈ A} G_a = r_{i_1 j_2}(t_r - t_l) + p_{j_2}(t_s - t_r) + r_{i_2 j_3}(t_r - t_l) + p_{j_3}(t_s - t_r)    (10)

Σ_{b ∈ B} G_b = r_{i_1 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r) + r_{i_2 j_2}(t_r - t_l) + p_{j_2}(t_s - t_r)    (11)

By subtracting equation (10) from equation (11), we find the gain of the operations in the IME as

Σ_{op ∈ IME} G_op = r_{i_1 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r) - r_{i_2 j_3}(t_r - t_l) - p_{j_3}(t_s - t_r) + r_{i_2 j_2}(t_r - t_l) - r_{i_1 j_2}(t_r - t_l)    (12)

Therefore, we need to prove that the right-hand side of equation (12) is at most the right-hand side of equation (10). First, we claim that

r_{i_2 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r) ≤ r_{i_2 j_3}(t_r - t_l) + p_{j_3}(t_s - t_r)    (13)

Here the left-hand side is the overall gain for replicating o_{j_1} at s_{i_2} and the right-hand side is the overall gain for replicating o_{j_3} at s_{i_2}. The inequality holds because otherwise o_{j_1} would have been replicated at s_{i_2} by DGR instead of o_{j_3}.

We prove the main result in two parts. First, we consider the first four terms of equation (12):

r_{i_1 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r) - r_{i_2 j_3}(t_r - t_l) - p_{j_3}(t_s - t_r)
  = (r_{i_1 j_1} - r_{i_2 j_1})(t_r - t_l) + r_{i_2 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r) - r_{i_2 j_3}(t_r - t_l) - p_{j_3}(t_s - t_r)
  ≤ r_{i_2 j_3}(t_r - t_l) + p_{j_3}(t_s - t_r)    (14)

since (r_{i_1 j_1} - r_{i_2 j_1})(t_r - t_l) ≤ r_{i_2 j_1}(t_r - t_l) + p_{j_1}(t_s - t_r), because (r_{i_1 j_1} - r_{i_2 j_1}) ≤ p_{j_1} and (t_r - t_l) ≤ (t_s - t_r); the rest follows from equation (13).

Next, we claim that the remaining two terms of equation (12) satisfy

(r_{i_2 j_2} - r_{i_1 j_2})(t_r - t_l) ≤ r_{i_1 j_2}(t_r - t_l) + p_{j_2}(t_s - t_r)    (15)

since (r_{i_2 j_2} - r_{i_1 j_2}) ≤ p_{j_2} and (t_r - t_l) ≤ (t_s - t_r). Adding inequalities (14) and (15) gives us the result

Σ_{op ∈ IME} G_op ≤ Σ_{a ∈ A} G_a    (16)

We showed that the inequality holds for an IME that includes only one MOVE operation. We claim that it also holds for IME sets with more than one MOVE operation. Inequality (14) shows that the combined gain of the INSERT and EVICT operations is at most the gain of the DGR allocations they change. This holds for each IME set, since by definition an IME includes only one INSERT and one EVICT. Inequality (15) shows that the gain of a MOVE operation is at most the gain of the corresponding allocation in DGR. Therefore, as we add more MOVE operations, we obtain one such inequality for each MOVE operation, and the inequality in (16) holds for any IME set. Again, in these cases we assumed that the respective objects have one replica each, but since we use the overall system gain in our analysis, these results remain valid for an IME with any number of MOVE operations.

Corollary 3: The result of Lemma 7 also holds for an MM set of operations. An MM is a subset of an IME set with multiple MOVE operations. Therefore, the inequality in equation (15) applies to each MOVE operation in the set and we can conclude that Σ_{op ∈ MM} G_op ≤ Σ_{a ∈ A} G_a, where A is the set of allocations in DGR that were changed by the operations in the MM.

Theorem 3: DGR is a 2-approximation algorithm for the replica placement problem.

Proof: From Lemma 6, Lemma 7, and Corollary 3, we find that if OP is the set of operations that resolves the set of discrepancies (if any) between a DGR and an optimal allocation, then OP is the union of zero or more IME and MM sets of operations. Also, a discrepancy can be resolved only once; therefore these IME and MM sets are disjoint, and so are the sets of DGR allocations they change. Let A be the set of allocations in DGR that are changed by the operations in OP.
Therefore,

Σ_{op ∈ OP} G_op = Σ_{IME_x ⊆ OP} Σ_{op ∈ IME_x} G_op + Σ_{MM_y ⊆ OP} Σ_{op ∈ MM_y} G_op
               ≤ Σ_{IME_x ⊆ OP} Σ_{a ∈ A_x} G_a + Σ_{MM_y ⊆ OP} Σ_{a ∈ A_y} G_a
               ≤ DGR

Here, A_x ⊆ A is the set of allocations affected by IME_x and A_y ⊆ A is the set of allocations affected by MM_y. Thus, we showed that the inequality in equation (9) is satisfied, and therefore DGR is a 2-approximation algorithm for the replica placement problem.

5 EXPERIMENTAL RESULTS

In this section, we perform simulation experiments to determine the performance of DGR in practice. We perform three sets of experiments. In the first set we compare the performance of DGR with the performance of the best distributed algorithm presented in [7] (which we call LWY from the names of the authors). To the best of our knowledge, [7] is the closest work to ours and proposed the distributed algorithm with the best performance to date. We focus on investigating how the variability of the request rates affects the performance of the two algorithms. The second set of experiments compares DGR and LWY focusing on their scalability in terms of the number of servers and objects. In the third set of experiments we compare DGR with a centralized algorithm, the Aε-Star search algorithm presented in [30]. We selected this algorithm because it provides the best performance in terms of reducing object access costs among those compared in [30].

5.1 Experimental Setup

In the first set of experiments we compare the performance of DGR and LWY [7] for different types of data distributions. The problem we are considering is similar to the one investigated in [7], except that their model considers that each server deploys the same amount of memory for replication, while we remove this restriction in our model. To be able to compare our algorithm with LWY [7], we chose an experimental setting in which the servers have the same storage capacity for

replication. There is also another subtle difference between the two models: in our model, we assume that the request rates are integers, i.e., they represent numbers of requests, while in [7] the request rates are between 0 and 1.

The LWY algorithm works as follows. In the beginning, each server exchanges the request rates for all objects with the other servers. Then, each server works independently, but the servers exchange information about their decisions of replicating objects. At first the allocation matrix X is initialized to zero. Then, a server s_i is randomly chosen. Server s_i calculates the insertion gains of all objects using equation (7). Then, it replicates the c_i objects with the highest insertion gain values, updates row i of matrix X accordingly, and shares this information with the other servers. (Here c_i is the capacity of server s_i.) Next, another server s_i' is chosen randomly. This server has the information about the allocation matrix X with the replication decisions made by server s_i. Server s_i' now calculates the insertion gains based on the updated information, makes its replication decisions as before, and updates the matrix X accordingly. Thus, one server is randomly chosen in each step to perform the above actions. When all servers have completed the replication process, a round is completed. The algorithm continues until the allocation converges, i.e., until no round can improve the overall gain in access cost.
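As a reference point, here is a condensed sketch of one LWY round as described above (our reading of [7], reduced to a single process; convergence checking is left to the caller):

```python
import numpy as np

def lwy_round(X, r, c, t_s, t_r, t_l, rng):
    """One LWY round: servers take turns in random order, each greedily
    refilling its cache with its c_i highest-insertion-gain objects."""
    m, n = r.shape
    p = r.sum(axis=0)
    for i in rng.permutation(m):
        X[i, :] = 0                            # recompute s_i's placement
        rc = X.sum(axis=0)                     # replica counts known to s_i
        # insertion gain of every object at s_i, per equation (7)
        gain = np.where(rc == 0,
                        p * (t_s - t_r) + r[i] * (t_r - t_l),
                        r[i] * (t_r - t_l))
        best = np.argsort(gain)[::-1][:c[i]]   # the costly sort in the analysis
        X[i, best] = 1                         # in [7], row i is then broadcast
    return X

# rounds repeat until a full round leaves the total gain (equation (4)) unchanged
```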
The authors of [7] used three parameters to characterize the distribution of the request rates. The first parameter, called the hot-set parameter, θ, determines the distribution of the request rates of all objects at one server as r_ij = e^{-θj}/T, where T = Σ_{j=1}^{n} e^{-θj} and θ ≥ 0. The distribution is flat if θ = 0, and its skewness increases with θ. In [7], θ was varied from 0.1 to 0.82 with increments of 0.09. We found that for 100 objects, the request rates vary between 0.0005% and 0.95% of the total for θ = 0.1; this is an almost flat distribution. On the other hand, when θ = 0.82, the request rates of the first 6% of the objects add up to almost 100% of the total request rates. To get a steeper curve, in our experiments we chose to extend the upper limit of θ to 1.28. This way we obtain a setting in which around 2% of the objects constitute about 100% of the total request rates. The request rate curves become even steeper as we increase the number of objects; thus, we can obtain steeper curves by only increasing the number of objects. Therefore, in our experiments we varied θ from 0.1 to 1.28, keeping the same increments of 0.09 as in [7].

The second parameter, called the correlation of site hot-sets, ρ, determines the randomness of the request rates of an object across different servers. Its values are between 0 and 1. It can be characterized as follows. At first we determine the request rates for server s_1 for a given θ; therefore, server s_1 has the j-th highest request rate for object o_j (note that the request rates are exponential in -θj). At a server other than s_1, object o_j will have the k-th highest request rate among the objects, where k is a random number between 1 and min(ρn + j, n). We varied ρ from 0.01 to 0.96 with 0.05 increments. As ρ increases from zero, the randomness of the access rates of the objects at different servers increases.

The third parameter is the relative site activity, η. After we determine the object access rates at the different servers with the above two parameters, we multiply the request rates at server s_i by a_i = 1/(A i^{1-η}), where A = Σ_{i=1}^{m} 1/i^{1-η} and 0 ≤ η ≤ 1. That means that all servers are equally active when η = 1, and the activity levels vary more as η decreases. In our experiments we varied η from 0 to 1 with 0.1 increments. The other parameters, t_s, t_r, and t_l, were kept fixed at the same values as in [7]. We varied the cache sizes of the servers so that the total capacity varies from 10% to 100% of the total number of objects. Here we kept the same cache size for all servers, as in [7].

In the first set of experiments we consider the following scenarios: 10 servers with 100, 1000, 2000, and 5000 objects, and 2000 objects with 10, 30, and 60 servers. Our dataset is a very large superset of that used in [7], with 52,800 data points for each (number of servers, number of objects) combination. In each experiment, we calculate the ratio of the gains achieved by DGR and LWY for analysis. In this set of experiments we mainly focused on how the variability of requests affects the performance of DGR; hence we selected wide ranges of parameter values with small intervals for each (m, n) combination.

In the second set of experiments we test how well DGR and LWY scale when we increase the problem size. We considered every combination of m and n, with m = (8, 16, 32, and 64) and n = (8192, 16384, 32768, and 65536). For each (m, n) pair, we chose the other parameter values as follows: θ = (0.05, 0.1, 0.2, 0.4, 0.8), ρ = (0.05, 0.1, 0.2, 0.4, 0.8), and η = (0, 0.5, 1). We considered the following server capacities: (125, 250, 500, 1000, 2000, and 4000). Along with the replication performance, we compare the running time and the message communication overhead of the two algorithms for these large problems.

The third set of experiments compares DGR with a variant of the Aε-Star search algorithm proposed in [30]. Aε-Star uses a technique to reduce the running time of the A-Star search at the expense of achieving a (1 + ε)-approximation solution. We devised an admissible and monotone heuristic function for our problem and used it in the Aε-Star algorithm. It turned out that although Aε-Star yields a better gain than DGR, it has a prohibitively high running time. Therefore, we had to limit our experiments to small values of m and n and small variations for each (m, n). We performed the experiments with 4 servers and 50, 100, 200, 400, 800, and 1600 objects; 8 servers and 50, 100, 200, and 400 objects; and 16 and 32 servers with 100 and 200 objects. For the problem instances with 4 servers and the one with 8 servers and 50 objects, we considered the following values: θ = (0.05, 0.1, 0.2), ρ = (0.05, 0.1), and η = (0, 0.5, 1). For the rest of the cases, we fixed θ, ρ, and η at 0.05, 0.1, and 0.5, respectively. The server capacities were (50, 100, 200, and 400) in all of the cases. In the next subsection we discuss the results of all these experiments.
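The synthetic request-rate generation described above can be condensed as follows (our sketch; the handling of rank collisions between objects is an assumption, as [7] does not fully specify it):

```python
import numpy as np

def make_rates(m, n, theta, rho, eta, rng=None):
    """Request-rate matrix r driven by the parameters theta, rho, and eta."""
    rng = rng or np.random.default_rng()
    j = np.arange(1, n + 1)
    base = np.exp(-theta * j)
    base /= base.sum()                         # hot-set curve of server s_1
    r = np.empty((m, n))
    r[0] = base
    for i in range(1, m):
        hi = np.minimum(rho * n + j, n).astype(int)
        k = rng.integers(1, hi + 1)            # o_j gets the k-th highest rate
        r[i] = base[k - 1]
    idx = np.arange(1, m + 1)
    A = (1.0 / idx ** (1 - eta)).sum()
    a = 1.0 / (A * idx ** (1 - eta))           # relative site activity a_i
    return r * a[:, None]
```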

5.2 Performance Analysis

In the first set of experiments we compare DGR with LWY for varying distributions of the request rates. We calculate the system gain achieved by both algorithms using equation (4) and then divide the gain of DGR by that of LWY. This gain ratio is used to determine the relative gain in performance of DGR over LWY. At first we compute some statistics of the relative gains based on the number of servers (m) and objects (n). We summarize the results in Tables 2 and 3.

Fig. 1. Distribution of the relative gain of DGR over LWY for m = 10 (histograms of the DGR/LWY gain ratio vs. frequency; panels (a)-(c) correspond to n = 100, 1000, and 5000).

Fig. 2. Distribution of the relative gain of DGR over LWY for n = 2000 (panels (a)-(c) correspond to m = 10, 30, and 60).

TABLE 2
DGR Gain vs. LWY Gain (10 Servers)
Columns: n; percentage of cases with DGR > LWY; Min, Max, Mean, and StDev of the DGR/LWY gain ratio. (Rows for n = 100, 1000, 2000, and 5000; for n = 100, DGR > LWY in 97.28% of the cases.)

TABLE 3
DGR Gain vs. LWY Gain (2000 Objects)
Columns: m; percentage of cases with DGR > LWY; Min, Max, Mean, and StDev of the DGR/LWY gain ratio. (Rows for m = 10, 30, and 60.)

In Table 2, we show the statistics of the relative gains for a replication group composed of 10 servers and different numbers of objects. The first column shows the number of objects. The second column indicates the percentage of cases in which DGR performed better than LWY. The other columns show the minimum, maximum, mean, and standard deviation of the relative gains. Table 3 shows the same statistics with the number of objects fixed at 2000 and the number of servers varied. The first column in this table shows the number of servers, and the rest of the columns are the same as in Table 2. The results show that DGR obtains a better gain than LWY in almost all cases. For example, for a system composed of 10 servers and 100 objects, LWY yielded better or equal gains than DGR in less than 2.72% of the cases. For small numbers of objects, the minimum ratio of the DGR gain to the LWY gain is about 0.988, which means that LWY achieves a gain that is at most 1.2% better than DGR. On the other hand, in this set of experiments DGR achieves a gain of more than 7% over that obtained by LWY in the best case.

The above results suggest that our algorithm obtains a higher overall gain in performance than the LWY algorithm in more than 97.28% of the cases. The reason is that in DGR, the servers coordinate the replication decisions in each iteration. In each iteration, each server determines the object that would give the highest system gain upon replication, and the replication with the highest gain among all servers is chosen. We note here that, according to equation (7), the preferences of all servers change after one object is replicated somewhere in the group. The DGR algorithm captures these changes and re-evaluates the replication gains after each object replication. In LWY, by contrast, a server makes decisions for all objects at the time of its turn, simply replicating the objects that would yield the highest gain from replication until the server's capacity is exhausted. This prevents the detection of cases where some objects might yield better gains if replicated elsewhere. On the other hand, we know that DGR is an

Fig. 3. Distribution of the cases in which the DGR gain is less than the LWY gain vs. η and ρ (m = 10; curves for 100, 1000, 2000, and 5000 objects; panel (a) shows the distribution over η, panel (b) over ρ).

Fig. 4. DGR/LWY gain ratio vs. m and n (panel (a): minimum gain ratio, panel (b): maximum gain ratio; n = 8192, 16384, 32768, and 65536; m = 8, 16, 32, and 64).

approximation algorithm because the greedy choice cannot always obtain the optimal allocation. Therefore, it is possible to obtain a better performance than DGR in very few cases. However, the low percentage of cases in which LWY performs better than DGR shows that the strategy applied in DGR is more robust in finding better solutions. Although in LWY the servers do exchange information about which objects they cache and how much they request each object, it performs worse than DGR, since DGR synchronizes the servers and updates the incremental information.

We also plot the frequency distributions of the relative gain of DGR with respect to LWY as histograms in Figures 1 and 2. A bar in these histograms represents the percentage of the cases in which the ratio of the DGR gain to the LWY gain falls between specific limits. For example, Figure 1a shows that in almost 7% of the 52,800 cases, the gain ratio is between 1 (inclusive) and 1.1 (exclusive). Now we discuss the results, grouping them according to Table 2 and Table 3. Figure 1 shows the distributions summarized in Table 2. In these histograms, m is fixed while n is varied. We see that in each of these histograms, the frequency of the cases with relative gain between 1 and 1.1 is the highest. This frequency increases as the number of objects increases. Also, we can see that the next range of gains, i.e., from 1.1 to 1.2, has frequencies close to 5% in all of these histograms, and from there the frequencies decrease gradually at different rates. Although LWY has gains closer to DGR in more cases as the number of objects increases, DGR still offers considerable improvement over LWY in the remaining cases. We observe that when the number of objects per server increases, LWY performs close to DGR in more cases. This is because the number of allocations that differ between DGR and LWY becomes smaller compared to the total number of allocations.

Figure 2 gives the details of the results that are summarized in Table 3. Here the number of objects is fixed at 2000 and the number of servers is 10, 30, and 60. The shape of the histogram changes with the increase in the number of servers: the percentage of cases where LWY performs close to DGR decreases significantly as the number of servers increases. When the number of servers increases, the LWY algorithm performs worse because of the amount of information available to a server when making its decisions. For example, for 10 servers, when the second server makes its replication decision, it has knowledge of 1/10th of the overall replica placements; the third one taking its turn has the information for 2/10th of the overall replica placements. But for 30 servers, these ratios become 1/30th and 2/30th, respectively. Thus, the more servers there are, the less informed the decisions made by LWY.
In DGR, by contrast, the servers coordinate every replication decision, so DGR makes informed decisions regardless of the number of servers.

We now present the cases where LWY performed better than DGR, with respect to different parameters; these cases constitute less than 2.72% of the total number of cases considered in the experiments.

Fig. 3. Distribution of the cases in which the DGR gain is less than the LWY gain vs. η (panel a) and ρ (panel b).

Figure 3a shows the cases with better LWY gains for different values of η, and Figure 3b shows the same cases grouped according to different values of ρ (the site hot-set correlation parameter). In Figure 3a, we see that LWY obtains bigger gains than DGR

in less than 0.35% of the cases, for lower η values. Increasing η produces fewer cases in which LWY performs better. Here we see that if the servers are not equally active (i.e., η is small), LWY can beat DGR in some cases. This happens when highly active servers take their turns at the beginning, so the remaining servers can choose objects more efficiently for themselves, whereas in DGR some servers might occasionally suffer for the sake of coordination. In Figure 3b, we see a clear relationship between ρ and the frequency of cases with a higher LWY gain. Note that DGR performs better for higher values of ρ; a higher ρ means more randomness in the request patterns across servers. This suggests that coordination among the servers is most important when the request rates are dissimilar. Since identical request patterns are very unlikely in practice, we claim that DGR will consistently perform better than LWY.

Now we discuss the second set of experiments, in which we compare the scalability of DGR and LWY.

Fig. 4. Minimum (a) and maximum (b) DGR/LWY gain ratio vs. the number of servers m, for several values of n.

In Figures 4a and 4b, we plot the minimum and maximum gains of DGR over LWY across different numbers of servers and objects. We see that the maximum gain steadily increases with both m and n in the majority of cases, while the minimum gain of DGR over LWY is never less than 0.995. The maximum gains, ranging from about 8% to about 27%, clearly indicate that DGR can handle problem instances that LWY cannot.

TABLE 4
Running Time and Total Amount of Communication of DGR vs. LWY
Columns: m; n; execution time (ms) and communication (KB) for DGR; execution time (ms) and communication (KB) for LWY.

We also compare the running time and the communication overhead of these algorithms with respect to m and n in Table 4. The first two columns of this table give the number of servers and objects, respectively. The next two columns show the execution time and the total amount of data communicated by DGR, in milliseconds and kilobytes, respectively. The last two columns give the same metrics for LWY. We find that in each case DGR is much faster and requires less data to be communicated among the servers than LWY. This is because in LWY a server needs to perform a costly sort operation over the insertion gains of the objects, while DGR achieves the same effect with a series of less costly all-reduce-max operations. Similarly, in each DGR iteration a single message can determine the current maximum and the current allocation decision, while in LWY the servers need to perform a bulk data broadcast to inform the others of their replication decisions. A sketch of this single-message coordination step follows.
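The single-message coordination step can be implemented with one collective operation per iteration. The following mpi4py fragment is only a sketch under assumptions (the gain computation is a random placeholder, and all names here are invented for illustration rather than taken from the paper): MPI's MAXLOC reduction returns both the maximum gain and the rank that proposed it, so every server learns the round's decision from a single all-reduce, followed by a small broadcast of the winning object.

    # Sketch only; run with e.g.: mpiexec -n 4 python dgr_round.py
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def compute_local_best(rng):
        """Placeholder for the real insertion-gain computation: returns this
        server's best (gain, object id) pair for the current iteration."""
        return rng.random(), rng.randrange(100)

    rng = random.Random(rank)  # deterministic per-rank stand-in for request data
    local_gain, local_obj = compute_local_best(rng)

    # One all-reduce per iteration: MAXLOC reduces (value, location) pairs, so
    # every rank learns both the winning gain and the rank that achieved it.
    best_gain, winner = comm.allreduce((local_gain, rank), op=MPI.MAXLOC)

    # The winner announces which object it will replicate, so all ranks can
    # update their gain tables before the next iteration.
    best_obj = comm.bcast(local_obj if rank == winner else None, root=winner)

    if rank == 0:
        print(f"round winner: rank {winner}, object {best_obj}, "
              f"gain {best_gain:.3f}")

With this pattern, the per-iteration traffic is a single reduced pair plus one small broadcast, which matches the contrast drawn above with LWY's bulk broadcast of an entire server's replication decisions.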
We have seen a large difference between the maximum gains of DGR over LWY when measuring them across varying m and n. Now we present the data summarized along other dimensions, which enables us to judge whether the cases in which DGR performs better are common in practice.

Fig. 5. Average and maximum DGR/LWY gain ratio vs. the total server capacity, expressed as a percentage of the total number of objects, for 8 servers (a) and 64 servers (b).

Figure 5 shows two plots of the average and maximum DGR/LWY gain ratio over the ratio of the total server capacity to the total number of objects. This ratio varies from as small as 0.53% to as large as 325% in the data we generated for the experiments. In Figure 5a, we see that for 8 servers DGR performs better when the total server capacity is much smaller than the total number of objects considered for replication. Here DGR exhibits the characteristic of an efficient algorithm: it performs better under constrained conditions. When server capacities increase, all algorithms naturally tend to converge in performance, because every server is able to replicate its highly requested objects and the remaining differences are caused by the less requested objects. We observe this trend for every value of m, and present another such plot, for m = 64, in Figure 5b as a representative case. As the last comparison of DGR with LWY, we present in Table 5 the ten individual experiments in which DGR obtains the maximum performance gain over LWY; the ten cases in which DGR performs the worst are given in Table 6.

TABLE 5
Ten Best Performance Cases of DGR vs. LWY
Columns: DGR/LWY gain ratio; m; n; capacity (%); η; θ; ρ.

TABLE 6
Ten Worst Performance Cases of DGR vs. LWY
Columns: DGR/LWY gain ratio; m; n; capacity (%); η; θ; ρ.

At first, we note that the highest gain is about 27%, whereas the DGR gain is within 1% of the LWY gain in the worst case. In Table 5, we see that DGR performs better in cases with large numbers of servers and objects, which confirms that the algorithm's performance scales. The other parameter values indicate that DGR is expected to yield better solutions when server capacities are constrained, when servers are equally active but their object request rates vary widely, and when the objects are of comparable importance. In Table 6, on the other hand, we see that DGR performs worse than LWY in cases where the server capacities are large. As we mentioned before, every algorithm performs almost equally well in settings in which the servers have large capacities, so these results do not suggest any specific pattern. This set of experiments reveals that DGR is expected to perform better than LWY in the majority of the cases. We also established that both the computation time and the communication overhead of DGR are much lower than those of LWY. Finally, we determined the problem characteristics for which DGR offers significant improvements over the LWY algorithm.

We now briefly discuss the third set of experiments, in which we compare DGR with the Aε-Star algorithm [13]. The results for different m and n are summarized in Table 7. Since Aε-Star is a variant of the A-Star algorithm [31], which is designed to search for the optimal solution, it performs better than DGR on average. But the difference is only about 1% on average and 3% in the worst case. On the other hand, Aε-Star spends a huge amount of time finding the solution (as shown in the sixth column of Table 7), despite using pruning techniques to reduce the search space.

TABLE 7
DGR vs. Aε-Star Comparison
Columns: m; n; mean and minimum DGR/Aε-Star gain ratio; running time (ms) for DGR and for Aε-Star.

Summary of Results

The DGR algorithm has clear advantages over the existing algorithms. We first showed that it outperforms the best known distributed algorithm for the replica placement problem in the majority of the cases. The experiments show that DGR produces higher-quality results as the problem size increases. The execution time and the communication overhead of DGR are lower than those of LWY. We showed that DGR performs far better than LWY in cases with constrained server capacities, equally active servers with high variability between their object request rates, and objects of comparable importance. Finally, we showed that DGR also performs close to the centralized Aε-Star algorithm, which was shown to outperform many current algorithms in [13]. DGR is much faster than the Aε-Star algorithm and thus more suitable for practical implementation.

6 CONCLUSION

A distributed replication group creates a large replication storage by combining server caches and coordinating replication decisions. The efficiency of the group depends on how effectively the servers can place the replicas to minimize the overall object access cost. Therefore, an efficient distributed algorithm with minimum overhead is highly desirable in this setting. We designed a distributed approximation algorithm for the replica placement problem. We showed that the proposed algorithm runs in polynomial time and has a polynomial communication overhead.
We also proved that the proposed algorithm is a 2-approximation algorithm. We compared, by simulation, the performance of our algorithm with that of the distributed algorithm providing the best performance known so far in the literature. The comparison results show that our algorithm performs better in more than 97.28% of all cases, yielding a gain in performance of up to 26.9%. We also showed that our algorithm scales very well in terms of performance, computation, and communication, and that it is suitable for practical problem instances. Finally, we showed that the proposed algorithm performs within about 1% of the best known centralized algorithm. Hence, we claim that DGR is a very good candidate for practical implementation in distributed replication groups.

In future work, we plan to implement the proposed algorithm in a real system and to extend it to more general settings.

ACKNOWLEDGMENTS

This research was supported in part by NSF grant DGE. A short version of this paper [32] was published in the Proc. of NCA 2009. The authors wish to express their thanks to the editor and the anonymous referees for their helpful and constructive suggestions, which considerably improved the quality of the paper.

REFERENCES

[1] N. Laoutaris, O. Telelis, V. Zissimopoulos, and I. Stavrakakis, "Distributed selfish replication," IEEE Trans. Parallel Distrib. Syst., vol. 17, no. 12, pp. 1401-1413, 2006.
[2] C. Chekuri and S. Khanna, "A PTAS for the multiple knapsack problem," in Proc. 11th Ann. ACM-SIAM Symp. on Discrete Algorithms, 2000.
[3] D. B. Shmoys and E. Tardos, "An approximation algorithm for the generalized assignment problem," Mathematical Programming, vol. 62, no. 3, pp. 461-474, 1993.
[4] T. Moscibroda and R. Wattenhofer, "Facility location: distributed approximation," in Proc. 24th Ann. ACM Symp. Principles of Distributed Computing, 2005.
[5] D. P. Bertsekas and D. A. Castanon, "The auction algorithm for the transportation problem," Annals of Operations Research, vol. 20, no. 1-4, 1989.
[6] D. Bertsekas, "A distributed algorithm for the assignment problem," Laboratory for Information and Decision Systems Unpublished Report, M.I.T., 1979.
[7] A. Leff, J. L. Wolf, and P. S. Yu, "Replication algorithms in a remote caching architecture," IEEE Trans. Parallel Distrib. Syst., vol. 4, no. 11, 1993.
[8] M. J. Osborne, An Introduction to Game Theory. Oxford University Press, USA, 2003.
[9] B. Chun, K. Chaudhuri, H. Wee, M. Barreno, C. H. Papadimitriou, and J. Kubiatowicz, "Selfish caching in distributed systems: a game-theoretic analysis," in Proc. 23rd Ann. ACM Symp. Principles of Distributed Computing, 2004.
[10] S. U. Khan and I. Ahmad, "Discriminatory algorithmic mechanism design based www content replication," Informatica, vol. 31, no. 1, pp. 105-119, 2007.
[11] N. Laoutaris, G. Smaragdakis, A. Bestavros, I. Matta, and I. Stavrakakis, "Distributed selfish caching," IEEE Trans. Parallel Distrib. Syst., vol. 18, 2007.
[12] N. Laoutaris, V. Zissimopoulos, and I. Stavrakakis, "Joint object placement and node dimensioning for internet content distribution," Information Processing Letters, vol. 89, no. 6, 2004.
[13] S. U. Khan and I. Ahmad, "Comparison and analysis of ten static heuristics-based internet data replication techniques," J. Parallel and Distributed Computing, vol. 68, no. 2, pp. 113-136, 2008.
[14] P. Krishnan, D. Raz, and Y. Shavitt, "The cache location problem," IEEE/ACM Trans. Networking, vol. 8, 2000.
[15] L. Qiu, V. N. Padmanabhan, and G. M. Voelker, "On the placement of web server replicas," in Proc. 20th Ann. IEEE Conf. on Computer Communications, 2001.
[16] S. Bakiras, T. Loukopoulos, D. Papadias, and I. Ahmad, "Adaptive schemes for distributed web caching," J. Parallel and Distributed Computing, vol. 65, no. 2, 2005.
[17] B. Tang, H. Gupta, and S. R. Das, "Benefit-based data caching in ad hoc networks," IEEE Trans. Mobile Computing, vol. 7, no. 3, 2008.
[18] C. Kumar and J. B. Norris, "A new approach for a proxy-level web caching mechanism," Decision Support Systems, vol. 46, no. 1, pp. 52-60, 2008.
[19] M. Rabinovich, J. Chase, and S. Gadde, "Not all hits are created equal: cooperative proxy caching over a wide-area network," Computer Networks and ISDN Systems, vol. 30, 1998.
[20] I. D. Baev and R. Rajaraman, "Approximation algorithms for data placement in arbitrary networks," in Proc. 12th Ann. ACM-SIAM Symp. Discrete Algorithms, 2001.
[21] I. Baev, R. Rajaraman, and C. Swamy, "Approximation algorithms for data placement problems," SIAM J. Computing, vol. 38, no. 4, 2008.
[22] U. Čibej, B. Slivnik, and B. Robič, "The complexity of static data replication in data grids," Parallel Computing, vol. 31, no. 8-9, pp. 900-912, 2005.
[23] J. Zhou, Y. Wang, and S. Li, An Optimistic Replication Algorithm to Improve Consistency for Massive Data, ser. Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2005.
[24] Z. Begic, M. Bolic, and H. Bajric, "Centralized versus distributed replication model for multicast replication," in Proc. 49th Int'l Symp. ELMAR, 2007.
[25] B. Cai, C. Xie, and G. Zhu, "EDRFS: An effective distributed replication file system for small-file and data-intensive application," in Proc. 2nd Int'l Conf. on Communication Systems Software and Middleware, 2007, pp. 1-7.
[26] R. T. Hurley and B. Y. Li, "A performance investigation of web caching architectures," in Proc. 2008 C3S2E Conf., 2008.
[27] S. Sulaiman, S. M. H. Shamsuddin, F. Forkan, and A. Abraham, "Intelligent web caching using neurocomputing and particle swarm optimization algorithm," in Proc. 2nd Asia Int'l Conf. on Modelling and Simulation, 2008.
[28] V. Stantchev and M. Malek, "Addressing web service performance by replication at the operating system level," in Proc. 3rd Int'l Conf. on Internet and Web Applications and Services, 2008.
[29] G. Pierre and M. van Steen, "Globule: A collaborative content delivery network," IEEE Commun. Mag., vol. 44, 2006.
[30] A. Grama, G. Karypis, V. Kumar, and A. Gupta, Introduction to Parallel Computing (2nd Edition). Addison Wesley, 2003, ch. 4.
[31] P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Trans. Syst. Sci. Cybernetics, vol. 4, no. 2, pp. 100-107, 1968.
[32] S. Zaman and D. Grosu, "A distributed algorithm for web content replication," in Proc. 8th IEEE Int'l Symp. on Network Computing and Applications, 2009.

Sharrukh Zaman received his Bachelor of Computer Science and Engineering degree from Bangladesh University of Engineering and Technology, Dhaka, Bangladesh. He is currently a PhD candidate in the Department of Computer Science, Wayne State University, Detroit, Michigan. His research interests include distributed systems, game theory, and mechanism design. He is a student member of the IEEE.

Daniel Grosu received the Diploma in engineering (automatic control and industrial informatics) from the Technical University of Iaşi, Romania, in 1994, and the MSc and PhD degrees in computer science from the University of Texas at San Antonio in 2002 and 2003, respectively. Currently, he is an associate professor in the Department of Computer Science, Wayne State University, Detroit. His research interests include distributed systems and algorithms, resource allocation, computer security, and topics at the border of computer science, game theory, and economics. He has published more than sixty peer-reviewed papers in the above areas. He has served on the program and steering committees of several international meetings in parallel and distributed computing. He is a member of the ACM and a senior member of the IEEE and the IEEE Computer Society.
