Peer-Cached Content in Peer-Assisted Video-on-Demand Systems


1 Peer-Cached Content in Peer-Assisted Video-on-Demand Systems Le Chang, Min Xing, Jianping Pan University of Victoria, Victoria, BC, Canada Abstract One advantage of peer-assisted video-on-demand (PA-VoD) systems is the opportunity to harness the vast amount of peer-contributed resources, such as the upload bandwidth and cached content at each peer, to reduce the cost of maintaining video servers and providing such services. However, how to utilize these resources effectively and efficiently at the system level is still a real challenge, particularly for multi-channel, multi-user PA- VoD systems with different playback rates and peer behaviors. While the available upload bandwidth determines whether such a system can possibly self-sustain or not, the peer-cached content plays an equally, if not more, important role: even when the system is in a surplus mode in terms of the overall available bandwidth and total playback rate, without a careful cache coordination and bandwidth allocation, peers often cannot fully utilize their upload bandwidth and at the same time cannot obtain what they need from other peers on time, leading to a poor system efficiency and a bad user experience. In this paper, we focus on peer cache and upload bandwidth at the same time for multi-channel, multi-user PA-VoD systems with heterogeneous playback rates and viewing behaviors. We first model the system and derive the bounds on the server bandwidth consumption, a metric of most concern for service providers, while guaranteeing the user experience. We then formulate an optimization problem to achieve the bound when global information is available and system-wide coordination is possible (e.g., with a tracker). To improve the scalability, we also propose heuristic algorithms to allocate upload bandwidth and peer cache in a distributed way, and the results are compared with the optimization solutions and performance bounds through extensive simulation, which shows the high efficiency of the proposed algorithms. The work can offer important insights to further improve real-world PA-VoD systems as they become more and more prevalent over the Internet. Index Terms Peer-to-peer video streaming, video on-demand (VoD), view-upload decoupling (VUD), load balancing, caching strategy I. INTRODUCTION Nowadays, the need for large-scale, high-quality on-demand video streaming (VoD) services has attracted great attention to the design and implementation of multi-channel, multiuser VoD systems with high efficiency and low cost. Among different system architectures, peer-assisted VoD (PA-VoD) systems have a great potential to harness the vast amount of peer-contributed resources, such as peer upload bandwidth and cached content, to lower the server bandwidth consumption and thus cost. However, due to the heterogeneity of the channel playback rate and popularity, as well as the dynamic user behaviors, the bandwidth supply from peers varies greatly between channels and over time, which poses the great resource imbalance challenges to the research community. As the streaming rate of a peer in PA-VoD systems is contributed by the upload bandwidth of its supporters, including the server(s) and other peers, the amount of available bandwidth in a channel determines the achievable quality of service and user experience. 
According to PPLive, small channels with low popularity often have bad quality as a result of the limited, highly varying bandwidth supply from participating peers to satisfy the bandwidth demand in these channels, while popular channels may have extra upload bandwidth left []. UUSee also observes the bandwidth supply deficit in high-definition (HD) channels [], as a result of the aggregated bandwidth demand greatly exceeding the average peer upload capacity. To overcome the bandwidth resource imbalance problem in peer-assisted live streaming systems, the traditional bandwidth allocation strategy, referred to as the isolated (ISO) channel allocation, needs an update. The view-upload decoupling (VUD) strategy [3], [4], which decouples what a peer is uploading from what it is watching, is proposed to balance the global bandwidth demand and supply for the entire system. As peers are no longer limited to uploading the content they are watching, it is possible to distribute the peer upload bandwidth across channels, i.e., channels with extra bandwidth supply can support channels with bandwidth deficit. In this case, the objective of bandwidth allocation is to guarantee that the bandwidth supply for each channel meets its bandwidth demand, regardless of the channel heterogeneity. In PA-VoD systems, the bandwidth imbalance problem calls for a more delicate solution. First, peers in PA-VoD systems no longer watch the same content at the same time, even if they are in the same channel. For a peer, the video segments it requires at any time can possibly spread all over the peers in the system, which poses the content bottleneck problem. For example, a peer s neighbors may have upload bandwidth available, but they do not have what the peer needs due to the limited cache size, while other peers may have what the peer needs but they do not have extra bandwidth available since they are fully serving their neighbors. Moreover, PA-VoD systems usually provide HD video services. The playback rate of HD videos can easily exceed the average upload capacity of peers [], which causes more bandwidth consumption at the server. If the server fails to contribute enough bandwidth, the quality of HD channels will suffer. To harness more bandwidth supply from peers, PA-VoD system can incorporate incentive mechanisms to encourage more peers to contribute more. However a more important problem is to design an appropriate strategy to allocate peer upload bandwidth effectively, so that the supply of each channel meets its demand by peer-contributed resources as much as possible. This not only offers a better user experience, but also reduces the bandwidth consumption at the server. As ISPs charge VoD

2 service providers based on the 95% consumed bandwidth or the total data volume, the objective is to minimize the server bandwidth consumption in a simple, efficient and distributed manner, while guaranteeing the service quality for all peers. The content replication strategies in PA-VOD systems have been studied to achieve such a demand and supply balance [5] [7], which distribute video content to the system to maximize the chance that a peer can utilize the bandwidth from other peers, with the heterogeneous channel popularity. However, the solution for the channel playback heterogeneity is still not well-studied, given the fact that HD videos need much more bandwidth supply and occupy much more cache space. Moreover, existing work mainly focuses at the video or movie level, where the possibility that a video might not be complete at any peer has not been considered. In this paper, we aim to solve the resource imbalance problem in PA-VoD systems with the channel playback rate heterogeneity at the segment level with two constraints: ) each peer has a limited cache size; ) the playback rate of some channels in the system exceeds the average peer upload capacity, e.g., HD channels. We allocate and balance two kinds of resources, i.e., peer cache and upload bandwidth, across all the channels in the system by developing appropriate coordination between regular, standard-definition (SD) channels and HD channels. To the best of our knowledge, this is the first analysis and performance evaluation work of PA-VoD systems with the channel playback heterogeneity. Our contributions in this paper are explained as follows. ) We build mathematical models to capture the main characteristics of PA-VoD systems with the channel playback rate heterogeneity. Statistical bounds on the server bandwidth consumption are derived based on different peer behaviors. Our models lead us to the observation that given the same bandwidth supply and demand from peers in PA-VoD systems, user viewing behaviors will largely determine the bandwidth provisioning for HD channels. Sufficient upload bandwidth of peers in SD channels is not enough to guarantee the quality of HD channels. Moreover, switching between SD and HD channels is highly encouraged in order to spread the HD video content to SD peers that can later help HD peers. ) A linear programming optimization problem is formulated aiming at minimizing the server bandwidth consumption, which is solvable in a centralized manner. The solution of the optimization problem provides guaranteed theoretical bounds for any given instance of the system at any time instant, when global information such as the aggregated demand for each segment and the upload capacity and cached content of each peer is available and system-wide coordination is feasible, e.g., with a tracker. Such solutions determine the lowest server bandwidth consumption we can achieve at that given time, which demonstrates that, even with a simple firstin-first-come (FIFO) cache replacement strategy, the server bandwidth consumption can be reduced to be very close to the aforementioned lower bounds. 3) To improve scalability, we also develop heuristic algorithms to allocate the upload bandwidth of peers and manage their local cached content in a distributed manner. Their performance is compared with the solution from the linear optimization problem using extensive simulation, which shows the high efficiency of the proposed algorithms. 
Even if peers do not switch between HD and SD channels as expected, peers can apply the proposed passive cache replacement strategy, along with the bandwidth allocation algorithm, to help reduce the server bandwidth consumption.

The remainder of this paper is organized as follows. In Section II we build mathematical models to capture the steady-state characteristics of PA-VoD systems, and formulate a solvable optimization problem for the system at any time instant. Heuristic algorithms are proposed and presented in Section III, which are then evaluated in Section IV. We list the most related work in Section V. Section VI discusses future work and concludes the paper.

II. SYSTEM MODEL AND ANALYSIS

A. System Model

In this section, we build models to capture the main characteristics of PA-VoD systems and peer behaviors. We first focus on the steady state of the system at the movie level, and then detail the bandwidth allocation problem at the segment level. The terms we often use in this paper are listed as follows.

r_S, r_H: the playback rate of each SD/HD movie;
ū_p: the average upload capacity of each peer;
L: the peer cache capacity, in the number of segments;
N: the number of peers in the system;
P^c / P^m / P^s: the transfer probability matrix between all available video categories/channels/segments;
D / B: the total bandwidth demand/supply of the system;
B_p: the total available upload bandwidth from all peers;
SBC: the overall server bandwidth consumption.

In our PA-VoD system, video programs are released from a centralized server (or servers), which holds all the content of these video programs. The server is able to support any peer at any time, but the amount of bandwidth the server consumes for each peer adds up to the SBC. The PA-VoD system offers two categories of video programs, standard-definition (SD) videos/movies and high-definition (HD) videos/movies, where the playback rate of HD videos, r_H, is much higher than that of SD videos, r_S. For the rest of this paper, we use channel, video and movie interchangeably. We refer to a peer watching an SD/HD channel as an SD/HD peer or watcher.

Naturally, in P2P streaming systems, the average peer upload capacity creates a watershed for all channels, while the peer download capacity (1.5 Mbps for typical DSL connections) is usually large enough to accommodate HD movies. As the playback rate of HD channels is higher than the average peer upload capacity, it is impossible for HD peers to support themselves by only using their own upload bandwidth, regardless of any bandwidth allocation techniques applied. To guarantee the quality of service, HD peers have to stream from the server or from peers in other channels if they have the needed HD content. Thus,

HD channels are also referred to as deficit channels, and the SD channels with playback rates lower than the average peer upload capacity are surplus channels. In the system under consideration, the playback rate of HD channels is higher than the average peer upload capacity ū_p and that of SD channels is lower than ū_p, which is quite representative of today's Internet. Therefore, it is possible for SD peers to contribute their extra bandwidth to support HD peers. In this paper, if a peer is uploading to another channel that it is not watching, it is referred to as a (bandwidth) helper.

Each movie of a channel is divided into consecutive video segments. All video segments have the same size in bytes for easy cache management. In practice, segments are further divided into smaller pieces to facilitate the actual data transfer process, while in this paper, the segment is the unit for cache management. Each peer has a local cache, which can be used to save movie segments. The size of the local cache is fixed as L, which is the maximum number of movie segments a peer can hold. A typical cache size is 1 GB on each peer's hard-disk drive in PPLive. For an HD movie at 1,000 Kbps for 2 hours, the movie size will be approximately 900 MB plus overhead. Therefore, to account for the cache management overhead and simplify the analysis, we assume that the peer cache can only hold 1 complete copy of an HD movie, which is divided into 20 segments, or 2 SD movies with 10 segments each. We assume that all movies are of the same playback duration.

To ensure that a peer watches a video program smoothly without interruption, it is required that the streaming rate of the movie catches up with its playback rate. In this paper, we assume peers stream at the playback rate of the movie, from the server and/or other peers. The peer will first attempt to locate bandwidth helpers within the same channel, referred to as concurrent peers [6], and then helpers from other channels. If the desired streaming rate still cannot be achieved, it will resort to the server as the final attempt. In the ideal case, if all peers are able to get help from other peers, the server bandwidth consumption can be reduced to 0.

There are N peers considered in the system. When watching a movie, a peer starts from the beginning of the movie and watches each segment sequentially until finishing the last one. After that, the peer will transfer to another channel. In this paper, we assume the peer first determines whether it will stay in the same video category or transfer to another one. A transfer matrix P^c at the category level is used to capture this behavior, with elements p^c_{ij} defined as the probability of transferring from category i to j. After determining the next category, the peer selects a movie in the targeted video category to watch. The transfer between any two movies is determined by a transfer matrix P^m at the movie level, which implies the transfer matrix P^c. Considering the fact that a peer usually does not stay in the same movie after it finishes watching the movie, we assume that a peer never transfers to the movie it has just watched, i.e., p^m_{ij} = 0 if i = j.

B. Steady State Analysis

In this section, we present our analysis for the steady state at the movie level. According to the Markov chain analysis, through the transfer matrix P^c, we can derive useful system measures at the steady state, such as the expected number of peers in each category.
Let p^c_{ij} denote the probability that a peer transfers from category i to j, where i, j ∈ {S, H}. The steady-state vector {p^c_S, p^c_H} represents the probability of a peer watching the SD or HD category in the steady state, which can be calculated as p^c_S = p^c_{HS} / (p^c_{HS} + p^c_{SH}) and p^c_H = p^c_{SH} / (p^c_{HS} + p^c_{SH}), and the total bandwidth demand of the system is

D = N ( r_S · p^c_{HS} / (p^c_{HS} + p^c_{SH}) + r_H · p^c_{SH} / (p^c_{HS} + p^c_{SH}) ).    (1)

Note that peers do not switch channels in a synchronized way, so the actual demand will fluctuate around D, even in the steady state, but in a bounded manner. A PA-VoD system must meet the hard requirement that the total bandwidth supply should catch up with the total streaming rate of all peers to ensure that all peers are satisfied. For each peer, the streaming rate is the playback rate of the movie it is watching, if the movie is not already in its local cache. The streaming rate can be supplied by either the server or other peers. Therefore, the server bandwidth consumption (SBC) can be computed as SBC = (D − B_p)^+, where D is the total bandwidth demand, B_p = N·ū_p is the total upload capacity of all peers, and a^+ := max{a, 0}.

According to our model, there exists the case that a peer happens to watch an SD movie i already stored in its local cache. Let C_curr = X denote the event that a peer is watching category X at the current time, where X ∈ {HD, SD} represents the category of HD or SD movies, and V_curr = i the event that a peer is watching movie i. The probability that a peer is watching a movie that it has in its local cache can be estimated as

P[no demand] = Σ_{i∈{SD}} P[V_curr = i] · P[C_{curr−1} = SD, C_{curr−2} = SD, V_{curr−2} = i | V_curr = i, C_curr = SD]
             = Σ_{i∈{SD}} p^m_i Σ_{j∈{SD}} ( p^m_j p^m_{ji} / Σ_{k∈{SD}} p^m_k p^m_{ki} ) · ( p^m_i p^m_{ij} / Σ_{k∈{SD}} p^m_k p^m_{kj} ),

where curr−1 and curr−2 represent the previous time slot and the second previous time slot, respectively, p^m_{ij} is the transfer probability from movie i to j, and p^m_i is the steady-state popularity of movie i, which can be computed from the transfer matrix P^m at the movie level. For peers watching content that is already cached locally, the playback rate of that movie should be subtracted from the total bandwidth demand. Taking no demand into account, we have the first bound, Bound I, of the server bandwidth consumption in this paper:

SBC_I = (D − N·r_S·P[no demand] − B_p)^+.    (2)
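For concreteness, the steady-state fractions, the demand in (1) and the baseline SBC can be computed with a few lines of code. The following is a minimal sketch; the population, rates and transfer probabilities used in the example call are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch of the steady-state quantities in Section II-B.
def steady_state_demand(N, r_S, r_H, p_SH, p_HS, u_avg):
    """Return (p_S, p_H, D, SBC) for the two-category Markov model."""
    p_S = p_HS / (p_HS + p_SH)        # steady-state fraction of SD peers
    p_H = p_SH / (p_HS + p_SH)        # steady-state fraction of HD peers
    D = N * (r_S * p_S + r_H * p_H)   # total bandwidth demand, Eq. (1)
    B_p = N * u_avg                   # total peer upload capacity
    SBC = max(D - B_p, 0.0)           # server bandwidth consumption (D - B_p)^+
    return p_S, p_H, D, SBC

# Example (assumed values): 10,000 peers, SD at 500 Kbps, HD at 1,000 Kbps,
# 630 Kbps average upload, p_SH = 0.2, p_HS = 0.8.
print(steady_state_demand(10_000, 500, 1_000, 0.2, 0.8, 630))
```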

In PA-VoD systems, due to the limited cache size at each peer, the upload capacity may not be fully utilized, i.e., a peer cannot use its bandwidth to support other peers watching video content that it does not have in its local cache. Specifically, in our system, SD peers are likely to support HD peers. Therefore, the bandwidth that SD peers can contribute to HD peers is determined by the availability of the HD video content in the local cache of SD peers.

To utilize the bandwidth of concurrent peers watching the same movie, their temporal relationship can be exploited, which is referred to as the stratification or chain-based approach [6], [8], [9], in which peers are sorted in the temporal order of their arrivals to form a channel chain. Using the chain, a peer only streams from the concurrent peers ahead of it, as the content is highly likely to be available from early peers. In this paper, we call such bandwidth allocation transferable, as the bandwidth can be transferred from early peers to late peers without any wastage of upload bandwidth. The detailed formulation of the chain-based bandwidth allocation can be found in [6]. After the channel-chain bandwidth allocation, a peer can upload to other channels if it has extra bandwidth and if it has watched those channels recently and thus still has the video content in its local cache.

According to our assumptions, the peer cache can store at most 2 SD movies or 1 HD movie, and HD peers (except the last HD peer in the chain) can never support SD peers as they already suffer from the bandwidth deficit. We can derive the probability that an SD peer has HD video content in its local cache, where two cases are straightforward: 1) a peer finished watching an HD movie, and then transfers to an SD movie; 2) a peer watched an HD movie and an SD movie sequentially, and then transfers to another SD movie. Although these HD movies may not be available in their entirety or evenly distributed in the system, to derive the theoretical bound, we assume that through proper bandwidth allocation, the SD peers caching these HD segments can fully utilize their remaining upload bandwidth to support HD peers. The probability of the first case is calculated as

P[C_{curr−1} = HD, C_curr = SD] = P[C_curr = SD | C_{curr−1} = HD] · P[C_{curr−1} = HD] = p^c_{HS} p^c_H.    (3)

For the same reason, the second case is calculated as

P[C_{curr−2} = HD, C_{curr−1} = SD, C_curr = SD] = P[C_curr = SD | C_{curr−1} = SD] · P[C_{curr−1} = SD | C_{curr−2} = HD] · P[C_{curr−2} = HD] = p^c_{SS} p^c_{HS} p^c_H.    (4)

Therefore, the probability that a peer is watching an SD movie and has HD segments in its local cache is P[helper] = p^c_{SS} p^c_{HS} p^c_H + p^c_{HS} p^c_H. The total bandwidth available from SD peers to HD peers is B_SD = N p^c_S (ū_p − r_S) + N ū_p P[no demand], and the bandwidth deficit of HD peers is N p^c_H (r_H − ū_p). Therefore, the second bound for the server bandwidth consumption can be calculated as

SBC_II = (N p^c_H (r_H − ū_p) − B_SD · P[helper])^+,    (5)

which is tighter than Bound I in (2). Therefore, we will refer to Bound II when evaluating the performance of our heuristic algorithms, as Bound II serves as the statistical lower bound for all chain-based approaches.
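Bound II can be evaluated in the same style; again a sketch, with the no-demand probability passed in as a precomputed input (all numeric inputs are assumptions).

```python
# Sketch of Bound II in Eq. (5).
def bound_II(N, r_S, r_H, u_avg, p_S, p_H, p_SS, p_HS, p_no_demand=0.0):
    p_helper = p_SS * p_HS * p_H + p_HS * p_H                  # helper probability from Eqs. (3)-(4)
    B_SD = N * p_S * (u_avg - r_S) + N * u_avg * p_no_demand   # spare bandwidth of SD peers
    deficit = N * p_H * (r_H - u_avg)                          # aggregate HD bandwidth deficit
    return max(deficit - B_SD * p_helper, 0.0)                 # (deficit - usable supply)^+
```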
C. Segment Availability at Peers

In the previous sections, we presented the model at the movie level, where an HD movie is considered available at a peer even if only one segment of the movie is stored in the peer's local cache. In fact, such HD movies cached by SD peers are not always complete, and the availability varies for different HD segments. Here the availability of a segment is the total number of replicas/copies of the segment in the system, which greatly affects the capability of helpers to utilize their upload capacity to support the peers watching that segment. For instance, if an HD segment is rarely cached by SD peers, even if there are a large number of SD peers with available upload bandwidth, they are not able to support peers watching that HD segment due to the content bottleneck.

Traditionally, in a single-channel VoD system ("Single Movie Caching"), early segments have plenty of available copies, and late segments suffer from poor provisioning, which is a natural outcome of the sequential viewing behavior of most peers [9], []. Such a law still holds for the local cache of concurrent peers watching the same movie in a multi-channel system. However, in our model, a peer is allowed to cache movies other than the movie it is watching, which we refer to as "Multiple Movie Caching". Due to the simple FIFO replacement of segments when the local cache of a peer is full, for the same HD movie, late segments are more likely to be stored in the local cache of SD peers, which counteracts the rarity of such late segments among the peers currently watching HD channels. Therefore, to understand the capability of the system to support each segment, we now mathematically capture the availability of each segment in the PA-VoD system.

We can also build a Markov chain based on an S × S sparse transfer matrix P^s at the segment level, where each element p^s_{ij} represents the probability that a peer finishes segment i and then transfers to segment j, and S is the total number of segments of all the movies. P^s can be set up as follows. If segment j is the segment following segment i within the same movie, p^s_{ij} = 1. If segment i is the last segment of movie l and j is the first segment of movie k, p^s_{ij} = p^m_{lk}, where p^m_{lk} is the transfer probability from movie l to movie k. Otherwise, p^s_{ij} = 0.
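The sparse matrix P^s described above can be assembled directly from P^m; a sketch follows, where P^m is assumed to be given as a dictionary mapping (movie l, movie k) to p^m_{lk}, and segments_of maps each movie to its global segment indices in playback order (both are assumed representations, not the paper's data structures).

```python
from collections import defaultdict

def build_Ps(Pm, segments_of):
    """Assemble the segment-level transfer matrix P^s as a sparse dict (i, j) -> probability."""
    Ps = defaultdict(float)
    for l, segs in segments_of.items():
        for a, b in zip(segs, segs[1:]):
            Ps[(a, b)] = 1.0                       # next segment within the same movie
        last = segs[-1]
        for k, other_segs in segments_of.items():
            p = Pm.get((l, k), 0.0)
            if p > 0.0:
                Ps[(last, other_segs[0])] = p      # last segment of movie l -> first segment of movie k
    return Ps                                      # entries not listed are implicitly 0
```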

Thus, the expected number of peers watching each segment could be computed. However, we adopt a simpler approach than the Markov chain analysis at the segment level. As we assume that peers do not jump to other segments until they reach the end of a movie, each segment of a movie shares an equal probability of being watched by any peer. Therefore, in the long run, the probability that a peer watches a particular segment of a movie is equal to the probability of watching the movie divided by the number of segments in that movie, i.e., a uniform distribution within the movie. Note that, due to the sequential viewing behaviors of peers, the uniform distribution may not be observed at a random time instant. For instance, if every peer is watching the first segment of a movie, after they finish that movie and transfer to another movie, they will still watch the first segment of that movie. In this case, the number of peers watching each segment will change in waves, and thus the uniform distribution refers to the expected population of peers at each segment over a long time.

Given the probability that a peer watches movie k and segment i, the viewing history of the peer can be rebuilt according to the transfer matrix P^m at the movie level. For instance, the probability that this peer just watched movie l is calculated as

P[V_{curr−1} = l | V_curr = k] = p^m_l p^m_{lk} / Σ_{i=1}^{M} p^m_i p^m_{ik},

where p^m_i is the steady-state probability that a peer is watching movie i, p^m_{ij} is the probability that a peer transfers to movie j from i, and M is the total number of movies in the system. For the same reason, the probability that this peer watched each movie before the last one is also computable. With the viewing history, we start to fill up the peer's local cache from the currently watched movie to the movies that were watched before, and from the last segment to earlier segments within a movie, until the peer's local cache is full.

Concerning the example mentioned above, as the peer is watching segment i of movie k with probability p^m_k / S_k, where S_k is the total number of segments in movie k, the probability that each of the first (i − 1) segments of movie k is cached at the peer is also p^m_k / S_k. Therefore, such a probability for each segment is added into a global availability vector, which represents the overall probability of any segment being replicated by a peer. Also, the probability that a segment in the last watched movie is stored in the local cache of the peer is P[V_{curr−1} = l | V_curr = k] · p^m_k / S_k, if the peer cache is still able to accommodate the segment. Therefore, the probability that each segment of each movie is stored in the peer cache can be recursively calculated, given the currently watched segment and movie of the peer. As the number of all these cases is limited, we exhaustively enumerate all the possibilities that a peer is watching any segment of any movie, rebuild its local cache using the method mentioned above, and add the probability of caching each segment to the global availability vector. After the probability vector is derived, the product of the vector and the total number of peers in the system indicates the expected number of replicas of each segment in the system, i.e., the availability of all the segments. Here we omit further details due to their complexity, and the numerical result of our analysis is presented in Section IV-C.

D. Server Bandwidth Consumption Optimization

After modeling the system at the movie level and analyzing the segment availability, we continue to explore the optimal server bandwidth consumption at any given time instant. To achieve that, we model the system at a time instant, and assume that all the global information, such as the segment popularity and the content cached in each peer's local cache, is known. We define three S × N matrices, where N is the number of peers and S the total number of movie segments in the system at the time instant. The S movie segments are grouped into different movies, and listed in the sequential order of each movie.
The watching matrix W represents which peer is watching which movie segment, with its elements defined as

w_{ij} = 1 if peer j is watching movie segment i, and w_{ij} = 0 otherwise.

Two other matrices are defined in the same manner. We use a cached matrix C to describe the cache of all peers, with its elements defined as

c_{ij} = 1 if peer j has movie segment i in its local cache, and c_{ij} = 0 otherwise;

and an allocation matrix A, with its elements satisfying 0 ≤ a_{ij} ≤ 1, where a_{ij} represents the percentage of the upload bandwidth that peer j allocates to movie segment i, and a_{ij} u_j is the amount of such bandwidth allocated.

The bandwidth demand for a movie segment i is composed of the required streaming rate of all the peers watching it that do not have the segment in their local cache, which can be computed as D_i = r_i Σ_{j=1}^{N} (w_{ij} − c_{ij})^+, where r_i is the playback rate of segment i. The supply B_i is computed as B_i = Σ_{j=1}^{N} a_{ij} u_j. As the server bandwidth consumption SBC is the aggregate difference between the peer bandwidth supply and demand over all the segments whose demand is greater than the supply, we have

SBC = Σ_{i=1}^{S} (D_i − B_i)^+.    (6)

An important constraint is that a peer cannot be assigned to upload any movie segment that it does not have in its local cache, i.e., ∀i, j: a_{ij} = 0 if c_{ij} = 0. Moreover, for each peer, the sum of such bandwidth allocation percentages should be less than or equal to 1, i.e., ∀j: Σ_{i=1}^{S} a_{ij} ≤ 1. In summary, the optimization problem is formulated as

Min Σ_{i=1}^{S} ( r_i Σ_{j=1}^{N} (w_{ij} − c_{ij})^+ − Σ_{j=1}^{N} a_{ij} u_j )^+
s.t. ∀i, j: a_{ij} = 0 if c_{ij} = 0
     ∀j: Σ_{i=1}^{S} a_{ij} ≤ 1
     ∀i, j: 0 ≤ a_{ij} ≤ 1.    (7)

However, the formulation shown above exhibits a nonlinear programming problem, which is very hard to solve. Therefore, we convert the formulation into a tractable problem by introducing a new S × 1 vector U^s, with each element u^s_i defined as the amount of bandwidth allocated to movie segment i from the server. The objective function is converted to minimizing the total server bandwidth consumption Σ_{i=1}^{S} u^s_i, with the constraint that the demand of each segment is equal to the supply from the server and other peers. The problem now can be formulated as

Min Σ_{i=1}^{S} u^s_i
s.t. ∀i, j: a_{ij} = 0 if c_{ij} = 0;  ∀i: u^s_i ≥ 0
     ∀j: Σ_{i=1}^{S} a_{ij} ≤ 1
     ∀i, j: 0 ≤ a_{ij} ≤ 1
     ∀i: Σ_{j=1}^{N} u_j a_{ij} + u^s_i = r_i Σ_{j=1}^{N} (w_{ij} − c_{ij})^+,    (8)

with all a_{ij} and u^s_i as the unknown variables.
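The linear program (8) is small enough to be handed to an off-the-shelf LP solver; below is a sketch using scipy.optimize.linprog, with the decision variables laid out as the row-major flattened allocation matrix A followed by the server vector u^s. The solver choice and data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_min_sbc(W, C, u, r):
    """Solve (8): W, C are 0/1 S-by-N arrays, u the N peer upload capacities, r the S playback rates."""
    S, N = W.shape
    n_a = S * N                                             # a_ij variables first, then u^s_i variables
    c = np.concatenate([np.zeros(n_a), np.ones(S)])         # minimize sum_i u^s_i

    demand = r * np.maximum(W - C, 0).sum(axis=1)           # D_i = r_i * sum_j (w_ij - c_ij)^+

    # Equality: for each segment i, sum_j u_j * a_ij + u^s_i = D_i
    A_eq = np.zeros((S, n_a + S))
    for i in range(S):
        A_eq[i, i * N:(i + 1) * N] = u
        A_eq[i, n_a + i] = 1.0

    # Inequality: for each peer j, sum_i a_ij <= 1
    A_ub = np.zeros((N, n_a + S))
    for j in range(N):
        for i in range(S):
            A_ub[j, i * N + j] = 1.0
    b_ub = np.ones(N)

    # Bounds: a_ij in [0, 1] only if segment i is cached at peer j, otherwise fixed to 0; u^s_i >= 0
    bounds = [(0.0, 1.0 if C[i, j] else 0.0) for i in range(S) for j in range(N)]
    bounds += [(0.0, None)] * S

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
    return res.fun, res    # minimum server bandwidth consumption and the full solver output
```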

Now the formulation exhibits a linear programming problem and thus is solvable in a centralized manner. Although the optimization algorithm is impractical for large-scale systems, it does provide guaranteed performance bounds for different scenarios. Moreover, as the optimization does not even rely on the channel-chain bandwidth allocation, we expect that such a solution will lead to a server bandwidth consumption lower than Bound II, which will be verified in our performance evaluation in Section IV.

III. HEURISTIC ALGORITHMS

Although the optimization problem formulated in Section II-D is solvable in polynomial time, it is difficult to adapt to large-scale PA-VoD systems. First, the solution calls for global information and centralized coordination, which contradicts the distributed nature of PA-VoD systems. Moreover, the demand and supply of a PA-VoD system change quickly, making it challenging to obtain the solution in real time. Therefore, we resort to efficient chain-based heuristic algorithms, which can run in a distributed manner and rebalance the system in a timely fashion.

A. Peer Bandwidth Allocation

Given the availability of segments derived in Section II-C, we need to find the best way to allocate bandwidth from helpers to watchers, so that the available segments of a movie, and thus the upload bandwidth, can be transferred. Inappropriate allocation may result in the wastage of upload bandwidth within the channel chain. For instance, consider the case where two helpers he_a and he_b have the 2nd to 10th segments and the 10th to 20th segments of an HD movie, respectively, and two watchers wa_a and wa_b are watching the 6th and 10th segment of that HD movie, respectively. If we assign helper he_a to help watcher wa_b, helper he_b will lack the content to support watcher wa_a. As a result, the bandwidth of he_b is wasted due to such a content bottleneck.

We thus propose an inter-chain (or cross-channel) allocation algorithm to allocate the bandwidth of the helpers towards the watchers of an HD channel, such that the bandwidth wastage is minimized as much as possible. Before we perform the chain-based bandwidth allocation for an HD channel, a preprocessing step named inter-chain allocation is conducted. First, we find all the helpers that satisfy two conditions: 1) it has cached the needed HD video segments already; 2) it has extra available upload bandwidth.
Algorithm 1 Inter-chain bandwidth allocation
1: Assume there are m watchers WA = {wa_1, wa_2, ..., wa_m} in an HD channel chain Ch, and set the currently assigned streaming bandwidth d_{wa_j} to 0 for each watcher wa_j.
2: Find all helpers HE = {he_1, he_2, ..., he_n} who have cached part of the HD movie already, and sort them in ascending order of the number of cached HD video segments they have. Set the current available upload bandwidth u_{he_i} of each helper to its remaining upload bandwidth after it contributes to its own channel chain.
3: i ← 1 and j ← 1
4: while i ≤ n do
5:   while j ≤ m do
6:     if helper he_i has cached the segment that wa_j is watching and d_{wa_j} < r_H then
7:       B_needed ← r_H − d_{wa_j}
8:       if u_{he_i} ≤ B_needed then
9:         Assign all of he_i's upload bandwidth u_{he_i} to wa_j.
10:        d_{wa_j} ← d_{wa_j} + u_{he_i}; u_{he_i} ← 0
11:      else
12:        Assign part of he_i's upload bandwidth to wa_j.
13:        d_{wa_j} ← r_H; u_{he_i} ← u_{he_i} − B_needed; j ← j + 1
14:      end if
15:    end if
16:  end while
17:  i ← i + 1
18: end while

[Fig. 1: The inter-chain and inner-chain bandwidth allocation algorithms.]
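A compact Python rendering of Algorithm 1 is sketched below, using assumed data structures (a helper is a dict with its cached segment set and spare bandwidth; watchers are listed in chain order); once a helper's spare bandwidth is exhausted, the sketch simply moves on to the next helper.

```python
def inter_chain_allocate(watchers, helpers, r_H):
    """watchers: list of {'segment': int, 'assigned': float}, in arrival (chain) order.
    helpers:  list of {'cached': set, 'spare': float}.
    Returns a list of (helper_index, watcher_index, bandwidth) assignments."""
    # Helpers holding fewer segments (only the tail of the HD movie) are used first,
    # keeping broadly useful helpers for the watchers nobody else can serve.
    order = sorted(range(len(helpers)), key=lambda h: len(helpers[h]['cached']))
    assignments = []
    for h in order:
        for w, watcher in enumerate(watchers):
            if helpers[h]['spare'] <= 0:
                break                                # this helper is exhausted
            if watcher['segment'] not in helpers[h]['cached']:
                continue                             # content bottleneck: cannot serve this watcher
            needed = r_H - watcher['assigned']
            if needed <= 0:
                continue                             # watcher already satisfied
            give = min(needed, helpers[h]['spare'])
            helpers[h]['spare'] -= give
            watcher['assigned'] += give
            assignments.append((h, w, give))
    return assignments
```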

After that, these helpers are sorted in ascending order based on the portion of HD video segments they have. Helpers with a small portion of the HD video segments only have the last few segments of the HD movie, and thus can only be allocated to a small number of later watchers. To take this factor into consideration, earlier watchers are assigned a higher priority to send requests to helpers, and helpers with a smaller portion of HD video segments will allocate their upload bandwidth to the watchers first. If the bandwidth needed by the watcher is larger than or equal to the helper's available upload bandwidth, we assign all the upload bandwidth of the helper to support the watcher. Otherwise, we only assign part of the helper's upload bandwidth to meet the watcher's streaming demand. If the watcher is satisfied, the next watcher in the channel chain contacts the helpers. The algorithm terminates when all helpers have used up their upload bandwidth, or no helpers can be further assigned due to the content bottleneck. The inter-chain bandwidth allocation is described in Algorithm 1.

Fig. 1 illustrates the inter-chain and inner-chain bandwidth allocation process. Assume the playback rates of the SD and HD movies are 500 Kbps and 1,000 Kbps, respectively, and the upload capacity of each peer varies from 400 Kbps to 800 Kbps. Helpers 1 and 2 from channel SD 1 have cached several HD video segments, and are each able to offer 100 Kbps of bandwidth. Helpers 3 and 4 from channel SD 2 have cached half of the HD video segments, and each can provide 600 Kbps of bandwidth. Since Watcher 1 is watching segment 1 of the HD movie, all the helpers can provide this segment. So Helpers 1, 2 and 3 will assign all of their upload bandwidth to Watcher 1, and Helper 4 only needs to provide 200 Kbps to meet Watcher 1's remaining bandwidth requirement. As Helper 4 still has 400 Kbps of extra bandwidth, it allocates all the remaining upload bandwidth to Watcher 2. Thus the content bottleneck is removed, and the upload bandwidth of all helpers is fully utilized.

The inner-chain allocation algorithm will be invoked after the inter-chain bandwidth allocation, following the traditional chain-based allocation strategy. Early watchers in the HD channel chain will use their upload bandwidth to support later watchers. If the bandwidth needed by a later watcher is larger than or equal to the early watchers' upload bandwidth, all of the foregoing watchers' upload bandwidth can be assigned to help the later watcher. Otherwise, we only assign part of the early watchers' upload bandwidth to satisfy the later watcher's download demand, i.e., the playback rate of the movie it is watching. When early watchers do not have enough bandwidth, the server has to compensate for the bandwidth shortage, which results in server bandwidth consumption. Through the inter-chain and inner-chain bandwidth allocation, we make the peer upload bandwidth as transferable as possible, and thus avoid the wastage of peer upload bandwidth and consequently reduce the server bandwidth consumption.

The inner-chain algorithm for SD peers is slightly different from that for HD chains. Consider that it is possible that an SD peer is watching a movie that is still cached. In this case, there will be no bandwidth demand for downloading the content. Moreover, the peer has the capability to support any concurrent peers watching that SD movie. Therefore, we utilize such bandwidth to support the earliest peers in that SD channel.
The benefits are two fold: ) the server contribution for the earliest peers in the traditional channel chain-based allocation can be waived; and ) such bandwidth supporting the earliest peers can be transferred to late peers, where they have a better chance of caching more HD segments. Both the inter-chain and inner-chain algorithm can be performed in a distributed way with the assistance of the tracker. To form the channel chain, a peer first requests a list of concurrent peers and helper peers from the tracker, and then contacts its chain neighbors and helpers to join the channel chain. If a peer i is selected as a helper, either due to an incentive mechanism, or being forced by the tracker based on the peer s viewing history, which is usually available when the peer stays in the system for some time and has watched a couple of movies, the tracker will forward a helper list to peer i and has it enrolled as a helper. Helpers periodically exchange their buffer bitmap through heartbeat messages to maintain the help chain and coordinate with the peer watching the channel and being helped. B. Peer Cache Management Besides the simple FIFO cache replacement, here are still cases that a more active caching strategy should be involved. First, due to system dynamics, the number of peers watching each channel is always changing. To adapt to such dynamics, we can always keep the HD segments that have been watched before in an SD peer, which will guarantee the sufficient availability of HD segments within the system, which is referred to as passive caching. Second, if a new movie is released in the system and peers flash crowd to watch the movie, its content in the system is limited initially and peers have to resort to the server. To avoid such problems, some peers can be assigned to start downloading the content of a new movie before it is released, even if they are not going to watch it. Following [6], we refer to this approach as to active caching. As active caching has been studied by some existing work, we omit the further details, and only explain and explore passive caching in the remainder of the paper. Since the peer cache can only store HD movie or SD movies, when a watcher switches to an HD channel, the existing cached segments will be removed gradually no matter what kind of movie it has watched before. When a watcher transfers from an HD movie to an SD movie, the first half number of HD video segments will be removed gradually. Only when a watcher switches from an SD movie to another SD movie after watching an HD movie, the passive caching will be invoked. As watchers watching SD movies can serve as helpers, the cached HD movie segments should be preserved as long as possible. To make room for new SD video segments, the segments of the previous SD movie will be removed gradually. In this case, the FIFO will be applied to the segments of the to-be-removed SD movie only. 7
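The passive-caching rule for the SD-to-SD transition (after an HD movie) can be expressed as a small eviction routine. This is a sketch under an assumed cache representation, covering only that rule; the HD-related transitions described above are handled separately.

```python
def evict_for_new_sd_segment(cache, capacity, prev_sd_movie, incoming):
    """cache: list of (movie_id, segment_id) in insertion order; incoming: the new SD segment.
    Evict FIFO-style, but only among the previous SD movie's segments, so cached HD
    segments are preserved for helping HD peers (passive caching)."""
    while len(cache) >= capacity:
        victims = [entry for entry in cache if entry[0] == prev_sd_movie]
        if victims:
            cache.remove(victims[0])   # oldest segment of the to-be-removed SD movie
        else:
            cache.pop(0)               # fall back to plain FIFO if that movie is already gone
    cache.append(incoming)
```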

C. Practical Issues

1) Chain Maintenance: In this paper, we assume that channel and helper chains can be maintained in real time, which is difficult under peer churn. On one hand, watchers may finish watching the current segment and move to the next segment of the movie being watched, and helpers may switch to HD movies and thus be removed from the helper chain. On the other hand, new peers may join the system and become helpers, and existing helpers may quit the system. As a result, such helpers are no longer able to support the watchers, while new helpers need to be assigned to watchers. Therefore, an unbalanced system with an inappropriate bandwidth allocation may consume more bandwidth supply from the server. To balance the system, the heuristic algorithm can be invoked periodically. However, we leave the study of how the frequency of such re-balancing affects the performance of the system for our future work.

2) Overhead Control: The maintenance of channel and helper chains also introduces overhead. To reduce the burden of processing such maintenance messages at the tracker, it is better to avoid the message exchange between peers and the tracker as much as possible, and the channel and helper chains should be maintained mainly by the peers themselves. Therefore, we can restrict the message exchange between peers and the tracker to three parts: 1) each newly-joined watcher sends a request to the tracker to retrieve the initial list of the peers who are watching the same movie and the helpers of that movie, and joins the channel chain with the knowledge of the corresponding helpers; 2) a potential helper gets the initial list of the existing helpers of a movie from the tracker to determine its position in the helper chain, contacts those helpers to get enrolled in the chain, and then periodically exchanges its local cache information through bitmap messages with other helpers and watchers to maintain the helper chain and coordinate with the watchers; 3) watchers and helpers send heartbeat messages as liveness notifications to the tracker. Besides the periodic liveness notifications, peers only interact with the tracker when they join a channel or helper chain. After that, the maintenance of chains is mainly performed by peers, through periodically exchanged messages. Therefore, we can minimize the interaction between peers and the tracker, and thus reduce the risk of overloading the tracker due to the huge amount of message exchange with peers and the workload of managing watcher and helper chains.

IV. PERFORMANCE EVALUATION

In this section, we evaluate our heuristic algorithms, in comparison with the performance bounds and the optimization solutions obtained in Section II.

A. System Setting

We developed a Java-based event-driven simulator to emulate a PA-VoD system with multiple channels. The playback rates of the SD-category and HD-category movies are set to 500 Kbps and 1,000 Kbps, respectively. The peer upload capacity follows the distribution listed in Table I, with an average capacity of 630 Kbps.

TABLE I: The distribution of peer upload capacity
Upload Capacity: 800 Kbps | 500 Kbps | 400 Kbps
Percentage: 50% | 30% | 20%

The peer average upload capacity is set higher than the usual value of about 530 Kbps [], as the upload capacity of residential Internet access using DSL or cable modem in North America nowadays is approaching 800 Kbps. At the beginning of the simulation, we let a number of peers join the system at the same time, and each picks a random segment of a random movie to watch.
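For reference, drawing peer upload capacities from the distribution in Table I is a one-liner with the standard library; the capacities and percentages below are the reconstructed table values.

```python
import random

def sample_upload_capacity():
    # (capacity in Kbps, probability); expected value = 0.5*800 + 0.3*500 + 0.2*400 = 630 Kbps
    levels, weights = (800, 500, 400), (0.5, 0.3, 0.2)
    return random.choices(levels, weights=weights, k=1)[0]
```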
All the segments ahead of the initially watched segment are assumed to be stored in the peer's local cache, which indicates that the number of peers watching every category, every channel of a category, and every segment of a movie follows a uniform distribution initially. The transfer matrices at the category level are set differently to emulate user viewing behaviors in different scenarios. Given enough time, the distribution of peers in each category converges to the steady state determined by the transfer matrix accordingly, which has been confirmed through our simulation results.

There are a fixed number of SD movies and HD movies in the system. Within each category, the popularity of the movies follows a Zipf distribution. Let p^z_i denote the popularity of movie i within its category C_i (S or H) following the Zipf distribution. The transfer matrix at the movie level is set as follows. Peers are not allowed to transfer to the movie they just watched (i.e., p^m_{ij} = 0 if i = j). For transferring from movie i to movie j within the same category (i.e., C_i = C_j), if i ≠ j, set p^m_{ij} = p^c_{C_iC_i} · p^z_j / Σ_{k∈{C_i}, k≠i} p^z_k, where p^c_{C_iC_i} is the probability that the peer stays in the same category C_i. If a peer transfers to a different category (i.e., C_i ≠ C_j), the probability of transferring to the movies in that category strictly follows the Zipf distribution of movies in that category, as the peer is always watching a new movie, i.e., p^m_{ij} = p^c_{C_iC_j} · p^z_j. Note that the steady-state popularity of each movie in its category will be slightly different from its original Zipf-determined popularity, and the movie popularity will also be affected by the transfer matrix P^c between the SD and HD categories, as the steady-state vector derived from P^c determines the overall popularity of each category.
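The movie-level transfer matrix described above can be generated programmatically; the sketch below assumes movie counts and a Zipf exponent purely for illustration, and takes the 2x2 category matrix P^c as input.

```python
import numpy as np

def movie_transfer_matrix(n_sd, n_hd, Pc, zipf_s=1.0):
    """Build P^m from per-category Zipf popularities and the category matrix P^c."""
    n = n_sd + n_hd
    cat = np.array([0] * n_sd + [1] * n_hd)                     # 0 = SD, 1 = HD
    rank = np.concatenate([np.arange(1, n_sd + 1), np.arange(1, n_hd + 1)]).astype(float)
    pz = rank ** (-zipf_s)
    for c in (0, 1):
        pz[cat == c] /= pz[cat == c].sum()                      # normalize Zipf within each category

    Pm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                                        # never replay the movie just watched
            if cat[i] == cat[j]:                                # stay in the same category
                denom = pz[cat == cat[i]].sum() - pz[i]         # renormalize, excluding movie i
                Pm[i, j] = Pc[cat[i], cat[i]] * pz[j] / denom
            else:                                               # switch to the other category
                Pm[i, j] = Pc[cat[i], cat[j]] * pz[j]
    return Pm                                                   # each row sums to 1

# Example with assumed counts, using the Case 1 category matrix from the evaluation:
# Pm = movie_transfer_matrix(20, 10, np.array([[0.8, 0.2], [0.8, 0.2]]))
```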

We study three different user viewing behaviors in the PA-VoD system. The transfer matrices between the SD and HD categories are [0.8, 0.2; 0.8, 0.2] for Case 1, [0.9, 0.1; 0.4, 0.6] for Case 2, and [0.5, 0.5; 0.8, 0.2] for Case 3, respectively. The channel popularity in the steady state of the three cases is illustrated in Fig. 2.

[Fig. 2: Movie popularity in the steady state of the three cases.]

Case 1 and Case 2 both indicate the surplus mode, in which the total upload capacity of peers is larger than the aggregate required streaming rate, and thus both Bound I and Bound II are calculated as zero using (2) and (5). In Fig. 2, the two curves representing Case 1 and Case 2 overlap with each other, which means the popularity of the SD and HD channels in these two cases is almost the same. However, different user behaviors are exhibited by these two cases. Compared with Case 2, peers in Case 1 are more active in transferring to a different category. Case 3 reflects the system in deficit mode, i.e., not able to survive on the peer-contributed bandwidth only, as a large portion of peers are watching HD movies, which consumes a huge amount of bandwidth (the HD channels in Fig. 2). Therefore, the server has to compensate for such a bandwidth deficit of the system, which corresponds to Bound I.

B. Helper Peers

As we can tell, peer viewing behaviors (i.e., transferring between video categories and channels) determine the number of SD peers and thus SD helpers. For Case 1 and Case 2 defined above, note that the steady-state popularity of the categories is the same for these two cases, which indicates the same expected number of SD peers, and thus the same total bandwidth demand and supply from peers, i.e., the same Bound I. However, in Case 1, peers tend to transfer between HD and SD videos more frequently than in Case 2, and thus bring more helpers for HD peers, which is shown in Fig. 3. As a result, the bandwidth supply from helpers to HD peers in Case 1 will be better than in Case 2, if a proper bandwidth allocation algorithm is adopted. This gives us the insight that when designing the incentive mechanism for PA-VoD systems, it is desirable to encourage peers to transfer to SD channels immediately after they finish watching an HD movie. First, the bandwidth requirement for HD channels will be reduced, as peers staying in HD channels consume more bandwidth. Moreover, after watching an HD movie and then transferring to an SD movie, the peer will have the HD content in its local cache, and thus is able to act as a helper to support HD peers using its extra bandwidth.

Nevertheless, a sufficient number of helpers is only a necessary condition for reducing the server bandwidth consumption. The helpers have to be distributed in proportion to the numbers of watchers of the HD channels. For instance, if the bandwidth supply of the helpers for a particular HD channel overwhelms the actual extra demand of that HD channel after the HD watchers have fully utilized their upload bandwidth to support themselves, then although the server bandwidth consumption for this HD channel will be minimized to 0, other HD channels may suffer from insufficient supply from helpers, given the fact that the number of HD helpers in the system is finite and each HD helper can only cache one HD movie. In this paper, according to the setting of our model and simulation, if an HD movie is more popular, it is more likely to be watched and then cached by SD peers. Therefore, in the steady state of the system, the more peers a particular HD channel has, the more segments of the HD movie will be cached in the system, and thus the more helpers the HD channel will have. We refer to this phenomenon as the self-adaptivity of the system. We capture two snapshots of the system in Fig. 4, after it has converged to its steady state. This time we only list the HD movies. Two peer populations, a smaller N and a larger N, are considered, and the caching strategy is a simple FIFO.
The figure clearly demonstrates the self-adaptivity of the PA-VoD system at the movie level. Given more watchers in an HD channel, there will be more helpers from SD channels where peers previously watched the HD movie and still have the HD segments in their local cache. Moreover, the system is more stable given a larger peer population, as the curve for the larger peer population demonstrates a better-balanced provisioning for HD channels, which will be detailed in Section IV-E. However, as the number of helpers is limited for each HD channel, the total available bandwidth of the helpers is still insufficient to compensate for the bandwidth deficit of the HD channels, where the server has to offer the extra bandwidth supply. To this point, what a bandwidth allocation algorithm can do is to fully utilize the limited bandwidth resource of the helpers (i.e., make it as transferable as possible), such that the bandwidth wastage during the allocation process is minimized, and the best support for the HD channels and the minimum server bandwidth consumption are therefore guaranteed.

[Fig. 3: Number of helpers with the two peer viewing behaviors, comparing analysis and simulation for Case 1 and Case 2 over time.]


More information

Resource analysis of network virtualization through user and network

Resource analysis of network virtualization through user and network Resource analysis of network virtualization through user and network P.N.V.VAMSI LALA, Cloud Computing, Master of Technology, SRM University, Potheri. Mr.k.Venkatesh, Asst.Professor (Sr.G), Information

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Google* 정학수, 최주영 1 Outline Introduction Design Overview System Interactions Master Operation Fault Tolerance and Diagnosis Conclusions

More information

arxiv: v2 [cs.ni] 23 May 2016

arxiv: v2 [cs.ni] 23 May 2016 Simulation Results of User Behavior-Aware Scheduling Based on Time-Frequency Resource Conversion Hangguan Shan, Yani Zhang, Weihua Zhuang 2, Aiping Huang, and Zhaoyang Zhang College of Information Science

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

! Parallel machines are becoming quite common and affordable. ! Databases are growing increasingly large

! Parallel machines are becoming quite common and affordable. ! Databases are growing increasingly large Chapter 20: Parallel Databases Introduction! Introduction! I/O Parallelism! Interquery Parallelism! Intraquery Parallelism! Intraoperation Parallelism! Interoperation Parallelism! Design of Parallel Systems!

More information

Chapter 20: Parallel Databases

Chapter 20: Parallel Databases Chapter 20: Parallel Databases! Introduction! I/O Parallelism! Interquery Parallelism! Intraquery Parallelism! Intraoperation Parallelism! Interoperation Parallelism! Design of Parallel Systems 20.1 Introduction!

More information

Chapter 20: Parallel Databases. Introduction

Chapter 20: Parallel Databases. Introduction Chapter 20: Parallel Databases! Introduction! I/O Parallelism! Interquery Parallelism! Intraquery Parallelism! Intraoperation Parallelism! Interoperation Parallelism! Design of Parallel Systems 20.1 Introduction!

More information

ECE 669 Parallel Computer Architecture

ECE 669 Parallel Computer Architecture ECE 669 Parallel Computer Architecture Lecture 9 Workload Evaluation Outline Evaluation of applications is important Simulation of sample data sets provides important information Working sets indicate

More information

On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services

On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services Zhiyong Liu, CATR Prof. Zhili Sun, UniS Dr. Dan He, UniS Denian Shi, CATR Agenda Introduction Background Problem Statement

More information

arxiv: v3 [cs.ni] 3 May 2017

arxiv: v3 [cs.ni] 3 May 2017 Modeling Request Patterns in VoD Services with Recommendation Systems Samarth Gupta and Sharayu Moharir arxiv:1609.02391v3 [cs.ni] 3 May 2017 Department of Electrical Engineering, Indian Institute of Technology

More information

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien

CERIAS Tech Report Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien CERIAS Tech Report 2003-56 Autonomous Transaction Processing Using Data Dependency in Mobile Environments by I Chung, B Bhargava, M Mahoui, L Lilien Center for Education and Research Information Assurance

More information

An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System

An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System Bo Li,GabrielY.Keung,SusuXie,Fangming Liu,YeSun and Hao Yin Hong Kong University of Science and Technology Tsinghua

More information

Adaptive Server Allocation for Peer-assisted VoD

Adaptive Server Allocation for Peer-assisted VoD Adaptive Server Allocation for Peer-assisted VoD Konstantin Pussep, Osama Abboud, Florian Gerlach, Ralf Steinmetz, Thorsten Strufe Konstantin Pussep Konstantin.Pussep@KOM.tu-darmstadt.de Tel.+49 6151 165188

More information

Towards Low-Redundancy Push-Pull P2P Live Streaming

Towards Low-Redundancy Push-Pull P2P Live Streaming Towards Low-Redundancy Push-Pull P2P Live Streaming Zhenjiang Li, Yao Yu, Xiaojun Hei and Danny H.K. Tsang Department of Electronic and Computer Engineering The Hong Kong University of Science and Technology

More information

Caching video contents in IPTV systems with hierarchical architecture

Caching video contents in IPTV systems with hierarchical architecture Caching video contents in IPTV systems with hierarchical architecture Lydia Chen 1, Michela Meo 2 and Alessandra Scicchitano 1 1. IBM Zurich Research Lab email: {yic,als}@zurich.ibm.com 2. Politecnico

More information

Achieve Significant Throughput Gains in Wireless Networks with Large Delay-Bandwidth Product

Achieve Significant Throughput Gains in Wireless Networks with Large Delay-Bandwidth Product Available online at www.sciencedirect.com ScienceDirect IERI Procedia 10 (2014 ) 153 159 2014 International Conference on Future Information Engineering Achieve Significant Throughput Gains in Wireless

More information

7. Decision or classification trees

7. Decision or classification trees 7. Decision or classification trees Next we are going to consider a rather different approach from those presented so far to machine learning that use one of the most common and important data structure,

More information

Enhancing Downloading Time By Using Content Distribution Algorithm

Enhancing Downloading Time By Using Content Distribution Algorithm RESEARCH ARTICLE OPEN ACCESS Enhancing Downloading Time By Using Content Distribution Algorithm VILSA V S Department of Computer Science and Technology TKM Institute of Technology, Kollam, Kerala Mailid-vilsavijay@gmail.com

More information

Chapter 18: Parallel Databases

Chapter 18: Parallel Databases Chapter 18: Parallel Databases Database System Concepts, 6 th Ed. See www.db-book.com for conditions on re-use Chapter 18: Parallel Databases Introduction I/O Parallelism Interquery Parallelism Intraquery

More information

Chapter 18: Parallel Databases. Chapter 18: Parallel Databases. Parallelism in Databases. Introduction

Chapter 18: Parallel Databases. Chapter 18: Parallel Databases. Parallelism in Databases. Introduction Chapter 18: Parallel Databases Chapter 18: Parallel Databases Introduction I/O Parallelism Interquery Parallelism Intraquery Parallelism Intraoperation Parallelism Interoperation Parallelism Design of

More information

Episode 5. Scheduling and Traffic Management

Episode 5. Scheduling and Traffic Management Episode 5. Scheduling and Traffic Management Part 3 Baochun Li Department of Electrical and Computer Engineering University of Toronto Outline What is scheduling? Why do we need it? Requirements of a scheduling

More information

P2P VoD Systems: Modelling and Performance

P2P VoD Systems: Modelling and Performance P2P VoD Systems: Modelling and Performance Samuli Aalto, Aalto University, Finland Pasi Lassila, Aalto University, Finland Niklas Raatikainen, HIIT, Finland Petri Savolainen, HIIT, Finland Sasu Tarkoma,

More information

Using Synology SSD Technology to Enhance System Performance Synology Inc.

Using Synology SSD Technology to Enhance System Performance Synology Inc. Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD

More information

The Scalability of Swarming Peer-to-Peer Content Delivery

The Scalability of Swarming Peer-to-Peer Content Delivery The Scalability of Swarming Peer-to-Peer Content Delivery Daniel Zappala Brigham Young University zappala@cs.byu.edu with Daniel Stutzbach Reza Rejaie University of Oregon Page 1 Motivation Small web sites

More information

SECURED SOCIAL TUBE FOR VIDEO SHARING IN OSN SYSTEM

SECURED SOCIAL TUBE FOR VIDEO SHARING IN OSN SYSTEM ABSTRACT: SECURED SOCIAL TUBE FOR VIDEO SHARING IN OSN SYSTEM J.Priyanka 1, P.Rajeswari 2 II-M.E(CS) 1, H.O.D / ECE 2, Dhanalakshmi Srinivasan Engineering College, Perambalur. Recent years have witnessed

More information

Input/Output Management

Input/Output Management Chapter 11 Input/Output Management This could be the messiest aspect of an operating system. There are just too much stuff involved, it is difficult to develop a uniform and consistent theory to cover

More information

Study of Load Balancing Schemes over a Video on Demand System

Study of Load Balancing Schemes over a Video on Demand System Study of Load Balancing Schemes over a Video on Demand System Priyank Singhal Ashish Chhabria Nupur Bansal Nataasha Raul Research Scholar, Computer Department Abstract: Load balancing algorithms on Video

More information

Stretch-Optimal Scheduling for On-Demand Data Broadcasts

Stretch-Optimal Scheduling for On-Demand Data Broadcasts Stretch-Optimal Scheduling for On-Demand Data roadcasts Yiqiong Wu and Guohong Cao Department of Computer Science & Engineering The Pennsylvania State University, University Park, PA 6 E-mail: fywu,gcaog@cse.psu.edu

More information

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0. IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

Optical Packet Switching

Optical Packet Switching Optical Packet Switching DEISNet Gruppo Reti di Telecomunicazioni http://deisnet.deis.unibo.it WDM Optical Network Legacy Networks Edge Systems WDM Links λ 1 λ 2 λ 3 λ 4 Core Nodes 2 1 Wavelength Routing

More information

CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION

CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION CHAPTER 5 ANT-FUZZY META HEURISTIC GENETIC SENSOR NETWORK SYSTEM FOR MULTI - SINK AGGREGATED DATA TRANSMISSION 5.1 INTRODUCTION Generally, deployment of Wireless Sensor Network (WSN) is based on a many

More information

Caching and Demand-Paged Virtual Memory

Caching and Demand-Paged Virtual Memory Caching and Demand-Paged Virtual Memory Definitions Cache Copy of data that is faster to access than the original Hit: if cache has copy Miss: if cache does not have copy Cache block Unit of cache storage

More information

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE

RAID SEMINAR REPORT /09/2004 Asha.P.M NO: 612 S7 ECE RAID SEMINAR REPORT 2004 Submitted on: Submitted by: 24/09/2004 Asha.P.M NO: 612 S7 ECE CONTENTS 1. Introduction 1 2. The array and RAID controller concept 2 2.1. Mirroring 3 2.2. Parity 5 2.3. Error correcting

More information

Architecture for Cooperative Prefetching in P2P Video-on- Demand System

Architecture for Cooperative Prefetching in P2P Video-on- Demand System Architecture for Cooperative Prefetching in P2P Video-on- Demand System Ubaid Abbasi and Toufik Ahmed CNRS LaBRI Lab. University of Bordeaux, France 351, Cours de la Libération Talence Cedex, France {abbasi,

More information

Demand fetching is commonly employed to bring the data

Demand fetching is commonly employed to bring the data Proceedings of 2nd Annual Conference on Theoretical and Applied Computer Science, November 2010, Stillwater, OK 14 Markov Prediction Scheme for Cache Prefetching Pranav Pathak, Mehedi Sarwar, Sohum Sohoni

More information

CSC630/CSC730 Parallel & Distributed Computing

CSC630/CSC730 Parallel & Distributed Computing CSC630/CSC730 Parallel & Distributed Computing Analytical Modeling of Parallel Programs Chapter 5 1 Contents Sources of Parallel Overhead Performance Metrics Granularity and Data Mapping Scalability 2

More information

6.2 DATA DISTRIBUTION AND EXPERIMENT DETAILS

6.2 DATA DISTRIBUTION AND EXPERIMENT DETAILS Chapter 6 Indexing Results 6. INTRODUCTION The generation of inverted indexes for text databases is a computationally intensive process that requires the exclusive use of processing resources for long

More information

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007)

Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm. Santos and Mateus (2007) In the name of God Crew Scheduling Problem: A Column Generation Approach Improved by a Genetic Algorithm Spring 2009 Instructor: Dr. Masoud Yaghini Outlines Problem Definition Modeling As A Set Partitioning

More information

Lecture 4: Principles of Parallel Algorithm Design (part 4)

Lecture 4: Principles of Parallel Algorithm Design (part 4) Lecture 4: Principles of Parallel Algorithm Design (part 4) 1 Mapping Technique for Load Balancing Minimize execution time Reduce overheads of execution Sources of overheads: Inter-process interaction

More information

Joint Optimization of Content Replication and Server Selection for Video-On-Demand

Joint Optimization of Content Replication and Server Selection for Video-On-Demand Joint Optimization of Content Replication and Server Selection for Video-On-Demand Huan Huang Pengye Xia S.-H. Gary Chan Department of Compute Science and Engineering The Hong Kong University of Science

More information

Parallel Computing. Slides credit: M. Quinn book (chapter 3 slides), A Grama book (chapter 3 slides)

Parallel Computing. Slides credit: M. Quinn book (chapter 3 slides), A Grama book (chapter 3 slides) Parallel Computing 2012 Slides credit: M. Quinn book (chapter 3 slides), A Grama book (chapter 3 slides) Parallel Algorithm Design Outline Computational Model Design Methodology Partitioning Communication

More information

QoE-aware Traffic Shaping for HTTP Adaptive Streaming

QoE-aware Traffic Shaping for HTTP Adaptive Streaming , pp.33-44 http://dx.doi.org/10.14257/ijmue.2014.9.2.04 QoE-aware Traffic Shaping for HTTP Adaptive Streaming Xinying Liu 1 and Aidong Men 2 1,2 Beijing University of Posts and Telecommunications No.10

More information

Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet

Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet Sachin Agarwal 1, Jatinder Pal Singh 1, Aditya Mavlankar 2, Pierpaolo Bacchichet 2, and Bernd Girod 2 1

More information

Peer-to-Peer Systems. Chapter General Characteristics

Peer-to-Peer Systems. Chapter General Characteristics Chapter 2 Peer-to-Peer Systems Abstract In this chapter, a basic overview is given of P2P systems, architectures, and search strategies in P2P systems. More specific concepts that are outlined include

More information

AutoTune: Game-based Adaptive Bitrate Streaming in P2P-Assisted Cloud-Based VoD Systems

AutoTune: Game-based Adaptive Bitrate Streaming in P2P-Assisted Cloud-Based VoD Systems AutoTune: Game-based Adaptive Bitrate Streaming in P2P-Assisted Cloud-Based VoD Systems Yuhua Lin and Haiying Shen Dept. of Electrical and Computer Engineering Clemson University, SC, USA Outline Introduction

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No.

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No. Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture No. # 13 Transportation Problem, Methods for Initial Basic Feasible

More information

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Sergey Gorinsky Harrick Vin Technical Report TR2000-32 Department of Computer Sciences, University of Texas at Austin Taylor Hall

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

Is BranchCache right for remote, serverless software distribution?

Is BranchCache right for remote, serverless software distribution? Is BranchCache right for remote, serverless software distribution? 1E Technical Whitepaper Microsoft BranchCache and System Center Configuration Manager 2007 Abstract BranchCache is a new feature available

More information

Enhancing Cloud Resource Utilisation using Statistical Analysis

Enhancing Cloud Resource Utilisation using Statistical Analysis Institute of Advanced Engineering and Science International Journal of Cloud Computing and Services Science (IJ-CLOSER) Vol.3, No.1, February 2014, pp. 1~25 ISSN: 2089-3337 1 Enhancing Cloud Resource Utilisation

More information

A Genetic Algorithm for Multiprocessor Task Scheduling

A Genetic Algorithm for Multiprocessor Task Scheduling A Genetic Algorithm for Multiprocessor Task Scheduling Tashniba Kaiser, Olawale Jegede, Ken Ferens, Douglas Buchanan Dept. of Electrical and Computer Engineering, University of Manitoba, Winnipeg, MB,

More information

Multiprocessing and Scalability. A.R. Hurson Computer Science and Engineering The Pennsylvania State University

Multiprocessing and Scalability. A.R. Hurson Computer Science and Engineering The Pennsylvania State University A.R. Hurson Computer Science and Engineering The Pennsylvania State University 1 Large-scale multiprocessor systems have long held the promise of substantially higher performance than traditional uniprocessor

More information

Incentive-Compatible Caching and Inter-Domain Traffic Engineering in CCN

Incentive-Compatible Caching and Inter-Domain Traffic Engineering in CCN Incentive-Compatible Caching and Inter-Domain Traffic Engineering in CCN Xun Shao, Hitoshi Asaeda 2016-05-19 Na9onal Ins9tute of Informa9on and Communica9ons Technology (NICT) 1 Outline Caching and inter-domain

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Performing MapReduce on Data Centers with Hierarchical Structures

Performing MapReduce on Data Centers with Hierarchical Structures INT J COMPUT COMMUN, ISSN 1841-9836 Vol.7 (212), No. 3 (September), pp. 432-449 Performing MapReduce on Data Centers with Hierarchical Structures Z. Ding, D. Guo, X. Chen, X. Luo Zeliu Ding, Deke Guo,

More information

Mark Sandstrom ThroughPuter, Inc.

Mark Sandstrom ThroughPuter, Inc. Hardware Implemented Scheduler, Placer, Inter-Task Communications and IO System Functions for Many Processors Dynamically Shared among Multiple Applications Mark Sandstrom ThroughPuter, Inc mark@throughputercom

More information

! Design constraints. " Component failures are the norm. " Files are huge by traditional standards. ! POSIX-like

! Design constraints.  Component failures are the norm.  Files are huge by traditional standards. ! POSIX-like Cloud background Google File System! Warehouse scale systems " 10K-100K nodes " 50MW (1 MW = 1,000 houses) " Power efficient! Located near cheap power! Passive cooling! Power Usage Effectiveness = Total

More information

Hadoop Virtualization Extensions on VMware vsphere 5 T E C H N I C A L W H I T E P A P E R

Hadoop Virtualization Extensions on VMware vsphere 5 T E C H N I C A L W H I T E P A P E R Hadoop Virtualization Extensions on VMware vsphere 5 T E C H N I C A L W H I T E P A P E R Table of Contents Introduction... 3 Topology Awareness in Hadoop... 3 Virtual Hadoop... 4 HVE Solution... 5 Architecture...

More information

vsan 6.6 Performance Improvements First Published On: Last Updated On:

vsan 6.6 Performance Improvements First Published On: Last Updated On: vsan 6.6 Performance Improvements First Published On: 07-24-2017 Last Updated On: 07-28-2017 1 Table of Contents 1. Overview 1.1.Executive Summary 1.2.Introduction 2. vsan Testing Configuration and Conditions

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Chapter 8 Virtual Memory What are common with paging and segmentation are that all memory addresses within a process are logical ones that can be dynamically translated into physical addresses at run time.

More information

NPTEL Course Jan K. Gopinath Indian Institute of Science

NPTEL Course Jan K. Gopinath Indian Institute of Science Storage Systems NPTEL Course Jan 2012 (Lecture 39) K. Gopinath Indian Institute of Science Google File System Non-Posix scalable distr file system for large distr dataintensive applications performance,

More information

Lecture 14: Cache & Virtual Memory

Lecture 14: Cache & Virtual Memory CS 422/522 Design & Implementation of Operating Systems Lecture 14: Cache & Virtual Memory Zhong Shao Dept. of Computer Science Yale University Acknowledgement: some slides are taken from previous versions

More information

FILE SYSTEMS, PART 2. CS124 Operating Systems Fall , Lecture 24

FILE SYSTEMS, PART 2. CS124 Operating Systems Fall , Lecture 24 FILE SYSTEMS, PART 2 CS124 Operating Systems Fall 2017-2018, Lecture 24 2 Last Time: File Systems Introduced the concept of file systems Explored several ways of managing the contents of files Contiguous

More information

Example: CPU-bound process that would run for 100 quanta continuously 1, 2, 4, 8, 16, 32, 64 (only 37 required for last run) Needs only 7 swaps

Example: CPU-bound process that would run for 100 quanta continuously 1, 2, 4, 8, 16, 32, 64 (only 37 required for last run) Needs only 7 swaps Interactive Scheduling Algorithms Continued o Priority Scheduling Introduction Round-robin assumes all processes are equal often not the case Assign a priority to each process, and always choose the process

More information

Applying Network Coding to Peer-to-Peer File Sharing

Applying Network Coding to Peer-to-Peer File Sharing 1938 IEEE TRANSACTIONS ON COMPUTERS, VOL. 63, NO. 8, AUGUST 2014 Applying Network Coding to Peer-to-Peer File Sharing Min Yang and Yuanyuan Yang, Fellow, IEEE Abstract Network coding is a promising enhancement

More information

Google File System. Arun Sundaram Operating Systems

Google File System. Arun Sundaram Operating Systems Arun Sundaram Operating Systems 1 Assumptions GFS built with commodity hardware GFS stores a modest number of large files A few million files, each typically 100MB or larger (Multi-GB files are common)

More information

It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley

It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley 1 A Brief History! Fall, 2006: Started Conviva with Hui Zhang (CMU)! Initial goal: use p2p technologies to reduce distribution

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Virtual Memory 1 Chapter 8 Characteristics of Paging and Segmentation Memory references are dynamically translated into physical addresses at run time E.g., process may be swapped in and out of main memory

More information

Distributed Video Systems Chapter 3 Storage Technologies

Distributed Video Systems Chapter 3 Storage Technologies Distributed Video Systems Chapter 3 Storage Technologies Jack Yiu-bun Lee Department of Information Engineering The Chinese University of Hong Kong Contents 3.1 Introduction 3.2 Magnetic Disks 3.3 Video

More information

Energy-Aware Scheduling for Acyclic Synchronous Data Flows on Multiprocessors

Energy-Aware Scheduling for Acyclic Synchronous Data Flows on Multiprocessors Journal of Interconnection Networks c World Scientific Publishing Company Energy-Aware Scheduling for Acyclic Synchronous Data Flows on Multiprocessors DAWEI LI and JIE WU Department of Computer and Information

More information

Optical Communications and Networking 朱祖勍. Nov. 27, 2017

Optical Communications and Networking 朱祖勍. Nov. 27, 2017 Optical Communications and Networking Nov. 27, 2017 1 What is a Core Network? A core network is the central part of a telecommunication network that provides services to customers who are connected by

More information

DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 1. Introduction

DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 1. Introduction DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN Chapter 1 Introduction Modified by: Dr. Ramzi Saifan Definition of a Distributed System (1) A distributed

More information