Reduction of Periodic Broadcast Resource Requirements with Proxy Caching

1 Reduction of Periodic Broadcast Resource Requirements with Proxy Caching Ewa Kusmierek and David H.C. Du Digital Technology Center and Department of Computer Science and Engineering University of Minnesota kusmiere, Abstract Video streaming on a large scale can be expensive and resource demanding. Periodic broadcast reduces server bandwidth usage for popular videos. While improving the scalability this mechanism increases the WAN bandwidth and buffer space needed by a client considerably. We propose to address the high resource demands on the client side with caching by a proxy server. We present periodic broadcast schemes designed to work with proxy caching that optimize server bandwidth, client bandwidth and client buffer requirements. We show that prefix caching can lower WAN bandwidth usage and derive the lower limit on the bandwidth requirement. Similar result is obtained for server bandwidth usage with prefix caching. We show also that chunk caching can reduce client buffer space requirement to a small value. We analyze the trade-off involved in reducing usage of each of the three resources. The results of the optimization are illustrated with a group of mpeg-4 encoded videos for which optimal parameters are obtained through dynamic programming. We formulate also and solve a problem of efficient use of the proxy storage space under the assumption that such space is limited. I. INTRODUCTION Many multimedia applications such as distance learning, digital library, video-conferencing, and entertainment on-demand rely on video delivery over the Internet. The typical environment consists of a video server providing access to a number of videos for a potentially large number of clients over Wide Area Network (WAN). Video streaming is challenging due to the amount of resources required such as server I/O bandwidth and storage space at the client, and cost of resources such as WAN bandwidth. In a large scale environment server I/O bandwidth can be easily exhausted by a large number of requests for a popular video. WAN bandwidth, on the other hand, even though may be plentiful in the backbone network, may be also expensive with limited amount available for a client in the access network. Periodic broadcast (PB) [ ] has been proposed to address the scalability issue of video streaming. PB schemes make the server I/O bandwidth usage independent of the number of clients. However, the requirements that a client has to satisfy are typically much larger than for a unicast transmission. A video is divided into a number of segments, each broadcasted continuously on a separate channel. Typically the client receives multiple segments simultaneously and buffers them until their playback time. Such an approach increases significantly This work is partially supported by NSF Grant EIA-444 and DTC Intelligent Storage Consortium the WAN bandwidth usage for a client and the storage space required for buffering. In this paper we examine how client resource requirements imposed by PB can be reduced without affecting the scalability of the server I/O bandwidth usage. We assume that a proxy server is placed in each client community to assist the central server by caching part of a video and delivering it directly to the client [6 9]. Path between a proxy and a client is contained in a local network characterized by abundant and low cost bandwidth. Hence, a video stream coming from a proxy does not consume any WAN bandwidth and incurs a minimal cost. The proxy I/O bandwidth and storage space are limited. 
Thus, it may be possible to cache only a part of a video. The main advantage offered by proxy caching is reduction of the amount of data transferred over WAN and consequently reduction of the server I/O and WAN bandwidth usage. We examine influence of caching of different video portions by a proxy on the resource requirements and describe periodic broadcast scheme designed to work with proxy caching. Hence, we answer two questions: what part of a video should be cached and what PB scheme should be used to optimize bandwidth usage and client buffer utilization. We establish that prefix caching is an optimal way to reduce PB-related server I/O bandwidth usage. We show also that prefix caching is an optimal way to reduce the WAN bandwidth usage by a client and derive a lower bound on the bandwidth for a given prefix size. The client buffer space requirement is addressed with chunk caching, i.e., caching chunks of frames distributed throughout the whole video. We show that chunk caching by a proxy can reduce client buffer space needed to a small value and present a group of PB schemes designed to work with this type of caching. For each of the resources: server I/O bandwidth, client WAN bandwidth and client buffer space, we examine how usage optimization of one resource affects the other requirements. All three optimal schemes are designed for a CBR video but can be applied also to an encoded VBR video. We illustrate the results obtained by applying bandwidth and buffer optimal PB schemes to a group of mpeg-4 encoded videos. The parameters for the PB schemes are obtained through dynamic programming approach. Since the proxy buffer space is limited it is important to use it efficiently. Based of the knowledge of interdependency between different resources we formulate an optimization problem for efficient use of the proxy storage space for each of the

three optimal PB schemes and propose heuristic algorithms for solving the problems. The paper is organized as follows. Related work is presented in Section II. In Sections III, IV and V we cover server I/O bandwidth, client WAN bandwidth and client buffer space optimization, respectively. A dynamic programming approach for optimizing client bandwidth and buffer space for encoded video is presented in Section VI. In Section VII we formulate the problem of efficient utilization of the proxy storage space and propose a heuristic solution. We conclude the paper in Section VIII.

II. RELATED WORK

The main advantage offered by periodic broadcast schemes is the scalability of the server bandwidth usage. Different schemes result in different server bandwidth requirements as well as different requirements on the client side, namely client network and I/O bandwidth and storage space. The minimal server bandwidth achievable for a given service delay has been derived in [4, ] and is equal to ln(1 + L/d) times the playback rate, where L is the video length and d is the delay. The server bandwidth required by Greedy Disk-Conserving Broadcast (GDB) presented in [4] reaches 1.44 times the minimal server bandwidth, while Greedy Equal-Bandwidth Broadcast (GEBB) presented in [] can get arbitrarily close to the optimal value. In GDB the transmission rate for each channel is equal to the playback rate and the goal is to minimize the number of server channels. GEBB also uses the same rate for all channels, but this rate can be different from the playback rate.

Intuitively there is a trade-off between server bandwidth and client resource requirements. This trade-off has been explored in [4]. Video segmentation by GDB is designed to minimize the number of server channels subject to client I/O bandwidth and storage availability. The client I/O bandwidth is expressed as a multiple of the playback rate and accounts for one segment read from disk and played out, and for the segments received simultaneously and written to disk. It has been shown that as the client I/O bandwidth grows, the storage space and the number of server channels are reduced. Similarly, given sufficient client I/O bandwidth, increasing the storage space allows the number of channels to be reduced. Client bandwidth requirements have also been addressed in []. The authors propose a way to limit the number of segments that the client has to receive simultaneously in Pagoda Broadcast [] and Fast Broadcast [3]. The reduction of client bandwidth comes at the price of an increased service delay: in order to use the same amount of server bandwidth, the modified schemes require a larger service delay than the original schemes. A reduction of the number of segments the client has to receive simultaneously is also proposed in [] with a segmentation based on the Fibonacci number sequence; the number of segments received by the client at any time is limited to two. The Fibonacci scheme has reception rules similar to GEBB. Its server bandwidth usage is higher than required by GEBB, but the client storage requirement is lower. The problem of client bandwidth requirements has also been examined in the context of other video stream sharing techniques such as stream merging [4]. The bandwidth skimming technique introduced in [] is based on the assumption that a video is encoded at a rate slightly lower than the bandwidth available to the client; this bandwidth surplus is used to perform hierarchical stream merging.

We explore the possibility of reducing PB-related resource requirements with a proxy server providing a caching service.
The goal of our research is to find the optimal server bandwidth, client bandwidth and client storage space for a given amount of storage space available at the proxy server. Proxy caching also allows the PB-related service delay to be eliminated, in addition to reducing resource requirements. We analyze the trade-offs between the different resource requirements of proxy-assisted PB schemes. Proxy-assisted periodic broadcast has been introduced in [6], but there the main purpose was to reduce the server bandwidth requirement by caching a number of initial segments of a video. Our approach is to design the PB scheme to work with proxy caching in order to reduce resource demands.

III. SERVER BANDWIDTH OPTIMIZATION

Periodic broadcast has been designed to address server resource requirements. A PB scheme that minimizes server bandwidth usage was introduced in []. We examine how proxy support can further lower the I/O and network bandwidth usage of the server. The reduction is mainly due to the fact that the part of a video cached by a proxy does not have to be transmitted by the server; the bandwidth optimal transmission scheme is applied to the remaining part of the video.

A. Server Bandwidth Optimal Scheme

In the bandwidth optimal scheme [] a video is partitioned into a number of segments whose sizes follow a geometric progression. Each segment is transmitted on a separate channel and the transmission rate is the same for all channels. The client starts receiving all segments simultaneously and the reception can start at any time, i.e., without waiting for the transmission of the beginning of the first segment. The playback starts as soon as the first segment is completely received, and from that point the total reception rate decreases as subsequent segments are received. Note that each segment is completely received just in time for its playback. Two parameters determine the server bandwidth requirement: the start-up delay, equal to the reception time of the first segment, and the number of segments. The server bandwidth decreases with an increase in the start-up delay, as the transmission is stretched more in time, and with an increase in the number of segments.

Prefix caching by a proxy complements the optimal scheme in a natural way. The video prefix is delivered by the proxy to the client and played back immediately. During the prefix playback the client also starts receiving the remaining part of the video (the suffix) from the server. By setting the start-up delay for the suffix equal to the prefix playback length, we eliminate the start-up delay for the whole video. We examine the server bandwidth requirements with prefix caching and show that it is an optimal way to reduce server bandwidth usage. We assume that the size of the portion of a video cached by the proxy is expressed by b, relative to the length of the video, where 0 < b < 1.

For ease of presentation and without loss of generality, we also assume that the video length is 1 and that the playback rate is equal to 1.

Fig. 1. Server bandwidth optimal PB with prefix caching

B. Prefix Caching

We first examine the prefix caching scheme, i.e., we consider the case when the proxy caches a video prefix of size b. The server bandwidth optimal scheme is applied to the suffix of length 1 - b. By choosing the length of the first segment such that its transmission time is equal to the prefix playback time (b), we eliminate the PB-related delay. The client receives the video prefix from the proxy and at the same time starts receiving each of the suffix segments, as illustrated in Figure 1. The playback is shown on the top, with the prefix playback marked with a dashed line, and the reception at the bottom. The shaded areas mark the reception of each of the segments within the server transmission schedule. The suffix start-up delay is spent receiving and playing the prefix. Following the derivation in [], the per-channel transmission rate with n suffix segments is

s = (1/b)^{1/n} - 1,    (1)

and the segment sizes are given by l_i = b s (1+s)^{i-1}. The per-channel rate decreases with an increase in the number of segments and with an increase in the prefix size. The server bandwidth usage is equal to n s. The bandwidth lower bound is

\ln(1/b),    (2)

and is reached as the number of channels approaches infinity.

Fig. 2. Server I/O bandwidth with prefix caching

Figure 2 presents the server bandwidth as a function of the start-up delay (relative to the video playback time) for various prefix sizes b and a fixed number of suffix segments. Recall that the server bandwidth decreases with an increase in the start-up delay; now we add also the influence of the prefix size. The actual start-up delay experienced by the client is equal to the difference between the acceptable delay and the prefix playback time, and is zero if the difference is negative, i.e., if the prefix playback time is longer. We observe that the server bandwidth decreases with an increase in the prefix size for a fixed delay. Note that for a delay smaller than the prefix playback time the bandwidth is affected only by the prefix size. Generally, prefix caching reduces the server I/O bandwidth usage and the start-up delay.
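For concreteness, the following sketch computes the per-channel rate of equation (1), the geometric segment sizes, and the resulting server bandwidth for a given prefix fraction b and number of suffix segments n, under the unit-length, unit-rate normalization used above. The code and its names are an illustrative reconstruction, not the authors' implementation.

```python
# Sketch of the server-bandwidth-optimal PB scheme with prefix caching
# (Section III-B). Unit-length video, unit playback rate; b is the prefix
# fraction cached at the proxy, n the number of suffix segments.
import math

def gebb_prefix_schedule(b: float, n: int):
    """Return (per_channel_rate, segment_sizes) for prefix size b and n segments."""
    assert 0.0 < b < 1.0 and n >= 1
    s = (1.0 / b) ** (1.0 / n) - 1.0                  # equation (1)
    sizes = [b * s * (1.0 + s) ** i for i in range(n)]  # geometric segment sizes
    return s, sizes

if __name__ == "__main__":
    b = 0.1
    for n in (1, 5, 20, 100):
        s, sizes = gebb_prefix_schedule(b, n)
        assert abs(sum(sizes) - (1.0 - b)) < 1e-9      # segments cover the suffix
        for i in range(n):
            # reception of segment i (at rate s from time 0) ends exactly when
            # its playback starts, i.e. just in time
            assert abs(sizes[i] / s - (b + sum(sizes[:i]))) < 1e-9
        print(f"n={n:4d}  rate/channel={s:.4f}  server bandwidth={n * s:.4f}")
    print("lower bound ln(1/b):", math.log(1.0 / b))   # equation (2)
```

Running the sketch shows the total server bandwidth n s approaching ln(1/b) as n grows, which is the behavior claimed above.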
C. Server Bandwidth Optimal Caching

The next question we ask is whether prefix caching is optimal with respect to the server bandwidth. In the general case the cached portion of a video does not have to constitute the prefix, but may be distributed across the whole video in the form of small chunks of frames. A PB scheme that works with chunk caching is designed as follows. In order to eliminate the PB-related delay we assume that the first chunk is placed as a small prefix. The reception time of the first segment is equal to the playback time of the first cached chunk. The reception time of the second segment is equal to the playback time of the first segment and the first two chunks. Generally, the reception time of the i-th segment is equal to the playback time of the previous i-1 segments and the first i chunks. Figure 3 illustrates the scheme. The number of segments in the portion of the video not cached by the proxy is equal to the number of cached chunks. Note that prefix caching can be considered a special case of chunk caching with the sizes of all chunks but the first one equal to zero.

Fig. 3. Server bandwidth optimal PB with chunk caching

1) Bandwidth Optimization: Given the PB scheme based on chunk caching, we modify the server bandwidth optimization problem formulation given in [] in the following way:

minimize \sum_{i=1}^{n} s_i
subject to l_i / s_i = \sum_{j=1}^{i-1} l_j + \sum_{j=1}^{i} c_j, i = 1, ..., n,
           \sum_{i=1}^{n} c_i <= b,
           \sum_{i=1}^{n} l_i = 1 - b,    (3)

where s_i is the transmission rate for the i-th segment, l_i its size, and c_i the size of the i-th chunk. The goal is to minimize the server bandwidth by selecting the size of each of the cached chunks and the transmission rate for each of the segments of the portion not cached. The sum of the cached chunk sizes cannot exceed b and the sum of the segment sizes must be equal to the size of the video portion not

4 4 cached by the proxy. We require also that the number of segments is equal to, while the number of chunks can be smaller or equal to. We observe that the solution for the problem formulated in such a way is obtained for and for " / /, and with the same transmission rate selected for each channel. The first chunks constitute a prefix of size. Thus, the optimal server bandwidth is achieved with prefix caching. More precisely, the same optimal value of server bandwidth cannot be reached with another type of caching with the same number of channels. In order to verify this result, we change the problem formulation in the next step to eliminate the solution which relies on one large prefix chunk. We set the size of each of chunks to and change the first constraint in the problem formula- tion (3) to #" * # ' *. Now the set of variables $ contains only transmission rates for each channel (and automatically the segment sizes). We relax also the requirement that the number of segments is strictly. The solution obtained for the modified problem tends to combine together a number of initial chunks to form a prefix longer than (the size of a single chunk) but smaller then. Notice that combining two chunks eliminates one segment from the non-cached portion of the video. Both, the longer prefix and the larger number of segments, result in a lower server bandwidth. The trade-off between the size of prefix and the number of segments is explored to find values for both quantities which result in the minimal server bandwidth. Thus, the number of segments in the solution may be smaller than. The server bandwidth requirement is larger than server bandwidth needed with prefix caching for the same number of segments. Figure 4 presents server bandwidth as a function of the number of segments for size of cached portion equal also to. obtained with prefix caching and with fixed size chunk caching. We observe also that different rates are selected for different segments (recall the server optimal scheme with prefix caching uses the same rate for all channels), and that the rates are larger for segments with larger numbers. server bandwidth prefix caching optimal chunk caching optimal PSfrag replacements ) Client bandwidth and buffer requirements: The server bandwidth optimization comes at a price of increased requirements at the client. We now examine these requirements. The bandwidth needed by the client is determined by the peak requirement at the beginning of the reception when all segments are received simultaneously and is equal to the server bandwidth. The client is required to buffer segments for later playback. The maximum buffer occupancy is reached when the total reception rate becomes equal to the playback rate. After that point, the buffer occupancy decreases since the total reception rate decreases. Let be such that and, i.e., after reception of the (l+)st segment the total reception rate becomes smaller than the playback rate. Then the client buffer size required for server-optimal scheme is equal to: (4) where. The buffer size is calculated as a buffer occupancy just after the playback of the th segment. We observe that client buffer requirement decreases with an increase in the number of segments due to the fact that the length of the reception time from the server increases. The client buffer requirement decreases also with an increases in the prefix size. 
The minimum buffer size is required at the minimum server bandwidth and amounts to a fraction of the video suffix, and hence of the whole video. The maximum buffer size is required when a single segment is used and is equal to the suffix size, 1 - b.

IV. CLIENT BANDWIDTH OPTIMIZATION

We now examine how proxy caching can help reduce the PB-related client requirements, namely the WAN bandwidth. We ask two questions: 1) what part of the video should be cached, and 2) what PB scheme should be used in order to minimize the client WAN bandwidth usage. Given a cached part of the video of size b, the client bandwidth can intuitively be reduced to 1 - b by stretching the reception from the server in time as much as possible, without introducing any delay. In order to achieve this the cached part is received from the proxy and played without delay, i.e., simultaneously with its reception, while the remaining part, not cached by the proxy, is received over the playback time of the entire video. We now construct a group of schemes that achieve such a bandwidth reduction. As for the server bandwidth optimization, we first examine PB schemes designed around proxy prefix caching.

Fig. 4. Server bandwidth comparison for prefix and chunk caching. (Due to the nonlinearity of the first constraint, the solution was obtained using numerical methods.)

Fig. 5. One-channel client bandwidth optimal PB with prefix caching

A. Prefix Caching

In the client-centric PB scheme one of the parameters controlling the design is the maximum number of segments that the client has to receive simultaneously at any time. This number is denoted by m, while k is the total number of video segments transmitted by the server.

1) One-Channel Scheme: We start with the most straightforward design, in which the client receives only one segment at a time (m = 1). In order to minimize the client bandwidth we stretch the reception time of each suffix segment as much as possible. Therefore, the first segment's reception is stretched along the entire prefix playback and its size is determined by the playback time of the prefix: l_1 = s b, where s is the per-channel transmission rate. The second segment is received during the playback time of the first segment and its size is l_2 = s l_1. We assume that all segments have the same transmission rate, since equalizing the channel rates minimizes the maximum rate. In order to ensure uninterrupted playback, each segment must be received before the beginning of its playback, as illustrated in Figure 5. Thus, the segment sizes are defined as l_i = s l_{i-1} = b s^i. Note that the following condition has to be satisfied: \sum_{i=1}^{k} l_i >= 1 - b. Since \sum_{i=1}^{\infty} b s^i = b s / (1 - s) for s < 1, the lower bound on the client bandwidth is

s >= 1 - b.    (5)

Therefore, the one-channel scheme can achieve the optimal client bandwidth. Although the bound is reached with an infinite number of channels, we observe that in practice a relatively small number of channels is sufficient to achieve a bandwidth close to the optimal value. Based on the above result we can also establish that without proxy caching the minimal client bandwidth is equal to 1/(1 + w), where w is the start-up delay relative to the video playback length.

a) Server bandwidth: We now examine how optimizing the client bandwidth requirement affects the server bandwidth usage. We explore the dependence of the server bandwidth on the number of segments and on the client bandwidth. Note that the same number of segments can result from different values of the client bandwidth. We assume at first that, for a given number of segments k, the per-channel rate is selected in such a way that the reception time of the last, k-th, segment is equal to the playback time of the (k-1)-st segment. Hence, the per-channel rate is a solution to the following equation:

\sum_{i=1}^{k} b s^i = 1 - b.    (6)

Note that setting s slightly higher than this value does not affect the number of segments, but the reception time is not used efficiently, i.e., the last segment is received earlier than needed. Figure 6 presents the client bandwidth and the server bandwidth as functions of the number of segments for a fixed prefix size. We observe that the server bandwidth initially decreases and then starts increasing again. This behavior is fairly intuitive: while the number of segments increases linearly, the per-channel rate decreases much more slowly. There exists a number of segments for which the server bandwidth reaches a minimum, and this number is usually small.

Fig. 6. Server bandwidth dependence on the number of segments k

Given a value of the client bandwidth, the corresponding server bandwidth is derived as follows. The sum of the segment sizes has to satisfy \sum_{i=1}^{k} l_i >= 1 - b. Since \sum_{i=1}^{k} l_i = \sum_{i=1}^{k} b s^i, the required number of segments follows, and the server bandwidth is

k s, with k the smallest integer such that \sum_{i=1}^{k} b s^i >= 1 - b.    (7)

Figure 7 presents the server bandwidth as a function of the client bandwidth. We observe that as the per-channel transmission rate (and client bandwidth) approaches 1 - b, the server bandwidth and the number of segments approach infinity.
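As a concrete illustration of the one-channel scheme, the sketch below solves equation (6) for the per-channel rate by bisection, given a prefix fraction b and k suffix segments (unit-length video, unit playback rate). The function name and the bisection bracket are illustrative choices, not taken from the paper.

```python
# Sketch of the one-channel client-bandwidth-optimal scheme with prefix
# caching (Section IV-A). Unit-length video, unit playback rate; b is the
# prefix fraction, k the number of suffix segments. The per-channel rate s
# is the root of sum_{i=1..k} b*s^i = 1 - b (equation (6)).
def one_channel_rate(b: float, k: int, tol: float = 1e-12) -> float:
    """Bisect for the per-channel rate s; the shortfall is increasing in s."""
    def shortfall(s: float) -> float:
        return sum(b * s ** i for i in range(1, k + 1)) - (1.0 - b)
    lo, hi = 1.0 - b, 10.0   # s >= 1 - b is the lower bound (5); 10 is ample here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shortfall(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    b = 0.2
    for k in (2, 5, 10, 50):
        s = one_channel_rate(b, k)
        sizes = [b * s ** i for i in range(1, k + 1)]  # segment i received during
                                                       # playback of segment i-1
        print(f"k={k:3d}  client bw={s:.4f}  server bw={k * s:.4f}  "
              f"largest segment={max(sizes):.4f}")
    print("client bandwidth lower bound 1 - b =", 1.0 - b)
```

Even for moderate k the computed rate is already close to the 1 - b bound, which matches the observation above that a relatively small number of channels suffices in practice.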
In practice the server bandwidth required to bring the client bandwidth close to the optimal value is not excessively high. Figure 9 presents the server bandwidth required to reach a client bandwidth within a small margin of the optimal value (a difference of no more than a small fraction of the playback rate) for different values of the prefix size. Note that even though the server bandwidth required may be higher than the optimal value, the scalability of PB is preserved, i.e., the server bandwidth usage is independent of the number of clients.

Fig. 7. Server bandwidth dependence on client bandwidth

6 6 b) Client buffer size: The buffer space required at the client is equal to the size of the largest suffix segment since the largest buffer occupancy occurs at the beginning of the largest segment playback. If the client bandwidth is smaller than the playback rate, then the size of the first segment determines buffer size:. The lower bound on the buffer size is and is achieved with the minimum client bandwidth. The lower bound is largest for (and prefix equal to of the whole video) and reaches of the whole video. For smaller than. the prefix size is larger reducing buffer requirements, for larger than. prefix size is small resulting in small segment sizes and cosequently small buffer requirements. For a given prefix size, client buffer requirement increases approximately linearly with the increase in client bandwidth and decrease in the number of segments. replacements Fig. 8. b Two-channel client bandwidth optimal PB with prefix caching ) Multi-Channel Scheme: We now examine how increasing the number of channels that the client has to receive simultaneously affects bandwidth requirements. We start with the case of and then formulate the general conclusion. Assume that the client can receive two channels at a time at rate each. Then the segment sizes are as follows:, + ), # #. The solution to this homogeneous linear recurrence is: where,, and are solution to the following set of equations: and. Note that for, and. In this case ' (' In order to examine what type of caching is optimal for client bandwidth we formulate the following optimization problem. 4 Similarly to the server bandwidth case, we assume that the cached portion of the video may be distributed in the form of chunks of frames throughout the whole video. We assume that is determinedpsfrag by the replacements fol- the client receives one segment at a time. The first segment is. The solutions Therefore, the minimum value of lowing equation: is (8) This minimum value of is equal to the half of the minimum value obtained in the one-channel case. However, the client has to receive two channels simultaneously and the minimum client bandwidth ends up being the same as in the previous case. For a given value of prefix size, both one-channel and two-channel schemes can achieve the same client bandwidth. We observe that the server bandwidth in the two-channel scheme is no larger and in most cases smaller than server bandwidth in one-channel scheme. For a given value of client band- Fig. 9. server bandwidth r s m = m = m = 3 m = prefix size b Server bandwidth for close to optimal client bandwidth width two-channel scheme has half the per-channel transmission rate of one-channel scheme and in most cases less than double the number of segments. The results obtained for the two-channel scheme can be generalized to describe schemes in which client receives segments simultaneously. In the general case # $ * ' * for " and # $ * ' for " -. Each of these schemes has the same lower bound on the client bandwidth:,. However, for a given value of client bandwidth the server bandwidth decreases with the increase in or at least does not increase. Figure 9 presents server bandwidth for various values of. We observe that the for small prefix size a considerable gain can be obtained by increasing to 3 or 4 channels. Further increase in does not yield a significant decrease of server bandwidth. The minimum client buffer requirements are similar for all schemes. 
For a client bandwidth smaller than the playback rate, the maximum buffer occupancy occurs at the beginning of the first suffix segment's playback and is equal to the amount of data received from the server during the prefix playback.

B. Client Bandwidth Optimal Caching Scheme

In this scheme the client receives one segment at a time. The first segment is received during the playback time of the first chunk, the second segment is received during the playback time of the first segment and the second chunk, and each subsequent segment is received during the playback of the previous segment and one cached chunk. The design is presented in Figure 10.

Fig. 10. One-channel PB with chunk caching

Our goal is to minimize the client bandwidth. In the problem formulation, in addition to the per-channel transmission rate, the sizes of all chunks are also variables.

We assume that the cached portion of the video, of size b, is divided into n chunks and thus the portion not cached by the proxy is divided into n segments. More formally:

minimize s
subject to l_1 = s c_1,
           l_i = s (l_{i-1} + c_i), i = 2, ..., n,
           \sum_{i=1}^{n} c_i <= b,
           \sum_{i=1}^{n} l_i = 1 - b,    (9)

where c_i is the size of the i-th chunk and l_i the size of the i-th segment. We find that, similarly to the server bandwidth case, the minimum client bandwidth is obtained when c_1 = b, which means that all chunks are combined to form a prefix of size b. The portion not cached by the proxy is still divided into n segments. Hence, we conclude that prefix caching is optimal for the client bandwidth for a given number of channels. In order to verify this conclusion we next exclude the prefix solution from consideration by setting the size of each chunk to be equal to b/n. We observe that any value of the client bandwidth obtained with prefix caching can also be achieved with equal-size chunk caching, but with a larger number of segments. Figure 11 presents the client bandwidth as a function of the number of segments for both schemes for a fixed size of the cached portion. As the number of segments increases, the difference between the two schemes decreases.

Fig. 11. Client bandwidth comparison for prefix and chunk caching

V. CLIENT BUFFER OPTIMIZATION

PB schemes increase the storage space required at the client significantly. Prefix caching minimizes the client network bandwidth and eliminates the PB-related delay. We show that chunk caching by a proxy, i.e., caching chunks of frames distributed through the video, can reduce the storage space requirement at the client to a small value. Recall that in a prefix-based PB scheme that minimizes the client bandwidth usage, the largest amount of data is accumulated in the client's buffer at the end of the prefix playback (assuming that the client bandwidth is lower than the playback rate): the prefix playback is sustained by data received from the proxy while all data received during that time from the server is buffered. By distributing cached groups of frames throughout the whole video instead of concentrating them at the beginning, we give the client a chance to drain data from the buffer in between the playback intervals of the cached chunks.

A. One-channel Chunk Caching

We now examine the chunk caching scheme in more detail. We start with a simple scheme that requires the client to receive only one segment at a time. The client receives the first segment from the server during the first chunk's playback. Each subsequent segment is received during the playback of the previous segment and one cached chunk. For simplicity we assume that all chunks are of equal size c. The segment sizes are then defined by l_1 = s c and l_i = s (l_{i-1} + c), so that

l_i = c \sum_{j=1}^{i} s^j.    (10)

Note that then

\sum_{i=1}^{n} l_i + n c = 1,    (11)

where the first component of the summation represents the sum of the sizes of all PB segments, while the second component represents the sum of the sizes of the cached chunks. From (11) we have

c = 1 / (n + \sum_{i=1}^{n} \sum_{j=1}^{i} s^j).    (12)

Note that, given s, the size of the cached portion n c decreases with an increase in n. Therefore we examine the limit of the size of the cached portion as n approaches infinity:

\lim_{n \to \infty} n c = 1 - s.    (13)

The above result shows that given a per-channel transmission rate s, the size of the cached portion of the video is lower bounded by 1 - s. In other words, if the size of the cached portion is b, then the smallest client bandwidth achievable is 1 - b. This lower bound is the same as in the prefix caching case. The client bandwidth increases as the chunk size increases and the number of chunks decreases.
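A small numerical sketch of the equal-chunk scheme above (unit-length video, unit playback rate): given a per-channel rate s < 1 and a number of chunks n, it computes the chunk size from equations (10)-(12), the resulting cached fraction n c, and the largest segment, which determines the client buffer discussed next. Function and variable names are illustrative.

```python
# Sketch of the one-channel chunk-caching scheme of Section V-A. Equal-size
# chunks c interleaved with segments l_i = c*(s + s^2 + ... + s^i); the chunk
# size follows from the requirement that segments plus chunks cover the
# whole (unit-length) video. Assumes s < 1.
def chunk_caching(s: float, n: int):
    """Return (chunk_size, segment_sizes, cached_fraction) for rate s, n chunks."""
    assert 0.0 < s < 1.0 and n >= 1
    geo = [sum(s ** j for j in range(1, i + 1)) for i in range(1, n + 1)]
    c = 1.0 / (n + sum(geo))            # equations (11)-(12)
    segments = [c * g for g in geo]
    return c, segments, n * c

if __name__ == "__main__":
    s = 0.7
    for n in (5, 20, 100, 1000):
        c, segs, cached = chunk_caching(s, n)
        # the cached fraction decreases with n toward 1 - s (equation (13)),
        # while the largest (last) segment, which sets the client buffer,
        # shrinks as n grows
        print(f"n={n:5d}  cached={cached:.4f}  max segment={max(segs):.5f}")
    print("lower bound on cached fraction, 1 - s =", 1.0 - s)
```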
a) Client buffer size: The client buffer requirement is determined by the size of the last segment, l_n = c \sum_{j=1}^{n} s^j, since the segment sizes are increasing; under the assumption that s < 1 this size is bounded by c s / (1 - s). Note that with c = b/n this bound shrinks as n grows: the larger the number of segments, the smaller the size of the largest segment. Therefore, the client buffer requirement can be made arbitrarily small by dividing the video into a sufficiently large number of chunks and segments, independently of the size of the cached portion.

b) Server bandwidth: The server bandwidth required by the chunk caching scheme is expressed as n s, where s is the solution of the following equation:

(b/n) \sum_{i=1}^{n} \sum_{j=1}^{i} s^j = 1 - b.    (14)

For a given size of the cached portion the server bandwidth generally increases with the number of chunks and segments. More precisely, we observe the same trend as for the prefix caching scheme, i.e., the server bandwidth initially decreases and then increases as the number of segments increases. As the client bandwidth approaches its optimal value, the server bandwidth approaches infinity.

c) Comparison with prefix caching: In order to estimate the cost of lowering the buffer requirements we compare the resource requirements of the chunk caching scheme with those of the prefix caching scheme. Figure 12(a) shows the client buffer requirement as a function of the size of the cached portion for both schemes, for a client bandwidth close to the optimal value. Recall that the lower bound on the client buffer size of the prefix caching scheme is largest at an intermediate prefix size. We observe that chunk caching reduces the buffer requirements considerably. The figure shows results for two different chunk sizes. Figure 12(b) presents the corresponding server bandwidth. We observe that generally the server bandwidth with chunk caching is larger for large sizes of the cached portion of the video, and that the smaller chunk size results in a larger server bandwidth than the larger one. We also observe that for a given chunk size the largest server bandwidth is reached at a particular fraction of the video length cached. Note that there exists a size of the cached portion for which both schemes have similar server bandwidth requirements while chunk caching has smaller client buffer requirements; at such a size the client buffer required with chunk caching is lower than with prefix caching by a considerable fraction of the video length, and the difference in client bandwidth is minimal.

Fig. 12. Comparison between one-channel PB requirements with prefix and chunk caching: (a) client buffer, (b) server bandwidth

B. Multi-channel Chunk Caching Scheme

Similarly to the prefix caching case, increasing the number of channels m that the client has to read from simultaneously decreases the server bandwidth requirement. Also as in the prefix caching scheme, we observe that as m increases the reduction of the server bandwidth becomes smaller.

C. Client Buffer Optimal Caching Scheme

In order to find the optimal sizes of the chunks and their distribution throughout the video for a fixed number of segments, we formulate the client buffer optimization problem. The problem has the same constraints as the client bandwidth optimization problem (9), but the goal is to minimize the size of the largest segment, as the one determining the client buffer size:

minimize \max_i l_i
subject to l_1 = s c_1,
           l_i = s (l_{i-1} + c_i), i = 2, ..., n,
           \sum_{i=1}^{n} c_i <= b,
           \sum_{i=1}^{n} l_i = 1 - b.    (15)

Intuitively, a way to minimize the maximum segment size is to equalize the segment sizes, and the solution obtained using numerical methods confirms this intuition. The segment sizes are given by l_i = (1 - b)/n. The first segment is received during the playback of the first chunk, while each consecutive segment is received during the playback time of one chunk and the preceding segment.
Thus, the size of the first chunk is larger, while the subsequent chunks are of equal size: c_1 = (1 - b)/(n s) and c_i = (1 - b)(1 - s)/(n s) for i = 2, ..., n. The rate s is obtained by solving \sum_{i=1}^{n} c_i = b and is equal to

s = n (1 - b) / (n + b - 1).    (16)
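Under the reconstruction above, the buffer-optimal chunk layout can be sketched as follows: all segments get size (1-b)/n, the rate follows equation (16), and the chunk sizes follow from the reception constraints. The function name and example values are illustrative; the assertion on n restates the condition, discussed next, under which equal segments with a rate of at most the playback rate are feasible.

```python
# Sketch of the client-buffer-optimal chunk-caching scheme of Section V-C:
# n equal suffix segments of size (1-b)/n, per-channel rate from equation
# (16), chunk sizes from the reception constraints. Unit-length video,
# unit playback rate; assumes n >= (1-b)/b so that s <= 1.
def buffer_optimal_chunks(b: float, n: int):
    """Return (rate, segment_size, chunk_sizes) for cached fraction b, n segments."""
    assert n >= (1.0 - b) / b, "with fewer segments prefix caching is used instead"
    seg = (1.0 - b) / n                              # equal segment sizes
    s = n * (1.0 - b) / (n + b - 1.0)                # equation (16)
    chunks = [seg / s] + [seg / s - seg] * (n - 1)   # c_1 = seg/s, c_i = seg(1-s)/s
    return s, seg, chunks

if __name__ == "__main__":
    b, n = 0.2, 10
    s, seg, chunks = buffer_optimal_chunks(b, n)
    assert abs(sum(chunks) - b) < 1e-9               # chunks use the whole cache budget
    print(f"rate={s:.4f}  segment size={seg:.4f}  first chunk={chunks[0]:.4f}")
    # the client buffer is roughly one segment, (1-b)/n, and can be made
    # arbitrarily small by increasing n
```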

9 * 9, rate has to satisfy the following condition:. Hence, for a given size of cached portion, the number of segments must be: Note that for + (7) For a smaller number of segments the smallest buffer requirement is obtained with +, i.e., with prefix caching. An attempt to use more than one chunk decreases the size of the first segment and increases subsequent segments and the buffer requirement. For in order to equalize the segment sizes it is necessary to choose rates which may differ from one segment to another. The rate for the first channel is and 4 for " / (/. Note that in this case the first channel rate is the largest and determines the client bandwidth requirement. The conclusion is as follows: for the number of segments smaller than, the smallest buffer is needed with prefix caching and equal size segments. For the number of segments larger or equal to it is possible to equalize segment sizes with chunk caching, which carries an additional advantage of equalizing transmission rates for all channels and consequently minimizing client bandwidth requirement. Note that equalizing rates in the first case increases buffer requirements as both quantities, segment sizes and transmission rates, cannot be equalized at the same time. We illustrate the concept of chunk caching with the fixed number of channels for in Figure 3. The smallest number of segments for which the number of chunks in the buffer optimal scheme is larger than is, for a value smaller than that prefix caching is used. Figure 3(a) presents client buffer space requirements as a function of the number of segments. We observe that for the buffer requirement decreases with the number of segments faster for chunk caching than for prefix caching. On the other hand, bandwidth requirement shows the opposite trend as presented in Figure 3(b). Prefix caching with equal segment sizes has higher client bandwidth than prefix caching with equal transmission rates. For the number of segments larger than or equal to, prefix caching with equal segment sizes has higher client bandwidth requirement than prefix caching with equal rates and smaller than chunk caching. VI. PERIODIC BROADCAST FOR VBR VIDEO Periodic broadcast with proxy caching presented so far can be applied also to compressed video whose transmission rate is not CBR. Such an application is possible due to the fact that the whole segment has to be received before its playback. Hence, transmission rate for each segment does not have to match playback rate. A video can be delivered through periodic broadcast at constant rate. A dynamic programing method has been constructed in [] to obtained solution for the server bandwidth optimization for encoded video. The pseudocode is presented in Figure 4, where is the size of the " th frame, is the number of frames, is the start-up delay and is consumption rate (frames/s). / " denotes the minimum server bandwidth for the first Fig. 3. client buffer client bandwidth prefix caching with equal rates prefix caching with equal segemnts chunk caching number of segments (a) client buffer prefix caching with equal rates prefix caching with equal segments chunk caching number of segments (b) client bandwidth Caching with fixed number of channels frames and " segments, while / " denotes starting position of the " th segment. The algorithm can be applied to the suffix of a video with set to the playback time of the prefix. A similar method can be used to solve client bandwidth optimization problem for one-channel scheme. 
In this case is computed as / " )/ / " where / " denotes the largest per-segment rate for the first frames of the suffix and " segments. The complexity of both algorithms is. Dynamic programming can be also used to compute minimal client buffer requirements. We devise a dynamic programming algorithm under the assumption that the sizes of all segments are the same and equal to size of the part of the video not cached by the proxy divided by the number of segments. The goal is to choose chunk sizes so that the client bandwidth is minimized. "/ denotes the minimum client bandwidth requirements with the " equal-size segments and the size of the cached video '

Fig. 14. Server Bandwidth Optimization with Dynamic Programming (pseudocode)

portion equal to the considered value, and the starting position of the i-th segment. We introduce two more variables, the starting position of the i-th chunk and the ending position of the i-th segment, together with the targeted segment size. The pseudocode for the client buffer optimization is presented in Figure 15. A helper function determines the number of frames whose cumulative size is as close to the targeted size as possible but no larger than it. The granularity chosen for the size of the cached portion can be used to control the complexity of the algorithm.

Fig. 15. Client Buffer Optimization with Dynamic Programming (pseudocode)

In order to illustrate the resource optimization results we apply the dynamic programming methods to mpeg-4 encodings of several movies [7] at three different levels of quality. All movies are 6 minutes long and encoded at a constant frame rate. We assume that the proxy caches a video portion equal to a smaller fraction of the video length in the first case and a larger fraction in the second case. Due to the complexity of the dynamic programming algorithms, the computations are performed at the granularity of frames, and the part of the video not cached by the proxy is divided into ten segments. Table I shows the statistics for each movie and each quality level. Table II presents the results of the server bandwidth, client bandwidth and client buffer optimizations. The rates are specified in Mbps and the sizes in MB.

The three movies differ in frame size variability: Silence of the Lambs shows the highest variability (Figure 16(a)), Jurassic Park I (Figure 16(b)) has medium variability, and Star Wars IV (Figure 16(c)) exhibits the lowest variability of the three. Generally, the client bandwidth optimal scheme equalizes the rates along all segments, while the buffer optimal scheme equalizes the segment sizes. The frame size variability affects the buffer requirements of the bandwidth optimal schemes: the client has to buffer a large fraction of each video, and the fraction differs between Silence of the Lambs and the other two videos. The buffer requirements of the client bandwidth optimal scheme are similar to those of the server bandwidth optimal scheme for Silence of the Lambs, but lower for the other two movies. For one of the two cached-portion sizes the bandwidth requirements of the client bandwidth optimal scheme and the buffer optimal scheme are similar, with the difference increasing with frame variability. For the other, the difference between the bandwidth requirements of these two schemes is more pronounced, i.e., the buffer optimal scheme has higher client and server bandwidth requirements, but the reduction of the buffer requirement is larger. In the case where the buffer optimal scheme chooses a prefix to cache, all of the ten chunks but the first one have sizes of zero; in the other case there are multiple chunks of frames, not necessarily consecutive, of non-zero size spread through the video. The buffer optimal scheme based on chunk caching thus offers a nice alternative to the two bandwidth optimal schemes: its client bandwidth requirement lies between the client bandwidths of the bandwidth optimal schemes, the same holds for the server bandwidth usage, and the buffer space required is the smallest of all three schemes.
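The pseudocode of Figures 14 and 15 is not legible in this copy, so the sketch below gives only one plausible reading of the server bandwidth dynamic program described in the text: D[m][i] is taken as the minimum server bandwidth for the first m frames split into i segments, with each segment transmitted at the constant rate needed to deliver it by the playback time of its first frame (d is the start-up delay or prefix playback time, F the frame rate). Variable names, the exact cost definition and the toy input are assumptions.

```python
# Sketch of a dynamic-programming segmentation for VBR video (Section VI),
# under one plausible reading of the recurrence described in the text.
def dp_server_bandwidth(frames, k, d, F):
    """frames: per-frame sizes of the suffix; k: number of segments;
    d: start-up delay (s); F: frame rate (frames/s)."""
    N = len(frames)
    prefix = [0.0]
    for f in frames:                       # cumulative frame sizes
        prefix.append(prefix[-1] + f)
    INF = float("inf")
    D = [[INF] * (k + 1) for _ in range(N + 1)]
    cut = [[0] * (k + 1) for _ in range(N + 1)]
    D[0][0] = 0.0
    for i in range(1, k + 1):
        for m in range(i, N + 1):
            for p in range(i - 1, m):      # previous segment ends at frame p
                # segment p+1..m must be fully received by the playback time
                # of frame p+1, i.e. within d + p/F seconds
                rate = (prefix[m] - prefix[p]) / (d + p / F)
                if D[p][i - 1] + rate < D[m][i]:
                    D[m][i] = D[p][i - 1] + rate
                    cut[m][i] = p
    bounds, m = [], N                      # recover segment boundaries
    for i in range(k, 0, -1):
        bounds.append((cut[m][i] + 1, m))
        m = cut[m][i]
    return D[N][k], list(reversed(bounds))

if __name__ == "__main__":
    # toy example: 12 "frames" with bursty sizes, 3 segments, 1 s delay, 4 fps
    bw, segs = dp_server_bandwidth([3, 1, 1, 6, 2, 2, 1, 1, 5, 1, 1, 1], 3, 1.0, 4.0)
    print("min server bandwidth:", round(bw, 3), "segments:", segs)
```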
VII. RESOURCE OPTIMIZATION FOR MULTIPLE VIDEOS

We have explored so far the relations between four elements: the size of the cached portion of a video, the server I/O bandwidth, the client WAN bandwidth and the client buffer space required. One general conclusion is that the size of the cached portion of a video has a large influence on the other three elements: the larger the cached portion, the lower the bandwidth and buffer requirements. However, a proxy has limited storage space, which makes it impossible to cache a whole video and limits the size

TABLE I. MOVIE STATISTICS. Columns: movie, quality (low, medium, high), size (MB), mean rate (Mbps), peak rate (Mbps), frame size standard deviation, max(f) - min(f) (B); rows for Silence of the Lambs, Jurassic Park I and Star Wars IV.

TABLE II. BANDWIDTH AND BUFFER REQUIREMENTS FOR ENCODED VIDEOS. For each movie and cached portion size: the server bandwidth optimal scheme (bandwidth, buffer), the client bandwidth optimal scheme (client band., server band., buffer) and the client buffer optimal scheme (client band., server band., buffer); rates in Mbps, times in seconds.

(a) Silence of the Lambs  (b) Jurassic Park I  (c) Star Wars IV
Fig. 16. Per-frame playback rate for high quality mpeg-4 encoding

12 video size (MB) playback rate (Mbps). prefix size (MB) video number video number video number (a) video sizes (b) playback rate (c) prefix sizes Fig. 7. Server bandwidth optimal solution of the cached portion. Thus, it is important to use the proxy storage space efficiently. Given the size of the video portion cached by the proxy, the bandwidth and buffer space requirements still depend on the number of segments in the portion which is not cached by the proxy. Hence, the problem is to choose the size of the cached portion for each video and the number of segments delivered by the server in such a way that resource usage is optimized with the efficient use of the proxy storage space. The general strategy we use in order to reduce the complexity of the problem is to decouple the size of the cached portion choice from the selection of the number of channels. We first partition the proxy buffer space among all videos assuming an infinite number of channels for the video part delivered by the server. Recall that such an assumption yields a minimum resource usage for a given size of cached portion. Next, we choose the number of segments to be used. The choice of this number is made to minimize the usage of one of the three resources subject to the availability of the other two. The relation between server I/O bandwidth, client WAN bandwidth and client buffer depends on the PB scheme. We use knowledge of this relation to determine all values needed for an optimal video transmission. We consider one scheme for each resource: server bandwidth optimal PB with prefix caching, one-channel PB with prefix caching and with chunk caching for client bandwidth and client buffer space optimization, respectively. A. Server I/O Bandwidth We first address the server I/O bandwidth optimization problem with proxy caching. Given a set of videos, we want to select the prefix size for each video and the number of segments for the suffix so that the aggregate server I/O bandwidth usage is minimized. Recall that the lower bound on the bandwidth is reached in the server bandwidth optimal scheme with an infinite number of suffix segments. Thus, one of the constraints for the problem must set a limit on this number. The client bandwidth is equal to the server bandwidth, and the client buffer space required decreases with a decreases in the bandwidth. Thus, neither client bandwidth nor client buffer space available limit the number of segment. We assume that there is certain overhead related to maintaining a transmission channel for each segment and this overhead is used to set the upper limit on the number of channels. More formally: minimize $ ' subject to ) $ ' ) $ ' 3) (8) where is the playback rate of the " th video, is the length of the " th video, and is the proxy buffer size. The minimization function expresses server I/O bandwidth usage. The set of variables consists of prefix sizes for each video and the number of suffix segments. The first constraint accounts for the limited proxy buffer size, the second for the total number of channels that the server can maintain. The third constraint ensures that each video has a non-zero prefix cached by proxy. In order to simplify the problem we first consider an asymptotic case with the minimum server bandwidth determined by the prefix size and obtained with the infinite number of segments:. In this way we eliminate a set of variables representing number of suffix segments of each video. 
All these assumptions result in the following problem formulation:

minimize \sum_i r_i \ln(L_i / p_i)
subject to \sum_i p_i <= S, 0 < p_i <= L_i,    (19)

where the prefix sizes p_i are the only variables (r_i and L_i denote the playback rate and the length of the i-th video, and S the proxy buffer size). Intuitively, a longer prefix should be selected for a video with a higher playback rate than for one with a lower rate. A longer video should also have a longer prefix; however, the influence of the video length on the prefix choice is smaller due to the logarithmic function applied to the video length. Thus, an approximate solution can be obtained heuristically by choosing, for each video, a prefix of length proportional to its playback rate. If the prefix size so obtained is larger than the video length, the difference is distributed among the remaining videos, resulting in a weighted max-min fair sharing allocation of the proxy buffer space. The solution obtained for the above problem has to be adjusted to find the number
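A minimal sketch of the allocation heuristic described above: prefixes are chosen proportional to the playback rates, and whenever a prefix would exceed its video's length the video is fully cached and the excess budget is redistributed among the remaining videos, which yields a weighted max-min fair sharing of the proxy buffer. The function name, argument conventions and example values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the heuristic proxy-buffer allocation for multiple videos
# (Section VII-A). rates[i] and lengths[i] describe video i (lengths and
# the budget S in the same byte units); returns a prefix size per video.
def allocate_prefixes(rates, lengths, S):
    n = len(rates)
    prefix = [0.0] * n
    active = set(range(n))
    budget = min(S, sum(lengths))          # cannot cache more than all videos
    while active and budget > 1e-12:
        total_rate = sum(rates[i] for i in active)
        share = {i: budget * rates[i] / total_rate for i in active}
        capped = {i for i in active if share[i] >= lengths[i]}
        if not capped:
            for i in active:               # proportional allocation fits everyone
                prefix[i] += share[i]
            break
        for i in capped:                   # fully cache capped videos and
            budget -= lengths[i]           # redistribute the leftover budget
            prefix[i] = lengths[i]
            active.remove(i)
    return prefix

if __name__ == "__main__":
    rates = [6.0, 3.0, 1.0]                # playback rates (Mbps)
    lengths = [100.0, 400.0, 400.0]        # video sizes (MB)
    print(allocate_prefixes(rates, lengths, 600.0))
```

Each iteration either finishes the proportional allocation or removes at least one fully cached video from the active set, so the loop terminates after at most one pass per video.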


A Network-Conscious Approach to End-to-End Video Delivery over Wide Area Networks Using Proxy Servers* A Network-Conscious Approach to End-to-End Video Delivery over Wide Area Networks Using Proxy Servers* Yuewei Wang, Zhi-Li Zhang, David H.C. Du, and Dongli Su Abstract In this papel; we present a novel

More information

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

DESIGN AND ANALYSIS OF ALGORITHMS. Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

MPEG4 VIDEO OVER PACKET SWITCHED CONNECTION OF THE WCDMA AIR INTERFACE

MPEG4 VIDEO OVER PACKET SWITCHED CONNECTION OF THE WCDMA AIR INTERFACE MPEG4 VIDEO OVER PACKET SWITCHED CONNECTION OF THE WCDMA AIR INTERFACE Jamil Y. Khan 1, Pratik Das 2 School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, NSW 238,

More information

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Seminar on A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Mohammad Iftakher Uddin & Mohammad Mahfuzur Rahman Matrikel Nr: 9003357 Matrikel Nr : 9003358 Masters of

More information

A Dynamic Caching Algorithm Based on Internal Popularity Distribution of Streaming Media

A Dynamic Caching Algorithm Based on Internal Popularity Distribution of Streaming Media A Dynamic Caching Algorithm Based on Internal Popularity Distribution of Streaming Media Jiang Yu 1,2, Chun Tung Chou 2 1 Dept. of Electronics and Information Engineering, Huazhong University of Science

More information

Stretch-Optimal Scheduling for On-Demand Data Broadcasts

Stretch-Optimal Scheduling for On-Demand Data Broadcasts Stretch-Optimal Scheduling for On-Demand Data roadcasts Yiqiong Wu and Guohong Cao Department of Computer Science & Engineering The Pennsylvania State University, University Park, PA 6 E-mail: fywu,gcaog@cse.psu.edu

More information

Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard

Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard Araz Jangiaghdam Seminar Networks and Distributed Systems School of Engineering and Sciences Jacobs University Bremen Campus Ring 1,

More information

Model answer of AS-4159 Operating System B.tech fifth Semester Information technology

Model answer of AS-4159 Operating System B.tech fifth Semester Information technology Q.no I Ii Iii Iv V Vi Vii viii ix x Model answer of AS-4159 Operating System B.tech fifth Semester Information technology Q.1 Objective type Answer d(321) C(Execute more jobs in the same time) Three/three

More information

IEEE TRANSACTIONS ON BROADCASTING, VOL. 51, NO. 4, DECEMBER

IEEE TRANSACTIONS ON BROADCASTING, VOL. 51, NO. 4, DECEMBER IEEE TRANSACTIONS ON BROADCASTING, VOL. 51, NO. 4, DECEMBER 2005 473 The Rate Variability-Distortion (VD) Curve of Encoded Video and Its Impact on Statistical Multiplexing Patrick Seeling and Martin Reisslein

More information

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES

Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES DESIGN AND ANALYSIS OF ALGORITHMS Unit 1 Chapter 4 ITERATIVE ALGORITHM DESIGN ISSUES http://milanvachhani.blogspot.in USE OF LOOPS As we break down algorithm into sub-algorithms, sooner or later we shall

More information

APPLICABILITY OF TCP-FRIENDLY PROTOCOLS FOR REAL-TIME MULTIMEDIA TRANSMISSION***

APPLICABILITY OF TCP-FRIENDLY PROTOCOLS FOR REAL-TIME MULTIMEDIA TRANSMISSION*** POZNAN UNIVERSITY OF TE CHNOLOGY ACADEMIC JOURNALS No 54 Electrical Engineering 2007 Agnieszka CHODOREK* Robert R. CHODOREK** APPLICABILITY OF TCP-FRIENDLY PROTOCOLS FOR REAL-TIME MULTIMEDIA TRANSMISSION***

More information

Real-Time Protocol (RTP)

Real-Time Protocol (RTP) Real-Time Protocol (RTP) Provides standard packet format for real-time application Typically runs over UDP Specifies header fields below Payload Type: 7 bits, providing 128 possible different types of

More information

INTRODUCTION TO ALGORITHMS

INTRODUCTION TO ALGORITHMS UNIT- Introduction: Algorithm: The word algorithm came from the name of a Persian mathematician Abu Jafar Mohammed Ibn Musa Al Khowarizmi (ninth century) An algorithm is simply s set of rules used to perform

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff and Shun Tak Leung Google* Shivesh Kumar Sharma fl4164@wayne.edu Fall 2015 004395771 Overview Google file system is a scalable distributed file system

More information

RECURSIVE PATCHING An Efficient Technique for Multicast Video Streaming

RECURSIVE PATCHING An Efficient Technique for Multicast Video Streaming ECUSIVE ATCHING An Efficient Technique for Multicast Video Streaming Y. W. Wong, Jack Y. B. Lee Department of Information Engineering The Chinese University of Hong Kong, Shatin, N.T., Hong Kong Email:

More information

Application Layer Multicast Algorithm

Application Layer Multicast Algorithm Application Layer Multicast Algorithm Sergio Machado Universitat Politècnica de Catalunya Castelldefels Javier Ozón Universitat Politècnica de Catalunya Castelldefels Abstract This paper presents a multicast

More information

THE CACHE REPLACEMENT POLICY AND ITS SIMULATION RESULTS

THE CACHE REPLACEMENT POLICY AND ITS SIMULATION RESULTS THE CACHE REPLACEMENT POLICY AND ITS SIMULATION RESULTS 1 ZHU QIANG, 2 SUN YUQIANG 1 Zhejiang University of Media and Communications, Hangzhou 310018, P.R. China 2 Changzhou University, Changzhou 213022,

More information

16 Greedy Algorithms

16 Greedy Algorithms 16 Greedy Algorithms Optimization algorithms typically go through a sequence of steps, with a set of choices at each For many optimization problems, using dynamic programming to determine the best choices

More information

White Paper Broadband Multimedia Servers for IPTV Design options with ATCA

White Paper Broadband Multimedia Servers for IPTV Design options with ATCA Internet channels provide individual audiovisual content on demand. Such applications are frequently summarized as IPTV. Applications include the traditional programmed Video on Demand from a library of

More information

Performance of relational database management

Performance of relational database management Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Delivery Network on the Internet

Delivery Network on the Internet Optimal erver Placement for treaming Content Delivery Network on the Internet Xiaojun Hei and Danny H.K. Tsang Department of Electronic and Computer Engineering Hong Kong University of cience and Technology

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

AODV-PA: AODV with Path Accumulation

AODV-PA: AODV with Path Accumulation -PA: with Path Accumulation Sumit Gwalani Elizabeth M. Belding-Royer Department of Computer Science University of California, Santa Barbara fsumitg, ebeldingg@cs.ucsb.edu Charles E. Perkins Communications

More information

Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution

Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution University of Massachusetts Amherst ScholarWorks@UMass Amherst Computer Science Department Faculty Publication Series Computer Science Optimal Proxy Cache Allocation for Efficient Streaming Media Distribution

More information

A Lossless Quality Transmission Algorithm for Stored VBR Video

A Lossless Quality Transmission Algorithm for Stored VBR Video 1 A Lossless Quality Transmission Algorithm for Stored VBR Video Fei Li, Yan Liu and Ishfaq Ahmad Department of Computer Science The Hong Kong University of Science and Technology Clear Water Bay, Kowloon,

More information

Scalability of Multicast Delivery for Non-sequential Streaming Access

Scalability of Multicast Delivery for Non-sequential Streaming Access Scalability of ulticast Delivery for on-sequential Streaming Access Shudong Jin Computer Science Department Boston University, Boston, A 5 jins@cs.bu.edu Azer Bestavros Computer Science Department Boston

More information

Using Multicast for Streaming Videos across Wide Area Networks

Using Multicast for Streaming Videos across Wide Area Networks Using Multicast for Streaming Videos across Wide Area Networks Bing Wang, Subhabrata Sen, Micah Adler and Don Towsley Department of Computer Science University of Massachusetts, Amherst, MA 0003 AT&T Labs-Research,

More information

An optimal bandwidth allocation strategy for the delivery of compressed prerecorded video

An optimal bandwidth allocation strategy for the delivery of compressed prerecorded video Multimedia Systems (1997) 5:297 309 Multimedia Systems c Springer-Verlag 1997 An optimal bandwidth allocation strategy for the delivery of compressed prerecorded video Wu-chi Feng, Farnam Jahanian, Stuart

More information

INTERLEAVING codewords is an important method for

INTERLEAVING codewords is an important method for IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 2, FEBRUARY 2005 597 Multicluster Interleaving on Paths Cycles Anxiao (Andrew) Jiang, Member, IEEE, Jehoshua Bruck, Fellow, IEEE Abstract Interleaving

More information

Module 10 MULTIMEDIA SYNCHRONIZATION

Module 10 MULTIMEDIA SYNCHRONIZATION Module 10 MULTIMEDIA SYNCHRONIZATION Lesson 36 Packet architectures and audio-video interleaving Instructional objectives At the end of this lesson, the students should be able to: 1. Show the packet architecture

More information

Evolved Multimedia Broadcast/Multicast Service (embms) in LTE-advanced

Evolved Multimedia Broadcast/Multicast Service (embms) in LTE-advanced Evolved Multimedia Broadcast/Multicast Service (embms) in LTE-advanced 1 Evolved Multimedia Broadcast/Multicast Service (embms) in LTE-advanced Separation of control plane and data plane Image from: Lecompte

More information

Congestion in Data Networks. Congestion in Data Networks

Congestion in Data Networks. Congestion in Data Networks Congestion in Data Networks CS420/520 Axel Krings 1 Congestion in Data Networks What is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet

More information

On the Max Coloring Problem

On the Max Coloring Problem On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive

More information

554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY /$ IEEE

554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY /$ IEEE 554 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 2, FEBRUARY 2008 Cross-Layer Optimization of MAC and Network Coding in Wireless Queueing Tandem Networks Yalin Evren Sagduyu, Member, IEEE, and

More information

IBM 3850-Mass storage system

IBM 3850-Mass storage system BM 385-Mass storage system by CLAYTON JOHNSON BM Corporation Boulder, Colorado SUMMARY BM's 385, a hierarchical storage system, provides random access to stored data with capacity ranging from 35 X 1()9

More information

Scalable On-Demand Media Streaming with Packet Loss Recovery

Scalable On-Demand Media Streaming with Packet Loss Recovery Scalable On-Demand Media Streaming with Packet Loss Recovery Anirban Mahanti Derek L. Eager Mary K. Vernon David Sundaram-Stukel Dept. of Computer Science University of Saskatchewan Saskatoon, SK S7N 5A9

More information

On Achieving Fairness in the Joint Allocation of Processing and Bandwidth Resources: Principles and Algorithms. Yunkai Zhou and Harish Sethu

On Achieving Fairness in the Joint Allocation of Processing and Bandwidth Resources: Principles and Algorithms. Yunkai Zhou and Harish Sethu On Achieving Fairness in the Joint Allocation of Processing and Bandwidth Resources: Principles and Algorithms Yunkai Zhou and Harish Sethu Technical Report DU-CS-03-02 Department of Computer Science Drexel

More information

Video Quality for Live Adaptive Bit-Rate Streaming: Achieving Consistency and Efficiency

Video Quality for Live Adaptive Bit-Rate Streaming: Achieving Consistency and Efficiency Video Quality for Live Adaptive Bit-Rate Streaming: Achieving Consistency and Efficiency Introduction The video industry is undergoing an unprecedented amount of change. More premium live video content

More information

Trace Traffic Integration into Model-Driven Simulations

Trace Traffic Integration into Model-Driven Simulations Trace Traffic Integration into Model-Driven Simulations Sponsor: Sprint Kert Mezger David W. Petr Technical Report TISL-10230-10 Telecommunications and Information Sciences Laboratory Department of Electrical

More information

Utilization based Spare Capacity Distribution

Utilization based Spare Capacity Distribution Utilization based Spare Capacity Distribution Attila Zabos, Robert I. Davis, Alan Burns Department of Computer Science University of York Abstract Flexible real-time applications have predefined temporal

More information

Query Processing and Alternative Search Structures. Indexing common words

Query Processing and Alternative Search Structures. Indexing common words Query Processing and Alternative Search Structures CS 510 Winter 2007 1 Indexing common words What is the indexing overhead for a common term? I.e., does leaving out stopwords help? Consider a word such

More information

THE EMERGENCE of high-speed internetworks facilitates. Smoothing Variable-Bit-Rate Video in an Internetwork

THE EMERGENCE of high-speed internetworks facilitates. Smoothing Variable-Bit-Rate Video in an Internetwork 202 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 7, NO. 2, APRIL 1999 Smoothing Variable-Bit-Rate Video in an Internetwork Jennifer Rexford, Member, IEEE, Don Towsley, Fellow, IEEE Abstract The burstiness

More information

Application of SDN: Load Balancing & Traffic Engineering

Application of SDN: Load Balancing & Traffic Engineering Application of SDN: Load Balancing & Traffic Engineering Outline 1 OpenFlow-Based Server Load Balancing Gone Wild Introduction OpenFlow Solution Partitioning the Client Traffic Transitioning With Connection

More information

Hierarchical Intelligent Cuttings: A Dynamic Multi-dimensional Packet Classification Algorithm

Hierarchical Intelligent Cuttings: A Dynamic Multi-dimensional Packet Classification Algorithm 161 CHAPTER 5 Hierarchical Intelligent Cuttings: A Dynamic Multi-dimensional Packet Classification Algorithm 1 Introduction We saw in the previous chapter that real-life classifiers exhibit structure and

More information

A Framework for Space and Time Efficient Scheduling of Parallelism

A Framework for Space and Time Efficient Scheduling of Parallelism A Framework for Space and Time Efficient Scheduling of Parallelism Girija J. Narlikar Guy E. Blelloch December 996 CMU-CS-96-97 School of Computer Science Carnegie Mellon University Pittsburgh, PA 523

More information

Effect of TCP and UDP Parameters on the quality of Video streaming delivery over The Internet

Effect of TCP and UDP Parameters on the quality of Video streaming delivery over The Internet Effect of TCP and UDP Parameters on the quality of Video streaming delivery over The Internet MAZHAR B. TAYEL 1, ASHRAF A. TAHA 2 1 Electrical Engineering Department, Faculty of Engineering 1 Alexandria

More information

Multi-Way Number Partitioning

Multi-Way Number Partitioning Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09) Multi-Way Number Partitioning Richard E. Korf Computer Science Department University of California,

More information

Scalable proxy caching algorithm minimizing clientõs buffer size and channel bandwidth q

Scalable proxy caching algorithm minimizing clientõs buffer size and channel bandwidth q J. Vis. Commun. Image R. xxx (2005) xxx xxx www.elsevier.com/locate/jvci Scalable proxy caching algorithm minimizing clientõs buffer size and channel bandwidth q Hyung Rai Oh, Hwangjun Song * Department

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 11 Coding Strategies and Introduction to Huffman Coding The Fundamental

More information

RTP: A Transport Protocol for Real-Time Applications

RTP: A Transport Protocol for Real-Time Applications RTP: A Transport Protocol for Real-Time Applications Provides end-to-end delivery services for data with real-time characteristics, such as interactive audio and video. Those services include payload type

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

STATISTICAL PROPERTIES OF MPEG STREAMS AND ISSUES FOR TRAMSMISSION OF VIDEO INFORMATION IN HIGH SPEED NETWORKS

STATISTICAL PROPERTIES OF MPEG STREAMS AND ISSUES FOR TRAMSMISSION OF VIDEO INFORMATION IN HIGH SPEED NETWORKS STATISTICAL PROPERTIES OF MPEG STREAMS AND ISSUES FOR TRAMSMISSION OF VIDEO INFORMATION IN HIGH SPEED NETWORKS Marek Natkaniec, Krzysztof Wajda {wajda,natkanie}@kt.agh.edu.pl. Telecommunications Department,

More information

On Optimal Traffic Grooming in WDM Rings

On Optimal Traffic Grooming in WDM Rings 110 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 20, NO. 1, JANUARY 2002 On Optimal Traffic Grooming in WDM Rings Rudra Dutta, Student Member, IEEE, and George N. Rouskas, Senior Member, IEEE

More information

Introduction: Two motivating examples for the analytical approach

Introduction: Two motivating examples for the analytical approach Introduction: Two motivating examples for the analytical approach Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. D. Manjunath Outline

More information

The Transmitted Strategy of Proxy Cache Based on Segmented Video

The Transmitted Strategy of Proxy Cache Based on Segmented Video The Transmitted Strategy of Proxy Cache Based on Segmented Video Zhiwen Xu, Xiaoxin Guo, Yunjie Pang, Zhengxuan Wang Faculty of Computer Science and Technology, Jilin University, Changchun City, 130012,

More information

Energy-Aware Routing: a Reality Check

Energy-Aware Routing: a Reality Check 1 Energy-Aware Routing: a Reality Check Aruna Prem Bianzino 1, Claude Chaudet 1, Federico Larroca 2, Dario Rossi 1, Jean-Louis Rougier 1 1 Institut TELECOM, TELECOM ParisTech, CNRS LTCI UMR 5141, Paris,

More information

Chapter 13 Strong Scaling

Chapter 13 Strong Scaling Chapter 13 Strong Scaling Part I. Preliminaries Part II. Tightly Coupled Multicore Chapter 6. Parallel Loops Chapter 7. Parallel Loop Schedules Chapter 8. Parallel Reduction Chapter 9. Reduction Variables

More information

Coalition Formation towards Energy-Efficient Collaborative Mobile Computing. Liyao Xiang, Baochun Li, Bo Li Aug. 3, 2015

Coalition Formation towards Energy-Efficient Collaborative Mobile Computing. Liyao Xiang, Baochun Li, Bo Li Aug. 3, 2015 Coalition Formation towards Energy-Efficient Collaborative Mobile Computing Liyao Xiang, Baochun Li, Bo Li Aug. 3, 2015 Collaborative Mobile Computing Mobile offloading: migrating the computation-intensive

More information

Network Working Group Request for Comments: 1046 ISI February A Queuing Algorithm to Provide Type-of-Service for IP Links

Network Working Group Request for Comments: 1046 ISI February A Queuing Algorithm to Provide Type-of-Service for IP Links Network Working Group Request for Comments: 1046 W. Prue J. Postel ISI February 1988 A Queuing Algorithm to Provide Type-of-Service for IP Links Status of this Memo This memo is intended to explore how

More information

Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS

Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS Advanced Topics UNIT 2 PERFORMANCE EVALUATIONS Structure Page Nos. 2.0 Introduction 4 2. Objectives 5 2.2 Metrics for Performance Evaluation 5 2.2. Running Time 2.2.2 Speed Up 2.2.3 Efficiency 2.3 Factors

More information

HTRC Data API Performance Study

HTRC Data API Performance Study HTRC Data API Performance Study Yiming Sun, Beth Plale, Jiaan Zeng Amazon Indiana University Bloomington {plale, jiaazeng}@cs.indiana.edu Abstract HathiTrust Research Center (HTRC) allows users to access

More information

Greedy Algorithms CHAPTER 16

Greedy Algorithms CHAPTER 16 CHAPTER 16 Greedy Algorithms In dynamic programming, the optimal solution is described in a recursive manner, and then is computed ``bottom up''. Dynamic programming is a powerful technique, but it often

More information

Chapter 4: Implicit Error Detection

Chapter 4: Implicit Error Detection 4. Chpter 5 Chapter 4: Implicit Error Detection Contents 4.1 Introduction... 4-2 4.2 Network error correction... 4-2 4.3 Implicit error detection... 4-3 4.4 Mathematical model... 4-6 4.5 Simulation setup

More information

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA Chapter 1 : BioMath: Transformation of Graphs Use the results in part (a) to identify the vertex of the parabola. c. Find a vertical line on your graph paper so that when you fold the paper, the left portion

More information

Issues of Long-Hop and Short-Hop Routing in Mobile Ad Hoc Networks: A Comprehensive Study

Issues of Long-Hop and Short-Hop Routing in Mobile Ad Hoc Networks: A Comprehensive Study Issues of Long-Hop and Short-Hop Routing in Mobile Ad Hoc Networks: A Comprehensive Study M. Tarique, A. Hossain, R. Islam and C. Akram Hossain Dept. of Electrical and Electronic Engineering, American

More information

Experiments with Broadcast Routing Algorithms for Energy- Constrained Mobile Adhoc Networks. (Due in class on 7 March 2002)

Experiments with Broadcast Routing Algorithms for Energy- Constrained Mobile Adhoc Networks. (Due in class on 7 March 2002) EE Project Description Winter Experiments with Broadcast Routing Algorithms for Energy- Constrained Mobile Adhoc Networks (Due in class on March ) Abstract In this project, you will experiment with the

More information

Lecture 10: Performance Metrics. Shantanu Dutt ECE Dept. UIC

Lecture 10: Performance Metrics. Shantanu Dutt ECE Dept. UIC Lecture 10: Performance Metrics Shantanu Dutt ECE Dept. UIC Acknowledgement Adapted from Chapter 5 slides of the text, by A. Grama w/ a few changes, augmentations and corrections in colored text by Shantanu

More information

Lecture 18: Video Streaming

Lecture 18: Video Streaming MIT 6.829: Computer Networks Fall 2017 Lecture 18: Video Streaming Scribe: Zhihong Luo, Francesco Tonolini 1 Overview This lecture is on a specific networking application: video streaming. In particular,

More information

On the Robustness of Distributed Computing Networks

On the Robustness of Distributed Computing Networks 1 On the Robustness of Distributed Computing Networks Jianan Zhang, Hyang-Won Lee, and Eytan Modiano Lab for Information and Decision Systems, Massachusetts Institute of Technology, USA Dept. of Software,

More information

1 More configuration model

1 More configuration model 1 More configuration model In the last lecture, we explored the definition of the configuration model, a simple method for drawing networks from the ensemble, and derived some of its mathematical properties.

More information