
A Proxy Caching Scheme for Continuous Media Streams on the Internet

Eun-Ji Lim, Seong-Ho Park, Hyeon-Ok Hong, Ki-Dong Chung
Department of Computer Science, Pusan National University
Jang Jun Dong, San 30, Keum Jung Ku, Pusan, Korea, Zip-code 609735
E-mail: {ejlim, shpark, hohong, kdchung}@melon.cs.pusan.ac.kr
Phone: +82-51-510-2877, Fax: +82-51-515-2208
Contact author: Eun-Ji Lim
Subject area: Multimedia Communications and Systems
Keywords: Caching, Proxy, Continuous Media, Replacement, Internet

Abstract

The Internet has made it easy to disseminate and access vast amounts of information, but the dramatic increase in the number of Internet users causes server overload, network congestion, and perceived latency. In addition, the amount of continuous media data such as audio and video on the Internet is growing rapidly. In this paper, we propose a proxy caching scheme that stores a portion of a continuous media stream, or the entire stream. The proposed scheme reduces initial latency and maximizes the amount of data served directly from the cache without accessing the remote server. By caching the initial fraction of a stream, the service startup latency can be reduced, and by varying the size of the cached fraction according to changes in stream popularity, we can utilize the cache space efficiently and maximize the amount of data served directly from the cache. For cache replacement we use the caching utility of each stream, which represents the correlation between the popularity of a stream and the size of the storage space allocated to it. The popularity of a continuous media stream should be measured differently from that of traditional data such as text and images; we propose measuring it by the amount of data played back by users. We have performed simulations to evaluate our caching policy.
Simulation results show that our caching policy outperforms other caching algorithms such as LRU, LFU, and SIZE in terms of byte hit ratio (BHR), initial latency, and replacement overhead.

1. Introduction

With the dramatic growth of the Internet, it has become easy for people to access vast amounts of geographically distributed information. But the Internet is currently suffering from problems such as server overload, network congestion, and increased response time. Network caching is one of the solutions to these problems [1,2,3,4,5]. By storing frequently accessed data at a cache closer to the user, perceived latency can be reduced significantly. In addition, since data stored in a cache can be shared by multiple users, the number of accesses to the remote server can be reduced, which in turn reduces server load and network traffic. However, existing Web caching techniques target traditional data such as text and images. Recently, the number of continuous media streams such as audio and video has been increasing rapidly on the Internet [6], and multimedia applications such as VoD (Video on Demand) are widely used. Due to the large size of continuous media objects compared to traditional data, existing Web caching techniques, which store entire objects, cannot be applied to continuous media objects.

In this paper, we propose a caching scheme for continuous media objects. Because of their large size, we do not store entire objects. By storing the initial fraction of a media stream, the initial service latency can be reduced. Moreover, by varying the size of the cached initial fraction according to the popularity of each stream, the limited storage space of the cache can be utilized efficiently. We also propose a cache replacement policy that uses the correlation between the size of the cache space allocated to each stream and the total amount of its data played back by users. Section 2 presents work related to this study. Section 3 describes the details of our caching scheme, and Section 4 presents a performance evaluation of our scheme through simulation. Finally, we conclude this paper and discuss future work.

2. Related Works

2.1 Web Caching

Most previous studies of Web caching have focused on the cache replacement algorithm. These studies present schemes to utilize the limited resources of a cache efficiently by evicting the least

valuable objects from the cache and keeping the most valuable objects in the cache, using access information about the objects. In general, access frequency, access recency, and object size are widely used as access information [7,8]. Examples of Web caching algorithms are LRU, LFU, SIZE, LRU-SIZE, LRU-MIN, LRFU, etc. [9]. However, since these schemes cache whole objects and target small traditional data such as text and images, they are not appropriate for continuous media streams, which have large object sizes.

2.2 Continuous Media Caching

Caching for continuous media data has been studied in the context of memory caches. Due to the large size of continuous media objects, only a portion of a stream is cached, and access sequentiality must be considered. Well-known schemes are interval caching and distance caching [10,11]. Since these schemes are memory caching, they are not appropriate for a disk cache, which has limited bandwidth. Recently, proxy caching schemes for continuous media data have been studied [9,12,13,14,15,16]. In [9], a resource-based caching (RBC) algorithm that considers cache disk bandwidth and space was proposed. In [12], a prefix caching scheme to reduce latency and perform workahead smoothing was proposed. In [13], a caching scheme that exploits the characteristics of layered-encoded streams was described. In [14], a video delivery technique called video staging, which prefetches a predetermined amount of video data and stores it a priori at proxy servers, was developed.

3. Proxy Caching

In general, a proxy cache on the Internet is placed along the path from the server to the user, close to the user, as shown in Figure 1. The proxy stores a portion of a stream or an entire stream in its cache. Upon receiving a request for one of the cached streams, the proxy transmits the data directly from the cache.
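This serve-from-cache behaviour can be sketched as follows. This is our illustration, not the authors' implementation: serve_stream and fetch_from_origin are hypothetical names, and a real proxy would pace delivery at the playback rate rather than iterate instantly.

```python
# Sketch: the proxy streams cached segments immediately and fetches only the
# uncached tail of the stream from the remote server.
def serve_stream(cached_segments, total_segments, fetch_from_origin):
    """Yield (segment_index, source) pairs in playback order."""
    for i in range(total_segments):
        if i < cached_segments:
            yield i, "cache"                # no origin round-trip needed
        else:
            yield i, fetch_from_origin(i)   # fetched while earlier segments play

served = list(serve_stream(3, 5, lambda i: "origin"))
# the first three segments come from the proxy cache, the last two from origin
```

Because the first segments come straight from the cache, the client's startup latency depends only on the proxy-to-client delay, not on the proxy-to-server delay.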

[figure 1] Proxy caches on the Internet: proxies sit between the remote server and the clients' LANs.

3.1 Proxy caching scheme for continuous media

Due to the large size of continuous media data, if the proxy stores entire objects as it does for traditional data, only a small number of objects can be cached. This results in inefficient utilization of cache space and increased replacement overhead. Thus, we propose a scheme that caches a portion of a stream. The proxy stores the initial fraction of a stream first and then caches the following data progressively. In this way, the amount of cached data can vary dynamically with the popularity of each stream: if the amount of data requested by users increases, the proxy caches more data for that stream; if it decreases, the proxy caches less. A very popular stream can be cached entirely, while a stream with low popularity may retain only a small amount of its headmost data or be evicted from the cache entirely. Thus, each stream occupies cache space appropriate to its popularity. The objective of this scheme is to keep the cache space of each stream proportional to its popularity. With our caching scheme, the limited cache space is utilized efficiently, and we achieve a higher cache hit rate, lower latency, and fewer replacements than the other schemes.
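The grow-and-shrink behaviour of a stream's cached prefix can be sketched in Python. This is our illustration (the class and method names are ours); sizes are in MB, using the 10MB segment size from the paper's simulation parameters.

```python
SEGMENT_SIZE = 10  # MB, matching the segment size in Table 1

class CachedStream:
    """Tracks the cached prefix of one stream; the prefix grows or shrinks
    one segment at a time as the stream's popularity changes."""

    def __init__(self, total_size_mb):
        self.total_size = total_size_mb
        self.cached = 0  # size of the cached initial fraction, in MB

    def grow(self):
        """Popularity rose: cache one more segment after the current prefix."""
        if self.cached < self.total_size:
            self.cached = min(self.cached + SEGMENT_SIZE, self.total_size)

    def shrink(self):
        """Popularity fell: evict the last cached segment (prefix shrinks
        back toward the first segment, which is evicted last)."""
        if self.cached > 0:
            self.cached = max(self.cached - SEGMENT_SIZE, 0)

s = CachedStream(total_size_mb=35)
s.grow(); s.grow()   # popularity rising: prefix now covers 20 MB
s.shrink()           # popularity falling: prefix back to 10 MB
```

Evicting from the tail first is what keeps the initial segments resident, which is why startup latency stays low even for streams whose popularity is declining.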

In this paper, we define the popularity of a stream as the total amount of data of that stream played back by users during an interval. Caching and replacement of stream data are performed at the granularity of fixed-size segments, based on popularity. When the popularity of the streams does not vary much, few replacements are needed and the cache is not overloaded, while many replacements are required when the popularity of the streams changes rapidly. Figure 2 shows the stream caching pattern of our caching scheme.

[figure 2] Caching pattern of a stream based on change in popularity: the cached prefix of stream i (segments s0, s1, s2, ...) grows when popularity increases and shrinks when it decreases.

Stream i is divided into equal-size segments s0, s1, s2, .... Suppose s0~s5 are currently cached. If the popularity of stream i increases, further segments will be cached, starting from s6. On the contrary, if the popularity decreases, segments will be evicted gradually, from s5 back toward s0. As a result, the initial segments of a stream are likely to be kept in the cache, so the initial latency perceived by users can be reduced. Upon receiving a request for a particular stream, the proxy immediately transmits the initial data of the stream from the cache to the user while fetching the remainder of the stream from the server. Whenever a particular stream is requested by a user, the proxy server determines whether to cache more segments based on the correlation between the popularity of the stream and the amount of its data already cached. If there is not enough free cache space for a new segment, the cache replacement algorithm is performed.

3.2 Replacement policy

If there is not enough free space for a new segment, one of the existing segments has to be evicted from the cache. In this paper, we use caching utility to perform cache replacement. The caching utility value represents the correlation between the popularity of a stream and the size of the cache space allocated to it.

To perform replacement, the proxy server calculates the caching utility value of all cached streams and selects as the victim the stream with the minimum value. The last cached segment of the victim stream is selected to be replaced. We refer to this replacement policy as Smallest Caching Utility (SCU).

3.2.1 Caching utility

The caching utility of a stream depends on the benefit achievable by caching the stream and the cost of the resources used to cache it. We define the caching utility of a stream as the ratio of its caching benefit to its caching cost:

    CachingUtility = CachingBenefit / CachingCost

Caching cost: we use the size of the storage space allocated to the stream, i.e., the size of the cached portion of the stream. Let the caching cost of stream i be C_i.

Caching benefit: the achievable benefit of caching a stream is the amount of data that can be served directly from the cache. It depends on the number of accesses to the particular stream, i.e., its popularity. We define the popularity of a stream as the total amount of its data played back by users during an interval. A continuous media object is large, and its playback duration ranges widely, from several seconds to one or two hours. Thus, it is proper to use the total amount of data played back by users instead of the access frequency used for traditional data.

Figure 3 shows how to measure the total amount of data of a particular stream played back by users during an interval. Let the current time be t and the window size be Δ. S1, S2, ..., S5 denote streams that are active at time t or that finished playback within [t-Δ, t]. The solid lines indicate the playback periods that fall within [t-Δ, t]; the total amount of data played back within these periods is used to calculate the popularity of the stream.

[figure 3] Total amount of data played back within the interval [t-Δ, t]: playback periods of streams S1~S5 relative to the window.

Let P_i,j denote the amount of data of stream i played back by user j within [t-Δ, t], and let P_i denote the total amount of data played back by all users for stream i within [t-Δ, t]. Assuming that k users have played back stream i during the window, P_i is given by

    P_i = Σ_{j=1}^{k} P_i,j

where Δ denotes the size of the time window. By using the time window, P_i reflects access recency: the more recently a stream has been played back, the larger P_i becomes, and the less recently it has been played back, the smaller P_i becomes. Finally, the caching utility of stream i, CU_i, is given by

    CU_i = P_i / C_i = ( Σ_{j=1}^{k} P_i,j ) / C_i
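A minimal sketch of the popularity and caching-utility computation defined above. This is our illustration: the event-log representation (a list of (stream_id, time, bytes_played) records) is an assumption, and delta stands for the window size Δ.

```python
def popularity(playback_log, stream_id, t, delta):
    """P_i: bytes of stream_id played back inside the window [t - delta, t].
    playback_log is a list of (stream_id, time, bytes_played) events."""
    return sum(b for (sid, ts, b) in playback_log
               if sid == stream_id and t - delta <= ts <= t)

def caching_utility(playback_log, stream_id, cached_bytes, t, delta):
    """CU_i = P_i / C_i, with C_i the size of the stream's cached portion."""
    return popularity(playback_log, stream_id, t, delta) / cached_bytes

log = [("A", 5, 100), ("A", 95, 300), ("B", 90, 50)]
p = popularity(log, "A", t=100, delta=30)        # only the event at t=95 counts
cu = caching_utility(log, "A", cached_bytes=150, t=100, delta=30)
```

Note how the event at t=5 falls outside the window and contributes nothing: this is exactly how the window makes P_i decay for streams that have not been played recently.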

3.2.2 Replacement algorithm

The main objective of our cache replacement policy, SCU, is to keep the size of the cache space allocated to each stream proportional to its popularity, that is, to keep the caching utility of all streams similar. By deleting from the cache the last cached segment of the stream with the smallest caching utility value, the caching utility of that stream becomes larger.

    s          : segment size
    L_i        : total length of stream i
    C_i        : the amount of cached data of stream i
    P_i        : the total amount of data played back by users for stream i
    CU_i(C_i)  : caching utility of stream i for cached data C_i

    save_segment_in_cache(stream i, segment k) {
        // if there is free space in the cache
        if (free space in cache >= s) {
            ε = L_i - C_i;   // the size of the portion of stream i that is not cached
            if (free space in cache >= ε)
                cache the whole rear portion of stream i;
            else
                cache a rear portion of stream i as large as the free space;
            return;
        }
        // there is not enough free space:
        // calculate the caching utility of stream i if segment k were cached
        CU_i(C_i + s) = P_i / (C_i + s);
        for (all cached streams)
            calculate CU_i;
        select the stream with the smallest CU_i as the victim stream v;
        if (CU_v(C_v - s) > CU_i(C_i + s))
            return;   // caching segment k would not improve overall utility
        evict the last cached segment of stream v from the cache;
        cache segment k of stream i;
        return;
    }

    [figure 4] SCU replacement algorithm
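The admission/replacement decision of Figure 4 can be written as runnable Python. This is our simplification, not the authors' code: byte counters stand in for segment lists, the free-space branch is collapsed to a single check, all names are ours, and the corner case where the victim is the requesting stream itself is not handled.

```python
def scu_admit(streams, free_space, seg_size, i):
    """Decide whether to cache one more segment of stream i.
    streams: {id: {"cached": bytes C_i, "played": bytes P_i}}.
    Returns ("cache", None), ("reject", None), or ("replace", victim_id)."""
    if free_space >= seg_size:
        return ("cache", None)          # room available: no replacement needed
    # utility of stream i if the new segment were cached: P_i / (C_i + s)
    cu_new = streams[i]["played"] / (streams[i]["cached"] + seg_size)
    # victim = cached stream with the smallest caching utility P_v / C_v
    v = min(streams, key=lambda sid: streams[sid]["played"] / streams[sid]["cached"])
    remaining = streams[v]["cached"] - seg_size
    if remaining > 0 and streams[v]["played"] / remaining > cu_new:
        return ("reject", None)         # eviction would hurt more than caching helps
    return ("replace", v)               # evict victim's last segment, cache the new one

streams = {"hot":  {"cached": 20, "played": 400},
           "cold": {"cached": 30, "played": 30}}
decision = scu_admit(streams, free_space=0, seg_size=10, i="hot")
# "cold" has the smallest utility (30/30 = 1), and shrinking it to 20 bytes
# still leaves it less valuable than the new "hot" segment, so it is the victim
```

The rejection test mirrors the pseudocode's CU_v(C_v - s) > CU_i(C_i + s) guard: a segment is admitted only if the displaced stream would not end up more valuable than the newcomer.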

4. Performance evaluation

Our caching scheme caches a portion of a continuous media stream, or the entire stream, to reduce the average initial latency, the server load, and the network traffic. We evaluate the performance of our caching scheme, Smallest Caching Utility (SCU), and compare it with other well-known algorithms through simulation.

4.1 Simulation environment

In general, the hit rate (HR) has been used as the performance metric for caching schemes for traditional data such as text and images. But it is not proper for continuous media data, which is not an atomic object. Therefore, to measure the performance of our caching scheme more accurately, we use the byte hit rate (BHR), which suits the characteristics of continuous media. In addition, we use the average initial service latency per request and the number of replacements as performance metrics. The simulation parameters are shown in Table 1.

    Media object size                                     : 280MB ~ 420MB
    Request distribution                                  : Zipf distribution with θ = 0.27
    Average request interarrival time                     : 10 sec
    Number of media objects                               : 200
    Simulation duration                                   : processing time of 8000 user requests
    Transmission delay between proxy server and user      : 100ms
    Transmission delay between remote server and proxy    : 200ms
    Segment size                                          : 10MB

    [table 1] simulation parameters

We assume that stream sizes are distributed uniformly from 280MB to 420MB (about 25 min. ~ 35 min.). User requests are generated with exponentially distributed interarrival times with a mean of 10 sec, and the popularity distribution conforms to a Zipf distribution. There are 200 continuous media streams
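A stdlib-only sketch of the workload described by Table 1. This is our code, and one detail is an assumption: we use the common convention for a Zipf skew parameter θ in which the popularity of rank r is proportional to 1/r^(1-θ).

```python
import random

def zipf_weights(n, theta):
    """Popularity weight of rank r (1-based), proportional to 1 / r^(1 - theta)."""
    return [1.0 / (r ** (1.0 - theta)) for r in range(1, n + 1)]

def generate_requests(num_requests, num_objects=200, theta=0.27, mean_gap=10.0):
    """Generate (arrival_time, object_id) pairs: exponential inter-arrival
    times (mean 10 s) and Zipf-distributed object popularity, as in Table 1."""
    weights = zipf_weights(num_objects, theta)
    t, reqs = 0.0, []
    for _ in range(num_requests):
        t += random.expovariate(1.0 / mean_gap)   # exponential inter-arrival gap
        obj = random.choices(range(num_objects), weights=weights)[0]
        reqs.append((t, obj))
    return reqs

reqs = generate_requests(1000)
```

Feeding such a trace to competing policies (SCU, LRU, LFU, SIZE) and counting bytes served from the cache versus bytes fetched from the origin yields the BHR curves of Figure 6.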

in the remote server. We ran the simulation for the processing time of 8000 user requests.

4.2 Simulation results

The amount of cached data per stream according to popularity. The main objective of our cache replacement algorithm is to keep the amount of cached data for each stream proportional to its popularity. The first experiment shows that the SCU algorithm achieves this objective. Figures 5(a) and 5(b) show the amount of cached data of each stream one hour after the experiment started: streams of higher popularity have more cached data than streams of lower popularity. For a popular stream, since a large amount of its data is transmitted directly from the cache, the remote server load is reduced. In addition, by caching the initial fraction of each stream, the initial service latency is reduced for more user requests.

[figure 5] The amount of cached data (MB) as a function of the popularity of each stream: (a) cache size 5GB, (b) cache size 23GB.

Performance analysis of the replacement policy. Figure 6 compares our replacement policy with other well-known Web caching algorithms: LRU, LFU, and SIZE. First, Figure 6(a) shows the BHR as a function of cache size. SCU performs 14~36% better than LRU, about 5% better than LFU, and much better than SIZE. Figure 6(b) shows the average initial latency as a function of cache size, on the assumption that the transmission delay between the proxy server and the user is 100ms and the delay between the proxy server and the remote server is 200ms. The SCU algorithm achieves 25% lower average initial latency than LFU and 33% lower than LRU. Figure 6(c) depicts the number of replacements. SCU performs very few replacements when the cache is small, the most replacements when the cache size is 10GB, and fewer replacements again beyond that point. The reason is that the SCU algorithm caches the initial fraction of a stream preferentially: when the cache is too small, only small initial fractions of most streams are cached, and these data are rarely evicted, so very few replacements occur. Even though SCU performs more replacements than LFU when the cache is large, SCU replaces at the granularity of a segment whereas LFU caches at the object level, so the replacement overhead of the SCU algorithm is not significant.

[figure 6] (a) BHR and (b) initial latency as functions of cache size (5~32GB) for SCU, LRU, LFU, and SIZE.

[figure 6] (c) Number of replacements as a function of cache size (5~32GB) for SCU, LRU, LFU, and SIZE.

5. Conclusions

Proxy caching is one solution for improving the performance of multimedia service systems on the Internet. By storing frequently accessed data in a cache, perceived latency, server load, and network traffic can be reduced significantly. However, since existing Web caching techniques target traditional data such as text and images, they are not suitable for continuous media data. In this paper, we proposed a proxy caching scheme that stores a portion of a stream or the entire stream. By storing the initial fraction of a stream, the initial service latency can be reduced, and by varying the size of the cached initial fraction according to the popularity of each stream, the limited cache space can be utilized efficiently. Moreover, we proposed a replacement criterion, the caching utility, which represents the correlation between the popularity of a stream and the size of the storage space allocated to it; the cache replacement algorithm operates on the caching utility value of each stream. Through simulation, we evaluated the performance of our caching scheme and compared it with other well-known Web caching algorithms; the results show that our caching scheme outperforms them. In the future, we will study ways to further improve cache performance by prefetching data that is expected to be accessed, and we will address the replacement overhead that can occur when the popularity of a stream changes suddenly.

6. References

[1] A. Chankhunthod, P. B. Danzig, C. Neerdaels, M. F. Schwartz, and K. J. Worrell, "A Hierarchical Internet Object Cache", in Proc. of the 1996 USENIX Technical Conference, January 1996.
[2] A. Luotonen and K. Altis, "World Wide Web Proxies", in Proc. of the First International Conference on the WWW, May 1994.
[3] R. Tewari, M. Dahlin, H. M. Vin, and J. S. Kay, "Beyond Hierarchies: Design Considerations for Distributed Caching on the Internet", in Proc. ICDCS '99, Austin, May 1999.
[4] J. Wang, "A Survey of Web Caching Schemes for the Internet", Technical Report TR99-1747, Department of Computer Science, Cornell University.
[5] J. Gwertzman and M. Seltzer, "The Case for Geographical Push-Caching", in Proc. of the 1995 Workshop on Hot Operating Systems, 1995.
[6] G. A. Gibson, J. Vitter, and J. Wilkes, "Storage and I/O Issues in Large-Scale Computing", ACM Workshop on Strategic Directions in Computing Research, ACM Computing Surveys, 1996.
[7] J. T. Robinson and M. V. Devarakonda, "Data Cache Management Using Frequency-Based Replacement", in Proc. of the ACM SIGMETRICS Conference, 1990.
[8] E. J. O'Neil, P. E. O'Neil, and G. Weikum, "The LRU-K Page Replacement Algorithm for Database Disk Buffering", in Proc. of the International Conference on Management of Data, 1993.
[9] R. Tewari, H. M. Vin, A. Dan, and D. Sitaram, "Resource-Based Caching for Web Servers", in Proc. SPIE/ACM Conference on Multimedia Computing and Networking, January 1998.
[10] A. Dan, D. Dias, R. Mukherjee, D. Sitaram, and R. Tewari, "Buffering and Caching in Large-Scale Video Servers", in Proc. of IEEE COMPCON, March 1995.
[11] B. Ozden, R. Rastogi, and A. Silberschatz, "Buffer Replacement Algorithms for Multimedia Storage Systems", in Proc. of the International Conference on Multimedia Computing and Systems, June 1996.
[12] S. Sen, J. Rexford, and D. Towsley, "Proxy Prefix Caching for Multimedia Streams", in Proc. IEEE INFOCOM, March 1999.
[13] R. Rejaie, H. Yu, M. Handley, and D. Estrin, "Multimedia Proxy Caching Mechanism for Quality Adaptive Streaming Applications in the Internet", in Proc. of IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000.
[14] Y. Wang, Z.-L. Zhang, D. Du, and D. Su, "A Network-Conscious Approach to End-to-End Video Delivery over Wide Area Networks Using Proxy Servers", in Proc. IEEE INFOCOM, April 1998.
[15] S. Sahu, P. Shenoy, and D. Towsley, "Design Considerations for Integrated Proxy Servers", in Proc. IEEE NOSSDAV '99, June 1999.
[16] M. Reisslein, F. Hartanto, and K. W. Ross, "Interactive Video Streaming with Proxy Servers", in Proc. IEEE INFOCOM, April 2000.