RECURSIVE PATCHING: An Efficient Technique for Multicast Video Streaming


Y. W. Wong, Jack Y. B. Lee
Department of Information Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
Email: ywwong1@ie.cuhk.edu.hk, yblee@ie.cuhk.edu.hk

Keywords: Video-on-demand, multicast, patching, transition, recursive.

Abstract: Patching and transition patching are two techniques proposed for building efficient video-on-demand (VoD) systems. Patching works by allowing a client to play back video data from a patching stream while caching video data from another multicast video stream for later playback. The patching stream can be released once video playback reaches the point where the cached data begins, and playback continues via the cache and the shared multicast channel for the rest of the session. Transition patching takes this technique one step further by allowing a new client to cache video data not only from a full-length multicast channel but also from a nearby in-progress patching channel, to further reduce resource consumption. This study generalizes these patching techniques into a recursive patching scheme in which a new client can cache video data recursively from multiple patching streams to further reduce resource consumption. The recursive patching scheme unifies the existing patching schemes as special cases. Simulation results show that it can achieve significant reductions (e.g. 60%~80%) in startup latency at the same load and with the same system resources.

1 INTRODUCTION

Although extensive research on video-on-demand (VoD) has been conducted in the last decade, commercial deployment of VoD services is still far from commonplace. Apart from content copyright issues, the main reason is the immense network and server resources needed to serve a large user population.
As traditional VoD systems make use of unicast for video streaming, the required server and network bandwidth grows linearly with the number of subscribers, thereby rendering metropolitan-scale deployment economically difficult, if not impossible. To tackle this challenge, researchers have recently investigated a number of innovative video streaming architectures based on network multicast to dramatically reduce the server and network resources needed in large-scale VoD systems. One technique, called batching, groups users waiting for the same video data and then serves them using a single multicast channel (Dan et al., 1994a; Dan et al., 1994b; Dan et al., 1996; Aggarwal et al., 1996; Shachnai and Yu, 1997). This batching process can occur passively while the users are waiting, or actively by delaying the service of earlier users to wait for later users to join the batch. Various batching policies have been proposed in recent years, including first-come-first-serve (FCFS) and maximum queue length (MQL) proposed by Dan et al. (1994a), maximum factored queue (MFQ) proposed by Aggarwal et al. (1996), Max_Batch and Min_Idle proposed by Shachnai and Yu (1997), etc. Another technique, called patching, exploits client-side bandwidth and buffer space to merge users on separate transmission channels onto an existing multicast channel (Liao and Li, 1997; Hua et al., 1998; Cai et al., 1999; Carter et al., 1997; Eager et al., 2000). The idea is to let a client cache data from a nearby (in playback time) multicast transmission channel while sustaining playback with data from another transmission channel called a patching channel in (Hua et al., 1998). This patching channel can be released once video playback reaches the point where the cached data begins, and playback continues via the cache and the shared multicast channel for the rest of the session. 
Cai and Hua (1999) took this patching technique one step further by allowing a new client to cache video data not only from a full-length multicast channel but also from a nearby in-progress patching channel. This technique, called transition patching, can further reduce resource consumption compared to simple patching.

[Figure 1: Streaming, data reception and playback schedules in transition patching.]

In this study, our first contribution is the generalization of these patching techniques into a recursive patching scheme in which a new client can cache video data recursively from multiple patching streams to further reduce resource consumption. Second, this recursive patching scheme unifies the existing patching schemes as special cases. Third, we also consider recursive patching in bandwidth-limited systems and incorporate batching to further reduce resource consumption, especially in heavily-loaded systems. Our simulation results show that recursive patching can achieve startup latency reductions of 60%~80% compared to transition patching.

The rest of this paper is organized as follows: Section 2 reviews transition patching; Section 3 introduces our recursive patching scheme; Section 4 addresses the stream assignment problem; Section 5 compares the performance of different patching schemes using simulation; and Section 6 summarizes the study.

2 TRANSITION PATCHING

Fig. 1 illustrates the patching and transition patching techniques. There are three clients, denoted by r_a, r_b and r_c, which arrive at the system at time instants t_a, t_b and t_c respectively, requesting the same video. We assume that the video being served has length L and is encoded at a constant bit-rate of R bps. To facilitate discussion, we divide the video into 7 logical segments (D_1 to D_7) and denote the group of video segments from the r-th segment to the s-th segment by [D_r, D_s]. Assuming the system is idle when r_a arrives, the system will assign a regular stream (R-stream), denoted by S_a, to stream the whole video from beginning to end (i.e. [D_1, D_7]) to client r_a.
The cost of serving client r_a is thus equal to the bandwidth-duration product RL. Client r_b, arriving at time t_b, clearly cannot begin playback by receiving video data from stream S_a, as it has missed the first (t_b - t_a) seconds of the video, i.e., [D_1, D_3]. Instead of starting another R-stream for client r_b, the system assigns a patching stream (P-stream) S_b to transmit only the first (t_b - t_a) seconds (i.e., [D_1, D_3]) of missed video data, enabling client r_b to begin playback. At the same time, client r_b also caches video segment [D_4, D_6] from multicast stream S_a for later playback. After (t_b - t_a) seconds, the video playback of client r_b reaches video time (t_b - t_a), and the client can then continue playback using the cached data received from S_a for the rest of the video. The P-stream S_b can then be released for reuse by other clients. Given that S_b is occupied for a duration much shorter than the length of the video ((t_b - t_a) versus L), the cost of serving client r_b is significantly reduced.
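In unit-bit-rate terms (taking R = 1, an assumption of ours for illustration), the costs just described reduce to simple arithmetic. A minimal Python sketch with illustrative function names:

```python
def r_stream_cost(L):
    """Cost of a full regular stream: the bandwidth-duration product (R = 1)."""
    return L

def p_stream_cost(t_arrival, t_r_start):
    """Cost of a patching stream: it carries only the missed video prefix."""
    return t_arrival - t_r_start

L = 7200                      # illustrative video length, in seconds
t_a, t_b = 0, 200             # arrival times of clients r_a and r_b

cost_a = r_stream_cost(L)             # r_a is served by a full R-stream
cost_b = p_stream_cost(t_b, t_a)      # r_b only needs the missed prefix
buffer_b = t_b - t_a                  # seconds of data r_b caches from S_a
saving = 1 - cost_b / cost_a          # fraction of an R-stream's cost saved
print(cost_a, cost_b, buffer_b, round(saving * 100, 2))  # 7200 200 200 97.22
```

The point of the sketch is only that a patched client's cost scales with its arrival offset rather than with the full video length.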

ICEIS 2003 - Databases And Information Systems Integration

[Figure 2: Illustration of recursive patching.]

This technique is called patching in this study, and it has been studied by a number of researchers (Liao and Li, 1997; Hua et al., 1998; Cai et al., 1999; Carter et al., 1997; Eager et al., 2000) under different names. The tradeoffs of patching are the need for network multicast, the need for the client to receive two video streams simultaneously, and the additional client buffer required (i.e. R(t_b - t_a) bytes) to cache video data received from the R-stream. Nevertheless, subject to the client's buffer availability, a client admitted using patching always consumes less server resource than a client admitted using an R-stream, as is the case in a conventional VoD system.

In patching, a patched client receives data from a P-stream while caching video data from another R-stream. Cai and Hua (1999) proposed a transition patching technique which extends patching to allow a client to cache video data not only from an R-stream but also from another P-stream. In other words, a client admitted using transition patching first receives video data from two P-streams, then releases one P-stream to be replaced by an R-stream, and finally releases the remaining P-stream to continue playback for the rest of the video using data received from the R-stream. For example, consider client r_c in Fig. 1, which arrives at time t_c and is admitted using transition patching. Client r_c has already missed video segment [D_1, D_4] multicast from the R-stream S_a. To patch this missed video segment, transition patching employs a three-phase admission process, as shown in Fig. 1.
In Phase 1, a P-stream S_c is allocated to stream the initial video segment D_1 to client r_c to begin playback. At the same time, the client caches video data segment D_2 being multicast by the P-stream S_b. In Phase 2, the P-stream S_c is released and the client begins caching video data segment [D_6, D_7] from the R-stream S_a. Note that the client also continues to receive video segment [D_3, D_5] from the P-stream S_b. Finally, in Phase 3 the remaining P-stream S_b is released and the client simply continues playback using cached data and data received from the R-stream S_a.

This transition patching technique differs from simple patching in two aspects. First, the P-stream S_c allocated for client r_c is occupied for a duration of (t_c - t_b) seconds, which is shorter than the duration when simple patching is used, i.e., (t_c - t_a) seconds. Second, the P-stream S_b is extended from (t_b - t_a) seconds to (2t_c - t_a - t_b) seconds to support client r_c. This stream is called a transition stream (T-stream) in (Cai and Hua, 1999). Thus the net gain in resource reduction is equal to {[(t_c - t_a) - (t_c - t_b)] - [(2t_c - t_a - t_b) - (t_b - t_a)]} = 3t_b - 2t_c - t_a. For example, suppose L, t_a, t_b and t_c equal 7200, 0, 200 and 250 seconds respectively. Then the costs of supporting r_a, r_b and r_c are 7200, 200 and 150 respectively, representing resource savings of 97.22% and 97.92% for clients r_b and r_c.

3 RECURSIVE PATCHING

The fundamental principle of patching is to cache video data from the nearest stream so as to minimize the amount of video data missed. We observe that this principle can be applied not only to a single patching stream but also to a series of patching streams. This motivates us to devise a new
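The numerical example above can be verified directly. A short Python check (unit bit-rate R = 1 assumed) that computes the three costs and the closed-form net gain:

```python
L = 7200                       # video length in seconds
t_a, t_b, t_c = 0, 200, 250    # arrival times of r_a, r_b, r_c

cost_a = L                     # r_a: full R-stream (bandwidth-duration product)
cost_b = t_b - t_a             # r_b: simple patching, missed prefix only
# r_c via transition patching: a P-stream of (t_c - t_b) seconds plus the
# extension of S_b from (t_b - t_a) to (2*t_c - t_a - t_b) seconds
cost_c = (t_c - t_b) + ((2 * t_c - t_a - t_b) - (t_b - t_a))
net_gain = (t_c - t_a) - cost_c              # gain versus simple patching
assert net_gain == 3 * t_b - 2 * t_c - t_a   # the closed form from the text

print(cost_a, cost_b, cost_c)                # 7200 200 150
print(round(100 * (1 - cost_b / L), 2),      # 97.22
      round(100 * (1 - cost_c / L), 2))      # 97.92
```

The assertion confirms that the explicit stream-by-stream accounting agrees with the 3t_b - 2t_c - t_a expression.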

recursive patching technique in which video data from multiple levels of patching streams are cached to further reduce the resources required. Fig. 2 illustrates the recursive patching technique by considering a fourth client r_d, which arrives at the system at time t_d, in addition to the three clients r_a, r_b and r_c considered in Fig. 1. To ease discussion, we divide the whole video into 6 segments, C_1 to C_6. Client r_d has missed the initial (t_d - t_a) seconds of the video. Thus, simple patching would incur a cost of (t_d - t_a) bytes. Using transition patching with S_b as the transition stream would incur a cost of (3t_d - 2t_c - t_b) bytes. Now consider the use of recursive patching, which in this case is divided into 4 phases, as shown in Fig. 2. In Phase 1, the client caches video segment C_2 from S_c while playing back video segment C_1 using data received from S_d. In Phase 2, client r_d continues to receive video segment C_3 from S_c but releases S_d and replaces it with S_b to receive video segment C_4. In Phase 3, client r_d continues to receive video segment C_5 from S_b but releases S_c and replaces it with S_a to receive video segment C_6. Finally, in Phase 4 the client releases the remaining P-stream S_b and continues playback till the end of the video by receiving video data from S_a. Subtracting the lengths of the different streams, the total patching cost to serve r_d is [(5t_d - 2t_c - 2t_b - t_a) - (3t_c - 2t_b - t_a)] = 5(t_d - t_c). Compared to the cost of (3t_d - 2t_c - t_b) bytes in transition patching, there is a gain of (3t_c - 2t_d - t_b) bytes. If t_d equals 260 seconds, the cost of serving r_d is 50, a 99.31% resource saving over serving with a new regular stream. Note that for the example in Fig. 2, the client caches from at most two streams at any time, so the client bandwidth requirement is the same as in simple patching and transition patching.
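The cost arithmetic for r_d can be checked with the arrival times from the running example (unit bit-rate assumed):

```python
L = 7200
t_a, t_b, t_c, t_d = 0, 200, 250, 260    # arrival times of r_a .. r_d

cost_simple     = t_d - t_a               # patch straight onto S_a: 260
cost_transition = 3 * t_d - 2 * t_c - t_b # via T-stream S_b: 80
cost_recursive  = 5 * (t_d - t_c)         # 4-phase recursive patching: 50
gain = cost_transition - cost_recursive
assert gain == 3 * t_c - 2 * t_d - t_b    # closed form from the text: 30

print(cost_recursive, round(100 * (1 - cost_recursive / L), 2))  # 50 99.31
```

Each additional patching level shrinks the cost from an offset relative to the R-stream down to a multiple of the gap to the nearest patching stream, which is why recursive patching pays off most when streams are closely spaced.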
In the whole patching process, the client caches video data through a total of three P-streams and one R-stream. In general, a client can cache video data through even more P-streams as long as there are eligible P-streams. Let k be the total number of streams (i.e., one R-stream plus k-1 P-streams) from which a client caches data. We then call this process k-phase recursive patching (k-RP). It is worth noting that simple patching and transition patching are equivalent to 2-RP and 3-RP respectively under this unified framework.

4 STREAM ASSIGNMENT

In admitting a new client through k-phase recursive patching, there could be more than (k-1) P-streams eligible for patching. In this case, the system needs to determine the subset of eligible streams to use so as to maximize resource reduction. We call this the stream assignment problem. In the following, we first review the stream assignment scheme employed in the transition patching study (Cai and Hua, 1999) and then extend it to recursive patching in Section 4.2.

4.1 The Equal-Split Stream Assignment Scheme

Fig. 3a depicts the equal-split stream assignment scheme proposed in the transition patching study (Cai and Hua, 1999). There are two parameters in this scheme, namely the regular window length, denoted by ω_r, and the patching window length, denoted by ω_t. The first stream allocated in a regular window is always an R-stream, streaming the video from beginning to end. The first stream in a patching window (except the first one), however, is a T-stream, which is a P-stream extended to support other clients' transition patching. The rest are simple patching streams (P-streams) streaming the missed initial video segment for the clients. For example, when the first client r_1 arrives at the system, an R-stream S_1 is assigned, and all the subsequent requests (r_2 through r_k-1) are assigned P-streams to perform simple patching.
When a client r_k arrives more than ω_t time units after the R-stream S_1, the server assigns a T-stream S_k to it. Within the next ω_t window (r_k+1 through r_q-1), all requests are then assigned P-streams to perform transition patching via the T-stream S_k. The next request r_q is again assigned a new T-stream, which also starts another ω_t window, and so on. This process repeats until a request r_p arrives whose arrival time exceeds ω_r seconds from the R-stream S_1. In this case a new R-stream is assigned to begin a new regular window. The study by Cai and Hua did not address how to configure ω_r and ω_t to optimize performance. In our experiments, we simply optimize ω_r and ω_t using exhaustive search in units of seconds. As this computation can be done offline, it does not affect the system's runtime performance.
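The equal-split classification rule can be sketched as a small state machine. This is our own illustrative rendering (function and label names are not from the paper, and channel limits are ignored):

```python
def assign_stream(t, t_regular, t_window, w_r, w_t):
    """Decide the stream type for a request arriving at time t.
    t_regular -- start time of the current R-stream (regular window)
    t_window  -- start time of the current omega_t patching window
    Returns (stream_type, new_t_regular, new_t_window)."""
    if t - t_regular >= w_r:        # regular window exhausted: new R-stream
        return 'R', t, t
    if t - t_window >= w_t:         # patching window exhausted: new T-stream
        return 'T', t_regular, t
    if t_window == t_regular:       # still in the first window: simple patching
        return 'P-simple', t_regular, t_window
    return 'P-transition', t_regular, t_window  # patch via the current T-stream

# Walk a few arrivals through the scheme with omega_r = 600, omega_t = 100:
assigned, t_reg, t_win = [], None, None
for t in [0, 40, 120, 150, 230, 610]:
    if t_reg is None:               # the very first request gets an R-stream
        kind, t_reg, t_win = 'R', t, t
    else:
        kind, t_reg, t_win = assign_stream(t, t_reg, t_win, 600, 100)
    assigned.append((t, kind))
print(assigned)
```

With these arrivals the scheme yields an R-stream at t=0, a simple P-stream at t=40, T-streams at t=120 and t=230, a transition P-stream at t=150, and a fresh R-stream at t=610 once ω_r is exceeded.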

4.2 A Hierarchical Equal-Split Stream Assignment Scheme

Unlike transition patching, a client admitted with k-phase recursive patching may cache data through a new P-stream plus up to k-2 extended transition streams, say streams s_1, s_2, ..., s_k-2, before merging back to an R-stream. To be consistent with Cai and Hua's study, we denote these transition streams by T_1-stream, T_2-stream, ..., T_k-2-stream.

[Figure 3: Illustration of stream assignment schemes. (a) The equal-split stream assignment scheme. (b) The hierarchical equal-split stream assignment scheme.]

Similarly, we can generalize the two-level equal-split stream assignment scheme to a hierarchical equal-split stream assignment scheme with (k-1) levels, as shown in Fig. 3b for k=4. Let ω_i be the window length for level i. Then ω_0 and ω_1 are equivalent to the regular window length ω_r and the patching window length ω_t in the original equal-split stream assignment scheme. Streams within the same level-i window belong to an L_i patching group. A client admitted via k-phase recursive patching begins with a new P-stream and at the same time caches video data from the first stream in the L_k-2 patching group, which is a T_k-2-stream. In Phase 2, it releases the P-stream and begins caching data from the first stream in the L_k-3 patching group, which is a T_k-3-stream. In general, the client caches video data from the T_k-i-stream and the T_k-i-1-stream during Phase i for i < (k-1). In Phase (k-1), the client caches video data from a T_1-stream and the R-stream, and finally releases the T_1-stream to continue playback using the R-stream in the last Phase k. Again, we can optimize the set of window lengths {ω_0, ω_1, ..., ω_k-2} offline using exhaustive search. Note that the computational complexity increases exponentially with k. Further research is therefore needed to devise computationally efficient algorithms for optimizing the window lengths.

5 PERFORMANCE EVALUATION

In this section, we evaluate the performance of recursive patching using simulations. We assume Poisson client arrivals and that all clients play back the video from the beginning to the end without

interactive operations. For simplicity, we ignore network delays and processing delays. Table 1 lists the system parameters used in our simulation study. We use startup latency, defined as the time from client arrival to the time playback can begin, as the performance metric for comparison.

Table 1: Parameters used in simulations.
  Parameter                       Range of values
  Request arrival rate (/s)       0.1-1.0
  Movie length (L)                7200 seconds
  Number of server channels (C)   20

[Figure 4: Performance of k-RP. Startup latency (seconds) versus arrival rate (per second) for k = 3, 4 and 5.]

Fig. 4 plots the startup latency versus arrival rate, ranging from 0.1 to 1 client/second. The three curves plot the startup delay for 3-phase, 4-phase and 5-phase recursive patching respectively. Note that 3-phase recursive patching is equivalent to transition patching. Compared to transition patching (i.e. k=3), 4-phase recursive patching achieves significantly lower startup latency under the same load. For example, the latency is reduced by 78%, 67% and 62% at arrival rates of 0.3/s, 0.6/s and 0.9/s respectively. The improvement is particularly significant at higher arrival rates. This can be explained by the observation that at higher arrival rates the streams are more closely spaced, enabling more data sharing by patching recursively. The latency is further reduced when 5-phase recursive patching is employed, although the reduction is less significant. Compared to transition patching, 5-RP achieves latency reductions of 81%, 78% and 70% at arrival rates of 0.3/s, 0.6/s and 0.9/s respectively. We were not able to generate results for larger values of k due to the extensive computation time needed for optimizing the window lengths {ω_0, ω_1, ..., ω_k-2} (c.f. Section 4.2). Nevertheless, the current results do suggest that the improvements diminish as k is increased further.
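The offline window-length optimization that limits these experiments amounts to a plain grid search. A minimal sketch (our own; the toy cost function below is a stand-in for the full event-driven simulation that would measure mean startup latency):

```python
import itertools

def exhaustive_window_search(cost_fn, max_w, k):
    """Grid-search the k-1 window lengths {w_0, ..., w_{k-2}} in unit steps,
    keeping them nested (each level's window no longer than its parent's).
    cost_fn maps a window tuple to a scalar cost; the search itself is
    O(max_w ** (k-1)), which is why larger k quickly becomes prohibitive."""
    best, best_cost = None, float('inf')
    for windows in itertools.product(range(1, max_w + 1), repeat=k - 1):
        if any(windows[i] < windows[i + 1] for i in range(k - 2)):
            continue                  # enforce w_0 >= w_1 >= ... >= w_{k-2}
        c = cost_fn(windows)
        if c < best_cost:
            best, best_cost = windows, c
    return best, best_cost

# Toy stand-in cost with a known minimum at (8, 3), for k = 3 (i.e. 3-RP):
toy = lambda w: (w[0] - 8) ** 2 + (w[1] - 3) ** 2
best, best_cost = exhaustive_window_search(toy, 10, 3)
print(best, best_cost)
```

Each candidate tuple would in practice be scored by simulating the system, so the per-point cost is itself substantial; the exponential number of grid points is what made k > 5 impractical here.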
6 CONCLUSION

We investigated in this study a generalized recursive patching scheme (k-RP) for building efficient, large-scale video-on-demand systems. This new scheme unifies the existing patching and transition patching schemes as special cases, namely 2-RP and 3-RP respectively. By using larger values of k (i.e., k>3), we showed that the recursive patching scheme can achieve significantly lower startup latency than even the already efficient transition patching scheme, at the same load and with the same system resources. Optimization of the patching window lengths in stream assignment, however, turned out to be very computationally expensive. Further research is therefore needed to reduce the computation time and to extend the algorithm to accommodate different system parameter settings such as client access bandwidth, client buffer storage limits, etc.

ACKNOWLEDGEMENT

This work was supported in part by the Hong Kong Special Administrative Region Research Grant Council under Grant CUHK4328/02E and by the Area-of-Excellence in Information Technology.

REFERENCES

Aggarwal, C. C., Wolf, J. L. and Yu, P. S., 1996. On optimal batching policies for video-on-demand storage servers. In Proc. International Conference on Multimedia Systems, June 1996.

Cai, Y., Hua, K. and Vu, K., 1999. Optimizing patching performance. In Proc. SPIE/ACM Conference on Multimedia Computing and Networking, San Jose, CA, January 1999, pp. 204-215.

Cai, Y. and Hua, K. A., 1999. An efficient bandwidth-sharing technique for true video on demand systems. In Proc. 7th ACM International Conference on Multimedia, Orlando, Florida, United States, 1999.

Carter, S. W., Long, D. D. E., Makki, K., Ni, L. M., Singhal, M. and Pissinou, N., 1997. Improving video-on-demand server efficiency through stream tapping. In Proc. 6th International Conference on Computer

Communications and Networks, September 1997, pp. 200-207.

Dan, A., Sitaram, D. and Shahabuddin, P., 1994. Scheduling policies for an on-demand video server with batching. In Proc. 2nd ACM International Conference on Multimedia, pp. 15-23.

Dan, A., Shahabuddin, P., Sitaram, D. and Towsley, D., 1994. Channel allocation under batching and VCR control in movie-on-demand servers. IBM Research Report RC19588, Yorktown Heights, NY.

Dan, A., Sitaram, D. and Shahabuddin, P., 1996. Dynamic batching policies for an on-demand video server. ACM Multimedia Systems, vol. 4, no. 3, June 1996, pp. 112-121.

Eager, D. L., Vernon, M. K. and Zahorjan, J., 2000. Bandwidth skimming: A technique for cost-effective video-on-demand. In Proc. IS&T/SPIE Conf. on Multimedia Computing and Networking 2000 (MMCN 2000), San Jose, CA, January 2000, pp. 206-215.

Hua, K. A., Cai, Y. and Sheu, S., 1998. Patching: A multicast technique for true video-on-demand services. In Proc. 6th ACM International Conference on Multimedia, September 1998, pp. 191-200.

Liao, W. and Li, V. O. K., 1997. The split and merge protocol for interactive video-on-demand. IEEE Multimedia, vol. 4, no. 4, 1997, pp. 51-62.

Shachnai, H. and Yu, P. S., 1997. Exploring wait tolerance in effective batching for video-on-demand scheduling. In Proc. 8th Israeli Conference on Computer Systems and Software Engineering, June 1997, pp. 67-76.