An Energy-Efficient Client Pre-Caching Scheme with Wireless Multicast for Video-on-Demand Services


Yanan Bao, Xiaolei Wang, Sheng Zhou, Zhisheng Niu
Tsinghua National Laboratory for Information Science and Technology, Dept. of Electronic Engineering, Tsinghua Univ., Beijing, 100084, P.R. China
{byn10, wang-xl09}@mails.tsinghua.edu.cn, {sheng.zhou, niuzhs}@tsinghua.edu.cn

Abstract

In this paper, we address the problem of providing video-on-demand (VoD) services to numerous clients energy-efficiently. To reduce energy consumption, multiple requests for the same video are batched and served by a single multicast stream. However, this introduces additional delay for most clients. Client pre-caching is an efficient way to eliminate this delay: while the server is batching multiple requests, the clients can play the locally cached prefix of the requested video, and the multicast session carrying the later part of the video can be postponed until the prefix has played out. Our evaluation demonstrates that with a carefully designed pre-caching scheme, even a small cache (the size of one video) can reduce energy consumption by 50%. Moreover, we determine the optimal client cache allocation scheme that maximizes the utilization of the client cache and further minimizes the energy consumption.

I. Introduction

Recent years have witnessed remarkable growth in the information and communication technology (ICT) industry. Traditional voice traffic is gradually giving way to data traffic such as news distribution, entertainment video broadcast and distance education. Recently, several standards have emerged to support video-on-demand (VoD) or near video-on-demand (NVoD) services for mobile users.
For instance, MBMS (Multimedia Broadcast Multicast Service) [1] and BCMCS (Broadcast and Multicast Service) [2] have been proposed to realize multimedia broadcast and multicast in GSM/WCDMA by 3GPP (3rd Generation Partnership Project) and in CDMA2000 by 3GPP2 (3rd Generation Partnership Project 2), respectively.

However, the energy consumption of ICT is growing exponentially and brings potentially severe environmental hazards. Recent studies show that ICT is responsible for 2% to 2.5% of global carbon emissions [3]. How to reduce the energy consumed by ever-growing data traffic, especially traffic carrying multimedia content, while guaranteeing quality of service (QoS) has therefore become a critical issue confronting both network operators and service providers. In this paper, we address the problem of providing VoD services energy-efficiently in wireless multicast networks by having clients pre-cache video prefixes. (This work was supported in part by the National Basic Research Program of China (973 Program: No. 2012CB316001), the Nature Science Foundation of China (No. 61021001, No. 60925002) and Hitachi R&D Headquarter.)

Existing research on VoD services mainly targets wired computer networks, where multicast is realized as IP multicast. To save channel resources, a server tends to multicast data to as many clients as possible, and techniques such as batching [4], [5], [6] and patching [7], [8] have been proposed. In batching, the server batches the requests for the same video and serves them with one multicast session. This induces additional delay for the earliest requests and may thus cause clients to renege [9]. In patching, once a client requests a video, it can join an existing multicast channel to receive the later part of the video. Meanwhile, the server sets up a new channel to unicast the missed beginning part of the video to the client.
Thus, there is no delay in a patching VoD system. Nevertheless, patching requires the clients to be able to access two channels simultaneously.

Caching is widely used to improve user experience, alleviate traffic congestion and reduce resource consumption in communication networks. In the context of wireless communication networks, various caching schemes have been proposed, including base station caching [10], proxy caching [11], [12], relay caching [13] and client caching [14]. In terms of deploying caches for VoD services, there are works on proxy caching that save bandwidth on the links from the server to the proxies, e.g., on the backbone and the I/O of a data disk [15], [16]. Pre-caching is the same as regular caching in most aspects; one difference is that pre-caching stores data that has not yet been requested by any user, whereas regular caching deals with already requested data. As most of today's user terminals are smartphones, it is practical and reasonable to apply client pre-caching in VoD services.

In this paper, we explore the combination of prefix caching and batching, a multicast technique, to reduce the energy consumption of distributing multiple videos while guaranteeing zero delay for each request. The main contributions of this work are as follows. We formulate the problem of client cache allocation as a convex optimization problem and find the optimal solution. We prove that allocating the cache space is a weighted water filling problem. We quantitatively compare the optimal cache allocation scheme with other heuristic schemes and explore the impact of cache size, number of videos, video length and popularity distribution on the resulting energy efficiency. To the best of our knowledge, this is the first systematic evaluation of the energy-efficiency issues that arise when combining client pre-caching with batching in wireless multicast networks for VoD services.

The remainder of this paper is organized as follows. In Section II, we introduce key concepts and terminology, and present the system model. Section III presents our optimization problem and the optimal solution. Section IV evaluates the performance of the optimal solution against two heuristic schemes. Finally, Section V concludes the paper and presents future work.

II. System Model and Problem Formulation

In this work, we consider a wireless multicast scenario with one multicast server and numerous clients, as shown in Fig. 1. To ensure reliable transmission, the signal from the server must be strong enough to cover the clients at the edge of the area, so the transmission power is independent of the clients' locations. Channel fading is ignored here.

Fig. 1. The system topology. The signal from the video server covers a round area with radius R.

Each client demands to play the video immediately after sending a request to the server, i.e., a client tolerates no delay in this true VoD system. This is reasonable in practice, since a client has many alternative choices other than the requested video. Considering a limited number of popular videos, the clients can pre-cache the first few minutes or seconds of each video, i.e., the prefix, so that they can play the prefix while waiting for the remaining data from the server. The transmission from the server to a client can thus be delayed for a period, which gives the server a chance to batch more requests and serve them with one multicast; the more requests are batched, the more wireless channel resources can be saved. In this case, the server only needs to transmit the part of the video after the prefix, i.e., the body, to the clients.

Fig. 2. Request probabilities of videos sorted in non-increasing order, following a Zipf distribution with skew parameter θ = 0.271 and number of videos N = 20.

We next introduce the key notation and terminology used in the rest of this paper, and provide a formal model. Table I lists the key parameters.

TABLE I
Parameters in the Model

Para.  Definition
N      Number of videos
L      Length of a video (sec.)
D      Cache size in a client (sec.)
λ      Arrival rate of requests (sec.^-1)
P_i    Probability of requesting video i
C      Multicast cost per second
d_i    Prefix size of the i-th video pre-cached by a client (sec.)
d      Cache vector, d = (d_1, d_2, ..., d_N)

For simplicity, we assume the server provides N constant-bit-rate (CBR) videos of the same length, L seconds. The videos have the same playout rate, so we can use a video's length to refer to its size. The request probabilities of different videos vary according to their popularities. Typically, video popularities follow a Zipf distribution with skew parameter θ = 0.271 [8], [9]. (In a Zipf distribution, when the videos are sorted from the most popular to the least popular, the request probability of the i-th video is P_i = c / i^(1-θ), where θ is the skew parameter and c is a normalization constant.) Fig. 2 illustrates the request probability distribution when N = 20 and θ = 0.271.

In this work, the request arrivals follow a Poisson process with average arrival rate λ. We assume that for each request, the video is chosen independently from the N videos with probabilities P_1, P_2, ..., P_N. Therefore, the request arrivals for video i follow a Poisson process with average arrival rate λ_i = λP_i. Note that Poisson processes remain Poisson under merging and splitting.
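The popularity and arrival model above can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; θ = 0.271 is the typical skew value the text cites):

```python
# Zipf popularities P_i = c / i^(1 - theta), plus Poisson splitting of the
# aggregate request stream of rate lam into per-video streams of rate lam * P_i.

def zipf_probabilities(n_videos, theta):
    """Return P_1..P_N with P_i proportional to 1 / i^(1 - theta)."""
    weights = [1.0 / (i ** (1.0 - theta)) for i in range(1, n_videos + 1)]
    c = 1.0 / sum(weights)                    # normalization constant
    return [c * w for w in weights]

N, theta, lam = 20, 0.271, 1.0                # settings used in the figures
P = zipf_probabilities(N, theta)
per_video_rates = [lam * p for p in P]        # Poisson splitting: lambda_i = lam * P_i

assert abs(sum(P) - 1.0) < 1e-9               # probabilities are normalized
assert P[0] > P[-1]                           # sorted from most to least popular
```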

Fig. 3. An illustration of the batching multicast service.

We assume the transmission rate of each video is constant and equal to the playout rate. Thus, the energy consumed per second of multicasting a video is a constant, denoted by C. We ignore the setup cost of a multicast session and assume the energy consumption of a session is proportional to its duration. The energy consumption of broadcasting the prefixes to all the clients is also ignored, since there is only one broadcast of prefixes, compared with the numerous multicasts of the video bodies that follow it.

For simplicity, we only consider the case where each client has a cache of D seconds; note that the cache size is measured in video playout time. The cache vector d = (d_1, d_2, ..., d_N) specifies that a prefix of length d_i of video i is pre-cached by each client, i = 1, 2, ..., N.

Fig. 3 illustrates the batching service model. The prefixes of videos 1, 2 and 3, of lengths d_1, d_2 and d_3 respectively, are stored by each client in advance. The first line shows the aggregated requests for the three videos; the second, third and fourth lines show the requests for videos 1, 2 and 3, respectively. When there is a request for video 1 at time t_1, the server starts to multicast the body of length L - d_1 at time t_1 + d_1 to the three batched clients. From time t_1 to t_1 + d_1, the first three clients play the locally pre-cached prefix and experience no delay. The request for video 1 at time t_7 triggers another multicast session, since it arrives beyond the batching window of length d_1. Note that the total size of the video prefixes pre-cached by a client cannot exceed its storage limit, i.e.,

    Σ_{i=1}^{N} d_i ≤ D.    (1)

III.
Optimal Client Cache Allocation

Our goal is to develop a cache allocation scheme that minimizes the energy consumption of transmitting all the videos in the wireless multicast network. Recall that the energy consumption of one multicast session is proportional to the length of the session. For video i, of which the first d_i seconds are stored by the clients, one multicast session consumes C(L - d_i). Requests for video i follow a Poisson process with average arrival rate λP_i, so with a batching window of length d_i the server batches λP_i d_i + 1 requests for video i on average. Thus the average power consumption of multicasting video i is

    C_i(d_i) = C λP_i (L - d_i) / (λP_i d_i + 1),    (2)

and the aggregated power consumption over the N videos is

    Σ_{i=1}^{N} C_i(d_i) = Σ_{i=1}^{N} C λP_i (L - d_i) / (λP_i d_i + 1).    (3)

The optimization problem can therefore be formulated as

    min   Σ_{i=1}^{N} C λP_i (L - d_i) / (λP_i d_i + 1)    (4)
    s.t.  Σ_{i=1}^{N} d_i ≤ D,
          0 ≤ d_i ≤ L,  i = 1, 2, ..., N.

After some algebra, the objective function (4) can be rewritten as Σ_{i=1}^{N} C (L + 1/(λP_i)) / (d_i + 1/(λP_i)) - CN. Since both C and N are constants, the optimization problem is equivalent to

    min   Σ_{i=1}^{N} (L + 1/(λP_i)) / (d_i + 1/(λP_i))    (5)
    s.t.  Σ_{i=1}^{N} d_i ≤ D,
          0 ≤ d_i ≤ L,  i = 1, 2, ..., N.

The objective function is convex and the constraints are linear, so this is a convex optimization problem with an optimal solution d* = (d*_1, d*_2, ..., d*_N). If L ≫ 1/(λP_i) and L ≫ d_i, i.e., the video length is comparatively much larger, we can approximate (5) by a standard water filling problem:

    min   Σ_{i=1}^{N} 1 / (d_i + 1/(λP_i))    (6)
    s.t.  Σ_{i=1}^{N} d_i ≤ D,
          d_i ≥ 0,  i = 1, 2, ..., N.

Otherwise it is a weighted water filling problem with an upper limit on each d_i. The optimal solution to problem (6) is d*_i = (v - 1/(λP_i))^+, where y^+ = max(y, 0) and the water level v is obtained by a bisection search on

    Σ_{i=1}^{N} (v - 1/(λP_i))^+ = D.

Fig. 4 illustrates the water filling results with the parameter settings given in Table II; the deep-colored parts of the columns correspond to the cached prefixes of the videos.
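The water-filling solution to problem (6) can be sketched as follows (an illustrative sketch; the bisection bracket and iteration count are my choices, not the paper's):

```python
# Standard water filling: d_i = (v - 1/(lam*P_i))^+, with the water level v
# found by bisection so that the cache budget sum(d_i) = D is met.

def water_filling(inv_rates, D, iters=100):
    """inv_rates[i] = 1/(lam*P_i); returns the optimal prefix lengths d_i."""
    lo, hi = 0.0, max(inv_rates) + D          # v is guaranteed to lie in [lo, hi]
    for _ in range(iters):
        v = (lo + hi) / 2.0
        used = sum(max(v - a, 0.0) for a in inv_rates)
        if used < D:
            lo = v                            # water level too low, raise it
        else:
            hi = v
    return [max(v - a, 0.0) for a in inv_rates]

# Example: three videos with 1/(lam*P_i) = 1, 2, 5 and budget D = 3
# give water level v = 3 and d = [2, 1, 0]: the least popular video
# gets no prefix at all.
```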
For the general problem (5), we have a weighted water filling problem with an upper limit on each d_i, which we solve with the Lagrangian method. Define the Lagrangian function as

    J = Σ_{i=1}^{N} (L + 1/(λP_i)) / (d_i + 1/(λP_i)) - Σ_{i=1}^{N} v_i (L - d_i) - w (D - Σ_{i=1}^{N} d_i),    (7)

where v_i (i = 1, 2, ..., N) and w are Lagrange multipliers. According to the Karush-Kuhn-Tucker (KKT) necessary conditions, the following constraints must be met at the optimum:

    (L + 1/(λP_i)) / (d_i + 1/(λP_i))^2 = w + v_i,  i = 1, 2, ..., N    (8)
    v_i (L - d_i) = 0,  i = 1, 2, ..., N    (9)
    w (D - Σ_{i=1}^{N} d_i) = 0,    (10)
    v_i ≥ 0,  i = 1, 2, ..., N    (11)
    w ≥ 0.    (12)

When NL ≤ D, the cache size is sufficient to store all the videos, and obviously d*_i = L for i = 1, 2, ..., N. Otherwise,

    d*_i = max(0, min(c √(L + 1/(λP_i)) - 1/(λP_i), L)),

where c is a normalization factor chosen to meet the constraint Σ_{i=1}^{N} d_i = D.

TABLE II
Parameter Settings in the Model

Para.     Value
N         20
L (s)     100
D (s)     200
λ (s^-1)  1
θ         0.271

Fig. 4. Water filling results for optimal cache allocation (the height of column i is 1/(λP_i) + d_i).

We show the solution to the optimization problem (5) in Fig. 5. In subfigures (a) and (b), the parameter settings are those of Table II. In subfigures (c) and (d), the settings are the same except that the length of each video is changed from 100 s to 20 s, to show the impact of the constraints d_i ≤ L (i = 1, 2, ..., N). Subfigures (a) and (c) show the value of 1/(λP_i) + d_i, where the deep-colored part corresponds to the cached prefix of length d_i. Subfigures (b) and (d) show the weighted value (1/(λP_i) + d_i) / √(L + 1/(λP_i)), where the deep-colored part corresponds to the weighted prefix of length d_i / √(L + 1/(λP_i)). In subfigure (b) there is a common water level among the popular videos. In subfigure (d), d_1 = L and d_2 = L, and therefore the water level is capped in these two columns.

Fig. 5. The results of weighted water filling with an upper limit on each column for optimal cache allocation.

IV. Performance Evaluation

In this section, we examine the performance of the proposed optimal cache allocation scheme based on numerical computation.
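The closed-form allocation derived in Section III can be sketched in Python as follows (a sketch under the paper's model; solving for the normalization factor c by bisection is an assumed implementation detail, since the paper only states that c normalizes the total allocation to the cache budget):

```python
# Weighted water filling with a per-video cap L:
#   d_i = clip(c * sqrt(L + 1/(lam*P_i)) - 1/(lam*P_i), 0, L),
# where c is found by bisection so that sum(d_i) equals min(D, N*L).
import math

def optimal_cache(P, lam, L, D, iters=200):
    a = [1.0 / (lam * p) for p in P]          # a_i = 1/(lam*P_i)
    budget = min(D, L * len(P))               # cannot cache more than all videos
    def alloc(c):
        return [min(max(c * math.sqrt(L + ai) - ai, 0.0), L) for ai in a]
    # At c = (L + max(a)) / sqrt(L), every d_i is clipped to L, so the
    # target budget is bracketed between c = 0 and this value.
    lo, hi = 0.0, (L + max(a)) / math.sqrt(L)
    for _ in range(iters):
        c = (lo + hi) / 2.0
        if sum(alloc(c)) < budget:
            lo = c
        else:
            hi = c
    return alloc(c)
```

With the settings of Table II, the returned allocation sums to D = 200 s, respects 0 ≤ d_i ≤ L, and is non-increasing in i, i.e., more popular videos get longer prefixes.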
For a large number of clients, the simulation results coincide with the numerical results, so we do not present separate simulation results here. We compare the optimal cache allocation scheme with two heuristic schemes: one evenly caches the prefixes of all videos; the other caches only the most popular videos. For instance, given video length L = 100 s, number of videos N = 20 and cache size D = 200 s, the evenly-cache scheme caches prefixes of length d_i = 10 s (i = 1, 2, ..., 20), while the most-popular-only scheme caches the entire data of videos 1 and 2, i.e., d_1 = 100 s, d_2 = 100 s, d_i = 0 s (i = 3, 4, ..., 20). The energy consumption of the three schemes is normalized by the energy consumption without client caching, in which each multicast serves only one client. In the rest of the paper, the parameter settings follow Table II unless stated otherwise.

First, we increase the cache size from 0 s to 400 s in steps of 20 s. Fig. 6 plots the normalized energy consumption versus the client cache size. We observe that even a small client cache leads to dramatic energy savings. For instance, with a cache size of 100 s, i.e., the size of one video length, the optimal cache allocation scheme reduces energy consumption by about 50%, while the other two schemes reduce it by more than 20% and 30%, respectively. With a cache size of 200 s, i.e., double the video length, the optimal cache scheme saves about 60% of the energy consumption. The jumps in the most-popular-only scheme occur whenever the available space suffices to cache a fraction of an additional video. Over the whole range of client cache sizes, the optimal cache scheme saves more than 10% normalized energy compared with the evenly-cache scheme.

Fig. 6. Normalized energy consumption versus client cache size.

Fig. 7 depicts the impact of the request arrival rate on the normalized energy consumption. The average arrival rate varies from 1 request/second to 5 requests/second. The normalized energy consumption of the most-popular-only scheme is independent of the request arrival rate, because it only reduces the energy consumption of transmitting videos 1 and 2; the energy consumption of transmitting the other videos is not reduced at all. We observe that the higher the request arrival rate, the smaller the gap between the optimal cache scheme and the evenly-cache scheme. This is because with a higher request arrival rate, the optimal cache allocation scheme tends to allocate cache space among the videos more evenly. Hence, when the traffic is high, the evenly-cache scheme achieves near-optimal performance, and it is attractive for its low complexity and ease of deployment.

Fig. 7. Normalized energy consumption versus request arrival rate.

As discussed before, the typical skew parameter θ of a Zipf distribution is 0.271. However, in various applications the parameter may differ. In Fig. 8, we vary θ from 0.1 to 1 to illustrate the impact of the video popularity distribution on the performance. The smaller the parameter θ is, the more non-uniform the popularities are. With larger differences among the video popularities, all three schemes save more energy, and the optimal cache allocation scheme performs relatively much better than the other schemes. When there is little difference among the video popularities, the evenly-cache scheme is as energy-efficient as the optimal one.

Fig. 8. Normalized energy consumption versus the skew parameter θ of the Zipf distribution.

Fig. 9. Normalized energy consumption versus the number of videos.
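The normalized-energy metric and the two heuristic allocations used in this comparison can be re-created as follows (an illustrative sketch under the model of Section II; the function names are mine):

```python
# Normalized energy of a cache vector d: the average multicast power
# sum_i lam*P_i*(L - d_i)/(lam*P_i*d_i + 1) divided by the no-cache power
# sum_i lam*P_i*L, where every request triggers its own full-length stream.

def normalized_energy(d, P, lam, L):
    cached = sum(lam * p * (L - di) / (lam * p * di + 1.0)
                 for p, di in zip(P, d))
    no_cache = sum(lam * p * L for p in P)
    return cached / no_cache

def evenly_cache(N, L, D):
    """Split the cache budget equally over all N videos."""
    return [min(D / N, L)] * N

def most_popular_only_cache(N, L, D):
    """Cache whole videos in popularity order until the budget runs out."""
    d, left = [], D
    for _ in range(N):
        d.append(min(L, left))
        left -= d[-1]
    return d
```

For the settings of Table II (N = 20, L = 100 s, D = 200 s), evenly_cache gives d_i = 10 s for every video, while most_popular_only_cache gives d_1 = d_2 = 100 s and zero for the rest, matching the example above.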

Fig. 10. Normalized energy consumption versus video length.

Fig. 9 and Fig. 10 show the impact of the number of videos and of the video length on the performance of the three schemes. Although the energy efficiency of the optimal cache scheme degrades as the number of videos increases and the video length grows, its gap over the other schemes widens, i.e., the optimal cache allocation scheme becomes relatively more effective.

In summary, adopting client pre-caching can significantly reduce the energy consumption of a VoD wireless multicast network. As long as each client has a cache the size of one video length, the multicast system can reduce energy consumption by 50%. The optimal client cache allocation scheme we propose dramatically outperforms the heuristics; in a typical case, it saves more than 10% energy compared with the evenly-cache scheme.

V. Conclusions

In this paper, we integrated client pre-caching with batching to realize energy-efficient wireless multicast for VoD services. We presented a way to determine the optimal client cache allocation for a set of video prefixes so as to minimize the aggregate transmission energy consumption. Moreover, we quantitatively explored the impact of cache size, number of videos, video length and popularity distribution on the performance of pre-caching schemes. Our evaluation shows that pre-caching video prefixes in user terminals is essential for achieving high energy efficiency while guaranteeing zero delay for requests. With a relatively small cache, i.e., the average length of a video, the optimal cache allocation scheme can save 50% of the energy. Future work includes deploying patching in this scenario and considering more practical transmission energy consumption models for wireless networks.
References

[1] 3GPP TS 23.246, "Multimedia Broadcast/Multicast Service (MBMS); Architecture and functional description."
[2] 3GPP2 X.S0022-A v1.0, "Broadcast and multicast service in cdma2000 wireless IP network," revision A.
[3] R. Hodges and W. White, "Go green in ICT," tech. rep., GreenTech News, 2008.
[4] A. Dan, D. Sitaram, and P. Shahabuddin, "Dynamic batching policies for an on-demand video server," Multimedia Systems, vol. 4, no. 3, p. 112, Jun. 1996.
[5] C. Aggarwal, L. Wolf, and P. Yu, "On optimal batching policies for video-on-demand storage servers," in Proc. IEEE Int. Conf. on Multimedia Computing and Systems (ICMCS), p. 253, Jun. 1996.
[6] S. Sheu, K. Hua, and W. Tavanapong, "Chaining: a generalized batching technique for video-on-demand systems," in Proc. Int. Conf. on Multimedia Computing and Systems (ICMCS), p. 110, Jun. 1997.
[7] K. A. Hua, Y. Cai, and S. Sheu, "Patching: a multicast technique for true video-on-demand services," in Proc. ACM Multimedia, p. 191, Sep. 1998.
[8] Y. Cai, K. A. Hua, and K. Vu, "Optimizing patching performance," in Proc. MMCN '99, Jan. 1999.
[9] S. Aggarwal, J. Garay, and A. Herzberg, "Adaptive video on demand," in Proc. 13th Annual ACM Symposium on Principles of Distributed Computing, p. 402, Aug. 1994.
[10] J. Cai, T. Terada, T. Hara, and S. Nishio, "On a cooperation of broadcast scheduling and base station caching in the hybrid wireless broadcast environment," in Proc. 7th IEEE International Conference on Mobile Data Management, 2006.
[11] B. Wang, S. Sen, M. Adler, and D. Towsley, "Optimal proxy cache allocation for efficient streaming media distribution," in Proc. IEEE INFOCOM, 2002.
[12] K. Wu, P. Yu, and J. Wolf, "Segment-based proxy caching of multimedia streams," in Proc. 10th International WWW Conference, 2001.
[13] F. Xie and K. A. Hua, "A caching-based video-on-demand service in wireless relay networks," in Proc. Wireless Communications & Signal Processing (WCSP), 2009.
[14] X. Wang, Y. Bao, X. Liu, and Z. Niu, "On the design of relay caching in cellular networks for energy efficiency," in Proc. IEEE INFOCOM Workshops, p. 259, Jun. 2011.
[15] S. Ramesh, I. Rhee, and K. Guo, "Multicast with cache (Mcache): an adaptive zero-delay video-on-demand service," in Proc. IEEE INFOCOM, 2001.
[16] B. Wang, S. Sen, M. Adler, and D. Towsley, "Optimal proxy cache allocation for efficient streaming media distribution," in Proc. IEEE INFOCOM, 2002.