Lightweight caching strategy for wireless content delivery networks


Jihoon Sung 1, June-Koo Kevin Rhee 1, and Sangsu Jung 2a)
1 Department of Electrical Engineering, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701, Republic of Korea
2 MtoV Inc., 218 Gajeong-ro, Yuseong-gu, Daejeon, 305-700, Republic of Korea
a) sjung@mtov.net

Abstract: The proliferation of mobile devices boosts mass multimedia content delivery in wireless environments. Wireless content delivery network systems strongly require a lightweight caching strategy because of their processing capability constraints. In this paper, we introduce a one-touch caching scheme that exploits the temporal locality property as its minimum intelligence, by making cache servers blindly cache the most recently requested content. Simulation results show that our strategy, with little overhead, performs comparably to conventional schemes such as LRU (least recently used), LFU (least frequently used), and other variants with heavy computational overheads.

Keywords: wireless content delivery network, content storage management, lightweight caching, temporal locality

Classification: Network

References
[1] R. Buyya, M. Pathan, and A. Vakali, Content Delivery Networks, Springer, 2008.
[2] H. ElAarag, "A quantitative study of web cache replacement strategies using simulation," Web Proxy Cache Replacement Strategies, pp. 17-60, Springer, 2013.
[3] Intel Corporation, "Facing network and data demands with customized intelligent cloud," Case study, 2012.
[4] [Online] http://www.linksys.com/en-us/wireless/
[5] V. Almeida, et al., "Characterizing reference locality in the WWW," Proc. 4th PDIS, Miami, USA, pp. 92-103, Dec. 1996.
[6] S. K. Fayazbakhsh, et al., "Less Pain, Most of the Gain: Incrementally Deployable ICN," Proc. ACM SIGCOMM, Hong Kong, pp. 147-158, Oct. 2013.

1 Introduction

As the demand for mass multimedia content increases and mobile devices become widespread, delivering content to these devices in wireless environments has become essential. Wireless content delivery networks (CDNs) [1] have rapidly grown in popularity as a primary way to meet this need by deploying cache servers within the range of wireless networks, in contrast with wired CDNs. There has been little work on caching strategies for wireless CDNs, whereas caching strategies for wired CDNs have received much attention [2]. A caching scheme for wireless CDNs must be lightweight and simple, because the processing capability of wireless equipment, even recent equipment with roughly a 1 GHz CPU and hundreds of MB of memory [4], falls far short of existing wired CDN servers with roughly 3 GHz CPUs and hundreds of GB of memory [3]. Note that, by the nature of CDN systems [1], much of the content-related processing beyond basic networking runs above layer 3 in the cache servers and consumes substantial CPU and memory resources. Unfortunately, existing strategies for wired CDNs tend to incur a heavy overhead of excessive operations to achieve good performance, which makes them inappropriate for wireless CDNs. In this paper, we introduce a lightweight one-touch caching strategy that is highly feasible due to its simplicity and achieves an acceptable performance level with the minimum intelligence suitable for wireless CDNs. Our major contribution is to present results that break away from the stereotype of the performance-overhead tradeoff. Through simulations, we demonstrate that the hit ratio of our low-overhead scheme is comparable to that of the conventional high-overhead LRU (least recently used), LFU (least frequently used), and other variants.
2 Brief review of caching strategies

Most caching strategies are based on two key metrics: recency and frequency. A number of schemes have been developed by extending these metrics with additional intelligence [2]. Unfortunately, as shown in Table I, all of the existing schemes suffer from heavy overheads due to their high degree of intelligence [2], because they were basically designed for wired systems. In the worst case, these strategies must compare all content to identify the item with the highest or lowest priority (i.e., O(n)).

Table I. Run-time complexity of strategies with n content objects

  Caching strategy | Worst case | Best case
  One-touch        | O(1)       | O(1)
  LRU              | O(n)       | O(1)
  LFU              | O(n)       | O(log n)
  Hybrid           | O(n)       | More than O(log n)

On the other hand, although most schemes, including LRU, LFU, and other variants, can achieve reduced complexity, they require specific data structures such as a list or a heap to do so. This makes the schemes unsuitable for wireless systems, where system lightness is an additional hard constraint. Further, there has been little effort to devise caching strategies for wireless systems. This motivates us to find a lightweight yet powerful caching scheme.

3 One-touch caching strategy

In this section, we present the details of one-touch caching, which is appropriate for wireless CDNs and achieves competitive performance with the minimum intelligence despite its light weight.

3.1 Simplicity

The main advantage of one-touch caching is the simplicity that comes from its light weight. This property makes our scheme suitable for wireless systems with limited computational capability. As shown in Table I, the run-time complexity of our scheme is O(1) under any circumstances. One-touch caching makes cache servers store the most recently requested content and, when there is no free storage space, evict a randomly chosen existing item. Fig. 1 illustrates the content management approach of our strategy. Until the sixth time slot, all requested content accumulates in the cache server because storage space remains available. If content already retained in the cache server is requested, it is served directly from the cache server, as shown in the fifth time slot of the figure. After the sixth time slot, existing content is evicted by newly requested content because of the limited storage space, and the evicted item is chosen at random. At the seventh time slot, the content with ID 12 is newly stored while the randomly selected content with ID 2 is replaced. This illustrates the simplicity of our scheme.
Surprisingly, no sorting operation or other special processing capability is required for storage and replacement.

Fig. 1. One-touch caching mechanism

3.2 Competitive performance under simplicity

A minimum amount of intelligence is unavoidable to achieve competitive performance, even with O(1) simplicity. Once content is requested, one-touch caching makes cache servers store that content regardless of its type. This means that our scheme at least reflects recent request trends: like LRU, it exploits temporal locality [5], the property that recently requested content has a high probability of being requested again in the very near future. Notably, this is all the intelligence in our scheme, and it is used only for storage. Unlike LRU, one-touch caching requires no additional intelligence for replacement, which is performed in a nondeterministic way. In conclusion, our strategy exhibits competitive performance despite its very simple mechanism.

4 Performance evaluations

We evaluate one-touch caching using our C++-based simulator and compare it with LRU, LFU, and RANDOM in terms of hit ratio:

  Hit ratio = (1/N) * Σ_{k=1}^{N} h_k,   (1)

where N is the total number of requests and h_k = 1 if the content of the k-th request is already in the cache (a hit) when it is requested, and h_k = 0 otherwise. Here, RANDOM is a baseline scheme based on nondeterministic decisions without any intelligence: it makes cache servers store random content among the accessible content, irrespective of the content actually requested, and replaces existing content at random to make room for new content. In addition, we adopt variants of the conventional schemes, called smart LRU and smart LFU, to clearly show the effect of the minimum intelligence in comparison with ideal cases. Smart LRU and smart LFU have replacement rules identical to conventional LRU and LFU; the only difference is that the smart versions manage additional lists storing all accumulated request information from the beginning, causing an enormous overhead.
They do not store newly requested content if its priority is lower than that of all accumulated content in the list, regardless of whether it is the most recently requested. Smart LRU is not shown in Fig. 2 because it is functionally identical to conventional LRU.

4.1 Simulation scenario

We consider a wireless CDN with one cache server and fifteen users as a proof of concept. The packet loss rate of the one-hop links between the cache server and the users is set to 0.3 on average. Content requests are generated by a Poisson process with mean 5, and content popularity follows a Zipf distribution characterized by the exponent α. The simulation period is set to 10,000 s and we conduct 20 iterations. We consider 2,500 items of content, each with a size of 2 GB, and set the cache space to 1 TB.

Fig. 2. Performance comparisons

4.2 Simulation results

We first analyze the performance of the caching strategies for various static content request scenarios with different α values. Larger values of α mean that a small number of content items is requested intensively, whereas smaller values of α characterize the opposite tendency. From Fig. 2 (a), we can see that our scheme is generally competitive with conventional LRU and LFU, as well as the variant smart LFU scheme, regardless of α. The maximum performance difference occurs at α = 0.75, where the hit ratio of smart LFU is about 20% higher than that of our scheme. This can be regarded as an acceptable difference considering that the overhead of smart LFU is much heavier than that of even conventional LFU, and that a static content request model is a very unrealistic scenario. The clear finding is that the performance of our strategy is surprisingly close to that of LRU, LFU, and smart LFU despite its relatively little intelligence. This is a considerable achievement in breaking the stereotype of the performance-overhead tradeoff.

Next, we compare the performance of each scheme when the content request distribution changes, in order to observe how well each strategy reacts to changing request patterns; this definitively verifies the practical feasibility of each scheme. In the simulation, various frequencies of change in the content request pattern are considered, from 10 changes up to an extreme case of 1,000. Here, α is set to a realistic value of 0.983 [6]. As shown in Fig. 2 (b), the overall performance degrades, irrespective of strategy, as the frequency of content request changes increases. LRU shows the best performance in all cases except when the frequency of changes is 1,000. Surprisingly, our low-intelligence scheme remains competitive with the other, higher-intelligence strategies; the maximum performance difference between LRU and our scheme is only 7%. This demonstrates the potential of our scheme to flexibly cope with dynamic request patterns in spite of its little intelligence.

5 Conclusions

In this paper, we devised a lightweight one-touch caching strategy that is appropriate for wireless CDNs and has a run-time complexity of O(1) under any circumstances. The performance of our scheme, which is designed to operate with a minimum of intelligence, is comparable to that of conventional, heavy-overhead, highly intelligent schemes such as LRU, LFU, and other variants.

Acknowledgments

This research was funded by the SMBA (Small and Medium Business Administration), Korea, under the Entrepreneur Incubating Project of Preliminary Technology for Researchers.