On the Throughput Capacity of Information-Centric Networks


Bita Azimdoost, Cedric Westphal, and Hamid R. Sadjadpour
Department of Electrical Engineering and Computer Engineering, University of California Santa Cruz, Santa Cruz, CA 95064, USA, {bazimdoost,cedric,hamid}@soe.ucsc.edu
Huawei Innovation Center, Santa Clara, CA 95050, USA, cedric.westphal@huawei.com

Abstract: Wireless information-centric networks consider storage one of the network primitives, and propose to cache data within the network in order to improve the latency of accessing content and to reduce bandwidth consumption. We study the throughput capacity of an information-centric network when the data cached in each node has a limited lifetime. The results show that, with some fixed request and cache expiration rates, the network can achieve a maximum throughput order of 1/√n and 1/log n in the cases of grid and random networks, respectively. Comparing these values with the corresponding throughput with no cache capability (1/n and 1/√(n log n), respectively), we can actually quantify the asymptotic advantage of caching. Moreover, since the request rates will decrease as a result of increasing download delays, increasing the content lifetimes according to the network growth may result in higher throughput capacities.

I. INTRODUCTION

In today's networking landscape, users are mostly interested in accessing content regardless of which host is providing it. They are looking for fast and secure access to data in a whole range of situations: wired or wireless; heterogeneous technologies; from a fixed location or while moving. The dynamic characteristics of network users make the host-centric networking paradigm inefficient. Information-centric networking (ICN) is a new networking architecture where content is accessed based upon its name, independently of the location of the hosts [1]-[4]. In most ICN architectures, data is allowed to be stored in the nodes and routers within the network in addition to the content publisher's servers. This reduces the burden on the servers and on the network operator, and shortens the access time to the desired content. Combining content routing with in-network storage of the information is intuitively attractive, but there have been few works considering the impact of such an architecture on the capacity of the network in a formal or analytical manner.

In this work we study a wireless information-centric network where nodes can both route and cache content. We also assume that a node will keep a copy of the content only for a finite period of time, that is, until it runs out of memory space in its cache and has to rotate content, or until it ceases to serve a specific content. (Bita Azimdoost was with Huawei Innovation Center, Santa Clara, CA 95050, USA, as an intern while working on this paper.)

The nodes issue queries for content that is not locally available. We suppose that there exists a server which permanently keeps all the contents. This means that the content is always provided at least by its publisher, in addition to the potential copies distributed throughout the network. Therefore, at least one replica of each content always exists in the network, and if a node requests a piece of information, this data will be provided either by its original server or by a cache containing the desired data. When the customer receives the content, it stores the content and shares it with the other nodes if needed.

The present paper thus investigates the throughput capacity in such content-centric networks and addresses the following questions: 1) Looking at the throughput capacity, can we quantify the performance improvement brought about by a content-centric network architecture over networks with no content sharing capability?
2) How does the content discovery mechanism affect the throughput capacity? More specifically, does selecting the nearest copy of the content improve the scaling of the capacity compared to selecting the nearest copy in the direction of the original server? 3) How does the caching policy, and in particular the length of time each piece of content spends in the cache's memory, affect the capacity?

We state three theorems. Theorem 1 formulates the throughput capacity in a grid network which uses the shortest-path-to-the-server content discovery mechanism, allowing different content availability in different caches. Theorem 2 answers the first two questions by studying two different network models (grid and random network) and two content discovery scenarios (shortest path to the server, and shortest path to the closest copy of the content) when the information exists in all caches with the same probability. Theorem 3 derives conditions on the request rate (namely, the popularity of the content) and the time spent in the cache, so that these throughputs can be supported by all the nodes and the flow at no node becomes a bottleneck. These theorems demonstrate that adding the content sharing capability to the nodes can significantly increase the capacity.

The rest of the paper is organized as follows. After a brief review of the related work in Section II, the network models and the content discovery algorithms used in the current work are introduced in Section III. The main theorems are stated and proved in Section IV. We discuss the results and study some simple examples in Section V. Finally, the paper is concluded and some possible directions for future work are introduced in Section VI.

II. RELATED WORK

Information-Centric Networks have recently received considerable attention. While our work presents an analytical abstraction, it is based upon the principles described in ICN architectures such as CCN [4], NetInf [5], PURSUIT [2], or DONA [6], where nodes can cache content and requests for content can be routed to the nearest copy. Papers surveying the landscape of ICN [3] [7] show the dearth of theoretical results underlying these architectures.

Caching, one of the main concepts in ICN networks, has been studied in prior works [3]. Performance metrics such as the miss ratio in the cache, or the average number of hops each request travels to locate the content, have been studied in [8], [9], and the benefit of cooperative caching has been investigated in [10]. Optimal cache locations [11] and cache replacement techniques [12] are two other commonly investigated aspects, and an analytical framework for investigating properties of these networks, such as fairness of cache usage, is proposed in [13]. [14] considered information being cached for a limited amount of time at each node, as we do here, but focused on a flooding mechanism to locate the content, not on the capacity of the network.

However, to the best of our knowledge, there are only a few works focusing on the achievable data rates in such networks. Calculating the asymptotic throughput capacity of wireless networks with no cache has been solved in [15] and many subsequent works [16] [17]. Some work has studied the capacity of wireless networks with caching [18] [19] [20]. There, caching is used to buffer data at a relay node which will physically move to deliver the content to its destination, whereas we follow the ICN assumption that caching is triggered by the node requesting the content. [21] uses a network simulation model and evaluates the performance (file transfer delay) of a cache-and-forward system with no request for the data. [22] proposes an analytical model for the single-cache miss probability and the stationary throughput in cascade and binary tree topologies. [23] considers a general problem of delivering content cached in a wireless network and provides some bounds on the caching capacity region from an information-theoretic point of view. Scaling regimes for the required link capacity are computed in [24] for a static cache placement in a multihop wireless network.

III. PRELIMINARIES

A. Network Model

Two network models are studied in this work.

1) Grid Network: Assume that the network consists of n nodes, V = {v_1, v_2, ..., v_n}, each with a local cache of size L, located on a grid (Figure 1). The distance between two adjacent nodes equals the transmission range of each node, so the packets sent from a node are only received by its four adjacent nodes. There are m different contents, F = {f_1, f_2, ..., f_m}, with sizes B_i, i = 1, ..., m, for which each node v_j may issue a query. Based on the content discovery algorithms, which will be explained later in this section, the query is transmitted in the network to discover a node containing the desired content locally. v_j then downloads b bits of data with rate γ in a hop-by-hop manner through the path P_xj, from either a node v_i (x = i) containing it locally (f in v_i) or the server (x = s).
When the download is completed, all the nodes on the download path, or just the end user, store the data in their local cache and share it with other nodes. P_js denotes the nodes on the path from v_j to the server. Without loss of generality, we assume that the server is attached to the node located at the middle of the network; changing the location of the server does not affect the scaling laws. Using the protocol model, and according to [25], the transport capacity of such a network is upper bounded by Θ(W√n), where W is the shared channel bandwidth. This is the model studied in Theorem 1 and in the first two scenarios of Theorem 2.

Fig. 1. The transmission range of node v contains the four surrounding nodes. The black vertices contain the content in their local caches. The arrowed lines demonstrate a possible discovery and receive path in scenario i, where node v downloads the required information from u. In scenario ii, v would download the data from w instead.

2) Random Network: The last network, studied in Theorem 2, is a more general network model where the nodes are randomly distributed over a unit square area according to a uniform distribution. We use the same model as in [25] (Section 5) and divide the network area into square cells, each with side length proportional to the transmission range r(n), which is selected to be at least of order √(log n / n) to guarantee the connectivity of the network [26]. According to the protocol model [25], cells that are far enough apart can transmit data at the same time with no interference; we assume that there are M² non-interfering groups which take turns transmitting in the corresponding time slots in a round-robin fashion. The server is assumed to be located at the middle of the network. In this model the maximum number of simultaneous feasible transmissions is of order 1/r²(n), as each transmission consumes an area proportional to r²(n). All the other assumptions are similar to the grid network.
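To make the cell construction concrete, here is a minimal sketch (the node counts and the constant inside r(n) are illustrative assumptions, not values from the paper) that scatters n nodes uniformly over the unit square, partitions it into cells of side proportional to r(n) = √(log n / n), and reports the cell occupancies, which concentrate around Θ(log n), the fact used later in the analysis of scenario iii.

    import math
    import random
    from collections import Counter

    def cell_occupancy(n, c=2.0, seed=1):
        """Place n uniform nodes in the unit square and bucket them into
        square cells of side r(n) = sqrt(c * log(n) / n)."""
        random.seed(seed)
        r = math.sqrt(c * math.log(n) / n)       # cell side ~ transmission range
        cells_per_side = max(1, int(1.0 / r))    # grid of cells covering the unit square
        side = 1.0 / cells_per_side
        counts = Counter()
        for _ in range(n):
            x, y = random.random(), random.random()
            counts[(int(x / side), int(y / side))] += 1
        occupied = [counts[(i, j)] for i in range(cells_per_side)
                    for j in range(cells_per_side)]
        return min(occupied), max(occupied), math.log(n)

    if __name__ == "__main__":
        for n in (10**3, 10**4, 10**5):
            lo, hi, ln = cell_occupancy(n)
            print(f"n={n:>6}  min/cell={lo:>3}  max/cell={hi:>3}  log n={ln:.1f}")

With the assumed constant c = 2, every cell holds on the order of log n nodes, matching the per-cell node count assumed in the lemmas below.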

B. Content Discovery Algorithm

1) Path-wise Discovery: To discover the location of the desired content, the request is sent along the shortest path toward the server containing the requested content. If an intermediate node has the data in its local cache, it does not forward the request toward the server any further, and the requester starts downloading from the discovered cache. Otherwise, the request goes all the way to the server and the content is obtained from the main source. In the case of the random network, when a node needs a piece of information it sends a request to its neighbors toward the server, i.e., the nodes in the same cell and in one adjacent cell on the path toward the server; if any copy of the data is found there, it is downloaded. If not, just one node in the adjacent cell forwards the request to the next cell toward the server.

2) Expanding Ring Search: In this algorithm the request for the information is sent to all the nodes in the transmission range of the requester. If a node receiving the request contains the required data in its local cache, it notifies the requester and the download from the discovered cache starts. Otherwise, all the nodes that received the request broadcast it to their own neighbors. This process continues until the content is discovered in a cache, and the download follows after that.

IV. THEOREM STATEMENTS AND PROOFS

Theorem 1. Consider a grid wireless network consisting of n nodes. Each node can transmit over a common wireless channel, with W bits per second bandwidth, shared by all nodes. Assume that there is a server which contains all the information; without loss of generality we assume that this server is located in the middle of the network. Each node contains some information in its local cache. Assume that the probability of the information being in any of the caches at the same distance (j hops) from the server is the same, ρ_j(n). The maximum achievable throughput capacity order γ_max in such a network, when the nodes use the nearest copy of the required content on the shortest path toward the server, is given by

    γ_max = Θ( √n W / Σ_{i=1}^{√n} i Σ_{j=0}^{i} (i − j) ρ_j(n) Π_{k=j+1}^{i} (1 − ρ_k(n)) ),

where ρ_0(n) = 1, which means that the server always contains the information.

(Notation: f(n) = O(g(n)) if sup_n f(n)/g(n) < ∞; f(n) = Ω(g(n)) if g(n) = O(f(n)); f(n) = Θ(g(n)), or f(n) ≍ g(n), if both f(n) = O(g(n)) and f(n) = Ω(g(n)); f(n) = o(g(n)) if f(n)/g(n) → 0; f(n) = ω(g(n)) if g(n)/f(n) → 0.)

Proof: A request initiated by a user v_i at i-hop distance from the server (i.e., located in level i, i = 1, ..., √n) is served by the cache u_j located in level j, j ≤ i, on the shortest path from v_i to the server if no cache before u_j on this path, including v_i, contains the required information and u_j contains it. The request is served by the server if no copy of it is available on the path. Assuming that the availability of the information in each cache is independent of the contents of the other caches, this probability, denoted by P_{i,j}, is given by

    P_{i,j} = (1 − ρ_i(n))(1 − ρ_{i−1}(n)) ... (1 − ρ_{j+1}(n)) ρ_j(n),     (1)

where ρ_j(n) is the probability of the information being available in a cache in level j, j = 1, ..., √n, and j = 0 denotes the server, with ρ_0(n) = 1. Thus a content requested by v_i travels i − j hops with probability P_{i,j}. There are 4i nodes in level i, so the average number of hops h̄ traveled by each piece of data from the serving cache (or the original server) to the requester is

    h̄ = (1/n) Σ_{i=1}^{√n} 4i Σ_{j=0}^{i} (i − j) P_{i,j}
      = (1/n) Σ_{i=1}^{√n} 4i Σ_{j=0}^{i} (i − j)(1 − ρ_i(n)) ... (1 − ρ_{j+1}(n)) ρ_j(n).     (2)

Assume that each user receives data at rate γ. The transport capacity of this network, which equals n γ h̄, is upper bounded by Θ(W√n). So γ_max = Θ(W√n / (n h̄)) and the theorem is proved.
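As a quick numerical illustration of Theorem 1 (not part of the original analysis), the following sketch evaluates the double sum of (2) for an arbitrary, assumed occupancy profile ρ_j(n) and returns the corresponding order-level estimate √n W / (n h̄); the profile and W = 1 are placeholders.

    import math

    def gamma_max_order(n, rho, W=1.0):
        """Evaluate the Theorem 1 expression gamma_max ~ sqrt(n)*W / (n * h_bar),
        where h_bar is the average hop count of eq. (2) and rho(j, n) gives the
        cache-occupancy probability of level j (rho(0, n) must be 1: the server)."""
        levels = int(math.sqrt(n))
        total = 0.0                        # n * h_bar = sum_i 4i sum_j (i-j) P_{i,j}
        for i in range(1, levels + 1):
            miss = 1.0                     # prod_{k=j+1}^{i} (1 - rho_k), built top-down
            for j in range(i, -1, -1):
                p_ij = miss * rho(j, n)    # P_{i,j} of eq. (1)
                total += 4 * i * (i - j) * p_ij
                miss *= (1.0 - rho(j, n))
        h_bar = total / n
        return math.sqrt(n) * W / (n * h_bar)

    if __name__ == "__main__":
        uniform = lambda j, n: 1.0 if j == 0 else 0.2   # illustrative rho_j(n)
        for n in (10**2, 10**4, 10**6):
            print(n, gamma_max_order(n, uniform))

With a level-independent profile such as the one above, the computed value decays like 1/√n, in line with the special case treated in Theorem 2.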
Theorem 2. Consider a wireless network consisting of n nodes, each node containing the information in its local cache with common probability ρ(n). Each node can transmit over a common wireless channel, with W bits per second bandwidth, shared by all nodes.

Scenario i- If the nodes are located on a grid and search for the content only on the shortest path toward the server, the maximum achievable throughput capacity order γ_max is

    Θ( W √ρ(n) / (√n (1 − ρ(n))) )   if ρ(n) = Ω(n^{−1/2}),
    Θ( W / (n (1 − ρ(n))) )          if ρ(n) = O(n^{−1/2}).

Scenario ii- If the nodes are located on a grid and use expanding ring search as their content discovery algorithm, the maximum achievable throughput γ_max is

    Θ( W ρ(n)^{0.4646} / (√n (1 − ρ(n))) )   if ρ(n) = Ω(n^{−1/2}),
    Θ( W / (n (1 − ρ(n))) )                  if ρ(n) = O(n^{−1/2}).

Scenario iii- If the nodes are randomly distributed over a unit square area and use the path-wise content discovery algorithm, the maximum achievable capacity γ_max is

    Θ( W / (log n (1 − ρ(n))) )          if ρ(n) = Ω(log n / n),
    Θ( W / (√(n log n) (1 − ρ(n))) )     if ρ(n) = O(log n / n).

We prove Theorem 2 with the help of the following lemmas.

Lemma. Cosider the wireless etworks described i Theorem 2. For sufficietly large etworks ad whe ρ( is large eough (ρ( = Ω( /2 for case i, ii ad ρ( = Ω(log for case iii, the average umber of hops betwee the customer ad the earest cached cotet locatio is ρ( (i h = ρ( (ii 0.4646 (3 (iii Proof: Sceario i- This case ca be cosidered as a special case of the etwork studied i theorem, where ρ i ( is the same for all i. Thus we ca drop the idex i ad let ρ( deote the commo value of this probability. Usig equatio 2 we will have h = 4 i{i( ρ( i + i= i (i j( ρ( i j ρ(} j= = 4 ( i 2 ( ρ( i + = = i= 4( ρ( ρ( i= k( ρ( k ρ( i i k=0 (i i( ρ( i i= 4( ρ( ( + ( ρ( ρ( 2 ρ 2 ( ( ρ( +2 ( + ( ρ( + ρ 2 ( Sice for every N ad x the followig is true x = o( N lim ( N xn = e xn x = N 0 x = ω( N we ca write ( ρ( 0 if ρ( = Ω( /2, which results i h = ρ(, ad ( ρ( if ρ( = O( /2, which results i h = Ω(, ad sice is upper limited by it is equal to that value. Sceario ii - The probability that the discovered cache is located at a distace of oe hop from the requester is the probability that oe of the odes o the rig at oe hop distace cotais the data (it cosists of 4 odes, which equals to ( ρ( 4, ad the probability that the data eeds to travel through h hops from the discovered cache to where it is required is ( ( ρ( 4h h k= ( ρ(4k as there are 4h odes at distace of h hops. Therefore, h = h= h h( ( ρ( 4h ( ρ( 4k k= Numerical aalysis show that the above equatio is /ρ δ where 0.4 < δ < 0.5 if ρ = Ω( /2. I the rest of paper we use δ = 0.4646 which is obtaied by curve fittig. For smaller ρ( s, h will icrease to. Sceario iii - The discovered cache is oe hop away from the requester if there is a replica of the data i a cache at the same cell or at the adjacet cell toward the server. Sice there are log odes i each cell, the probability of the discovered cache beig at oe hop distace is ( ρ( 2 log, ad the probability of the discovered cache beig at distace of h hops away from the requester is ( ρ( h log ( ( ρ( log. The maximum umber of hops that may be traveled this way is r(. Thus h = ( ρ( 2 log + r( h=2 h( ρ(h log ( ( ρ( log Large = (4 where the last equality is correct whe ρ( = Ω(log. I sceario iii the average umber of hops betwee the earest cotet locatio ad the customer is just hop. This is the result of havig log( caches i oe hop distace for every requester. Each oe of these caches ca be a potetial source for the cotet. he the etwork grows, this umber will icrease ad if ρ( is large eough (ρ( = Ω(log the probability that at least oe of these odes cotai the required data will approach, i.e., lim ( ( ρ( log =. Lemma 2. The average probability that the server eeds to serve a request is p s = ( Θ (2 ρ( 2 O Θ ρ( ( 2 (2 ρ( 2 ρ( ( 2 (2 ρ( log (i (ii (iii Proof: Sceario i- The data will eed to be dowloaded from the server (at average distace h s if o copy of the data is available o the path betwee a requester ode ad the server. As the etwork area is assumed to be a square ad the server is i the middle of it, this probability is bouded by + hmax/2 k= 4k( ρ( k p s + hmax k= 4k( ρ( k Thus for large, p s = (2 ρ(2 ρ(. 2 Note that both h s ad h max, the maximum umber of hops which may be traveled betwee the requester ad the ode that possesses a valid copy of data, i this sceario are. Sceario ii- The data will eed to be dowloaded from the server (at average distace h s if o copy of the cotet is available i the etwork caches. 
Lemma 2. The average probability that the server needs to serve a request is

    p_s = Θ( (1 − ρ(n))(2 − ρ(n)) / (n ρ²(n)) )   (i)
    p_s = O( (1 − ρ(n))(2 − ρ(n)) / (n ρ²(n)) )   (ii)
    p_s = Θ( (1 − ρ(n))^{2 log n} )                (iii)

Proof: Scenario i- The data needs to be downloaded from the server (at average distance h̄_s) if no copy of the data is available on the path between the requesting node and the server. As the network area is assumed to be a square and the server is in the middle of it, this probability is bounded by

    (1/n) Σ_{k=1}^{h_max/2} 4k (1 − ρ(n))^k  ≤  p_s  ≤  (1/n) Σ_{k=1}^{h_max} 4k (1 − ρ(n))^k.     (5)

Thus for large n, p_s = Θ((1 − ρ(n))(2 − ρ(n)) / (n ρ²(n))). Note that both h̄_s and h_max, the maximum number of hops which may be traveled between the requester and the node that possesses a valid copy of the data, are Θ(√n) in this scenario.

Scenario ii- The data needs to be downloaded from the server (at average distance h̄_s) if no copy of the content is available in the network caches. Since, compared with scenario i, more nodes are involved in the process of content discovery, the request is forwarded to the server with smaller probability. Thus p_s = O((1 − ρ(n))(2 − ρ(n)) / (n ρ²(n))).

Scenario iii- The data is downloaded from the server if no node in the cells on the path toward the server's cell contains a copy of the content. Hence

    p_s = (1/n) [ 5 log n (1 − ρ(n))^{2 log n} + Σ_{h=2}^{1/r(n)} 4h log n (1 − ρ(n))^{(h+1) log n} ] = Θ((1 − ρ(n))^{2 log n}) for large n.     (6)

It can be seen that in all cases the average number of hops between the server and the node requesting the content is a function of the total number of nodes in the network and of ρ(n).

Now we can prove Theorem 2 using the above lemmas.

Proof: Assume that each content is retrieved at rate γ bits/sec. The traffic generated by one download from a cache at an average distance of h̄ hops from the requesting node is γ h̄, and the traffic generated by a download from the server at an average distance of h̄_s hops from the requester is γ h̄_s. The probability that the server is uploading the data is p_s, and the probability that a cache node is serving the customer is p = 1 − p_s. The total number of requests for a content in the network at any given time is limited by the number of nodes not having the content in their own cache, n(1 − ρ(n)). Thus the maximum total bandwidth needed to accomplish these downloads is n(1 − ρ(n))(p h̄ + p_s h̄_s) γ, which is upper bounded by Θ(W√n) in scenarios i, ii, and by Θ(W / r²(n)) in scenario iii. Therefore the maximum download rate is

    γ_max = √n W / { n(1 − ρ(n)) [ (1 − (1−ρ(n))(2−ρ(n))/(nρ²(n))) (1/√ρ(n)) + (1−ρ(n))(2−ρ(n))/(√n ρ²(n)) ] }          (i)
    γ_max = √n W / { n(1 − ρ(n)) [ (1 − (1−ρ(n))(2−ρ(n))/(nρ²(n))) (1/ρ(n)^{0.4646}) + (1−ρ(n))(2−ρ(n))/(√n ρ²(n)) ] }   (ii)
    γ_max = W / { r²(n) n (1 − ρ(n)) [ (1 − (1−ρ(n))^{2 log n}) + (1−ρ(n))^{2 log n} / r(n) ] }                          (iii)     (7)

The results of Theorem 2 can now be derived by approximating these expressions for sufficiently large n. Note that if there were no cache in the system, or if ρ(n) were below the stated threshold values, all the requests would be served by the server, and the maximum download rate would be Θ(√n W / (n h̄_s)) = Θ(W/n) in cases i, ii and Θ(W / √(n log n)) in case iii.

In the previous theorems the maximum throughput capacity in a cache-enabled wireless network has been calculated. It is now important to verify whether this throughput can be supported by each cell (node), i.e., whether the traffic carried by each cell (node) is not more than what it can support (W).

Theorem 3. The throughput capacities of Theorem 2 are supportable if ρ(n) = O( √n log log n / (log n + √n log log n) ) in scenarios i, ii, and ρ(n) = O( (log n · log log log n − log log n) / (log n · log log log n) ) in scenario iii.

Here we start with scenario iii and give a complete proof; scenario i is then briefly studied, and similar reasoning can be used for scenario ii.

Proof: Scenario iii- The traffic load at the server is γ_max p_s n(1 − ρ(n)) = O(W), so the flow at the server will not be a bottleneck.
Therefore, to log log log = log log ( ρ( log log log log make sure that all the cells ca support the stated throughput log log log log log log ρ( is ot allowed to exceed O( log log log log. Fially each dowload of iformatio will geerate a traffic load o all the itermediate cells o the path from the source to the customer. However as stated i the proof of Theorem 2, the probability that the required cotet is discovered at distace of oe hope is ( ρ( 2 log which is almost oe for large. So we may coclude that with high probability i sufficietly large etworks o cell is workig as relay or the umber of trasmissios passig through a cell as relay is close to zero. Sceario i- Similar to sceario iii, the maximum traffic load is the load geerated i a ode whe servig the requests. Here there are ( ρ( requests which will be served by ρ( other odes, so accordig to the bi-balls Theorem the maximum requests for a ode will be i the order of ρ( log ( ρ( log log (8 log log log, which geerates traffic at the busiest ode. This traffic does ot exceed as log as ρ( does ot log log exceed O( log + log log. V. DISCUSSION e studied the impact of cachig o the maximum capacity order i the grid ad radom etworks where all the caches have the same probability of havig each item at ay give time. The etworks where the received data is stored oly at

V. DISCUSSION

We studied the impact of caching on the maximum capacity order in grid and random networks where all the caches have the same probability of holding each item at any given time. Networks where the received data is stored only at the receivers and then shared with the other nodes for as long as a node keeps the content can be considered as an example of such networks.

In Figure 2(a) we assume that the request rate is roughly 7 times the drop rate, so ρ(n) = 7/8, and show the maximum throughput order as a function of the network size. According to Theorem 1, and as can be observed from this figure, the maximum throughput capacity of a grid network with the described characteristics is inversely proportional to the square root of the network size if the probability of each item being in each cache is fixed. Similarly, in the random network the maximum throughput is inversely proportional to the logarithm of the network size. Moreover, comparing scenario i with scenario ii, we observe that the throughput capacities in both cases are almost the same, meaning that the path-wise discovery scheme leads to almost the same throughput capacity as the expanding ring discovery. Thus we can conclude that just by knowing the address of a server containing the required data and forwarding the requests along the shortest path toward that server we can achieve the best performance; increasing the complexity and control traffic to discover the closest copy of the required content does not add much to the capacity.

On the other hand, with a fixed network size, if the probability of an item being in each cache is greater than a threshold (n^{−1/2} in cases i, ii and log n / n in case iii), most of the requests will be served by the caches and not the server, so increasing the probability of an intermediate cache having the content reduces the number of hops needed to forward the content to the customer, and consequently increases the throughput (Figure 2(b), n = 10^4). For content presence probabilities of order less than these thresholds, most of the requests are served by the main server (p_s approaches 1), so the maximum possible number of hops is traveled by each content to reach the requester, and the minimum throughput capacity (Θ(1/(n(1 − ρ(n)))) in cases i, ii, and Θ(1/(√(n log n)(1 − ρ(n)))) in case iii) is achieved.

Furthermore, if the content availability increases with the network growth, higher throughput capacities may be achievable. For example, in scenario iii, if ρ(n) = log n / (log n + log log n), then the resulting throughput will be γ_max = Θ(1/log log n), which is much higher than Θ(1/log n). However, noting that according to Theorem 3 ρ(n) is upper bounded, the achievable capacity will be upper bounded by log log n / log n in scenario i and by log log log n / log log n in scenario iii.

As may have been expected, and according to our results, the obtained throughput is a function of the probability of each content being available in each cache, which in turn is strongly dependent on the network configuration and the cache management policy. In the following, we describe this probability in more detail and study simple examples in which each item is equally likely to be available in any cache.
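The scaling comparison of Figure 2(a) can be approximated with the leading-order expressions of Theorem 2 (constants dropped, W = 1 assumed, and ρ = 7/8 as in the figure); the sketch below is illustrative only and does not reproduce the exact curves of the paper.

    import math

    RHO = 7.0 / 8.0   # request rate ~ 7x drop rate, as assumed for Figure 2(a)

    def grid_pathwise(n, rho=RHO):     # scenario i, rho = Omega(n^-1/2) branch
        return math.sqrt(rho) / (math.sqrt(n) * (1 - rho))

    def grid_ring(n, rho=RHO):         # scenario ii
        return rho ** 0.4646 / (math.sqrt(n) * (1 - rho))

    def random_pathwise(n, rho=RHO):   # scenario iii
        return 1.0 / (math.log(n) * (1 - rho))

    def grid_no_cache(n):              # server-only baseline, cases i, ii
        return 1.0 / n

    def random_no_cache(n):            # server-only baseline, case iii
        return 1.0 / math.sqrt(n * math.log(n))

    if __name__ == "__main__":
        for n in (10**4, 10**5, 10**6, 10**7):
            print(f"n={n:>8}  i={grid_pathwise(n):.2e}  ii={grid_ring(n):.2e}  "
                  f"iii={random_pathwise(n):.2e}  grid no-cache={grid_no_cache(n):.2e}  "
                  f"rand no-cache={random_no_cache(n):.2e}")

The output shows scenarios i and ii decaying together like 1/√n, scenario iii decaying like 1/log n, and both dominating the corresponding no-cache baselines, which is the qualitative content of Figure 2(a).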
A. Content Distribution in Steady-State

The time diagram of the data access process in a cache is illustrated in Figure 3. When a query for content f_i is initiated, the content becomes available at the requester's cache after a wait time T_3, which is a function of the distance between the user and the data source (server or an intermediate cache), the data size, and the download speed. An expiration timer is set upon receiving the data, and the data is finally dropped after a holding time T_1 with distribution f_1 and mean 1/µ_i. During this time, the cached data can be shared with the other users if needed. The same user may re-issue a query for that data after some random time T_2 with distribution f_2 and mean 1/λ_i. Note that a node sends out a request for a content only if it does not have it in its local cache; otherwise, its request is served locally and no request is sent to the other nodes. The solid lines in this diagram denote the portions of time during which the data is available in the local cache.

Fig. 2. Maximum download rate (γ_max) vs. (a) the number of nodes (n), (b) the content presence probability ρ(n), for scenarios i, ii, iii and for the grid and random networks with no cache.

Fig. 3. Data access process time diagram in a cache network.

In this work we assume identical content sizes, B_i = B, and assume all the contents have the same popularity, leading to similar request rates λ_i = λ and the same holding times µ_i = µ. As the requests for different contents are supposed to be independent, and holding times are set for each content independently of the others, we can do the calculations for one single content; if the total number of contents is not a function of the network size, this does not change the capacity order. Suppose that B is much larger than the request size, so that we can ignore the overhead of the discovery phase in our calculations. Furthermore, if the information sizes are the same and the download rates are also the same, the download time will be a function of the number of hops h between the source and the customer: T_3 = Bh/γ.

In the steady-state analysis we ignore this constant time. The average portion of time during which each node holds a given content in its local cache is

    ρ(n) = (1/µ) / (1/µ + 1/λ) = λ / (λ + µ),     (9)

which is the average probability that a node contains the data at steady state. Here λ is the rate of requests for a data item from each user when the data is not available locally, and µ is the rate at which the data is expunged from the cache. Both these parameters are strongly dependent on the total number of users, on the topology and configuration of the network, and on cache characteristics such as size and replacement policy.

1) Example 1: As a possible example leading to equal probability of all the caches containing a piece of data, which is the basic assumption of Theorems 2 and 3, assume that receiving a data item in the local cache of the requesting user sets a time-out timer with exponentially distributed duration with parameter η, and that no other event changes the timer until it expires, meaning that µ = η. Considering the request process for each content from each user to be a Poisson process with rate β, using the memoryless property of the exponential distribution (for the internal request inter-arrival times), and assuming that the received data is stored only in the end user's cache (the caches on the download path do not store the passing data), it can be shown that λ = β. Thus we can write the presence probability of each content in each cache as ρ(n) = β/(β + η).

Figures 4(a),(b) respectively illustrate the total request rate and the total traffic generated in a fixed-size network in scenario i for different request rates when the time-out rate is fixed. The total request rate in the network is the product of the number of requesting nodes and the rate at which each node sends requests. The total traffic is the product of the total request rate, the number of hops between source and destination, and the content size. A small λ means that each node sends requests at a low rate, so fewer caches hold the content, and consequently more nodes are sending requests at this low rate; in this case most of the requests are served by the server, and the total request rate increases as the per-node request rate increases. A high λ means that each node requests the content at a higher rate, so the number of cached copies in the network is high, and thus fewer nodes are requesting the content externally at this high rate; here most of the requests are served by the caches, and the total request rate is then determined by the content drop rate. So for very large λ, the total request rate is the total number of nodes in the network times the drop rate (nµ), and the total traffic is nµB. As can be seen, there is some request rate at which the traffic reaches its maximum; this happens when there is a balance between the requests served by the server and those served by the caches. For smaller request rates, most of the requests are served by the server and increasing λ increases the total traffic; for larger λ, on the other hand, most of the requests are served by the caches and increasing the request rate decreases the distance to the nearest content and hence the total traffic.
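The steady-state occupancy (9) used in Example 1 can be checked with a short alternating-renewal simulation; the exponential holding and re-request times and the chosen rates below are illustrative assumptions.

    import random

    def occupancy(lam, mu, horizon=1e6, seed=0):
        """Simulate one cache slot that alternates between 'absent' (content
        re-requested after Exp(lam)) and 'present' (dropped after Exp(mu)),
        and return the fraction of time the content is present."""
        random.seed(seed)
        t, present_time, have_it = 0.0, 0.0, False
        while t < horizon:
            if have_it:
                stay = random.expovariate(mu)    # holding time T1
                present_time += stay
            else:
                stay = random.expovariate(lam)   # time until the next request, T2
            t += stay
            have_it = not have_it
        return present_time / t

    if __name__ == "__main__":
        lam, mu = 7.0, 1.0
        print("simulated:", round(occupancy(lam, mu), 3),
              " lambda/(lambda+mu):", round(lam / (lam + mu), 3))

With λ = 7µ the simulated fraction matches λ/(λ + µ) = 7/8, the value assumed for Figure 2(a).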
Figures 5(a),(b) respectively illustrate the total request rate and the total traffic generated in a fixed-size network in scenario i for different time-out rates when the request rate is fixed. A low 1/µ means a high time-out rate, i.e., short lifetimes, so most of the requests are served by the server and caching is effectively not used at all. For large time-out times, all the requests are served by the caches, and the only parameter determining the total request rate is the time-out rate. However, when the network grows, the traffic in the network increases and the download rate decreases. If we assume that new requests are not issued in the middle of a previous download, the request rate will decrease with the network growth. If the holding time of the contents in a cache increases accordingly, the total traffic will not change; i.e., if by increasing the network size the requests are issued not as fast as before, and the contents are kept in the caches for longer times, the network will perform similarly.

Fig. 4. (a) Total request rate in the network (nλ(1 − ρ(n))), (b) total traffic in the network (Bλn(1 − ρ(n))(p h̄ + p_s h̄_s)), vs. the request rate λ with fixed time-out rate (µ = 1), for n = 10^4, 10^5, 10^6.

2) Example 2: Assume that each cache in level i of a grid network receives requests for a specific document according to a Poisson distribution with rate β_i(n) from all the other nodes, and with rate β from the local user. Note that the rate β_i(n) is a function of the individual request rate of the users (β) and also of the location of the cache in the network. The content discovery mechanism is path-wise discovery, and whenever a copy of the required data is found (in a cache or at the server), it is downloaded through the reverse path, and all the nodes on the download path store it in their local caches. Moreover, we assume that receiving the data, and also any request for the available cached data, at a node in level i refreshes a time-out timer with fixed duration D_i. According to [28] this is a good approximation for caches with the Least Recently Used (LRU) replacement policy when the cache size and the total number of documents are reasonably large. We will calculate the average probability ρ_i(n) of the data being in a cache in level i based on these assumptions, and then use Theorem 1 to obtain the throughput capacity.

Fig. 5. (a) Total request rate in the network (nλ(1 − ρ(n))), (b) total traffic in the network (Bλn(1 − ρ(n))(p h̄ + p_s h̄_s)), vs. the inverse of the time-out rate (1/µ) with fixed request rate (λ = 1), for n = 10^4, 10^5, 10^6.

Let the random variable t_on(T) denote the total time during which the data is available in a cache over a constant time T. Assume that N(T) requests are received by a node v_i in level i (at i-hop distance from the server). The data-available time between any two successive requests (internal and external) is D_i if the timer set by the first request expires before the second one arrives, or else equals the time between these two requests. Let τ_i^req denote the time between receiving two successive requests; this process has an exponential distribution with parameter β + β_i. So the total time of data availability in a level-i cache is

    t_on(T) = Σ_{k=0}^{N(T)} min(τ_i^req, D_i),     (10)

and the average value of this time is

    E[t_on(T)] = Σ_{m=0}^{∞} E[ Σ_{k=0}^{m} min(τ_i^req, D_i) ] Pr(N(T) = m)
               = Σ_{m=0}^{∞} m E[min(τ_i^req, D_i)] Pr(N(T) = m)
               = E[min(τ_i^req, D_i)] E[N(T)].     (11)

Given the Poisson arrivals of requests with parameter β + β_i, E[N(T)] = (β + β_i)T, and E[min(τ_i^req, D_i)] is easily calculated and equals (1 − e^{−D_i(β + β_i)})/(β + β_i). Therefore,

    E[t_on(T)] = (1 − e^{−D_i(β + β_i)}) T,     (12)

and finally the probability of an item being available in a level-i cache is ρ_i(n) = E[t_on(T)]/T = 1 − e^{−D_i(β + β_i)}. Note that D_0 = ∞, so that ρ_0(n) = 1.

Now we need to calculate the rate of requests received by each node in level i. We assume that the shortest path from the requester to the server is selected such that all the nodes in level i receive requests at the same rate. There are 4i nodes in level i and 4(i + 1) nodes in level i + 1, so the requests initiated or forwarded by the nodes in level i + 1, when the content is not locally available at those nodes, are spread over the nodes of level i in proportion (i + 1)/i; hence β_i(n) can be expressed as

    β_i(n) = (1 − ρ_{i+1}(n)) (β + β_{i+1}(n)) (i + 1)/i.     (13)

Combining equation (13), the relationship between ρ_i(n) and β_i(n), and the fact that there is no external request arriving at the nodes in the outermost level, together with the result of Theorem 1, we can obtain the capacity γ_max of the grid network with path-wise content discovery and on-path storing, which is given by

    γ_max = Θ( √n W / ( 4 Σ_{i=1}^{√n} i Σ_{j=0}^{i} (i − j) e^{−Σ_{k=j+1}^{i} D_k(β + β_k)} (1 − e^{−D_j(β + β_j)}) ) ).     (14)

Figure 6(a) illustrates how the maximum throughput capacity changes with the network size n when D_i β is the same for all nodes. It can be seen that this capacity is inversely proportional to √n, just like the throughput capacity when no timer refreshing is available and the downloaded data is stored only in the end user's cache. Figure 6(b) shows the capacity versus different values of D_i β, assuming n = 10^4 and the same timer expiration time for all nodes. It can be seen that the maximum capacity is very close to e^{Dβ}/√n. For large Dβ products, the probability of the content being available in each and every cache tends to one, so all the contents are downloaded from the local cache and no data transfer needs to be done; the calculated throughput capacity therefore becomes very large, which means that all the links are available at their maximum bandwidth.
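Example 2's capacity expression (14) can be evaluated numerically. The sketch below (with arbitrary illustrative values for β, D_i, and W, and with constant factors dropped) runs the level recursion (13) from the outermost level inward, forms ρ_i = 1 − e^{−D_i(β+β_i)}, and plugs the result into the Theorem 1 sum.

    import math

    def example2_capacity(n, beta=1.0, D=0.5, W=1.0):
        """Order-level evaluation of eq. (14): run the recursion (13) for the
        external request rates beta_i from the outermost level inward, form
        rho_i = 1 - exp(-D_i (beta + beta_i)), then evaluate the Theorem 1 sum."""
        levels = int(math.sqrt(n))
        beta_ext = [0.0] * (levels + 2)        # no external requests at the outermost level
        for i in range(levels - 1, 0, -1):     # eq. (13), from the edge toward the server
            rho_next = 1.0 - math.exp(-D * (beta + beta_ext[i + 1]))
            beta_ext[i] = (1.0 - rho_next) * (beta + beta_ext[i + 1]) * (i + 1) / i
        rho = lambda j: 1.0 if j == 0 else 1.0 - math.exp(-D * (beta + beta_ext[j]))
        denom = 0.0                            # sum_i 4i sum_j (i-j) e^{-...}(1 - e^{-...})
        for i in range(1, levels + 1):
            miss = 1.0                         # prod_{k=j+1}^{i} (1 - rho_k)
            for j in range(i, -1, -1):
                denom += 4 * i * (i - j) * miss * rho(j)
                miss *= 1.0 - rho(j)
        return levels * W / denom              # ~ sqrt(n) W / (n * h_bar)

    if __name__ == "__main__":
        for n in (10**3, 10**4, 10**5):
            print(n, example2_capacity(n))

As the Figure 6 discussion above suggests, with a fixed Dβ the printed values decay roughly like 1/√n.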
VI. CONCLUSION AND FUTURE WORK

We studied the asymptotic throughput capacity of ICNs in which the data cached at each node has a limited lifetime. Grid and random networks are the two network models investigated in this work. The results show that, with a fixed content presence probability in each cache, the network can achieve a maximum throughput order of 1/√n and 1/log n in the cases of grid and random networks, respectively. Furthermore, since the request rates will decrease as a result of increasing download delays, increasing the content lifetimes according to the network growth may result in higher throughput capacities. However, the throughput capacity is upper limited by certain values, which follows from the fact that the throughput supportable by each node is limited.

Fig. 6. Maximum throughput capacity (γ_max) versus (a) the network size n (for D_i β = 0.2, 1, 5), (b) the timeout-request rate product Dβ (compared with e^{Dβ} n^{−1/2}), for n = 10^4.

Moreover, we studied the impact of the content discovery mechanism on the performance. It can be observed that looking for the closest cache containing the content does not have much asymptotic advantage over the simple path-wise discovery. Consequently, downloading the nearest available copy on the path toward the server has the same performance as downloading from the nearest copy. A practical consequence of this result is that routing may not need to be updated with knowledge of local copies; just getting to the source and finding the content opportunistically yields the same benefit. Another interesting finding is that whether all the caches on the download path keep the data, or just the end user does, the maximum throughput capacity scaling does not change.

In this work we have made several assumptions to simplify the analysis. For example, we assumed that all the contents have the same characteristics (size, popularity); this assumption should be relaxed in future work. We also assumed that the requester downloads the data completely from one content location. However, if the node that needs the data can download different parts of it from different nodes and assemble the complete content out of the collected parts, the achievable capacities may be different. Proposing a caching and downloading scheme that can improve the capacity order is part of our future work.

REFERENCES

[1] L. Zhang, D. Estrin, J. Burke, V. Jacobson, J. Thornton, D. Smetters, B. Zhang, G. Tsudik, K. Claffy, D. Krioukov, D. Massey, C. Papadopoulos, T. Abdelzaher, L. Wang, P. Crowley, and E. Yeh, Named data networking (NDN) project, Oct. 2010.
[2] PURSUIT: Pursuing a pub/sub internet, http://www.fp7-pursuit.eu/, Sep. 2010.
[3] B. Ahlgren, C. Dannewitz, C. Imbrenda, D. Kutscher, and B. Ohlman, A survey of information-centric networking, IEEE Communications Magazine, vol. 50, no. 7, July 2012.
[4] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, Networking named content, in ACM CoNEXT, 2009, pp. 1-12.
[5] B. Ahlgren, M. D'Ambrosio, M. Marchisio, I. Marsh, C. Dannewitz, B. Ohlman, K. Pentikousis, O. Strandberg, R. Rembarz, and V. Vercellone, Design considerations for a network of information, in ACM CoNEXT, 2008, pp. 1-6.
[6] T. Koponen, M. Chawla, B. G. Chun, A. Ermolinskiy, K. H. Kim, S. Shenker, and I. Stoica, A data-oriented (and beyond) network architecture, in ACM SIGCOMM, 2007, pp. 181-192.
[7] A. Ghodsi, T. Koponen, B. Raghavan, S. Shenker, A. Singla, and J. Wilcox, Information-centric networking: Seeing the forest for the trees, in HotNets, 2011.
[8] H. Che, Z. Wang, and Y. Tung, Analysis and design of hierarchical web caching systems, in IEEE INFOCOM, 2001, pp. 1416-1424.
[9] E. Rosensweig, J. Kurose, and D. Towsley, Approximate models for general cache networks, in IEEE INFOCOM, 2010, pp. 1-9.
[10] A. Wolman, M. Voelker, N. Sharma, N. Cardwell, A. Karlin, and H. M. Levy, On the scale and performance of cooperative Web proxy caching, SIGOPS Oper. Syst. Rev., vol. 33, no. 5, pp. 16-31, Dec. 1999.
[11] E. J. Rosensweig and J. Kurose, Breadcrumbs: Efficient, best-effort content location in cache networks, in IEEE INFOCOM, 2009, pp. 2631-2635.
[12] L. Yin and G. Cao, Supporting cooperative caching in ad hoc networks, IEEE Transactions on Mobile Computing, no. 1, pp. 77-89, 2005.
[13] M. Tortelli, I. Cianci, L. A. Grieco, G. Boggia, and P. Camarda, A fairness analysis of content centric networks, Nov. 2011.
[14] C. Westphal, On maximizing the lifetime of distributed information in ad-hoc networks with individual constraints, in ACM MobiHoc, 2005, pp. 26-33.
[15] P. Gupta and P. Kumar, The capacity of wireless networks, IEEE Transactions on Information Theory, vol. 46, no. 2, 2000.
[16] J. Li, C. Blake, D. S. De Couto, H. I. Lee, and R. Morris, Capacity of ad hoc wireless networks, in MobiCom, 2001, pp. 61-69.
[17] U. Niesen, P. Gupta, and D. Shah, On capacity scaling in arbitrary wireless networks, IEEE Transactions on Information Theory, vol. 55, no. 9, pp. 3959-3982, 2009.
[18] M. Grossglauser and D. Tse, Mobility increases the capacity of ad hoc wireless networks, IEEE/ACM Transactions on Networking, vol. 10, no. 4, pp. 477-486, 2002.
[19] J. D. Herdtner and E. K. Chong, Throughput-storage tradeoff in ad hoc networks, in IEEE INFOCOM, 2005, pp. 2536-2542.
[20] G. Alfano, M. Garetto, and E. Leonardi, Content-centric wireless networks with limited buffers: when mobility hurts, in IEEE INFOCOM, 2013.
[21] H. Liu, Y. Zhang, and D. Raychaudhuri, Performance evaluation of the cache-and-forward (CNF) network for mobile content delivery services, in ICC Workshops, 2009, pp. 1-5.
[22] G. Carofiglio, M. Gallo, L. Muscariello, and D. Perino, Modeling data transfer in content-centric networking, in IEEE Teletraffic Congress (ITC 23), 2011, pp. 1-8.
[23] U. Niesen, D. Shah, and G. Wornell, Caching in wireless networks, IEEE Transactions on Information Theory, 2011.
[24] S. Gitzenis, G. S. Paschos, and L. Tassiulas, Asymptotic laws for content replication and delivery in wireless networks, in IEEE INFOCOM, 2012, pp. 531-539.
[25] F. Xue and P. Kumar, Scaling Laws for Ad Hoc Wireless Networks: An Information Theoretic Approach, Foundations and Trends in Networking, NOW Publishers, 2006.
[26] M. D. Penrose, The longest edge of the random minimal spanning tree, The Annals of Applied Probability, pp. 340-361, 1997.
[27] M. Raab and A. Steger, Balls into bins - a simple and tight analysis, in Proceedings of the Second International Workshop on Randomization and Approximation Techniques in Computer Science, 1998, pp. 159-170.
[28] H. Che, Y. Tung, and Z. Wang, Hierarchical Web caching systems: modeling, design and experimental results, IEEE Journal on Selected Areas in Communications, vol. 20, no. 7, pp. 1305-1314, Sep. 2002.