Performance Modeling of Proxy Cache Servers


Journal of Universal Computer Science, vol. 12, no. 9 (2006), 1139-1153
submitted: 3/2/05, accepted: 2/5/06, appeared: 28/9/06, J.UCS

Performance Modeling of Proxy Cache Servers

Tamás Bérczes, János Sztrik
(Department of Informatics Systems and Networks, University of Debrecen,
P.O. Box 12, H-4010 Debrecen, Hungary
{tberczes, jsztrik}@inf.unideb.hu)

Abstract: The primary aim of the present paper is to modify the performance model of Bose and Cheng [1] to a more realistic case in which external visits to the remote Web servers are also allowed and the Web servers have a limited buffer. We analyze how several parameters affect the performance of a Proxy Cache Server. Numerical results are obtained for the overall response time with and without a PCS. These results show that the benefit of a PCS depends on various factors. Several numerical examples illustrate the effect of the visit rates, the visit rates for the external users, and the cache hit rate probability on the mean response times.

Key Words: Queueing Network, Proxy Cache Server, Performance Models

Category: H.1.1, H.3.3

1 Introduction

The World Wide Web (WWW) gives quick and easy access to a large number of web servers where users can find all kinds of information, documents and multimedia files. From the user's point of view it does not matter whether the requested files are on the firm's computer or on the other side of the world. The usage of the web has been growing very fast. The number of internet users increased from 474 million in 2001 to 590 million in 2002, and the forecast for 2006 is 948 million users. Considering that in 1996 the number of users was only 627,000, the growth is rapid, and we can expect an exponential growth in traffic, too. The users want a high quality service and modest response times, yet the answer from the remote web server to the client often takes a long time. One of the problems is that the same copy of a file may be claimed by other users at the same time.
Because of this, identical copies of many files pass through the same network links, resulting in increased response times. A natural solution is to store this information closer to the users. In general, caching can be implemented at the browser software, at the originating Web sites, and at the boundary between the local area network and the Internet. Browser caches are inefficient since they cache for only one user. Caching at the Web sites can improve performance, although the requested files are still subject to delivery through the Internet. It has been suggested that the greatest improvement in response time for corporations will come from installing a proxy cache server (PCS) at the boundary between the local area network and the Internet. (Research is partially supported by the Hungarian National Foundation for Scientific Research under grant OTKA-K60698.) Requested documents can be delivered directly from the web server or through

a proxy cache server. A PCS has the same functionality as a web server when viewed from the client, and the same functionality as a client when viewed from a web server. The primary function of a proxy cache server is to store documents close to the users, so that the same document need not be retrieved several times over the same connection. In this paper a modification of the performance model of Bose and Cheng [1] is given to deal with a more realistic case in which external visits to the remote Web servers are also allowed and the Web servers have a limited buffer. For easier understanding of the basic model and for comparison we follow the structure of the cited work. In Section 2 we construct a queueing network model to study the dynamics of installing a PCS. Overall response-time formulas are developed both for the case with and without a PCS. In Section 3 numerical experiments are conducted to examine the response time behavior of the PCS with respect to various parameters of the model. Concluding remarks can be found in Section 4.

2 An analytical model of Proxy Cache Server traffic

In this section we briefly describe the mathematical model with the suggested modifications. Using a proxy cache server, whenever a file is requested, it is first checked whether the document exists on the proxy cache server (we denote the probability of this by p). If the document can be found on the PCS, its copy is immediately transferred to the user. Otherwise the request is sent to the remote Web server; after the requested document has arrived at the PCS, a copy of it is delivered to the user. The advantage of a PCS depends on several factors: the cache hit rate probability of the PCS, the speed of the PCS, the bandwidth of the firm's and the remote network, and the speed of the remote web server [1]. Fig. 1 illustrates the path of a request in the modified model, starting from the user and finishing with the return of the answer to the user. The notations used in this model are collected in Table 1. We assume that the requests of the PCS users arrive according to a Poisson process with rate λ, and that the external visits at the remote web server form a Poisson process with rate Λ. Let F be the average size of the requested files. We define λ1, λ2, λ3 and λ5 such that:

λ1 = p λ,   λ2 = (1 − p) λ    (1)

λ3 = λ2 + Λ,   λ5 = (1 − P_b) λ2    (2)

The solid line in Fig. 1 (λ1) represents the traffic when the requested file is available on the PCS and can be delivered directly to the user. The λ2 traffic, depicted by the dotted

[Figure 1: Network model]

line, represents those requests which could not be served by the PCS and must therefore be delivered from the remote web server. Naturally the web server serves not only the requests of the studied PCS but also the requests of other, external users. Let λ3 denote the intensity of the overall requests arriving at the remote Web server. The λ3 traffic undergoes the process of initial handshaking to establish a one-time TCP connection [7], [1]. We denote by I_s this initial setup time. According to [1], the remote Web server performance is characterized by the capacity of its output buffer B_s, the static server time Y_s, and the dynamic server rate R_s. In our model we assume that the Web server has a buffer of capacity K. Let P_b be the probability that a request is denied by the Web server. As is well known from basic queueing theory, the blocking probability P_b of the M/M/1/K queueing system is

P_b = P(N = K) = (1 − ρ) ρ^K / (1 − ρ^{K+1}),    (3)

where the service rate is

μ = R_s B_s / (F (Y_s R_s + B_s))    (4)

and the utilization is

ρ = λ3 F (Y_s R_s + B_s) / (R_s B_s).    (5)

Now we can see that the requests enter the buffer of the Web server according to a Poisson process with rate

λ4 = (1 − P_b) λ3.    (6)
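The blocking probability (3)-(5) is easy to evaluate numerically. The following sketch (function and parameter names are ours; sizes in bytes, times in seconds, rates in bytes/second, as in Table 1) also guards the removable singularity of (3) at ρ = 1:

```python
def web_server_blocking(lam3, F, Ys, Rs, Bs, K):
    """Blocking probability P_b of the M/M/1/K model of the remote
    Web server, following eqs. (3)-(5)."""
    mu = Rs * Bs / (F * (Ys * Rs + Bs))   # service rate, eq. (4)
    rho = lam3 / mu                       # utilization, eq. (5)
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)              # limit of eq. (3) as rho -> 1
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))  # eq. (3)

# Illustrative values in the spirit of Section 3:
print(web_server_blocking(lam3=170, F=5000, Ys=0.00006, Rs=250e6, Bs=2000, K=10))
```

With the parameters of Section 3 the utilization ρ is small, so P_b decays geometrically in K, which is what makes the buffer-size effect of Section 3.5 vanish so quickly.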

The performance of the firm's PCS is characterized by the parameters B_xc, Y_xc and R_xc. If the size of the requested file is greater than the Web server's output buffer, a looping process starts until the delivery of the whole requested file is completed. Let

q = min(1, B_s / F)    (7)

be the probability that the desired file can be delivered at the first attempt. Let λ̂4 be the rate of the requests arriving at the Web server when the looping process is taken into account. According to the conditions of equilibrium and the flow balance theory of queueing networks,

λ̂4 = λ4 / q.    (8)

Then we get the overall response time

T_xc = I_xc / (1 − λ I_xc)
     + p [ D_xc / (1 − λ1 D_xc) + F / N_c ]
     + (1 − p) [ I_s / (1 − λ3 I_s) + D_s / (1 − (λ4/q) D_s) + F / N_s + D_xc / (1 − λ5 D_xc) + F / N_c ],    (9)

where D_xc = (F/B_xc)(Y_xc + B_xc/R_xc) and D_s = (F/B_s)(Y_s + B_s/R_s) denote the service demands of the PCS and of the Web server, respectively. The response time T_xc consists of three terms. The first term is the time needed to check whether the requested file is on the PCS or not; it is derived from the waiting time of an M/M/1 queueing system in which the visits form a Poisson process with rate λ and the mean service time is I_xc. The second term is the response time in the case when the requested document exists on the PCS, the probability of which is p. The first item in this term is the waiting time at the PCS, where the numerator D_xc = (F/B_xc)(Y_xc + B_xc/R_xc) is the service demand; the second item corresponds to the time required for the content to travel through the client network bandwidth. The third term is the response time when the requested file does not exist on the PCS; the probability of that event is (1 − p). This term consists of several items. The first item is the expected one-time initialization time of the TCP connection between the PCS and the remote web server. The second item is the waiting time of the queueing system of the remote Web server, where λ̂4 = λ4/q, and F/N_s is the expected time of transferring the requested documents over the server network bandwidth. The third item

is the waiting time at the PCS while the copy of the requested document is transferred to the user. When there is no PCS, the overall response time T is given by the same arguments:

T = I_s / (1 − (λ + Λ) I_s) + D_s / (1 − ((1 − P_b)(λ + Λ)/q) D_s) + F / N_s + F / N_c,    (10)

where D_s = (F/B_s)(Y_s + B_s/R_s).

3 Numerical results

For the numerical explorations the corresponding parameters of Cheng and Bose [1] are used. The values of the other parameters for the numerical calculations are: I_s = I_xc = 0.004 seconds, B_s = B_xc = 2000 bytes, Y_s = Y_xc = 0.00006 seconds, R_s = R_xc = 250 Mbyte/s, N_s = 1544 Kbit/s, and N_c = 128 Kbit/s. In Figures 2-10 the dotted line plots the case with a PCS and the solid line depicts the case without a PCS.

3.1 Effect of visit rate

In Fig. 2 the response time is depicted as a function of the visit rate. In this Figure the visit rate for the external users is 100 requests/s and the cache hit rate is 0.1. We see that in this case the response time is greater when we install a PCS. In Fig. 3 we use the same parameters, but the cache hit rate is 0.25; in this case the response time is the same with and without a PCS. In Fig. 4 we use a higher visit rate for the external users (Λ = 150) with a smaller cache hit rate (p = 0.1). When λ is smaller than 70 requests/s, the response time is larger with a PCS than without one. When we use a higher cache hit rate together with a higher visit rate for the external users (Fig. 5, p = 0.25, Λ = 150), the efficiency of the PCS is clear: the response time with a PCS is smaller than the response time without a PCS for any value of the visit rate. So we can see that the performance of a PCS depends to a large extent on the firm's behaviour; but when the intensity of the requests from the firm is greater than 70 requests/s and the visit rate for the external users is 150 requests/s, even a small cache hit rate is enough to achieve a smaller response time.
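The response-time formulas (9) and (10) can be sketched directly in code. All function and parameter names below are ours. One assumption is made explicit: Table 1 gives the network bandwidths N_s, N_c in bits/second while F is in bytes, so the transfer terms written F/N in the formulas are evaluated here as 8F/N; the paper leaves this unit conversion implicit.

```python
def service_demand(F, B, Y, R):
    """Mean service demand (seconds): F/B buffer-fulls, each taking
    the static time Y plus the transmission time B/R."""
    return (F / B) * (Y + B / R)

def blocking(lam3, F, Ys, Rs, Bs, K):
    """P_b of the M/M/1/K Web-server model, eqs. (3)-(5)."""
    rho = lam3 * F * (Ys * Rs + Bs) / (Rs * Bs)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

def T_with_pcs(lam, Lam, p, F, Is, Ixc, Bs, Bxc, Ys, Yxc, Rs, Rxc, Ns, Nc, K):
    """Overall response time with a PCS, eq. (9)."""
    lam1, lam2 = p * lam, (1 - p) * lam            # eq. (1)
    lam3 = lam2 + Lam                              # eq. (2)
    Pb = blocking(lam3, F, Ys, Rs, Bs, K)
    lam4, lam5 = (1 - Pb) * lam3, (1 - Pb) * lam2  # eqs. (6), (2)
    q = min(1.0, Bs / F)                           # eq. (7)
    Dxc = service_demand(F, Bxc, Yxc, Rxc)
    Ds = service_demand(F, Bs, Ys, Rs)
    Fb = 8 * F                                     # file size in bits (assumption)
    return (Ixc / (1 - lam * Ixc)
            + p * (Dxc / (1 - lam1 * Dxc) + Fb / Nc)
            + (1 - p) * (Is / (1 - lam3 * Is)
                         + Ds / (1 - (lam4 / q) * Ds)
                         + Fb / Ns
                         + Dxc / (1 - lam5 * Dxc)
                         + Fb / Nc))

def T_without_pcs(lam, Lam, F, Is, Bs, Ys, Rs, Ns, Nc, K):
    """Overall response time without a PCS, eq. (10)."""
    tot = lam + Lam
    Pb = blocking(tot, F, Ys, Rs, Bs, K)
    q = min(1.0, Bs / F)
    Ds = service_demand(F, Bs, Ys, Rs)
    Fb = 8 * F
    return (Is / (1 - tot * Is)
            + Ds / (1 - ((1 - Pb) * tot / q) * Ds)
            + Fb / Ns + Fb / Nc)

# Parameters of Section 3 (N_s, N_c converted to bit/s):
par = dict(F=5000, Is=0.004, Bs=2000, Ys=0.00006, Rs=250e6,
           Ns=1544e3, Nc=128e3, K=100)
print(T_without_pcs(20, 100, **par))
print(T_with_pcs(20, 100, p=0.1, Ixc=0.004, Bxc=2000, Yxc=0.00006,
                 Rxc=250e6, **par))
```

The stability conditions are implicit: each denominator of the form (1 − λ·D) must stay positive, exactly as in the M/M/1 waiting-time formulas the paper builds on.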
3.2 Effect of visit rates for the external users

Now we investigate the effect of the visit rate for the external users. In Fig. 6 the visit rate from the PCS is 20 requests/s, the requested file size is 5000 bytes and the cache hit rate is 0.1. We can see that with these parameters, installing a PCS yields a higher response time. In Fig. 7 we modified only the cache hit rate probability, to 0.25. In

this situation, when we have more than 140 external requests/s, the response time with a PCS is smaller than without one. When the cache hit rate probability is small (p = 0.1) and λ = 70 requests/s (Fig. 8), the response time with a PCS is smaller than without one once we use a greater visit rate for the external users (Λ > 150). When we use a higher cache hit rate probability (Fig. 9, p = 0.4), the response time with a PCS is smaller independently of the external visits. Observing Figs. 6-9 we find that, in general, the response time both with and without a PCS increases when the visit rate for the external users increases. When the visit rate of the studied firm is modest (20 requests/s), the benefit of the PCS becomes visible when the visit rate for the external users is bigger or when the cache hit rate probability is higher.

3.3 Comparison of the two models

When the visit rate for the external users is zero (Λ = 0) and the buffer size (K) of the remote Web server is unlimited, we get the equation used by Bose and Cheng. Fig. 10 (p = 0.1, F = 5000 bytes, Λ = 0) depicts the overall response time as a function of the arrival rate given by the equation used by Bose and Cheng, and Fig. 4 depicts the response time given by our model with the same parameters but with a visit rate for the external users (Λ = 150) and buffer size (K = 100). Investigating Fig. 10 we see that the response time with a PCS is smaller than without a PCS only when we use a higher visit rate (λ > 85). In Fig. 4 the advantage of the PCS is already visible at lower visit rates (λ > 65). These numerical examples show that with our modifications we obtain a more realistic queueing network model: when the external visits are allowed, the PCS is already beneficial at lower traffic.

3.4 Effect of the cache hit rate probability

Fig. 11
(λ = 20, Λ = 100, F = 5000 bytes, K = 100) depicts the overall response time as a function of the cache hit rate probability. The overall response time without a cache server is independent of p, as observed from (10); hence T ≈ 0.32 s for all values of p. The overall response time with a PCS decreases as p increases, as seen in Fig. 11. When we use a cache hit rate probability above 0.2, the efficiency of the PCS is visible.

3.5 Effect of the buffer size

Let T̄xc denote the response time when the Web server has an unlimited buffer. In this case the blocking probability P_b is 0, so from (6) we get λ4 = λ3. We can easily derive the overall response time from equation (9):

T̄xc = I_xc / (1 − λ I_xc)
     + p [ D_xc / (1 − λ1 D_xc) + F / N_c ]
     + (1 − p) [ I_s / (1 − λ3 I_s) + D_s / (1 − (λ3/q) D_s) + F / N_s + D_xc / (1 − λ2 D_xc) + F / N_c ],    (11)

where D_xc = (F/B_xc)(Y_xc + B_xc/R_xc) and D_s = (F/B_s)(Y_s + B_s/R_s) are the service demands appearing in (9). In Figs. 12 and 13 we depict the response time as a function of the buffer size. In both Figures the dotted line plots the response time when the Web server has a limited buffer, and the solid line illustrates the case of an unlimited buffer. Of course, with an unlimited buffer we get a constant value for the response time; accordingly, we would like to see how the buffer size affects the response time. When the visit rate for the external users is zero (Λ = 0), the response time given by equation (11) equals the response time given by the equation that was used by Bose and Cheng. In Fig. 12 (p = 0.1, λ = 70, Λ = 0, F = 5000 bytes) we find that the buffer size of the Web server affects the response time only when it is very small: when the buffer size (K) is more than 5, the response time is independent of the buffer size. In Fig. 13 we analyze the situation when external visits are allowed (p = 0.1, λ = 70, Λ = 100, F = 5000 bytes). We can observe that, allowing external visits, the effect of the buffer size vanishes at a bigger value; in our case this happens when K is greater than 8.
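The vanishing effect of the buffer size can be checked numerically: as K grows, the blocking probability of the M/M/1/K model falls towards zero, so λ4 = (1 − P_b)λ3 approaches λ3 and the limited-buffer response time converges to the unlimited-buffer value. A minimal sketch with illustrative values from Section 3 (function and parameter names are ours):

```python
def blocking(lam3, F, Ys, Rs, Bs, K):
    """P_b of the M/M/1/K Web-server model, eqs. (3)-(5)."""
    rho = lam3 * F * (Ys * Rs + Bs) / (Rs * Bs)   # eq. (5)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))  # eq. (3)

# lam3 = (1 - p)*lam + Lam with p = 0.1, lam = 70, Lam = 100 (as in Fig. 13):
lam3 = 0.9 * 70 + 100
for K in (1, 2, 4, 8, 16):
    Pb = blocking(lam3, F=5000, Ys=0.00006, Rs=250e6, Bs=2000, K=K)
    print(K, Pb, (1.0 - Pb) * lam3)   # lam4 = (1 - Pb)*lam3 approaches lam3
```

Because ρ is well below 1 for these parameters, P_b shrinks geometrically in K, which is consistent with the observation that the curves in Figs. 12 and 13 flatten after a small buffer size.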

[Figure 2: response time vs. visit rate; p = 0.1, F = 5000 bytes, Λ = 100, K = 100]

[Figure 3: response time vs. visit rate; p = 0.25, F = 5000 bytes, Λ = 100, K = 100]

[Figure 4: response time vs. visit rate; p = 0.1, F = 5000 bytes, Λ = 150, K = 100]

[Figure 5: response time vs. visit rate; p = 0.25, F = 5000 bytes, Λ = 150, K = 100]

[Figure 6: response time vs. visit rate for the external users; λ = 20, p = 0.1, F = 5000 bytes, K = 100]

[Figure 7: response time vs. visit rate for the external users; λ = 20, p = 0.25, F = 5000 bytes, K = 100]

[Figure 8: response time vs. visit rate for the external users; λ = 70, p = 0.1, F = 5000 bytes, K = 100]

[Figure 9: response time vs. visit rate for the external users; λ = 20, p = 0.4, F = 5000 bytes, K = 100]

[Figure 10: response time vs. visit rate; p = 0.1, F = 5000 bytes, Λ = 0]

[Figure 11: response time vs. cache hit rate probability; λ = 20, Λ = 100, F = 5000 bytes, K = 100]

[Figure 12: response time vs. buffer size; p = 0.1, λ = 70, Λ = 0, F = 5000 bytes]

[Figure 13: response time vs. buffer size; p = 0.1, λ = 70, Λ = 100, F = 5000 bytes]

λ: arrival rate from the PCS
Λ: visit rate for the external users
F: average file size (in bytes)
p: cache hit rate probability
B_xc: PCS output buffer (in bytes)
I_xc: lookup time of the PCS (in seconds)
Y_xc: static server time of the PCS (in seconds)
R_xc: dynamic server rate of the PCS (in bytes/second)
N_c: client network bandwidth (in bits/second)
B_s: Web server output buffer (in bytes)
I_s: lookup time of the Web server (in seconds)
Y_s: static server time of the Web server (in seconds)
R_s: dynamic server rate of the Web server (in bytes/second)
N_s: server network bandwidth (in bits/second)
K: buffer size of the Web server (in requests)

Table 1: Notations

4 Conclusions

We modified the queueing network model of Bose and Cheng [1] to a more realistic case in which external arrivals are allowed at the remote web server and the web server has a limited buffer. To examine this model we conducted numerical experiments with realistic parameters. We noticed that when the arrival rate of requests increases, the response times increase as well, regardless of the existence of a PCS. But in contrast with [1], when external visits are allowed at the remote web server, the PCS was already beneficial at low traffic and a low cache hit rate. When we used a high visit rate with a high cache hit rate probability, the response time gap between the cases with and without a PCS was more significant. To compare the two models we examined the effect of the visit rate for the external users. With a low external arrival rate, installing a PCS resulted in higher response times. Increasing the visit rate for the external users, the difference between the response times with and without a PCS became smaller and smaller until it vanished, and beyond that point the existence of a PCS resulted in lower response times.
Examining the numerical results, it is clear that by allowing external arrivals and a limited buffer a more realistic model is obtained.

References

1. BOSE, I., CHENG, H.K.: Performance models of a firm's proxy cache server. Decision Support Systems and Electronic Commerce, 29 (2000), 45-57.

2. CACHEFLOW INC.: CacheFlow White Papers. (1999). Available from http://cacheflow.com/technology/wp
3. LASHINSKY, A.: Suddenly cache is king in the world of net stocks. Fortune, (1999), 370-372.
4. MENASCE, D.A., ALMEIDA, V.A.F.: Capacity Planning for Web Performance: Metrics, Models, and Methods. Prentice Hall, (1998).
5. RUBENSTEIN, R., HERSCH, H.M., LEDGARD, H.F.: The Human Factor: Designing Computer Systems for People. Digital Press, Burlington, MA, (1984).
6. ZHAO, J.L., KUMAR, A.: Data management issues for large scale, distributed workflow systems on the internet. ACM SIGMIS Data Base, 29 (4), 22-32.
7. SLOTHOUBER, L.P.: A model of Web server performance. 5th International World Wide Web Conference, Paris, France, (1996).