WORKLOADS MODELS FOR HTTP SERVERS. Gbenga Olowoeye. B.S., University of Massachusetts Lowell.

WORKLOADS MODELS FOR HTTP SERVERS

By Gbenga Olowoeye
B.S., University of Massachusetts Lowell

SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN ELECTRICAL ENGINEERING, UNIVERSITY OF MASSACHUSETTS-LOWELL

Signature of Author: Date
Signature of Thesis Supervisor: Kavitha Chandra, D.Eng., Associate Professor
Signatures of Other Thesis Committee Members: Charles Thompson, Ph.D., Professor; Vineet Mehta, D.Eng., Adjunct Professor

Abstract

In this thesis, workload models for HTTP servers are developed. The analysis of traffic measurements made at the University of Massachusetts Lowell and Ohio State University campus connections to the Internet forms the basis for the model. The HTTP client packet count traffic is shown to exhibit a slowly varying mean trend which contributes to the slow decay rate in the normalized autocovariance function. A finite-state discrete-time Markov chain is used to model the mean variation. The short range correlations are captured using a dynamic autoregressive process, where the AR parameters and residual error variance are a function of the state of the Markov chain. The combination of these fast and slow time scale processes is shown to adequately capture the first and second moments, the probability distribution function and correlation statistics of the packet count time-series. The proposed model also performs favorably in infinite and finite buffer queues. The HTTP packet count model is also used in a regression model to estimate the number of HTTP bytes generated by the campus clients.

Acknowledgements

I would like to thank my advisor, Prof. Kavitha Chandra, for introducing me to the world of network traffic engineering. I am grateful for her guidance, patience and especially her dedication to her students. I would also like to thank Professor Thompson for his advice and critical comments on my thesis and for not letting me settle for anything less than excellence. He has been a mentor and is a model for technical expertise and competence. Thank you Vineet for your advice and help with the thesis, and thank you Prof. Krishnan for sharing your knowledge of time-series analysis, which was invaluable to this work. I would like to acknowledge the warm friendship from all the members of the Center for Advanced Computation and Telecommunications. I am especially grateful to Mital for his immense help with the data collection effort and for being a good friend. Thanks Jimmie for taking the time to answer all my questions. And thanks Parita for your encouragement and moral support. Thanks to my family and friends, for their unconditional love, support and patience that has carried me through. Lastly, I thank God for giving me the strength and perseverance without which I would not have made it.

Table of Contents

Abstract  ii
Acknowledgements  iii
Table of Contents  iv
List of Figures  vi
List of Tables  xi
List of Symbols  xii

Chapter 1. Introduction  1
    HTTP Protocol
    Web Server Architecture
    Objective  8

Chapter 2. Traffic Measurements and Statistical Analysis  11
    Introduction  11
    Aggregate Traffic Statistics
        UML Aggregate Traffic Statistics
        OSU Aggregate Traffic Statistics
    HTTP Traffic Statistics
        HTTP Aggregate Traffic Statistics
        Packet Size Distribution
        Packet Count Distribution
        Packet Count Correlation Features
        HTTP Packet-Byte Dependence
    Summary  46

Chapter 3. HTTP Client Access Traffic Model
    Introduction  48
    3.2 Traffic Models for Time-Series Having Long Range Correlations
    Markov Chain Models for Mean Trend
    HTTP Packet Count Traffic Model
    Parameter Estimation
    Model Validation
        Comparison of Dynamic and State Independent Models
        Comparison of First and Second Moments
        Correlation Statistics
        Probability Density Function
    Queuing Analysis
    Byte-Rate Model
    Summary  94

Chapter 4. Model Validation
    Introduction
    OSU Traffic Measurements
    UML Data  101

Chapter 5. Conclusion and Future Work
    Data Analysis
    Model Development
    Proposals for Future Work  109

Appendix I. Protocol and Packet Size Based Traffic Decomposition  111
Appendix II. Traffic Shaping Effects of TCP Congestion Control  124
Appendix III. Kolmogorov-Smirnov Test  129
References  132
Biography  134

List of Figures

Figure 1.1 Web Server Model  10
Figure 2.1 UML Network Architecture  13
Figure 2.2(a) UML Aggregate Inbound Traffic in Packets: Oct 5-11, 1999  15
Figure 2.2(b) UML Aggregate Outbound Traffic in Packets: Oct 5-11, 1999  15
Figure 2.3(a) UML Aggregate Inbound Traffic in Bytes: Oct 5-11, 1999  16
Figure 2.3(b) UML Aggregate Outbound Traffic in Bytes: Oct 5-11, 1999  16
Figure 2.4(a) OSU Aggregate Outbound Traffic in Packets  21
Figure 2.4(b) OSU Aggregate Inbound Traffic in Packets  21
Figure 2.5(a) OSU Aggregate Outbound Traffic in Bytes  22
Figure 2.5(b) OSU Aggregate Inbound Traffic in Bytes  22
Figure 2.6(a) UML HTTP Client Port Number Distribution  24
Figure 2.6(b) OSU HTTP Client Port Number Distribution  25
Figure 2.7(a) UML HTTP Server and Client Traffic in Bytes  27
Figure 2.7(b) OSU HTTP Server and Client Traffic in Bytes  27
Figure 2.8(a) UML Aggregate & HTTP Upstream Traffic Patterns - 6hrs  30
Figure 2.8(b) UML Aggregate & HTTP Downstream Traffic Patterns - 6hrs  31
Figure 2.9(a) OSU Aggregate & HTTP Upstream Traffic Patterns - 2hrs  31
Figure 2.9(b) OSU Aggregate & HTTP Downstream Traffic Patterns - 2hrs  32
Figure 2.10(a) OSU Client Packet Size Distribution  34
Figure 2.10(b) OSU Server Packet Size Distribution  34
Figure 2.11(a) UML Client Packet Size Distribution  35
Figure 2.11(b) UML Server Packet Size Distribution  35
Figure 2.12(a) UML Packet Count Series for Oct 3-9
Figure 2.12(b) UML Byte Rate Series for Oct 3-9
Figure 2.13(a-b) OSU Packet Count Distribution  38
Figure 2.13(c-d) UML Packet Count Distribution  39
Figure 2.14(a) NACF of UML HTTP Client Packet Count Traffic  41
Figure 2.14(b) NACF of OSU HTTP Client Packet Count Traffic  42
Figure 2.15(a) NACF of UML HTTP Server Packet Count Traffic  42
Figure 2.15(b) NACF of OSU HTTP Server Packet Count Traffic  43
Figure 2.16(a) UML Client Packet-Byte Dependence  44
Figure 2.16(b) OSU Client Packet-Byte Dependence  45
Figure 2.17(a) UML Server Packet-Byte Dependence  45
Figure 2.17(b) OSU Server Packet-Byte Dependence  46
Figure 3.1 NACF of Client Packet Count Process  50
Figure 3.2 Minimum Mean Square Error Criterion for Smoothing Parameter  54
Figure 3.3 Exponentially Smoothed Packet Count Process  55
Figure 3.4 Quantized Mean Packet Count Process  55
Figure 3.5 NACF of Smoothed 3sec Packet Count Process & Markov Model  60
Figure 3.6 State Independent Residual NACF  73
Figure 3.7 State Dependent Residual NACF  74
Figure 3.8 Sum Square of Residual NACFs  75
Figure 3.9 Measurement and Data CDF Comparison  76
Figure 3.10 Original Packet Count Process  77
Figure 3.11 Model Generated Packet Count Process  78
Figure 3.12 Standard Deviation of Number of Packets in each State  79
Figure 3.13 NACF of Data and Model  80
Figure 3.14(a) Model and Data PDF Comparison  82
Figure 3.14(b) Quantile-Quantile Plot  82
Figure 3.15 Model and Data Complementary Delay Distribution  84
Figure 3.16 BLR of Data and Model order 6  87
Figure 3.17 Scatter Plot of Bytes vs. Packets  88
Figure 3.18 Measurement & Model Bytes vs. Packet Counts  91
Figure 3.19 Measurement Byte-Rate Process  91
Figure 3.20 Model Byte-Rate Process  92
Figure 3.21 Byte-Rate BLR  93
Figure 3.22 Byte-Rate NACF  94
Figure 4.1 OSU Sum Square of Residual NACFs  97
Figure 4.2 OSU Measurement and Model Data  98
Figure 4.3 OSU Measurement and Model NACF  99
Figure 4.4 OSU Measurement and Model BLR  101
Figure 4.5 UML-2 Data  102
Figure 4.6 UML-2 Sum Square of Residual NACFs  102
Figure 4.7 UML-2 Model Generated Data  103
Figure 4.8 UML-2 Model and Measurement NACF  104
Figure 4.9 UML-2 Model and Measurement BLR  105
Figure A.1.1 WAN Data NACF  116
Figure A.1.2 WAN TCP Traffic Packet Size Distribution  117
Figure A.1.3 WAN Low, Medium & High Packet Size NACFs  119
Figure A.1.4 BC Low, Medium & High Packet Size Histogram  122
Figure A.1.5 BC Low, Medium & High Packet Size NACFs  123
Figure A.2.1 TCP Congestion Window Dynamics  127
Figure A.2.2 NACF of TCP Modulated Traffic Pattern  128

List of Tables

Table 2.1 UML Aggregate Inbound Traffic Statistics  17
Table 2.2 UML Aggregate Outbound Traffic Statistics  17
Table 2.3 OSU Aggregate Traffic Statistics  20
Table 2.4 UML Client & Server Traffic Statistics  28
Table 2.5 OSU Client & Server Traffic Statistics  28
Table 3.1 Comparison of Markov Model and Data pdfs  61
Table 3.2 Parameters of State Independent Model  71
Table 3.3 AR Parameters of State Dependent Model (order 1)  72
Table 3.4 AR Parameters of State Dependent Model (order 6)  72
Table 3.5 Mean and Variance Statistics of Model and Data  79
Table 4.1 UML-2 Mean and Variance Statistics of Model and Data  104
Table A.3.1 Critical Values of D for K-S Test  131

List of Symbols

C(h) : Normalized autocovariance function (NACF)
h : NACF lag
X : Random variable
μ_X : Mean of random variable X
σ²_X : Variance of random variable X
t_s : Short time scale
t_l : Long time scale
x(n) : Time-series on a long time scale
x̂(n) : Exponentially smoothed mean trend
α : Exponential smoothing parameter
ε(n) : Residual error
x_s(n) : Quantized mean trend scaled to short time scale
x̂_s(n) : Markov chain model for mean trend
K : Number of states
R : Rate vector
r_j : Rate in state j
p_ij : Transition probability
P : Transition probability matrix
p(n) : Probability vector
p_i(n) : Steady state probability
Π : Steady state probability vector
π_i : Probability of being in state i
y(n) : Packet count process
f_1(·) : Slow varying component
f_2(·) : Noise component
p : Autoregressive model order
g(n) : Zero mean, unit variance i.i.d. process
β_i : Autoregressive model parameters
γ : Variance scaling parameter
g_x(n) : State dependent noise process
f_Y(y) : Probability density function (pdf)
θ : Parameter set
L : Log likelihood function
β_j(i) : Autoregressive parameters for state j
g_x^j(n) : Gaussian noise function for state j
γ_j : Variance scaling parameter for state j
e_j(n) : Values of y(n) that satisfy amplitude constraint
n_j : Number of elements in e_j(n)
ŷ(n) : Model generated packet count process
S[ε(p)] : Sum of the square of residual error of order p
(?) : NACF amplitude
F_X(x) : Cumulative probability distribution function (cdf)
C : Capacity
λ : Arrival rate
ρ : Utilization
S : Complementary distribution function
BLR : Bit Loss Ratio
D : Maximum distance between two cdfs
D_c : Critical value of D
C_D : Empirically determined cdf
C_K : Analytical cdf

Chapter 1

INTRODUCTION

The World Wide Web (WWW) currently generates over 50% of backbone traffic on the Internet. The Web is the outcome of work done by Tim Berners-Lee at CERN. The object of the CERN project was to allow information sharing and dissemination among globally dispersed teams and support groups. Since 1995, the Web has evolved into a communication platform supporting business, education and research transactions. WWW information is best viewed using a browser [1]. A browser is an application program that allows one to access and examine information at WWW sites. The first public domain Web browser was Mosaic, developed by Marc Andreessen and a number of other graduate students at the National Center for Supercomputing Applications (NCSA). Most of that team later went on to develop the Netscape Navigator. Netscape Navigator and Microsoft's Internet Explorer are the most popular browsers in current use [1]. Today, organizations like the World Wide Web Consortium (W3C) [2] are helping to promote the Web by developing protocols that ensure its evolution and interoperability. The Web now includes a large and diverse set of information sites and services, and has contributed to the exponential growth of Internet hosts and users. This increase in Web activity has prompted interest in improving the performance of the WWW. The focus of this effort has been on improving or optimizing the performance of Web servers.

The performance of the Web depends on the performance of Web servers and the communication network connecting the Web server to the user. Web transactions follow a client-server protocol. The user, located on a client machine, launches a web browser to request or access information on a remotely located web server. The client and server communicate using the Hyper Text Transfer Protocol (HTTP). HTTP is a platform independent protocol based on a request-response paradigm. A typical HTTP transaction is initiated when the user either types in the Universal Resource Locator (URL) or selects a link on a web page. The client then looks up the server Internet Protocol (IP) address using a Domain Name Service (DNS). Once the client obtains the server IP address, it establishes a two-way connection between itself and the server using the Transmission Control Protocol (TCP). After the TCP connection setup, the client sends an HTTP request for a document. The server, upon receipt of the request, parses the request and determines the document to be retrieved. If the document is found, it is written to the server input/output (I/O) buffer, from where it is sent to the client. If the document is not found, an error message is sent to the client. Upon completion, the HTTP server closes the TCP connection, terminating the

HTTP transaction. The HTTP protocol and server architecture are important for understanding the performance issues in Web based communication.

1.1 HTTP Protocol

Most WWW servers currently deployed use the HTTP/1.0 standard. To address some of the scalability problems in HTTP/1.0, the HTTP/1.1 standard has been developed. One of the main problems with HTTP/1.0 was that it opened a new TCP connection for every object being requested. This creates a large amount of TCP control traffic and reduces the throughput of document transfers. HTTP/1.0 was also characterized by poor data caching. Improvements in HTTP/1.1 were made while maintaining backward compatibility with HTTP/1.0. The major objectives of HTTP/1.1 are to reduce HTTP's protocol overhead in information transfer, thus reducing congestion caused by HTTP traffic, and to improve on HTTP/1.0's data caching, ultimately improving end user performance [3] [4]. One of the ways HTTP/1.1 achieves its goals is through persistent TCP connections. Unlike HTTP/1.0, which opens a new TCP connection for each object requested, HTTP/1.1 keeps the TCP connection open for back-to-back requests. This reduces connection setup latency. It is especially beneficial today, when a typical web page can have an average of 10-20 objects.
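The connection behavior described above can be sketched as follows. This is an illustrative sketch (the hostname and object paths are hypothetical): it contrasts the one-connection-per-object request streams of HTTP/1.0 with back-to-back requests sharing a single persistent HTTP/1.1 connection, and only constructs the request bytes rather than opening real sockets.

```python
def http10_requests(host, paths):
    # HTTP/1.0: a new TCP connection per object; each element of the
    # returned list would travel on its own connection.
    return [
        f"GET {p} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
        for p in paths
    ]

def http11_persistent(host, paths):
    # HTTP/1.1: back-to-back requests share one persistent connection,
    # so all requests form a single byte stream on a single connection.
    reqs = [
        f"GET {p} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n"
        for p in paths
    ]
    return "".join(reqs).encode()

objects = ["/index.html", "/logo.png", "/style.css"]
v10 = http10_requests("www.example.com", objects)   # 3 separate connections
v11 = http11_persistent("www.example.com", objects) # 1 connection, 3 requests
```

For a page with 10-20 embedded objects, the HTTP/1.0 scheme pays the TCP setup cost 10-20 times, while the persistent connection pays it once.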

Another important feature of HTTP/1.1 is pipelining. Pipelining allows multiple requests and responses to be sent together at the same time over persistent connections. This speeds up the transfer of documents since the server does not have to wait for the acknowledgement (ACK) of each object before sending another. HTTP/1.1 improves on HTTP/1.0's caching by allowing applications to mark cacheable objects. HTTP/1.1 also attempts to reduce the amount of HTTP traffic using data compression techniques. Web page content has also been modified in HTTP/1.1. This includes the introduction of the new image format PNG (Portable Network Graphics) and the new animation format MNG (Multiple-image Network Graphics). PNG is expected to replace the GIF image format. PNG renders more quickly on the screen, produces higher quality cross-platform images and is usually smaller in size than GIF images. Likewise, MNG, an animation format in the PNG family, is expected to replace animated GIF. It has been demonstrated [3] that 4 static GIF images with a sum of 103,299 bytes could be reduced to 92,096 bytes when converted to the PNG format. Similarly, two animated GIFs totaling 24,988 bytes were reduced to 16,329 bytes when converted to MNG.

1.2 Web Server Architecture

The basic architecture of the Web server is described in this section.

The main components of the Web server are shown in Fig. 1.1. They include the TCP listen queue or socket buffer for incoming TCP packets, the HTTP listen queue for the various requests, the HTTP daemons and server threads, the central processing unit (CPU), the hard disk for storing web pages, and the output buffers [5]. The sizes of the TCP and HTTP listen queue buffers and the output buffer, the number of server threads available, and the CPU speed determine how well a Web server performs. It has been shown that the proper tuning of these components significantly improves the overall performance of the server [5]. Much work has been done on studying and ultimately improving the performance of Web servers. Initial studies of Web server performance involved the analysis of logs of active servers. These logs contain the client source address, the date and timestamp, the file requested, the status, and the file size in bytes. In early work, McGrath [6] analyzed logs from the National Center for Supercomputing Applications WWW server, found an increasing growth trend for Web traffic, and concluded that not enough was known about Web server workloads and that theoretical models to describe server workloads were needed. In a later work, Kwan and McGrath [7] analyzed one week of Web server logs per month for several months in order to characterize the access patterns to Web servers.
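The log fields listed above (client address, timestamp, requested file, status, and size) match the Common Log Format that most early Web servers, including NCSA httpd, wrote. A minimal parsing sketch, assuming CLF lines (the sample address and file name are hypothetical):

```python
import re

# One Common Log Format line: host ident authuser [timestamp] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line):
    """Return a dict of log fields, or None if the line is not CLF."""
    m = CLF.match(line)
    if m is None:
        return None
    d = m.groupdict()
    d["size"] = 0 if d["size"] == "-" else int(d["size"])  # "-" means no body
    return d

rec = parse_log_line(
    '192.0.2.17 - - [05/Oct/1999:14:02:11 -0500] "GET /index.html HTTP/1.0" 200 4523'
)
```

Aggregating the `size` field per client over such records yields exactly the kind of per-client byte counts studied later in this thesis.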

Another way of evaluating Web servers is through benchmark tests. Benchmarking is a form of laboratory testing. It involves implementation of a software tool that allows comparison of the performance of different systems. A Web benchmark generates a controlled stream of Web requests that simulates the workload. Web benchmarking software typically generates requests at rates higher than the rate that can be tolerated by the server. Performance results on throughput, maximum number of simultaneous connections supported, and response time are then reported for a set of workload metrics. These metrics are typically the connections or requests per second. The first widely accepted Web benchmark was WebStone, developed by Silicon Graphics. WebStone was replaced by Webperf [8], which preceded SPECweb96, which in turn was replaced by SPECweb99 [9] due to changes in server usage patterns. SPECweb99, developed by SPEC (the Standard Performance Evaluation Corporation), has the added advantage of specifying the request rate distribution and the request type distribution. It also has added support for dynamic content and persistent connections. The question arises as to how accurately Web benchmarks simulate the workload on Web servers. Currently, little work has been done to model specifically the workload on Web servers. Most traffic analysis studies have focused on aggregate traffic measured at points on Internet backbones. Reeser et al. [7] assumed

that the arrival process into a Web server is exponentially distributed. Arlitt et al. [1] showed that the Poisson assumption for interarrival times is not valid at all time scales. Working with data from a variety of Web servers on a variety of networks, Arlitt et al. identified ten characteristics that are important in Web server workloads. These include the percentage of successful requests, the document type requested the most, distinct requests, the percentage of files accessed only once, the file size distribution, the concentration of references (10% of files accessed 90% of the time), inter-reference times, and wide area usage. With the knowledge of these features, Web server performance was shown to be improved by proper tuning. Performance characteristics of Web servers have been examined by Reeser et al. [5]. An analytical model of a server was developed to obtain fast approximations to performance measures such as server throughput, end-to-end service times, and connection blocking probabilities. The model used queuing models with various queue and service disciplines to represent the TCP, HTTP, and I/O subsystems. The TCP subsystem was modeled as an M/M/N_tcp queue and the HTTP subsystem as an M/M/N_http/Q_http queue, where N_(·) and Q_(·) denote the number of servers and the buffer size respectively. The I/O subsystem was modeled as an infinite buffer M/M/N_buff/∞ queuing system. In the above notation, N_tcp is the number of TCP listen queue slots,

N_http is the number of HTTP daemons that service the HTTP listen queue of length Q_http, and N_buff is the number of network I/O buffers. With the Web server models and a good workload model, the relationship between client requests and the corresponding server response can be determined. This is useful in the prediction of the expected server throughput given the expected workload. Network traffic today consists largely of HTTP traffic. HTTP server traffic containing large packet sizes can be a source of network congestion and transmission delays. Knowledge of the server workloads that exit a network will allow for provision of adequate bandwidth and reservation of adequate network resources.

1.3 Objective

The objective of this thesis is to develop a measurement based description of server workloads. Parametric models are derived for access patterns into Web servers. Such models may replace Web benchmarks which still often use long traces of server logs to generate requests. These models can provide a compact representation of server workloads, and can be implemented within Web benchmarks to generate requests. Web benchmarks also do not accurately account for network transmission delays encountered between the server and client. The load level and amount of congestion on

the connecting network(s) directly affect Web server performance, and must also be incorporated into the Web server workload model. The thesis is organized as follows. In Chapter 2, an analysis of traffic measurements is presented. Chapter 3 proposes workload models based on the analysis of measured data. The model is validated in Chapter 4, and in Chapter 5 the conclusion and future work are presented.
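The multi-server Markovian queue approximations described in Section 1.2 for the HTTP daemon pool can be illustrated with the standard Erlang C formula for an M/M/n queue with an infinite buffer. This is a hedged sketch with hypothetical arrival and service rates, not figures from the thesis:

```python
from math import factorial

def erlang_c(lam, mu, n):
    """Probability that an arrival must wait in an M/M/n queue
    (infinite buffer). lam: arrival rate, mu: per-server service
    rate, n: number of servers. Requires lam < n*mu for stability."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / n                       # per-server utilization
    assert rho < 1, "unstable queue"
    s = sum(a**k / factorial(k) for k in range(n))
    top = a**n / (factorial(n) * (1 - rho))
    return top / (s + top)

def mean_wait(lam, mu, n):
    # Mean time spent waiting in the queue, excluding service.
    return erlang_c(lam, mu, n) / (n * mu - lam)

# Hypothetical HTTP daemon pool: 10 daemons, 50 req/s offered,
# each daemon completing 6.5 req/s.
p_wait = erlang_c(50.0, 6.5, 10)
w_mean = mean_wait(50.0, 6.5, 10)
```

For n = 1 the formula reduces to the M/M/1 waiting probability, which equals the utilization ρ; that limit is a convenient sanity check on the implementation.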

Figure 1.1: Web Server Model (the client reaches the Web server through the network; the TCP listen queue feeds the HTTP listen queue, which is serviced by HTTP daemons and server threads that read from and write to the disk through the I/O controller, with responses leaving through the output buffers)

Chapter 2

TRAFFIC MEASUREMENTS AND STATISTICAL ANALYSIS

2.1 Introduction

The traffic analysis results presented in this chapter are based on measurements made on (i) the University of Massachusetts Lowell (UML) router connection to the Internet and (ii) The Ohio State University (OSU) connection to the vBNS (very high speed Backbone Network Service) [11]. Prior to procuring these traces, preliminary studies were carried out using the Bellcore Ethernet Traffic Data [12] and traffic measurements collected from the FDDI interface at the FIX-West (Federal Internet Exchange) and MAE-West interconnection facility located at NASA-Ames Research Center in California. The Bellcore data is available in the public domain at [ftp://ita.ee.lbl.gov/traces/]. The Bellcore data was limited to time-stamps and packet sizes. The FIX-West and MAE-West traces were obtained from the National Laboratory for Applied Network Research (NLANR) public-domain archive. The NLANR traces were collected using the OC3MON utility [ ]. This data collection system comprises an optical splitter that connects the monitored OC3 link to an ATM network interface card residing on a PC.

The OC3MON software extracts the header from each arriving packet. The header and arrival time are then recorded. Most of the NLANR traces available on the web are of short duration. Following preliminary analysis, a two hour trace from the OSU network was obtained by special request to NLANR staff. At the same time, traffic monitoring and data collection processes were set up at UML to allow around-the-clock measurements. The data sets obtained will be referred to as UML data. The UML data were collected using the tcpdump utility [13]. This utility writes to a file a summary of all the packets that traverse the network interface of the measurement node. The measurement and data archive site location as well as the UML network architecture are depicted in Fig. 2.1. Data coming into and out of UML is forwarded to a port on the UML Internet router (R). The monitored port is connected to a 4-port repeater. An optical transceiver is connected to the attachment unit interface (AUI) port of the repeater. An optical fiber pair is used to carry inbound and outbound traffic to the data collection and storage area. At this site packet traces are captured. Each packet in the trace is characterized by a time-stamp, source and destination IP addresses, source and destination port numbers, packet data size and protocol type. The data includes traffic in both the outbound and inbound directions to the Internet.
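Given trace records carrying the fields just listed (time-stamp, addresses, ports, size, protocol), a minimal parsing sketch might look like the following. The whitespace-separated record format here is a simplification for illustration; real tcpdump output is more verbose and would need fuller parsing:

```python
from collections import namedtuple

# Simplified per-packet record with the fields named in the text.
Pkt = namedtuple("Pkt", "ts src dst sport dport size proto")

def parse_record(line):
    # Expected form: "<ts> <src> <dst> <sport> <dport> <size> <proto>"
    ts, src, dst, sport, dport, size, proto = line.split()
    return Pkt(float(ts), src, dst, int(sport), int(dport), int(size), proto)

trace = [parse_record(l) for l in [
    "0.000132 10.0.0.5 192.0.2.9 1184 80 40 tcp",
    "0.000488 192.0.2.9 10.0.0.5 80 1184 1500 tcp",
]]
# Packets addressed to port 80 are client requests toward Web servers.
outbound = [p for p in trace if p.dport == 80]
```

Once records are in this form, the protocol- and port-based decompositions used throughout the chapter reduce to simple filters over the trace.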

Figure 2.1: UML Network Architecture

The development of an application dependent traffic model will be based on these measured traffic traces. In Appendix I, the results of preliminary studies are presented and the importance of partitioning traffic by protocols or packet sizes is discussed. The long range dependence that is observed in many of the data traffic traces [12] is the result of cyclical trends generated by applications. The decomposition based on these traffic features allows the

identification of these structural traffic features. In the following sections, HTTP traffic for UML and OSU is examined. The aggregate traffic time-series are first described. Following this, the important characteristics of HTTP client and server traffic are identified.

2.2 Aggregate Traffic Statistics

UML Aggregate Traffic Statistics

Daily and hourly trends in the traffic to and from the Internet are useful to determine the peak usage hours as well as the off-peak hours. The in- and out-bound traffic was recorded at the access point of the UML network from 12:00 a.m. on 10/5/99 to 11:59 p.m. on 10/11/99. The data is processed to create a time-series. Each sample in the time-series is comprised of the byte-count/packet-count summed over a one hour period. This results in one sample per hour. Figures 2.2 and 2.3 show an example of the weeklong UML traffic pattern during the 10/5/99-10/11/99 period. Figure 2.2(a) denotes the packet count time-series for the inbound traffic and (b) represents the same for outbound traffic. From the figures, one can see that the peak periods of activity typically occur from 11 a.m. to about 8 p.m. The inbound traffic volume is summarized in Table 2.1 and the outbound traffic in Table 2.2. Each table gives the total number of packets and bytes for each day. In addition, hourly averages are given.

Figure 2.2(a): UML Aggregate Inbound Traffic in Packets: Oct 5-11, 1999

Figure 2.2(b): UML Aggregate Outbound Traffic in Packets: Oct 5-11, 1999
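The hourly aggregation described above, one byte-count/packet-count sample per hour, can be sketched as:

```python
from collections import defaultdict

def hourly_series(packets):
    """packets: iterable of (timestamp_seconds, size_bytes) pairs.
    Returns two dicts keyed by hour index: packet counts and byte counts."""
    pkt_counts = defaultdict(int)
    byte_counts = defaultdict(int)
    for ts, size in packets:
        h = int(ts // 3600)          # hour bin for this packet
        pkt_counts[h] += 1
        byte_counts[h] += size
    return pkt_counts, byte_counts

# Three sample packets: two in the first hour, one in the second.
pkts, byts = hourly_series([(10.0, 40), (3599.9, 1500), (3600.1, 576)])
```

The same binning with a one-second divisor produces the 1-second time-series used for the OSU data later in this chapter.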

Figure 2.3(a): UML Aggregate Inbound Traffic in Bytes: Oct 5-11, 1999

Figure 2.3(b): UML Aggregate Outbound Traffic in Bytes: Oct 5-11, 1999

Table 2.1: UML Aggregate Inbound Traffic Statistics. Columns: Day; Total Packets (x10^7); Total Bytes (x10^10); Avg Packets/hr (x10^6); Avg Bytes/hr (x10^9). Rows cover 10/5/99 (T) through 10/11/99 (M); the per-day values were lost in extraction. Summary: Avg Daily Volume 3.82x10^7 packets and 1.89x10^10 bytes; Avg Daily Rate 442 packets/sec and 1.75x10^6 bits/sec.

Table 2.2: UML Aggregate Outbound Traffic Statistics. Columns as in Table 2.1, for the same days. Summary: Avg Daily Volume 3.81x10^7 packets and 2.17x10^10 bytes; Avg Daily Rate 441 packets/sec and 2.0x10^6 bits/sec.

In both tables, the first column represents the total packets transmitted each day while the second column shows the total bytes transmitted each day. The average numbers of packets and bytes per hour for each day are listed in the third and fourth columns respectively. The daily average rates were obtained by summing the total number of bytes/packets for each hour and dividing by 24. The average daily volumes were calculated by summing the total number of bytes or packets per day and dividing by the total number of days in the observation period. The average daily rates were similarly determined by summing the average hourly rates for each day and dividing by the total number of days in the observation period. The average in bytes per hour was converted to the more commonly used bits per second by first multiplying the byte-rate value by 8 bits/byte to convert the amount to bits per hour. The product is then divided by 3600 seconds to convert from bits per hour to bits per second. The volume of in- and outbound traffic is comparable in magnitude in terms of the packet count. However the outbound traffic transmits 2.17x10^10 bytes compared to the 1.89x10^10 bytes coming into the campus.
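The unit conversion described above can be checked numerically. Using Table 2.1's average daily inbound volume of 1.89x10^10 bytes, the steps (divide by 24 hours, multiply by 8 bits/byte, divide by 3600 seconds) reproduce the quoted 1.75x10^6 bits/sec:

```python
def bytes_per_hour_to_bits_per_sec(bph):
    # Multiply by 8 bits/byte, then divide by the 3600 seconds in an hour.
    return bph * 8 / 3600

avg_bytes_per_hour = 1.89e10 / 24       # Table 2.1: 1.89x10^10 bytes per day
rate_bps = bytes_per_hour_to_bits_per_sec(avg_bytes_per_hour)
# rate_bps is 1.75e6 bits/sec, matching the table's Avg Daily Rate.
```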

OSU Aggregate Traffic Statistics

The OSU data was collected over two hours between 11 p.m. and 1 a.m. on February 24. The analysis of the outbound OSU TCP traffic by You [14] identified that the aggregate traffic exhibited non-stationary features. You proposed that applications such as FTP contributed to the non-stationarity. With the FTP traffic removed, the residual traffic was modeled using a non-linear time-series model. As in the UML case, the OSU byte and packet count is summed over one second intervals. Cyclical trends are evident in the resulting packet and byte rate time series. The packet count in Fig. 2.4 and byte count in Fig. 2.5 are given for both the outbound and inbound directions. From the data, the transmitted packet count for the 2 hour period is equal to 73,715,481 packets. Of this total, 31,926,200 packets are transferred in the outbound direction while the remaining 41,789,281 packets are inbound directed. The byte totals were calculated by summing the size of each packet. In the outbound direction the total byte volume was 8.6 gigabytes while 25 gigabytes was transmitted into the OSU campus. The average packet/byte rate was determined by summing the packets/bytes for each

1-second interval and dividing by the number of points in the time-series. Table 2.3 summarizes the total packet/byte values and the number of samples of the 1-second time-series that were used to calculate the packets per second and bits per second rates. The average number of packets per second in the outbound and inbound directions is 4,356 and 5,740 respectively. The average byte-rate was converted to bits per second by multiplying it by 8 bits/byte to obtain a rate of 27 Mbits/sec in the inbound direction and 9.4 Mbits/sec in the outbound direction.

Table 2.3: OSU Aggregate Traffic Statistics. Columns: Direction; # of points in 1-sec byte series; # of points in packet count series; Total Bytes (x10^9); Total Packets. The point counts and byte totals were garbled in extraction; the packet totals are 31,926,200 outbound and 41,789,281 inbound. Summary rates: Outbound 4,356 packets/sec and 9,427,416 bits/sec; Inbound 5,740 packets/sec and 27,351,200 bits/sec.
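A rough consistency check of the OSU byte rates, assuming the full two hours of roughly 7200 one-second samples (the exact per-direction sample counts in Table 2.3 differ slightly, which accounts for small deviations):

```python
seconds = 7200                      # two hours of one-second samples (approximate)
inbound_bytes = 25e9                # ~25 gigabytes into the OSU campus
outbound_bytes = 8.6e9              # ~8.6 gigabytes in the outbound direction

inbound_bps = inbound_bytes * 8 / seconds    # ~27.8e6, i.e. about 27 Mbits/sec
outbound_bps = outbound_bytes * 8 / seconds  # ~9.6e6, i.e. about 9.4 Mbits/sec
```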

Figure 2.4(a): OSU Aggregate Outbound Traffic in Packets

Figure 2.4(b): OSU Aggregate Inbound Traffic in Packets

Figure 2.5(a): OSU Aggregate Outbound Traffic in Bytes

Figure 2.5(b): OSU Aggregate Inbound Traffic in Bytes

In the next section, the HTTP traffic is extracted from the traffic data and relevant statistical features are presented.

2.3 HTTP Traffic Statistics

In this section, traffic generated by HTTP alone is analyzed. A 6-hour period from 12 p.m. to 6 p.m. on February 16, 2000 in the UML data and the entire 2 hour period of OSU traffic are examined. Any further mention of the UML data refers to this subset unless otherwise specified. First, the measurements of the client access traffic on to the Internet are considered. This will allow one to develop a traffic model for client traffic into Web servers. Server traffic initiated by UML and OSU clients will be presented next.

HTTP Aggregate Traffic Statistics

The HTTP traffic packet trace is identified by the port number field. HTTP servers transfer information on source port 80. HTTP clients typically have source ports or registered ports in the range 1,024 to 49,151. However they can be distinguished by their destination port number 80. The HTTP client traffic generated from the UML and OSU networks and the corresponding server traffic generated in the inbound direction are considered. In the UML

case, this traffic is identified from the aggregate traffic by all packets with destination port number 80 and source IP network address xxx.xxx, where the last two octets are not considered. The OSU aggregate data is marked with an interface index of zero in the traces to identify outbound traffic. Figs. 2.6(a-b) show the distribution of the client source port numbers for UML and OSU HTTP clients. The larger number of occurrences in the UML data is the result of the longer measurement interval. However, the distributions of client ports are similar in structure. The highest count occurs for port numbers in the 1-5 range.

Figure 2.6(a): UML HTTP client port number distribution
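The port-based separation described above can be sketched as a simple trace filter. This is an illustrative sketch only: the record layout, the `classify_http` helper name and the `campus_prefix` value are hypothetical stand-ins (the actual campus address prefix is elided in the text), not the thesis' tooling.

```python
# Illustrative sketch of port-based HTTP traffic separation.
# Record format (src_ip, src_port, dst_ip, dst_port, size) and the
# campus_prefix value are hypothetical stand-ins for the trace fields.

def classify_http(packets, campus_prefix):
    """Split trace records into client-generated and server-generated
    HTTP packet sizes, using destination/source port 80."""
    client_sizes, server_sizes = [], []
    for src_ip, src_port, dst_ip, dst_port, size in packets:
        if dst_port == 80 and src_ip.startswith(campus_prefix):
            client_sizes.append(size)   # campus client -> remote Web server
        elif src_port == 80 and not src_ip.startswith(campus_prefix):
            server_sizes.append(size)   # remote Web server -> campus client
    return client_sizes, server_sizes
```

With this filter, a 40-byte client request lands in the first list, a 1500-byte server response in the second, and non-HTTP packets are ignored.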

Figure 2.6(b): OSU HTTP client port number distribution

Next, the volume of client traffic is compared to the volume of server traffic. Traffic from WWW servers is given by all packets with source port number 80 and all source IP addresses except the UML addresses xxx.xxx. For the OSU data, the server traffic is identified by the interface index one and packets with source port 80. Figures 2.7(a-b) show the UML and OSU client and server traffic bytes aggregated over one-minute intervals. The disparity in client and server traffic volume arises from the typically small packet sizes (40-64 bytes) that characterize client traffic, in comparison with the server, which generally transmits data using 1500-byte packets.

Tables 2.4 and 2.5 show the client and server traffic statistics for the UML and OSU data sets respectively. The first column lists the number of points in the 1-second aggregated time-series. The second column shows the total bytes in the byte-rate time-series, while the third column lists the total packet count in the packet count time-series. The per-second byte rate and packet rate are given below each table; these are determined by dividing the total in column two or three by the count in column one. The ratio of mean client bytes to mean server bytes transmitted is approximately 1:9 for the UML data and 1:12 for the OSU data. The average client and server bytes generated per second are 37 and 328 Kbytes respectively for UML, and 183 and 1,844 Kbytes for the OSU data. On the other hand, the numbers of client and server packets are comparable in magnitude. The average numbers of packets generated per second by the clients and servers are 361 and 400 respectively for UML, and 1,929 and 2,262 for OSU.

            # of 1-sec      Total           Total
            series pts      bytes           packets
    Client  21,607          799,976,722     7,803,735
    Server  21,607          7,095,873,777   8,638,775

    Client Packet Rate: 361 packets/sec
    Client Bit Rate: 296,191 bits/sec
    Server Packet Rate: 400 packets/sec
    Server Bit Rate: 2,627,250 bits/sec

Table 2.4: UML Client & Server Traffic Statistics

            # of 1-sec      Total           Total
            series pts      bytes           packets
    Client  7,330           1,342,666,664   14,142,830
    Server  7,325           13,510,455,437  16,575,428

    Client Packet Rate: 1,929 packets/sec
    Client Bit Rate: 1,465,393 bits/sec
    Server Packet Rate: 2,262 packets/sec
    Server Bit Rate: 14,755,446 bits/sec

Table 2.5: OSU Client & Server Traffic Statistics
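The per-second rates listed beneath the tables follow directly from the totals: each total is divided by the number of 1-second samples, and byte rates become bit rates via 8 bits/byte. A small sketch, where the `rates` helper name and the exact digit grouping of the UML client totals are our reading of the table:

```python
# Rates as computed under Tables 2.4-2.5: divide each total by the number
# of 1-second samples; byte rates become bit rates via 8 bits/byte.

def rates(total_bytes, total_packets, n_samples):
    pkts_per_sec = total_packets / n_samples
    bits_per_sec = 8 * total_bytes / n_samples
    return pkts_per_sec, bits_per_sec

# UML client row (totals as read from Table 2.4)
pps, bps = rates(799_976_722, 7_803_735, 21_607)
# gives roughly 361 pkts/sec and 296,191 bits/sec
```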

Figure 2.7(a): UML HTTP server and client traffic bytes.
Figure 2.7(b): OSU server and client traffic bytes.

The following figures (Figs. 2.8(a-b)) show how the HTTP traffic volume compares to the total traffic during the busy period between 12 p.m. and 6

p.m. During this 6-hour period, the mean upstream HTTP traffic rate is on the order of 25 packets per second, transferring 292 kb/sec, and constituted about 13% of the aggregate traffic stream. In the downstream direction, HTTP made up 66% of the total traffic and had a mean rate of 334 packets per second, transferring 2.2 Mbits/sec. On the UML campus, there were about 1,299 individual clients accessing a total of 2,268 servers off-campus. For the OSU traffic, HTTP constitutes 4% of the bytes and 53% of the packets observed in the upstream direction, and 55% of the bytes and 45% of the packets in the downstream direction. This is shown in Fig. 2.9. A total volume of 3.5 gigabytes and 13.6 gigabytes was transferred in the upstream and downstream directions respectively. The average rates seen in the upstream and downstream traffic are 3.8 Mbits/sec and 15 Mbits/sec respectively. During the 2-hour period, there were 1,413 individual on-campus clients observed that accessed a total of 12,85 Web servers off-campus. The figures show the bandwidth asymmetry that is typical of traffic patterns at the intersection between local access networks and the Internet. The figures also indicate that the trends in the aggregate downstream traffic

are significantly influenced by the HTTP server traffic. Next, the distribution of packet sizes, the relationship between the number of packets and bytes transferred, the client request arrival statistics, the server response statistics, and the client interarrival time statistics are examined.

Figure 2.8(a): UML Aggregate & HTTP Upstream traffic patterns

Figure 2.8(b): UML Aggregate & HTTP Downstream traffic patterns
Figure 2.9(a): OSU Aggregate & HTTP Upstream traffic patterns

Figure 2.9(b): OSU Aggregate & HTTP Downstream traffic patterns

Packet Size Distribution

The distribution of packet sizes for the client and server traffic is examined in this section. The estimated probability density functions are plotted in Figs. 2.10(a-b) for the OSU data and in Figs. 2.11(a-b) for the UML data. Again, many similarities exist in the statistics of the two networks. The vertical axis for the client packets is plotted on a logarithmic scale to show that, in addition to the mode at 40 bytes comprising over 90% of the packets, a small fraction of HTTP client packets exist in the 200-400 byte range. The 40-byte packets correspond to the TCP control and signaling

packets generated at connection setup and during acknowledgements (ACKs). The larger packets may be attributed to various types of HTTP GET requests. The server packet size distribution in Figs. 2.10(b) and 2.11(b) can be split into three packet size regions. The dominant size is 1500 bytes, which, in both data sets, makes up about 40% of all server-generated traffic. The second-largest packet sizes fall in the 40-64 byte range and account for 15-20% of the server traffic, resulting from the ACKs and control packets generated by the servers. In addition, a third mode exists around 576 bytes, which corresponds to the maximum transmission unit (MTU) that TCP selects when an intermediate network is not capable of handling 1500-byte packets. Overall, the average client packet size for the OSU and UML data sets was 84 and 86 bytes respectively, while the average server packet size was 847 and 871 bytes respectively.

Figure 2.10(a): OSU client packet size distribution
Figure 2.10(b): OSU server packet size distribution

Figure 2.11(a): UML client packet size distribution
Figure 2.11(b): UML server packet size distribution

Packet Count Distribution

The statistics of the packet count will be investigated further. The packet counts exhibit the most predictable behavior, at least at the hour time scale. In comparison, the hourly byte statistic exhibits significant variation. Figures 2.12(a-b) show the number of packets versus time. It is seen in Fig. 2.12(a) that the number of packets generated remains reasonably stationary during the peak usage hours of 12-6 p.m., whereas the hourly byte volume exhibits high variations on this time scale. The development of models that predict the number of packets generated on a given time scale, and estimation of the dependence between packets generated and bytes transferred, will be considered for HTTP traffic characterization.

Figure 2.12(a): UML Packet Count Series for Oct 3-9, 1999

Figure 2.12(b): UML Byte Rate Series for Oct 3-9, 1999

Focusing on the statistics of packet counts, the distribution functions for the upstream (client) and downstream (server) directions were estimated from the data. These results are shown in Figs. 2.13(a-d). The horizontal axis represents the number of packets per second, a random variable. These figures show the distribution of the number of packets per 1-second interval in each direction for both networks considered. The shapes of the distribution functions for the client and server packet counts show remarkable similarity for the two different networks. The smoothness in the UML data results from the longer measurement time period used. This suggests that a model may be developed for HTTP packet traffic generated from local access networks.

Figure 2.13: OSU Packet Count Distribution (a) Client (b) Server

Figure 2.13: UML Packet Count Distribution (c) Client (d) Server

Packet Count Correlation Features

To get further insight into the temporal features of HTTP traffic, the normalized autocovariance function (NACF) of the packet counts is considered. Let X_k, k = 0, 1, ..., represent the packet count time series. Each sample is the number of packets in a one-second interval; the subscript k represents the interval index. Under the assumption that the process is stationary, the estimate of the NACF may be calculated using the expression

    C(h) = [ Σ_{k=0}^{N-h} (X_k - µ_X)(X_{k+h} - µ_X) ] / [ Σ_{k=0}^{N} (X_k - µ_X)^2 ]        (2.1)

where µ_X represents the estimated mean of the random variable {X_k}. The NACFs for the client and server packet count traffic streams are shown in Figs. 2.14 and 2.15 respectively for both the OSU and UML data sets. The UML client and server packet count time-series contained 18,000 data points, while the OSU client and server packet counts contained 7,326 data points. This is important, as the number of points in the time-series affects the accuracy of the NACF estimate. The figures show a slow rate of decay with the lag values. In

particular, the curve exhibits two dominant time scales, as represented by the asymptotes at small and large lag values respectively. The UML data shows stronger evidence of this feature. The slower decay rate at large lags may be attributed to a slowly changing component in the traffic. One hypothesis is that this may be the result of a change in the number of clients actively using the network during the consecutive hours.

Figure 2.14(a): NACF of UML HTTP Client Packet Count Traffic
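Eq. (2.1) can be computed directly from a packet count series. A minimal pure-Python sketch (the `nacf` function name is ours, not from the thesis):

```python
# Sample normalized autocovariance (Eq. 2.1): lag-h autocovariance of the
# series divided by its lag-0 value, using the global sample mean.

def nacf(x, max_lag):
    n = len(x)
    mu = sum(x) / n
    d = [v - mu for v in x]            # deviations from the sample mean
    c0 = sum(dv * dv for dv in d)      # lag-0 sum (the denominator)
    return [sum(d[k] * d[k + h] for k in range(n - h)) / c0
            for h in range(max_lag + 1)]
```

By construction C(0) = 1; a slow fall-off of C(h) with increasing h, as in the figures above, indicates long memory in the count process.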

Figure 2.14(b): NACF of OSU HTTP Client Packet Count Traffic
Figure 2.15(a): NACF of UML HTTP Server Packet Count Traffic

Figure 2.15(b): NACF of OSU HTTP Server Packet Count Traffic

HTTP Packet-Byte Dependence

Finally, to relate the models that will be proposed for HTTP client packet variations with the corresponding byte variations, the dependence between these two variables is examined. The packet-byte relation for the HTTP servers is also considered. The scatter plots in Figs. 2.16 and 2.17 show the relationship between packets and bytes generated in one-second intervals for the UML and OSU data. Fig. 2.16 depicts the relation for HTTP clients and Fig. 2.17 shows the structure for HTTP server packets and bytes. Again, sufficient similarity exists between the two networks to hypothesize a common model. In all four figures, the horizontal axis represents the packet count per second and the vertical axis represents the number of bytes generated per second. It can be observed that in both the OSU and UML data, the client and server packet-byte relationships exhibit an approximately linear trend. The server packet-byte data has a steeper slope than the client's, which allows it to attain higher byte values for the same number of packets.

Figure 2.16(a): UML Client Packet-Byte Dependence
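The approximately linear trend motivates a regression of bytes on packet counts, bytes ≈ a·packets + b, fitted per direction. A least-squares sketch (the `fit_line` helper name and the toy numbers in the test are ours, not measured values):

```python
# Ordinary least-squares fit of a line y = a*x + b, applicable to the
# per-second (packets, bytes) pairs shown in the scatter plots above.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx            # slope: average bytes carried per packet
    b = my - a * mx          # intercept
    return a, b
```

Fitting the client and server samples separately would yield a steeper slope for the server direction, reflecting its larger average packet size.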

Figure 2.16(b): OSU Client Packet-Byte Dependence
Figure 2.17(a): UML Server Packet-Byte Dependence

Figure 2.17(b): OSU Server Packet-Byte Dependence

2.4 Summary

The HTTP traffic study in this chapter has identified certain common features between the OSU and UML HTTP data. First, the distribution of client port numbers is similar. The client port numbers vary between 1 and 6 with a higher concentration in the 1-5 range. The server traffic volume is significantly greater than the client traffic volume. This is due to the fact that the server traffic in both data sets is dominated by 1500-byte and 576-byte packets, while the client traffic is dominated by small 40-64 byte packets. In the upstream direction, both the UML and

OSU data sets show HTTP traffic to be a small portion of the aggregate traffic. In the downstream direction, however, both OSU and UML show HTTP to constitute a significant portion of the aggregate traffic: about 66% in UML and 55% in OSU. HTTP not only constitutes a large portion of the aggregate downstream traffic, it also shows a strong correlation with the downstream traffic. This shows that HTTP significantly influences the downstream traffic patterns into the OSU and UML campuses. Both OSU and UML have similar upstream and downstream packet count distributions, as well as similar client and server packet count distributions. Although traffic from more sites needs to be studied, it can be presumed that the HTTP traffic features described here are typical of .edu (educational, university) Web sites.

Chapter 3

HTTP CLIENT ACCESS TRAFFIC MODEL

3.1 Introduction

In this chapter, traffic models for HTTP client access traffic are proposed. In particular, the modeling of the HTTP client packet count time series is considered. Given a robust packet count model, the number of bytes generated in a unit time interval may then be estimated using the client packet size distribution. The packet size is typically between 40 and 64 bytes. IP packets arrive at the measurement location at random points in time. The packet size is also a random variable. To apply a time-series model to the packet counts, the arrivals must first be aggregated over a time interval Δt. In such a case, the packet count time series x(n), for n = 1, 2, ..., represents the number of packets that arrive in the time interval (n-1)Δt < t ≤ nΔt. The observation of the HTTP client traffic correlations in Chapter 2 suggests that the traffic may be comprised of elements that vary over different time-scales.

Two aggregation intervals, Δt_s and Δt_l, will be used to characterize the short- and long-range dependence respectively. The minimum interval over which the model can be used for predicting or simulating the packet arrival process is Δt_s. This may be prescribed based on the specific application. Here an interval Δt_s = 1 second is considered. This implies that correlations over time intervals less than one second can be neglected. The NACF of a time-series using a one-second aggregation time is depicted in Fig. 3.1. It is assumed that the time-series is stationary in the mean. For this figure, data from the UML data set was used. The correlations are computed for six hours (12 p.m. - 6 p.m.) of HTTP client traffic measured during a weekday. The piece-wise linear trends shown in the log-linear plot of the NACF depict the two dominant exponential time-scales that characterize the traffic. Estimates of the slope from the figure yield values of approximately 6 and 58 seconds respectively for the short- and long-range time constants. To examine the process characteristics on the long-range scale, the one-second packet count time series is further aggregated. When 30 samples of the time-series are summed, the result is a time-series aggregated at an interval of Δt_l = 30 seconds. The
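The two-scale aggregation described above can be sketched as follows. The function names are ours, and the half-open bin convention [kΔt, (k+1)Δt) is an assumption, equivalent up to boundary handling to the (n-1)Δt < t ≤ nΔt definition above.

```python
# Build the packet count series x(n) from raw arrival timestamps at a
# fine interval dt, then re-aggregate by summing blocks of m samples
# (m = 30 turns the 1-second series into the 30-second series).

def count_series(arrivals, dt, duration):
    nbins = int(duration / dt)
    x = [0] * nbins
    for t in arrivals:
        k = int(t / dt)                  # bin index for this arrival
        if 0 <= k < nbins:
            x[k] += 1
    return x

def reaggregate(x, m):
    usable = len(x) - len(x) % m         # drop a trailing partial block
    return [sum(x[i:i + m]) for i in range(0, usable, m)]
```

For example, `reaggregate(x, 30)` applied to a 1-second count series yields the Δt_l = 30 second series used for the long-range analysis.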


Network Working Group Request for Comments: 1046 ISI February A Queuing Algorithm to Provide Type-of-Service for IP Links Network Working Group Request for Comments: 1046 W. Prue J. Postel ISI February 1988 A Queuing Algorithm to Provide Type-of-Service for IP Links Status of this Memo This memo is intended to explore how

More information

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications

Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Performance Analysis of Cell Switching Management Scheme in Wireless Packet Communications Jongho Bang Sirin Tekinay Nirwan Ansari New Jersey Center for Wireless Telecommunications Department of Electrical

More information

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering McGill University - Faculty of Engineering Department of Electrical and Computer Engineering ECSE 494 Telecommunication Networks Lab Prof. M. Coates Winter 2003 Experiment 5: LAN Operation, Multiple Access

More information

STUDYING NETWORK TIMING WITH PRECISION PACKET DELAY MEASUREMENTS

STUDYING NETWORK TIMING WITH PRECISION PACKET DELAY MEASUREMENTS STUDYING NETWORK TIMING WITH PRECISION PACKET DELAY MEASUREMENTS Lee Cosart R&D, Symmetricom, Inc. 2300 Orchard Parkway San Jose, CA 95131, USA lcosart@symmetricom.com Abstract As the transmission of telecommunications

More information

Advanced Computer Networks

Advanced Computer Networks Advanced Computer Networks QoS in IP networks Prof. Andrzej Duda duda@imag.fr Contents QoS principles Traffic shaping leaky bucket token bucket Scheduling FIFO Fair queueing RED IntServ DiffServ http://duda.imag.fr

More information

Creating transportation system intelligence using PeMS. Pravin Varaiya PeMS Development Group

Creating transportation system intelligence using PeMS. Pravin Varaiya PeMS Development Group Creating transportation system intelligence using PeMS Pravin Varaiya PeMS Development Group Summary Conclusion System overview Routine reports: Congestion monitoring, LOS Finding bottlenecks Max flow

More information

General comments on candidates' performance

General comments on candidates' performance BCS THE CHARTERED INSTITUTE FOR IT BCS Higher Education Qualifications BCS Level 5 Diploma in IT April 2018 Sitting EXAMINERS' REPORT Computer Networks General comments on candidates' performance For the

More information

A VERIFICATION OF SELECTED PROPERTIES OF TELECOMMUNICATION TRAFFIC GENERATED BY OPNET SIMULATOR.

A VERIFICATION OF SELECTED PROPERTIES OF TELECOMMUNICATION TRAFFIC GENERATED BY OPNET SIMULATOR. UNIVERSITY OF LJUBLJANA Faculty of Electrical Engineering Daniel Alonso Martinez A VERIFICATION OF SELECTED PROPERTIES OF TELECOMMUNICATION TRAFFIC GENERATED BY OPNET SIMULATOR. Erasmus exchange project

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Design and Implementation of Measurement-Based Resource Allocation Schemes Within The Realtime Traffic Flow Measurement Architecture

Design and Implementation of Measurement-Based Resource Allocation Schemes Within The Realtime Traffic Flow Measurement Architecture Design and Implementation of Measurement-Based Resource Allocation Schemes Within The Realtime Traffic Flow Measurement Architecture Robert D. allaway and Michael Devetsikiotis Department of Electrical

More information

TELCOM 2130 Queueing Theory. David Tipper Associate Professor Graduate Telecommunications and Networking Program. University of Pittsburgh

TELCOM 2130 Queueing Theory. David Tipper Associate Professor Graduate Telecommunications and Networking Program. University of Pittsburgh TELCOM 2130 Queueing Theory David Tipper Associate Professor Graduate Telecommunications and Networking Program University of Pittsburgh Learning Objective To develop the modeling and mathematical skills

More information

CS321: Computer Networks Congestion Control in TCP

CS321: Computer Networks Congestion Control in TCP CS321: Computer Networks Congestion Control in TCP Dr. Manas Khatua Assistant Professor Dept. of CSE IIT Jodhpur E-mail: manaskhatua@iitj.ac.in Causes and Cost of Congestion Scenario-1: Two Senders, a

More information

arxiv: v3 [cs.ni] 3 May 2017

arxiv: v3 [cs.ni] 3 May 2017 Modeling Request Patterns in VoD Services with Recommendation Systems Samarth Gupta and Sharayu Moharir arxiv:1609.02391v3 [cs.ni] 3 May 2017 Department of Electrical Engineering, Indian Institute of Technology

More information

FINAL Tuesday, 20 th May 2008

FINAL Tuesday, 20 th May 2008 Data Communication & Networks FINAL Exam (Spring 2008) Page 1 / 23 Data Communication & Networks Spring 2008 Semester FINAL Tuesday, 20 th May 2008 Total Time: 180 Minutes Total Marks: 100 Roll Number

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

Performance Of Common Data Communications Protocols Over Long Delay Links An Experimental Examination 1. Introduction

Performance Of Common Data Communications Protocols Over Long Delay Links An Experimental Examination 1. Introduction Performance Of Common Data Communications Protocols Over Long Delay Links An Experimental Examination Hans Kruse McClure School of Communication Systems Management Ohio University 9 S. College Street Athens,

More information

OSI Layer OSI Name Units Implementation Description 7 Application Data PCs Network services such as file, print,

OSI Layer OSI Name Units Implementation Description 7 Application Data PCs Network services such as file, print, ANNEX B - Communications Protocol Overheads The OSI Model is a conceptual model that standardizes the functions of a telecommunication or computing system without regard of their underlying internal structure

More information

Daniel A. Menascé, Ph. D. Dept. of Computer Science George Mason University

Daniel A. Menascé, Ph. D. Dept. of Computer Science George Mason University Daniel A. Menascé, Ph. D. Dept. of Computer Science George Mason University menasce@cs.gmu.edu www.cs.gmu.edu/faculty/menasce.html D. Menascé. All Rights Reserved. 1 Benchmark System Under Test (SUT) SPEC

More information

Optical networking technology

Optical networking technology 1 Optical networking technology Technological advances in semiconductor products have essentially been the primary driver for the growth of networking that led to improvements and simplification in the

More information

A simple mathematical model that considers the performance of an intermediate node having wavelength conversion capability

A simple mathematical model that considers the performance of an intermediate node having wavelength conversion capability A Simple Performance Analysis of a Core Node in an Optical Burst Switched Network Mohamed H. S. Morsy, student member, Mohamad Y. S. Sowailem, student member, and Hossam M. H. Shalaby, Senior member, IEEE

More information

Buffer Management for Self-Similar Network Traffic

Buffer Management for Self-Similar Network Traffic Buffer Management for Self-Similar Network Traffic Faranz Amin Electrical Engineering and computer science Department Yazd University Yazd, Iran farnaz.amin@stu.yazd.ac.ir Kiarash Mizanian Electrical Engineering

More information

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15

Introduction to Real-Time Communications. Real-Time and Embedded Systems (M) Lecture 15 Introduction to Real-Time Communications Real-Time and Embedded Systems (M) Lecture 15 Lecture Outline Modelling real-time communications Traffic and network models Properties of networks Throughput, delay

More information

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am 15-744 Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am (please submit through e-mail to zhuoc@cs.cmu.edu and srini@cs.cmu.edu) Name: A Congestion Control 1. At time t, a TCP connection

More information

INTERNATIONAL TELECOMMUNICATION UNION

INTERNATIONAL TELECOMMUNICATION UNION INTERNATIONAL TELECOMMUNICATION UNION TELECOMMUNICATION STANDARDIZATION SECTOR STUDY PERIOD 21-24 English only Questions: 12 and 16/12 Geneva, 27-31 January 23 STUDY GROUP 12 DELAYED CONTRIBUTION 98 Source:

More information

Title: Proposed modifications to Performance Testing Baseline: Throughput and Latency Metrics

Title: Proposed modifications to Performance Testing Baseline: Throughput and Latency Metrics 1 ATM Forum Document Number: ATM_Forum/97-0426. Title: Proposed modifications to Performance Testing Baseline: Throughput and Latency Metrics Abstract: This revised text of the baseline includes better

More information

Advanced Application Reporting USER GUIDE

Advanced Application Reporting USER GUIDE Advanced Application Reporting USER GUIDE CONTENTS 1.0 Preface: About This Document 5 2.0 Conventions 5 3.0 Chapter 1: Introducing Advanced Application Reporting 6 4.0 Features and Benefits 7 5.0 Product

More information

Introduction: Two motivating examples for the analytical approach

Introduction: Two motivating examples for the analytical approach Introduction: Two motivating examples for the analytical approach Hongwei Zhang http://www.cs.wayne.edu/~hzhang Acknowledgement: this lecture is partially based on the slides of Dr. D. Manjunath Outline

More information

A Study of Burstiness in TCP Flows

A Study of Burstiness in TCP Flows A Study of Burstiness in TCP Flows Srinivas Shakkottai 1, Nevil Brownlee 2, and K. C. Claffy 3 1 Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign, USA email:

More information

Telecommunication of Stabilizing Signals in Power Systems

Telecommunication of Stabilizing Signals in Power Systems Telecommunication of Stabilizing Signals in Power Systems Guillaume J. Raux, Ali Feliachi, and Matthew C. Valenti Advanced Power Engineering Research Center Lane Department of Computer Science & Electrical

More information

Module objectives. Integrated services. Support for real-time applications. Real-time flows and the current Internet protocols

Module objectives. Integrated services. Support for real-time applications. Real-time flows and the current Internet protocols Integrated services Reading: S. Keshav, An Engineering Approach to Computer Networking, chapters 6, 9 and 4 Module objectives Learn and understand about: Support for real-time applications: network-layer

More information

packet-switched networks. For example, multimedia applications which process

packet-switched networks. For example, multimedia applications which process Chapter 1 Introduction There are applications which require distributed clock synchronization over packet-switched networks. For example, multimedia applications which process time-sensitive information

More information

Contents The Definition of a Fieldbus An Introduction to Industrial Systems Communications.

Contents The Definition of a Fieldbus An Introduction to Industrial Systems Communications. Contents Page List of Tables. List of Figures. List of Symbols. Dedication. Acknowledgment. Abstract. x xi xv xxi xxi xxii Chapter 1 Introduction to FieldBuses Systems. 1 1.1. The Definition of a Fieldbus.

More information

Performance Evaluation of Scheduling Mechanisms for Broadband Networks

Performance Evaluation of Scheduling Mechanisms for Broadband Networks Performance Evaluation of Scheduling Mechanisms for Broadband Networks Gayathri Chandrasekaran Master s Thesis Defense The University of Kansas 07.31.2003 Committee: Dr. David W. Petr (Chair) Dr. Joseph

More information

The Internet and the Web. recall: the Internet is a vast, international network of computers

The Internet and the Web. recall: the Internet is a vast, international network of computers The Internet and the Web 1 History of Internet recall: the Internet is a vast, international network of computers the Internet traces its roots back to the early 1960s MIT professor J.C.R. Licklider published

More information

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput Topics TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput 2 Introduction In this chapter we will discuss TCP s form of flow control called a sliding window protocol It allows

More information

A Non-Parametric Approach to Generation and Validation of Synthetic Network Traffic

A Non-Parametric Approach to Generation and Validation of Synthetic Network Traffic The UNIVERSITY of NORTH CAROLINA at CHAPEL HILL A Non-Parametric Approach to Generation and Validation of Synthetic Network Traffic Félix Hernández-Campos ndez-campos Kevin Jeffay Don Smith Department

More information

Markov Chains and Multiaccess Protocols: An. Introduction

Markov Chains and Multiaccess Protocols: An. Introduction Markov Chains and Multiaccess Protocols: An Introduction Laila Daniel and Krishnan Narayanan April 8, 2012 Outline of the talk Introduction to Markov Chain applications in Communication and Computer Science

More information

On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on

On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/220850337 On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement

More information

Networking Quality of service

Networking Quality of service System i Networking Quality of service Version 6 Release 1 System i Networking Quality of service Version 6 Release 1 Note Before using this information and the product it supports, read the information

More information

Configuring IP Services

Configuring IP Services CHAPTER 8 Configuring IP Services This chapter describes how to configure optional IP services supported by the Cisco Optical Networking System (ONS) 15304. For a complete description of the commands in

More information

Load Balancing with Minimal Flow Remapping for Network Processors

Load Balancing with Minimal Flow Remapping for Network Processors Load Balancing with Minimal Flow Remapping for Network Processors Imad Khazali and Anjali Agarwal Electrical and Computer Engineering Department Concordia University Montreal, Quebec, Canada Email: {ikhazali,

More information

Replicate It! Scalable Content Delivery: Why? Scalable Content Delivery: How? Scalable Content Delivery: How? Scalable Content Delivery: What?

Replicate It! Scalable Content Delivery: Why? Scalable Content Delivery: How? Scalable Content Delivery: How? Scalable Content Delivery: What? Accelerating Internet Streaming Media Delivery using Azer Bestavros and Shudong Jin Boston University http://www.cs.bu.edu/groups/wing Scalable Content Delivery: Why? Need to manage resource usage as demand

More information

Markov Model Based Congestion Control for TCP

Markov Model Based Congestion Control for TCP Markov Model Based Congestion Control for TCP Shan Suthaharan University of North Carolina at Greensboro, Greensboro, NC 27402, USA ssuthaharan@uncg.edu Abstract The Random Early Detection (RED) scheme

More information

COMPUTER NETWORKS PERFORMANCE. Gaia Maselli

COMPUTER NETWORKS PERFORMANCE. Gaia Maselli COMPUTER NETWORKS PERFORMANCE Gaia Maselli maselli@di.uniroma1.it Prestazioni dei sistemi di rete 2 Overview of first class Practical Info (schedule, exam, readings) Goal of this course Contents of the

More information

INCREASING THE EFFICIENCY OF NETWORK INTERFACE CARD. Amit Uppal

INCREASING THE EFFICIENCY OF NETWORK INTERFACE CARD. Amit Uppal INCREASING THE EFFICIENCY OF NETWORK INTERFACE CARD By Amit Uppal A Thesis Submitted to the Faculty of Mississippi State University in Partial Fulfillment of the Requirements for the Degree of Master of

More information

SIMULATION FRAMEWORK MODELING

SIMULATION FRAMEWORK MODELING CHAPTER 5 SIMULATION FRAMEWORK MODELING 5.1 INTRODUCTION This chapter starts with the design and development of the universal mobile communication system network and implementation of the TCP congestion

More information

CHAPTER 5. QoS RPOVISIONING THROUGH EFFECTIVE RESOURCE ALLOCATION

CHAPTER 5. QoS RPOVISIONING THROUGH EFFECTIVE RESOURCE ALLOCATION CHAPTER 5 QoS RPOVISIONING THROUGH EFFECTIVE RESOURCE ALLOCATION 5.1 PRINCIPLE OF RRM The success of mobile communication systems and the need for better QoS, has led to the development of 3G mobile systems

More information

A Real-Time Network Simulation Application for Multimedia over IP

A Real-Time Network Simulation Application for Multimedia over IP A Real-Time Simulation Application for Multimedia over IP ABSTRACT This paper details a Secure Voice over IP (SVoIP) development tool, the Simulation Application (Netsim), which provides real-time network

More information

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca CSCI 1680 Computer Networks Fonseca Homework 1 Due: 27 September 2012, 4pm Question 1 - Layering a. Why are networked systems layered? What are the advantages of layering? Are there any disadvantages?

More information