Design, Implementation and Performance of Resource Management Scheme for TCP Connections at Web Proxy Servers


Takuya Okamoto and Tatsuhiko Terai (Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka, Japan)
Go Hasegawa and Masayuki Murata (Cybermedia Center, Osaka University, 1-30 Machikaneyama, Toyonaka, Osaka, Japan)

Abstract

A great deal of research has been devoted to tackling the problem of network congestion caused by the increase of Internet traffic. However, little attention has been paid to improving the performance of Internet servers, in spite of projections that the bottleneck is now shifting from the network to the endhosts. We have previously proposed a scheme called SSBT (Scalable Socket Buffer Tuning), intended to improve the performance of Web servers by managing their resources for TCP connections effectively and fairly. On the current Internet, however, a significant number of Web document transfer requests are sent through Web proxy servers. Accordingly, in this paper we propose a new resource/connection management scheme for Web proxy servers to improve their performance and reduce Web document transfer time. We validate the effectiveness of the proposed scheme through simulation experiments and confirm that it can manage resources effectively. Additionally, we implement the proposed scheme on an actual Web proxy server and examine its performance, in terms of proxy server throughput and document transfer delay, using benchmark tests based on a Web access model.

1 Introduction

The rapid increase of users on the Internet has been the impetus for many research efforts toward resolving network congestion caused by increasing network traffic.
However, there has been little work on improving the performance of Internet hosts, in spite of the projected shift of the performance bottleneck from the network to the endhosts. There are already hints of this scenario's emergence, as evidenced by the proliferation of busy Web servers on the present-day Internet that receive hundreds of document transfer requests every second during peak periods. In [1], we proposed SSBT (Scalable Socket Buffer Tuning), which is intended to improve the performance of Web servers by managing their resources effectively and fairly. SSBT comprises two major components: the E-ATBT (Equation-based Automatic TCP Buffer Tuning) and SMR (Simple Memory-copy Reduction) schemes. In E-ATBT, we maintain an expected throughput value for each active TCP connection, determined from an analytic estimation of TCP throughput [2]. The expected value is characterized by the packet loss rate, RTT (Round Trip Time), and RTO (Retransmission Time Out), all of which are easily monitored by the sender host. A send socket buffer is then assigned to each connection based on its expected throughput, with consideration of max-min fairness among the connections. The SMR scheme provides a set of socket system calls that reduce the number of memory-copy operations at the sender host in TCP data transfer. SMR is similar to other schemes [3, 4], but is simpler to implement. We have validated the effectiveness of these mechanisms through simulation and implementation experiments; the results indicated that SSBT can reduce Web document transfer times by 1/5 compared with conventional Web servers. On the current Internet, many requests for Web document transfer are made via Web proxy servers [5].
Since proxy servers are usually provided by ISPs (Internet Service Providers) for their customers, they must accommodate a large number of simultaneous HTTP accesses. Furthermore, proxy servers must handle both upward TCP connections (from the proxy server to Web servers) and downward TCP connections (from client hosts to the proxy server). Hence, the proxy server is a likely spot for bottlenecks to occur during Web document transfers, even when the bandwidth of the network and the performance of the Web servers are adequate. We contend that any effort to reduce Web document transfer times must consider improving proxy server performance. The bulk of past research on improving Web proxy server performance has focused on cache replacement algorithms [6, 7]. In [8], the authors have evaluated the

performance of Web proxy servers, focusing on the difference between HTTP/1.0 and HTTP/1.1, through simulation experiments, including the effects of cookies and of client hosts aborting document transfers. However, there is little work on resource management at the proxy server, and no effective mechanism has been proposed. In this paper, we first discuss several problems that arise in handling TCP connections at a Web proxy server. One of these problems involves the assignment of socket buffers to TCP connections at the proxy server. When a TCP connection is not assigned a send/receive socket buffer sized according to its expected throughput, the assigned socket buffer may be left unused or may be insufficient for the intended transfer, which results in a waste of the socket buffer. Another problem is the management of persistent TCP connections, which can waste the resources of a busy proxy server. When a proxy server must accommodate many persistent TCP connections without an effective management scheme, its resources remain assigned to these connections whether they are active or not. As a result, new TCP connections cannot be established because the server resources run short. We propose a new resource management scheme for proxy servers that is capable of solving these problems. Our proposed scheme has the following two features. One is an enhanced E-ATBT, a revision of our previously proposed E-ATBT adapted for proxy servers. Unlike a Web server, the proxy server must handle upward and downward TCP connections and behave as a TCP receiver host to obtain Web documents from Web servers. We therefore enhance E-ATBT to handle the dependency between upward and downward TCP connections effectively and to assign the receive socket buffer dynamically.
The other is a connection management scheme that prevents newly arriving TCP connection requests from being rejected at the proxy server due to a lack of resources. The scheme manages the persistent TCP connections provided by HTTP/1.1, intentionally closing them when the resources at the proxy server run short. We validate the effectiveness of our proposed scheme through simulation and implementation experiments. In the simulation experiments, we evaluate the essential performance and characteristics of our proposed scheme by comparing them with those of an original proxy server. We further show the results of the implementation experiments, and confirm the effectiveness of the connection management scheme in terms of proxy server throughput and document transfer delay. The rest of this paper is organized as follows. In Section 2, we outline current Web proxy servers and their resources as they relate to TCP connections, and discuss the merits and demerits of persistent TCP connections under HTTP/1.1. In Section 3, we propose a new resource management scheme for proxy servers, and we confirm its effectiveness through the simulation experiments described in Section 4. Additionally, in Section 5 we present some implementation issues of our proposed scheme on an actual proxy server and confirm the effectiveness of the scheme using benchmark tests. Finally, we present our concluding remarks in Section 6.

2 Background

In this section, we describe the background of our research on Web proxy servers in Subsection 2.1. We discuss the potential of persistent connections to improve Web document transfer times; as will become apparent, however, they require careful treatment at the proxy server, which will be described in Subsection 2.2.

2.1 Web Proxy Server

A Web proxy server works as an agent for Web client hosts that request Web documents. See Figure 1.
When it receives a Web document transfer request from a client host, it obtains the requested document from the original Web server on behalf of the client host and delivers it to the client host. It also caches the obtained documents; when another client host requests the same document, it transfers the cached copy, which results in a much reduced document transfer time. For example, it was reported in [8] that using Web proxy servers reduces document transfer times by up to 30%. Furthermore, when the cache is hit, the document transfer is performed without any connection to the Web server, so congestion within the network and at Web servers can also be reduced. Unlike a Web server, the proxy server accommodates a large number of connections both from Web client hosts and to Web servers. The proxy server behaves as a sender host for downward TCP connections (between client hosts and the proxy server) and as a receiver host for upward TCP connections (between the proxy server and Web servers). Therefore, if resource management is not appropriately configured at the proxy server, the document transfer time increases even when the network is not congested and the Web server load is not high. That is, careful and effective resource management is a critical issue in improving the performance of a Web proxy server. On the current Internet, however, most proxy servers, including those in [9, 10], lack such considerations. The resources at the Web proxy server that we focus on in this paper are the mbuf, file descriptors, control blocks, and socket buffer. These are closely related to the performance of TCP connections when transferring Web documents. The mbuf, file descriptors, and control blocks are resources for TCP connections, while the socket buffer is used for storing the documents transferred by the TCP connections. When these resources run short, the proxy server is unable to establish a new TCP connection.
Thus, the client host has to wait for existing TCP connections to be closed and their assigned resources to be released. If this cannot be accomplished, the proxy server rejects the request. In what follows, we introduce the resources of Web proxy servers as they relate to TCP connections. Although our discussion considers FreeBSD 4.0 [11], we believe that the gist of it also applies to other OSs, such as Linux.

Figure 1: Web proxy server (a client host requests a document; on a cache hit the proxy delivers it directly, and on a miss it requests the document from the original Web server over an upward TCP connection)

Mbuf: Each TCP connection is assigned an mbuf, which is located in the kernel memory space and is used to move transmission data between the socket buffer and the network interface. When the data size is larger than the mbuf, the data is stored in another memory space, called an mbuf cluster, which is linked to the mbuf. Several mbuf clusters are used for storing data according to its size. The number of mbufs prepared by the OS is configured when building the kernel; the default number is 4096 in FreeBSD [12]. Since each TCP connection is assigned at least one mbuf when established, the default number of connections a proxy server can establish simultaneously is 4096, which would be too small for busy proxy servers.

File descriptor: A file descriptor is assigned to each file in a file system so that the kernel and user applications can identify the file. One is also associated with each TCP connection when it is established; this is called a socket file descriptor. The number of connections that can be established simultaneously is limited by the number of file descriptors prepared by the OS. The default number of file descriptors is 1064 in FreeBSD [12]. In contrast to the case of mbufs, the number of file descriptors can be changed after the kernel is booted. However, since user applications such as Squid [9] allocate memory according to the number of available file descriptors when they boot, it is very difficult to inform applications of a change in the number of available file descriptors at run time. That is, we cannot dynamically change the number of file descriptors used by the applications.

Control blocks: When establishing a new TCP connection, it is necessary to use additional memory space for the data structures that store the connection information, such as inpcb, tcpcb, and socket.
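A back-of-envelope view of how these per-connection resources cap concurrency: each established connection consumes at least one mbuf, one socket file descriptor, and one set of control blocks, so the scarcest resource sets the limit. This sketch simply encodes that arithmetic; the default figures are the FreeBSD values quoted above, and the one-resource-per-connection minimum is an assumption for illustration.

```python
def max_concurrent_connections(nmbufs=4096, nfiles=1064, ncontrol=1064,
                               mbufs_per_conn=1, fds_per_conn=1):
    """Upper bound on simultaneously established TCP connections:
    the scarcest of mbufs, file descriptors, and control blocks wins.
    Defaults are the FreeBSD figures quoted in the text."""
    return min(nmbufs // mbufs_per_conn,
               nfiles // fds_per_conn,
               ncontrol)
```

With the quoted defaults the file descriptors and control blocks, not the mbufs, are the binding limit.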
The inpcb structure stores the source and destination IP addresses, the port numbers, and so on. The tcpcb structure stores network information, such as the RTT (Round Trip Time), RTO (Retransmission Time Out), and congestion window size, which are used for TCP congestion control [13]. The socket structure stores information about the socket. The maximum number of these structures that can be built in the memory space is initially 1064. Since the memory space for these data structures is set when building the kernel and cannot be changed while the OS is running, no new TCP connection can be established once this memory space runs short.

Socket buffer: The socket buffer is used for data transfer operations between user applications and the sender/receiver TCP. When a user application transmits data using TCP, the data is copied to the send socket buffer and subsequently copied to the mbufs (or mbuf clusters). The size of the assigned socket buffer is a key issue for the effective transfer of data by TCP. Suppose that a server host is sending TCP data to two client hosts, one on a 64 Kbps dial-up link (say, client A) and the other on a 100 Mbps LAN (client B). If the server host assigns equal-size send socket buffers to both client hosts, it is likely that the assigned buffer is too large for client A and too small for client B, because of the difference in the capacity (more strictly, the bandwidth-delay product) of their connections. For an effective buffer allocation to both client hosts, a compromise in buffer usage should be taken into account. In [1], we proposed the E-ATBT scheme, which dynamically assigns a send socket buffer to each TCP connection according to its expected throughput, estimated from observed network parameters such as the packet loss probability, RTT, and RTO. That is, the sender host calculates the average throughput of each TCP connection from these three parameters, based on the analysis reported in [2].
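A minimal sketch of this idea, not the authors' implementation: estimate each connection's throughput from its (loss rate, RTT, RTO), take throughput × RTT as its buffer demand, and divide a fixed buffer pool among connections with max-min fairness. The simplified square-root-style throughput formula below is the well-known approximation of Padhye et al. standing in for the full analysis of [2]; all parameter values are illustrative.

```python
import math

def expected_throughput(p, rtt, rto, mss=1460, b=2):
    """Rough TCP throughput estimate (bytes/sec) from loss rate p, RTT,
    and RTO, after the approximation of Padhye et al. (a stand-in for
    the analysis E-ATBT actually uses)."""
    if p <= 0:
        return float("inf")            # no observed loss: window-limited
    denom = rtt * math.sqrt(2 * b * p / 3) + \
            rto * min(1, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    return mss / denom

def maxmin_buffers(conns, total):
    """Assign send-buffer bytes with max-min fairness.
    conns maps a connection id to (p, rtt, rto); each connection's
    demand is roughly throughput * RTT, i.e. its bandwidth-delay product."""
    demand = {c: expected_throughput(p, rtt, rto) * rtt
              for c, (p, rtt, rto) in conns.items()}
    alloc, remaining, left = {}, dict(demand), total
    while remaining:
        share = left / len(remaining)
        small = {c: d for c, d in remaining.items() if d <= share}
        if not small:                  # everyone wants more: split evenly
            for c in remaining:
                alloc[c] = share
            return alloc
        for c, d in small.items():     # fully satisfy the modest demands
            alloc[c] = d
            left -= d
            del remaining[c]
    return alloc
```

When the pool covers all demands, each connection gets its bandwidth-delay product; under scarcity the pool is split evenly among the still-unsatisfied connections, which is the max-min outcome.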
We then calculate the required send socket buffer size as the product of the estimated throughput and the RTT of the TCP connection. By taking the observed network parameters into account, the resources of the Web server are appropriately allocated to connections in various network environments. E-ATBT is also applicable to Web proxy servers, since proxy servers accommodate many TCP connections issued by client hosts in various environments. However, since a dependency exists between the upward and downward TCP connections at a proxy server, a simple application of E-ATBT is insufficient. Furthermore, since the proxy server behaves as a receiver host for the upward TCP connections to Web servers, we also have to consider a management scheme for the receive socket buffer, which was not considered in the original E-ATBT.

2.2 Persistent TCP Connections in HTTP/1.1

In recent years, many Web servers and client hosts (namely, Web browsers) have come to support the persistent connection option, one of the most important functions of HTTP/1.1 [14]. In the older version of HTTP (HTTP/1.0), the TCP connection between server and client hosts is closed immediately after each document transfer is completed. However, since Web documents contain many in-line images, HTTP/1.0 must establish TCP connections many times to download them. This results in a significant increase of document transfer time, since the average size of the Web documents at typical Web servers is about 10 KBytes [15, 16]. The three-way handshake in each TCP connection establishment makes the situation worse. With an HTTP/1.1 persistent TCP connection, the server preserves the status of the TCP connection, including the congestion window size, RTT, RTO, ssthresh, and so on, when it finishes the document transfer. It then re-uses the connection and its status when other documents are transferred over the same HTTP session. In this way, the three-way handshake can be avoided. However, the proxy server keeps the TCP connection established regardless of whether the connection is active (in use for packet transfer) or not. That is, the resources at the server are wasted while the TCP connection is inactive. Accordingly, a proxy server that accommodates many persistent TCP connections may waste a significant portion of its resources on maintaining them. In what follows, we introduce a simple estimation of how many TCP connections are active or idle, to show that many resources are wasted at a typical Web proxy server when native persistent TCP connections are used. For this purpose, we use the network topology shown in Figure 2, where a Web client host and a Web server are connected via a proxy server, and derive the probability that a persistent TCP connection is active, that is, in use sending TCP packets.

Figure 2: Analysis model (client host - proxy server - Web server, with cache hit ratio h; rtt_c, rto_c, and p_c on the client side; rtt_s, rto_s, and p_s on the server side)

In the figure, p_c, rtt_c, and rto_c are the packet loss ratio, RTT, and RTO between the client host and the proxy server, respectively. Similarly, p_s, rtt_s, and rto_s are those between the proxy server and the Web server. The packet size is fixed at m. The mean throughput of the TCP connection between the proxy server and the client host and that between the Web server and the proxy server, denoted as ρ_c and ρ_s respectively, can be obtained by using the analysis results presented in our previous work [17]. Note that [17] provides a more accurate analysis of TCP throughput than those in [2] and [18], especially for small file transfers. Using the results in [17], we can derive the time for a Web document transfer via a proxy server.
For this purpose, we also introduce the parameters h and f, which respectively represent the cache hit ratio at the Web proxy server and the size of the transferred document. Note that the proxy server is likely to cache a whole Web page, which includes the main document and some in-line images. That is, when the main document is found in the proxy server's cache, the following in-line images are likely to be cached as well, and vice versa. Thus, h alone is not an adequate metric for examining the effect of persistent connections, but the following observation also applies to that case. When the requested document is cached by the proxy server, a request to the original Web server is not required, and the document is delivered directly from the proxy server to the client host. When the proxy server does not have the requested document, on the other hand, it must be transferred from the appropriate Web server to the client host via the proxy server. Thus, the document transfer time, T(f), can be determined as follows:

T(f) = h (S_c + f/ρ_c) + (1 − h) (S_c + S_s + f/ρ_c + f/ρ_s)

where S_c and S_s represent the setup times of a TCP connection between the client host and the proxy server and between the proxy server and the Web server, respectively. To derive S_c and S_s, we must consider the effect of the persistent connections provided by HTTP/1.1, which omit the three-way handshake. Here, we define X_c as the probability that the TCP connection between the client host and the proxy server is kept connected as a persistent connection, and X_s as the corresponding probability for the TCP connection between the proxy server and the Web server. Then S_c and S_s can be described as follows:

S_c = X_c (1/2) rtt_c + (1 − X_c) (3/2) rtt_c   (1)
S_s = X_s (1/2) rtt_s + (1 − X_s) (3/2) rtt_s   (2)

X_c and X_s depend on the length of the persistent timer, T_p.
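The transfer-time expression above, combined with the Pareto idle-time model this subsection goes on to use, can be evaluated numerically. The sketch below does so under illustrative assumptions (ρ_c = ρ_s = 50 KBytes/s, rtt_c = rtt_s = 0.2 sec, h = 0.5, f = 12 KBytes); these values are ours, not the paper's, and the utilization routine follows the expression for U derived in this subsection.

```python
import math

ALPHA, K = 1.5, 1.0                      # Pareto idle-time parameters

def d(x):
    """Cumulative distribution of the idle time (Pareto)."""
    return 0.0 if x < K else 1.0 - (K / x) ** ALPHA

def pdf(x):
    """Pareto probability density of the idle time."""
    return ALPHA * K ** ALPHA / x ** (ALPHA + 1)

def setup_time(rtt, kept):
    """Eqs. (1)-(2): half an RTT when the persistent connection survived,
    three halves of an RTT (extra three-way handshake) when it did not."""
    return kept * rtt / 2 + (1 - kept) * 3 * rtt / 2

def transfer_time(f, h, rtt_c, rho_c, rtt_s, rho_s, t_p):
    """Average document transfer time T(f) through the proxy."""
    x_c, x_s = d(t_p), (1 - h) * d(t_p)  # Eqs. (4)-(5)
    s_c, s_s = setup_time(rtt_c, x_c), setup_time(rtt_s, x_s)
    return h * (s_c + f / rho_c) + (1 - h) * (s_c + s_s + f / rho_c + f / rho_s)

def utilization(t_p, f=12_000, h=0.5, rtt_c=0.2, rho_c=50_000,
                rtt_s=0.2, rho_s=50_000, steps=20_000):
    """Numerically evaluate U, the fraction of time a persistent
    connection is actually transferring data (midpoint rule)."""
    t_f = transfer_time(f, h, rtt_c, rho_c, rtt_s, rho_s, t_p)
    dx = (t_p - K) / steps
    area = 0.0
    for i in range(steps):               # idle times shorter than the timer
        x = K + (i + 0.5) * dx
        area += pdf(x) * t_f / (t_f + x) * dx
    return area + (1 - d(t_p)) * t_f / (t_f + t_p)
```

Even under these mild assumptions the computed utilization stays well below one half and drops as the persistent timer grows, matching the qualitative conclusion drawn from Figure 3.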
That is, if the idle time between two successive document transfers is smaller than T_p, the TCP connection can be re-used for the second document transfer. If the idle time is larger than T_p, on the other hand, the TCP connection has been closed and a new TCP connection must be established. According to the results in [15], where the authors modeled the access pattern of a Web client host, the idle time between Web document transfers follows a Pareto distribution whose probability density function is given by

p(x) = α k^α / x^(α+1)   (3)

where α = 1.5 and k = 1. We can then calculate X_c and X_s as follows:

X_c = d(T_p)   (4)
X_s = (1 − h) d(T_p)   (5)

where d(x) is the cumulative distribution function of p(x). From Eqs. (1)-(5), the average document transfer time, T(f), can be determined. Finally, we can derive U, the utilization of a persistent TCP connection, as follows:

U = ∫[0, T_p] p(x) T(f)/(T(f) + x) dx + (1 − d(T_p)) T(f)/(T(f) + T_p)

Figure 3 plots the probability that a TCP connection is active as a function of the length of the persistent timer T_p for various parameter sets of rtt_c, p_c, rtt_s, and p_s. Here we set h to 0.5, the packet size to 1460 Bytes, and f to 12 KBytes, according to the average size of Web documents reported in [15]. We can see from these figures that the utilization of the TCP connection is very low, regardless of the network conditions (the RTTs and packet loss ratios on the links between the proxy server and the client host/Web server). Thus, if idle TCP connections are kept established at the proxy server, a large part of its resources are wasted. Furthermore, we can observe that the utilization becomes larger when the length of the persistent timer is small (< 5 sec). This is because a smaller value of T_p prevents situations in which the proxy server's resources are wasted. One solution to this problem is simply to discard HTTP/1.1
and to use HTTP/1.0, as the latter closes the TCP connection immediately upon the completion of a document transfer. However, HTTP/1.1 has other valuable mechanisms, such as pipelining and

content negotiation [14]. We should therefore develop an effective resource management scheme for use under HTTP/1.1. Our solution is that, as resources run short, the proxy server intentionally closes persistent TCP connections that are needlessly wasting its resources. We describe our scheme in detail in the next section.

Figure 3: Probability that a TCP connection is active as a function of the persistent timer, for various rtt_s and p_s: (a) rtt_c = 0.02 sec, p_c = 0.001; (b) rtt_c = 0.2 sec, p_c = 0.01

3 Algorithm

In this section, we propose a new resource management scheme suitable for Web proxy servers, which solves the problems pointed out in the previous section.

3.1 New Socket Buffer Management Method

As described in the previous section, a proxy server behaves as a TCP receiver host when it obtains a requested document from a Web server. We thus have to incorporate a receive socket buffer management algorithm, which was not considered in the original E-ATBT [1]. We also have to consider the dependency between the upward and downward TCP connections. In the following subsections, we propose E²-ATBT, a revision of our original E-ATBT with two additional schemes that eliminate these two problems.

3.1.1 Handling the Relation between Upward and Downward TCP Connections

A Web proxy server relays a document transfer request to a Web server on behalf of a Web client host. Thus, there is a close relation between an upward TCP connection (from the proxy server to the Web server) and the corresponding downward TCP connection (from the client host to the proxy server). That is, the difference in the expected throughput of the two connections should be taken into account when socket buffers are assigned to them.
For example, when the throughput of a certain downward TCP connection is larger than that of other concurrent downward TCP connections, E-ATBT assigns it a larger send socket buffer. However, if the throughput of the upward TCP connection corresponding to that downward connection is low, the send socket buffer assigned to the downward connection is likely not to be fully utilized. In this case, the unused send socket buffer should be re-assigned to other concurrent TCP connections having smaller socket buffers, so that their throughput can be improved. There is one problem that must be overcome to realize this method. TCP connections can be identified by the kernel through their control blocks (tcpcb), but the relation between upward and downward connections cannot be determined from them. Two possible ways to overcome this problem can be considered. One is that the proxy server monitors the utilization of the send socket buffers of downward TCP connections, and decreases the assigned buffer size of connections whose send socket buffers are not fully utilized. The other is that when the proxy server sends a document transfer request to the Web server, it attaches information about the relation to the packet header. The former can be realized by modifying only the proxy server, whereas the latter requires interaction with the HTTP protocol. At a high level of abstraction the two approaches have a similar effect, but the latter is harder to implement despite allowing more precise control. In the simulation experiments described in the next section, we assume that the proxy server knows the dependency between the downward and upward TCP connections.
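The coupling between the two connections can be made concrete in one line: on a cache miss, data cannot leave the proxy toward the client faster than it arrives from the Web server, so the downward connection's effective demand is capped by the slower of the two paths. A minimal sketch of that rule (our illustration, not the paper's code):

```python
def coupled_send_buffer(down_rho, down_rtt, up_rho):
    """Send-buffer demand (bytes) for a downward connection, capped by
    the throughput of its upward counterpart: min of the two expected
    throughputs times the downward RTT. Values in bytes/sec and sec."""
    return min(down_rho, up_rho) * down_rtt
```

The difference between the uncapped demand (down_rho * down_rtt) and this capped value is the buffer that can be handed to other concurrent connections.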
We use this assumption to confirm the essential performance of the proposed algorithm.

3.1.2 Control of the Receive Socket Buffer

Most past research has assumed that a receiver host has a sufficiently large receive socket buffer, based on the consideration that the performance bottleneck of data transfer lies not at the endhosts but within the network. Accordingly, many OSs assign a small receive socket buffer to each TCP connection; for example, the default size of the receive socket buffer in FreeBSD is 16 KBytes. As reported in [19], however, this is now regarded as very small, because network bandwidth

has increased dramatically in the current Internet and the performance of Internet servers has become higher and higher. To avoid the performance limitation introduced by the receive socket buffer, the receiver host should adjust its receive socket buffer size to match the congestion window size of the sender host. This can be done by monitoring the utilization of the receive socket buffer, or by adding information about the window size to the data packet header, similarly to the approach described in the previous subsection. In the simulation experiments of the next section, we suppose that the proxy server can obtain complete information about the required receive socket buffer sizes of upward TCP connections and control them accordingly.

3.2 Connection Management Method

As explained in Subsection 2.2, careful treatment of persistent TCP connections at the proxy server is necessary for efficient usage of its resources. We propose a new management scheme for persistent TCP connections at the proxy server that considers the amount of remaining resources. The key idea is as follows. When the load of the proxy server is low and the remaining resources are sufficient, it tries to keep as many TCP connections open as possible. When the resources at the proxy server are about to run short, the proxy server closes persistent TCP connections to free their resources, so that the released resources can be used for new TCP connections. To realize this control, the remaining resources of the proxy server must be monitored, and the persistent TCP connections must be tracked so that they can be kept or closed according to the monitored resource utilization. The implementation issues in managing persistent TCP connections will be discussed in Section 5.
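The keep-while-plentiful, reclaim-when-short policy just described can be sketched as a small bookkeeping class. This is an illustrative model, not the paper's implementation: the connection limit stands in for the real resource thresholds, and the eviction rule (close the longest-idle persistent connection) is one reasonable choice.

```python
class PersistentConnectionManager:
    """Sketch of the proposed policy: keep persistent connections open
    while resources last; when a new request would exceed the limit,
    forcibly close the longest-idle persistent connection instead of
    rejecting the request."""

    def __init__(self, max_conns):
        self.max_conns = max_conns
        self.active = set()            # connections transferring data
        self.idle = []                 # persistent, inactive; oldest first

    def request(self, conn):
        """Admit a new transfer, reclaiming an idle connection if needed.
        Returns False only when every connection is genuinely active."""
        if len(self.active) + len(self.idle) >= self.max_conns:
            if not self.idle:
                return False           # nothing reclaimable: reject
            self.close(self.idle.pop(0))
        self.active.add(conn)
        return True

    def finish(self, conn):
        """Transfer done; keep the connection as a persistent one."""
        self.active.discard(conn)
        self.idle.append(conn)

    def close(self, conn):
        pass                           # release mbuf/fd/control blocks here
```

Under this policy a request is rejected only when all connections are busy with transfers, which is exactly the behavior schemes (3) and (4) exhibit in the simulations.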
For further effective resource usage, we also add a mechanism that gradually decreases the amount of resources assigned to a persistent TCP connection after it becomes inactive. The socket buffer is not needed at all while a TCP connection is idle. We therefore gradually decrease the send/receive socket buffer of persistent TCP connections, taking into account the fact that the longer a connection remains idle, the more likely it is to be closed. In the next section, we evaluate the effect of the above algorithms through simulation experiments.

4 Simulation Experiments

In this section, we evaluate the performance of our proposed scheme through simulation experiments using ns-2 [20]. Figure 4 shows the simulation model. In the model, the bandwidths of the links between the client hosts and the proxy server and those between the proxy server and the Web servers are all set to 100 Mbps. To examine the effect of various network conditions, the packet loss probability on each link is randomly selected from 0.0001, 0.001, and 0.01. The propagation delay of each link between the client hosts and the proxy server is varied between 10 msec and 100 msec, and that between the proxy server and the Web servers between 10 msec and 200 msec. The number of Web servers is fixed at 50, and the number of client hosts is varied among 50, 100, 200, and 500. We ran a 1000 sec simulation in each experiment.

Figure 4: Network model for simulation experiments (cache hit ratio h = 0.5; 50, 100, 200, or 500 client hosts; 50 Web servers)

In the simulation experiments, each client host randomly selects one of the Web servers and generates a document transfer request via the proxy server.
The distribution of the requested document size follows [15]; that is, it is given by the combination of a log-normal distribution for small documents and a Pareto distribution for large ones. The access model of the client hosts also follows [15]: a client host first requests a main document, then requests some in-line images included in the document after a short interval (following [15], we call it the active off time), and then requests the next document after a somewhat longer interval (the inactive off time). Note that since we focus on the resource and connection management of proxy servers, we do not consider detailed caching behavior at the proxy server, including the cache replacement algorithm. Instead, we set the cache hit ratio at the proxy server, h, to 0.5. Using h, the proxy server decides either to transfer the requested document to the client host directly, or to deliver it to the client host after downloading it from the Web server. The proxy server has 3200 KBytes of socket buffer, which it assigns as send/receive socket buffers to the TCP connections. This means that the original scheme can establish at most 200 TCP connections concurrently, since it statically assigns 16 KBytes of send/receive socket buffer to each TCP connection. In what follows, we compare the performance of the following four schemes:

scheme (1), which does not use any of the enhanced algorithms presented in this paper;

scheme (2), which uses E²-ATBT;

scheme (3), which uses E²-ATBT and the connection management scheme described in Subsection 3.2, but not the algorithm that gradually decreases the socket buffer, i.e., the socket buffer size remains unchanged after documents are transferred;

scheme (4), which uses all of the proposed algorithms, that is, E²-ATBT and the connection management scheme, including the algorithm that gradually decreases the size of the socket buffer assigned to persistent TCP connections.
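The hybrid document-size distribution used by the workload generator (log-normal body, Pareto tail) can be sketched as follows. The numeric parameters here are illustrative assumptions of ours, not the values fitted in [15]; only the hybrid shape is taken from the text.

```python
import math
import random

def document_size(p_tail=0.07, mu=7.0, sigma=1.2, k=10_000, alpha=1.1):
    """Sample a document size in bytes: log-normal body with a Pareto
    tail, the hybrid shape reported for Web workloads. All numeric
    parameters are illustrative assumptions."""
    if random.random() < p_tail:
        # Pareto tail via inverse-CDF sampling: x = k * U^(-1/alpha)
        return int(k / random.random() ** (1 / alpha))
    # log-normal body for the many small documents
    return int(math.exp(random.gauss(mu, sigma)))
```

Most samples land in the few-KByte range, while the tail occasionally produces the very large transfers that dominate the byte count.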

Note that for schemes (3) and (4), we do not explicitly consider the amount and the threshold value of each resource explained in Subsection 2.1 (mbuf, file descriptor, control blocks, and socket buffer). Instead, we introduce N_max (= 200), the maximum number of connections that can be established simultaneously, to simulate the limitation of the proxy server resources. In schemes (1) and (2), newly arrived requests are rejected when the number of TCP connections at the proxy server reaches N_max. On the other hand, schemes (3) and (4) forcibly terminate some of the persistent TCP connections that are inactive for document transfer, and establish new TCP connections. For scheme (4), we exclude persistent TCP connections from the calculation process of the E²-ATBT algorithm, and halve the assigned size of the socket buffer every second. The minimum size of the assigned socket buffer is 1 KByte.

4.1 Comparison of HTTP/1.0 and HTTP/1.1

Before evaluating the proposed method, we first compare the effects of HTTP/1.0 and HTTP/1.1 on the proxy server performance, to confirm the necessity of the connection management scheme proposed in Subsection 2.2. In Figure 5, we plot the performance of the proxy server as a function of the number of client hosts under schemes (1) and (2). Here we define the performance of the proxy server as the total size of the documents transferred in both directions by the proxy server during the 1000 sec simulation time. It can be clearly observed from this figure that the performance of scheme (2) is improved over scheme (1) in both the HTTP/1.0 and HTTP/1.1 cases. This is because each TCP connection is assigned a proper size of socket buffer by E²-ATBT. We also see that the performance of the proxy server with HTTP/1.1 is higher than that with HTTP/1.0 when the number of client hosts is small (50 or 100). The reason is that the document transfer with HTTP/1.1
uses persistent TCP connections, which avoid the three-way handshake of establishing a new TCP connection for successive document transfer requests. On the other hand, the proxy server performance with HTTP/1.1 becomes even worse than that with HTTP/1.0 when the number of client hosts is 200, i.e., when the proxy server load is high. This is because many persistent TCP connections at the proxy server are not in actual use and waste the resources of the proxy server. As a result, TCP connections for new document transfer requests cannot be established. In contrast, since HTTP/1.0 closes the TCP connection immediately after transferring the requested document, new TCP connections can soon be established even when the load of the proxy server is high. Thus, although persistent TCP connections can improve the proxy server performance when the load of the proxy server is low, as the load becomes high they act to significantly decrease the proxy server performance. This result explicitly indicates the need for careful management of persistent TCP connections at a proxy server, since the utilization of persistent TCP connections is very low, as shown in Subsection 2.2.

Figure 5: Simulation Results: Comparison of HTTP/1.0 and HTTP/1.1 (Total Transfer Size [MB] vs. # of Client Hosts, schemes (1) and (2))

4.2 Evaluation of Proxy Server Performance

We first investigate the performance observed at the proxy server. In Figure 6, we show the performance of the proxy server as a function of the number of client hosts, where we change the length of the persistent timer for persistent TCP connections at the proxy server to 5 sec, 15 sec and 30 sec in Figures 6(a), 6(b), and 6(c), respectively. It is clear from Figure 6 that the performance of the original scheme (scheme (1)) decreases as the number of client hosts becomes larger than 200.
This is because when the number of client hosts is larger than N_max, the proxy server begins to reject some of the document transfer requests, although most of the N_max TCP connections are idle, as analytically shown in Subsection 2.2; these idle connections do nothing but waste the resources of the proxy server. The results of scheme (2) in Figure 6 show that E²-ATBT can improve the proxy server performance regardless of the number of client hosts. However, they also show performance degradation when the number of client hosts is large. This means that E²-ATBT alone is not enough to solve the problem of idle persistent TCP connections, and that it is necessary to introduce a connection management scheme to overcome this problem. We can also see that scheme (3) significantly improves the performance of the proxy server, especially when the number of client hosts is large. This is because when the proxy server cannot accept all the incoming connections from the client hosts (which corresponds to the case where the number of client hosts is larger than 200 in Figure 6), scheme (3) closes idle TCP connections so that newly arriving TCP connections can be established. As a result, the number of TCP connections that actually transfer documents is largely increased. Scheme (4) can also improve the performance of the proxy server, especially when the number of client hosts is 100 or 200. In the case of a larger number of client hosts (500 hosts), however, the degree of performance improvement is slightly decreased. This can be explained as follows. When the number of client hosts is small, most of the persistent TCP connections at the proxy server are kept established. Therefore, the socket buffer assigned to the persistent TCP connections can be effectively re-assigned to other active TCP connections by scheme (4).
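The scheme (4) decay rule stated above (halve the buffer assigned to an idle persistent connection every second, down to a 1 KByte floor, and return the reclaimed space to the pool that active connections draw from) can be sketched as follows. The Connection class and the pool bookkeeping are illustrative assumptions, not the simulator code.

```python
KB = 1024
MIN_BUF = 1 * KB        # floor for an idle persistent connection
INITIAL_BUF = 16 * KB   # fixed per-connection assignment in the original scheme

class Connection:
    def __init__(self):
        self.buf = INITIAL_BUF
        self.idle = False   # True once the connection becomes persistent/idle

def decay_idle_buffers(connections, pool):
    """Run once per second: halve each idle connection's buffer."""
    reclaimed = 0
    for c in connections:
        if c.idle and c.buf > MIN_BUF:
            new_buf = max(MIN_BUF, c.buf // 2)
            reclaimed += c.buf - new_buf
            c.buf = new_buf
    return pool + reclaimed   # reclaimed bytes become available to active connections

conns = [Connection() for _ in range(4)]
conns[0].idle = conns[1].idle = True   # two connections have gone idle
pool = 0
for _ in range(4):                     # four one-second timer ticks
    pool = decay_idle_buffers(conns, pool)
```

After four ticks each idle connection's buffer has shrunk from 16 KBytes to the 1 KByte floor, while active connections keep their full assignment; the longer a connection stays idle, the less it holds.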
When the number of client hosts is large, on the other hand, the persistent TCP connections are likely to be closed before scheme (4) begins to decrease the assigned socket buffer. As a result, scheme (4) can do almost nothing for the persistent TCP connections. Next, we evaluate the effect of the length of the persistent timer.

Figure 6: Simulation Result: Total Transfer Size [MB] vs. # of Client Hosts for schemes (1)-(4) ((a) persistent timer: 5 sec; (b) persistent timer: 15 sec; (c) persistent timer: 30 sec)

We first focus on schemes (1) and (2) in Figure 6. In the case of 50 client hosts, the performance becomes higher as the persistent timer becomes larger. The reason is that with a longer persistent timer, successive document transfers can be performed over persistent connections and the connection setup time can be removed. In the case of a larger number of client hosts, however, the performance degrades when the persistent timer is large. This is caused by the waste of proxy server resources resulting from idle TCP connections. It can also be observed from Figures 6(a) through 6(c) that schemes (3) and (4) provide much higher performance than schemes (1) and (2), regardless of the length of the persistent timer, especially when the number of client hosts is large. This is because schemes (3) and (4) can manage persistent TCP connections as expected. That is, schemes (3) and (4) can utilize the merits of persistent TCP connections in the case of a smaller number of client hosts, and they can prevent the proxy server resources from being wasted and assign them effectively to active TCP connections in the case of a larger number of client hosts.

4.3 Evaluation of Response Time

We next show the evaluation results of the response times of document transfer. We define the response time as the time from when a client host issues a document transfer request to when it finishes receiving the requested document. Figure 7 shows the simulation results. We plot the response time as a function of the document size for the four schemes.
From this figure, we can clearly observe that the response time is much improved when our proposed scheme is applied, especially for a large number of client hosts (Figures 7(b)-(d)). However, when the number of client hosts is 50, the proposed scheme does not help improve the response time. In this case, the server resources are sufficient to accommodate 50 client hosts, and all TCP connections are immediately established at the proxy server. Therefore, the response time cannot be improved much. Note that since E²-ATBT can improve the throughput of TCP data transfer to some degree, the proxy server performance can still be improved, as was shown in the previous subsection. Although schemes (3) and (4) largely improve the response time, there is little difference between the two schemes. This can be explained as follows. Scheme (4) decreases the socket buffer assigned to persistent TCP connections and re-assigns it to

Figure 7: Simulation Result: Response Time [sec] vs. Document Size [Byte] for schemes (1)-(4) ((a) 50 client hosts; (b) 100; (c) 200; (d) 500)

other active TCP connections. Although the throughput of the active TCP connections is thereby improved, its effect on the response time is very small compared with the effect of introducing scheme (3). Nevertheless, scheme (4) is worth using at the proxy server, since it has a positive effect on the proxy server performance, as shown in Figure 6.

5 Implementation and Experiments

In this section, we first give an implementation overview of our proposed scheme on an actual machine running FreeBSD 4.0. We then discuss some of the experimental results in order to confirm the effectiveness of the proposed scheme.

5.1 Implementation Issues

Our proposed scheme consists of two algorithms: the enhanced E-ATBT (E²-ATBT) proposed in Subsection 3.1, and the connection management scheme described in Subsection 3.2. In [], we have confirmed the effectiveness of the original E-ATBT algorithm through experiments. To realize E²-ATBT, we need to implement new mechanisms that take account of the connection dependency and control the receive socket buffer at the proxy server. Two methods can be considered for this purpose, as described in Subsection 3.1: monitoring the utilization of the send/receive socket buffer, and adding some information to data/ACK packets.
As a first step, we are now implementing the latter method, using additional bits to identify the connection dependency and the size of the send socket buffer at the Web server. Since this method requires cooperation between the server and client hosts, it is not a realistic solution. However, it lets us obtain an upper limit on the performance that we can expect from our

Figure 8: Connection Management Scheme (time scheduling list of (sfd, proc) entries with insert and delete operations)

Figure 9: Implementation Experiment System (# of users: 50, 100, 200, 500; upper limit: 400 connections; threshold: 300 connections; persistent timer: 5 seconds; cache hit ratio: 0.5)

proposed method, by comparing it with the former, observation-based method. We will include the additional results of the above-mentioned algorithms in the final paper. To implement the connection management scheme, we have to monitor the utilization of resources at the proxy server and maintain an adequate number of persistent TCP connections. Monitoring the resources at the proxy server is done as follows. The resources for establishing TCP connections are, in our case, mbuf, file descriptors, and control blocks, as described in Subsection 2.1. The amounts of these resources cannot be changed dynamically once the kernel has booted. However, the total and remaining amounts of these resources can be observed in the kernel. Therefore, we introduce threshold values of utilization for these resources, and if the utilization of any of these resources reaches its threshold, the proxy server starts closing persistent TCP connections and releases the resources assigned to those connections. Figure 8 sketches our mechanism for managing persistent TCP connections at the proxy server. When a TCP connection finishes transmitting a requested document and becomes idle, the proxy server records the socket file descriptor and the process number as a new entry in the time scheduling list, which is used by the kernel to handle the persistent TCP connections. Note that a new entry is added to the end of the time scheduling list. When the proxy server decides to close a persistent TCP connection, it selects the connection at the top of the time scheduling list.
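The list management just described (append an (sfd, proc) entry at the tail when a connection goes idle, close from the head when resources run short, and unlink an entry when the connection becomes active again or its persistent timer expires) can be sketched as follows. This is an illustrative user-level model, not the FreeBSD kernel code; an ordered dictionary stands in for the pointer-linked list.

```python
from collections import OrderedDict

class TimeSchedulingList:
    """Oldest-idle-first list of persistent connections, keyed by socket fd."""

    def __init__(self):
        self._entries = OrderedDict()      # sfd -> proc, in order of going idle

    def insert(self, sfd, proc):
        """Connection became idle: append its entry at the tail."""
        self._entries[sfd] = proc

    def close_oldest(self):
        """Resources short: take the head entry (oldest idle connection)."""
        return self._entries.popitem(last=False)

    def delete(self, sfd):
        """Connection reactivated, or its persistent timer expired."""
        self._entries.pop(sfd, None)

tsl = TimeSchedulingList()
tsl.insert(3, "proc-a")
tsl.insert(4, "proc-b")
tsl.insert(5, "proc-c")
tsl.delete(4)                 # fd 4 became active again before being closed
oldest = tsl.close_oldest()   # fd 3, the oldest remaining idle connection
```

All three operations are O(1), matching the "simple pointer manipulations" property of the kernel list.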
In this way, the proxy server closes the oldest persistent connection first. When a persistent TCP connection in the list becomes active again before being closed, or when it is closed by persistent timer expiration, the proxy server removes the corresponding entry from the list. All operations on the time scheduling list can be performed with simple pointer manipulations.

5.2 Implementation Experiments

We now demonstrate the effectiveness of our implemented scheme through experiments. Figure 9 shows our experimental system. In this system, we implement our proposed scheme at the proxy server, running the Squid proxy server [9] on FreeBSD 4.0. The amount of proxy server resources is set such that the proxy server can accommodate up to 400 TCP connections simultaneously. The threshold value, at which the proxy server begins to close persistent TCP connections, is set to 300 connections. The proxy server monitors the utilization of its resources every second. We intentionally set the size of the proxy server cache to 1024 KBytes, so that the cache hit ratio becomes 0.5. The length of the persistent timer is set to 5 seconds. The client host uses httperf [2] to generate document transfer requests. As in the simulation experiments, the access model of each user at the client host (the distribution of requested document sizes and the think time between successive requests) follows that reported in [5]. When a request is rejected by the proxy server due to lack of resources, the client host resends the request immediately. We compare the proposed scheme with the original scheme, in which no mechanism proposed in this paper is used. Each of the experimental results presented below is the average over ten experiment runs.

Evaluation of Proxy Server Performance

We first evaluate the proxy server performance.
In Figure 10, we compare the throughput of the proposed scheme with that of the original scheme as a function of the number of users at the client host. Here we define the throughput of the proxy server as the total number of documents sent/received by the proxy server, divided by the experiment time (10 minutes). It is clear that when the number of users is large, the throughput of the proposed scheme is larger than that of the original scheme, whereas both provide almost the same throughput when the number of users is small. When the number of users is small, the proxy server can accommodate all users' TCP connections without running short of resources. When the number of users becomes large, the original scheme cannot accept all of the document transfer requests, since many TCP connections established at the proxy server are idle and occupy the server resources, as described by the analysis in Subsection 2.2. On the other hand, the proposed scheme can accept a larger number of document transfer requests than the original scheme, since the proxy server forces idle TCP connections to close when the proxy server resources run short and assigns the released resources to newly arriving TCP connections. Table 1 summarizes the number of TCP connections established at the proxy server for document transfer during the experimental run. When the number of users becomes large, the number of established TCP connections is increased by the proposed scheme, which results in the increase of server throughput explained above.


More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

RD-TCP: Reorder Detecting TCP

RD-TCP: Reorder Detecting TCP RD-TCP: Reorder Detecting TCP Arjuna Sathiaseelan and Tomasz Radzik Department of Computer Science, King s College London, Strand, London WC2R 2LS {arjuna,radzik}@dcs.kcl.ac.uk Abstract. Numerous studies

More information

Question Points Score total 100

Question Points Score total 100 CS457: Computer Networking Date: 3/21/2008 Name: Instructions: 1. Be sure that you have 8 questions 2. Be sure your answers are legible. 3. Write your Student ID at the top of every page 4. This is a closed

More information

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput

Topics. TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput Topics TCP sliding window protocol TCP PUSH flag TCP slow start Bulk data throughput 2 Introduction In this chapter we will discuss TCP s form of flow control called a sliding window protocol It allows

More information

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks

RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks > REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < : A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks Visvasuresh Victor Govindaswamy,

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START MIDTERM EXAMINATION #2 NETWORKING CONCEPTS 03-60-367-01 U N I V E R S I T Y O F W I N D S O R - S c h o o l o f C o m p u t e r S c i e n c e Fall 2011 Question Paper NOTE: Students may take this question

More information

Computer Communication Networks Midterm Review

Computer Communication Networks Midterm Review Computer Communication Networks Midterm Review ICEN/ICSI 416 Fall 2018 Prof. Aveek Dutta 1 Instructions The exam is closed book, notes, computers, phones. You can use calculator, but not one from your

More information

A Study on TCP Buffer Management Algorithm for Improvement of Network Performance in Grid Environment

A Study on TCP Buffer Management Algorithm for Improvement of Network Performance in Grid Environment A Study on TCP Buffer Management Algorithm for Improvement of Network Performance in Grid Environment Yonghwan Jeong 1, Minki Noh 2, Hyewon K. Lee 1, and Youngsong Mun 1 1 School of Computing, Soongsil

More information

Investigating Forms of Simulating Web Traffic. Yixin Hua Eswin Anzueto Computer Science Department Worcester Polytechnic Institute Worcester, MA

Investigating Forms of Simulating Web Traffic. Yixin Hua Eswin Anzueto Computer Science Department Worcester Polytechnic Institute Worcester, MA Investigating Forms of Simulating Web Traffic Yixin Hua Eswin Anzueto Computer Science Department Worcester Polytechnic Institute Worcester, MA Outline Introduction Web Traffic Characteristics Web Traffic

More information

A Relative Bandwidth Allocation Method Enabling Fast Convergence in XCP

A Relative Bandwidth Allocation Method Enabling Fast Convergence in XCP A Relative Bandwidth Allocation Method Enabling Fast Convergence in XCP Hanh Le Hieu,KenjiMasui 2, and Katsuyoshi Iida 2 Graduate School of Science and Engineering, Tokyo Institute of Technology 2 Global

More information

Considering Spurious Timeout in Proxy for Improving TCP Performance in Wireless Networks

Considering Spurious Timeout in Proxy for Improving TCP Performance in Wireless Networks Considering Spurious Timeout in Proxy for Improving TCP Performance in Wireless Networks YuChul Kim Telecommunication R&D Center, Samsung Electronics,Co., Ltd yuchul.kim@samsung.com DongHo Cho Communication

More information

SPDY - A Web Protocol. Mike Belshe Velocity, Dec 2009

SPDY - A Web Protocol. Mike Belshe Velocity, Dec 2009 SPDY - A Web Protocol Mike Belshe Velocity, Dec 2009 What is SPDY? Concept SPDY is an application layer protocol for transporting content over the web with reduced latency. Basic Features 1. Multiplexed

More information

Documents. Configuration. Important Dependent Parameters (Approximate) Version 2.3 (Wed, Dec 1, 2010, 1225 hours)

Documents. Configuration. Important Dependent Parameters (Approximate) Version 2.3 (Wed, Dec 1, 2010, 1225 hours) 1 of 7 12/2/2010 11:31 AM Version 2.3 (Wed, Dec 1, 2010, 1225 hours) Notation And Abbreviations preliminaries TCP Experiment 2 TCP Experiment 1 Remarks How To Design A TCP Experiment KB (KiloBytes = 1,000

More information

CS457 Transport Protocols. CS 457 Fall 2014

CS457 Transport Protocols. CS 457 Fall 2014 CS457 Transport Protocols CS 457 Fall 2014 Topics Principles underlying transport-layer services Demultiplexing Detecting corruption Reliable delivery Flow control Transport-layer protocols User Datagram

More information

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver

A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver 1 A Routing Protocol for Utilizing Multiple Channels in Multi-Hop Wireless Networks with a Single Transceiver Jungmin So Dept. of Computer Science, and Coordinated Science Laboratory University of Illinois

More information

A CONTROL THEORETICAL APPROACH TO A WINDOW-BASED FLOW CONTROL MECHANISM WITH EXPLICIT CONGESTION NOTIFICATION

A CONTROL THEORETICAL APPROACH TO A WINDOW-BASED FLOW CONTROL MECHANISM WITH EXPLICIT CONGESTION NOTIFICATION A CONTROL THEORETICAL APPROACH TO A WINDOW-BASED FLOW CONTROL MECHANISM WITH EXPLICIT CONGESTION NOTIFICATION Hiroyuki Ohsaki, Masayuki Murata, Toshimitsu Ushio, and Hideo Miyahara Department of Information

More information

TCP-Peach and FACK/SACK Options: Putting The Pieces Together

TCP-Peach and FACK/SACK Options: Putting The Pieces Together TCP-Peach and FACK/SACK Options: Putting The Pieces Together Giacomo Morabito, Renato Narcisi, Sergio Palazzo, Antonio Pantò Dipartimento di Ingegneria Informatica e delle Telecomunicazioni University

More information

Bandwidth Allocation & TCP

Bandwidth Allocation & TCP Bandwidth Allocation & TCP The Transport Layer Focus Application Presentation How do we share bandwidth? Session Topics Transport Network Congestion control & fairness Data Link TCP Additive Increase/Multiplicative

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

3. Evaluation of Selected Tree and Mesh based Routing Protocols

3. Evaluation of Selected Tree and Mesh based Routing Protocols 33 3. Evaluation of Selected Tree and Mesh based Routing Protocols 3.1 Introduction Construction of best possible multicast trees and maintaining the group connections in sequence is challenging even in

More information

Chapter 3 Review Questions

Chapter 3 Review Questions Chapter 3 Review Questions. 2. 3. Source port number 6 and destination port number 37. 4. TCP s congestion control can throttle an application s sending rate at times of congestion. Designers of applications

More information

Outline 9.2. TCP for 2.5G/3G wireless

Outline 9.2. TCP for 2.5G/3G wireless Transport layer 9.1 Outline Motivation, TCP-mechanisms Classical approaches (Indirect TCP, Snooping TCP, Mobile TCP) PEPs in general Additional optimizations (Fast retransmit/recovery, Transmission freezing,

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Brad Karp UCL Computer Science CS 3035/GZ01 31 st October 2013 Outline Slow Start AIMD Congestion control Throughput, loss, and RTT equation Connection

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

CS244a: An Introduction to Computer Networks

CS244a: An Introduction to Computer Networks Grade: MC: 7: 8: 9: 10: 11: 12: 13: 14: Total: CS244a: An Introduction to Computer Networks Final Exam: Wednesday You are allowed 2 hours to complete this exam. (i) This exam is closed book and closed

More information

Master s Thesis. A Construction Method of an Overlay Network for Scalable P2P Video Conferencing Systems

Master s Thesis. A Construction Method of an Overlay Network for Scalable P2P Video Conferencing Systems Master s Thesis Title A Construction Method of an Overlay Network for Scalable P2P Video Conferencing Systems Supervisor Professor Masayuki Murata Author Hideto Horiuchi February 14th, 2007 Department

More information

Overview Content Delivery Computer Networking Lecture 15: The Web Peter Steenkiste. Fall 2016

Overview Content Delivery Computer Networking Lecture 15: The Web Peter Steenkiste. Fall 2016 Overview Content Delivery 15-441 15-441 Computer Networking 15-641 Lecture 15: The Web Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 Web Protocol interactions HTTP versions Caching Cookies

More information

Web, HTTP and Web Caching

Web, HTTP and Web Caching Web, HTTP and Web Caching 1 HTTP overview HTTP: hypertext transfer protocol Web s application layer protocol client/ model client: browser that requests, receives, displays Web objects : Web sends objects

More information

Good Ideas So Far Computer Networking. Outline. Sequence Numbers (reminder) TCP flow control. Congestion sources and collapse

Good Ideas So Far Computer Networking. Outline. Sequence Numbers (reminder) TCP flow control. Congestion sources and collapse Good Ideas So Far 15-441 Computer Networking Lecture 17 TCP & Congestion Control Flow control Stop & wait Parallel stop & wait Sliding window Loss recovery Timeouts Acknowledgement-driven recovery (selective

More information

Programming Project. Remember the Titans

Programming Project. Remember the Titans Programming Project Remember the Titans Due: Data and reports due 12/10 & 12/11 (code due 12/7) In the paper Measured Capacity of an Ethernet: Myths and Reality, David Boggs, Jeff Mogul and Chris Kent

More information

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup

Chapter 4. Routers with Tiny Buffers: Experiments. 4.1 Testbed experiments Setup Chapter 4 Routers with Tiny Buffers: Experiments This chapter describes two sets of experiments with tiny buffers in networks: one in a testbed and the other in a real network over the Internet2 1 backbone.

More information

Lecture 8. TCP/IP Transport Layer (2)

Lecture 8. TCP/IP Transport Layer (2) Lecture 8 TCP/IP Transport Layer (2) Outline (Transport Layer) Principles behind transport layer services: multiplexing/demultiplexing principles of reliable data transfer learn about transport layer protocols

More information

THE NETWORK PERFORMANCE OVER TCP PROTOCOL USING NS2

THE NETWORK PERFORMANCE OVER TCP PROTOCOL USING NS2 THE NETWORK PERFORMANCE OVER TCP PROTOCOL USING NS2 Ammar Abdulateef Hadi, Raed A. Alsaqour and Syaimak Abdul Shukor School of Computer Science, Faculty of Information Science and Technology, University

More information

Name Student ID Department/Year. Midterm Examination. Introduction to Computer Networks Class#: 901 E31110 Fall 2006

Name Student ID Department/Year. Midterm Examination. Introduction to Computer Networks Class#: 901 E31110 Fall 2006 Name Student ID Department/Year Midterm Examination Introduction to Computer Networks Class#: 901 E31110 Fall 2006 9:20-11:00 Tuesday November 14, 2006 Prohibited 1. You are not allowed to write down the

More information

ECEN Final Exam Fall Instructor: Srinivas Shakkottai

ECEN Final Exam Fall Instructor: Srinivas Shakkottai ECEN 424 - Final Exam Fall 2013 Instructor: Srinivas Shakkottai NAME: Problem maximum points your points Problem 1 10 Problem 2 10 Problem 3 20 Problem 4 20 Problem 5 20 Problem 6 20 total 100 1 2 Midterm

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Stefano Vissicchio UCL Computer Science COMP0023 Recap: Last Lecture Transport Concepts Layering context Transport goals Transport mechanisms and design

More information

CHAPTER 5 PROPAGATION DELAY

CHAPTER 5 PROPAGATION DELAY 98 CHAPTER 5 PROPAGATION DELAY Underwater wireless sensor networks deployed of sensor nodes with sensing, forwarding and processing abilities that operate in underwater. In this environment brought challenges,

More information

Competing Tahoe Senders Adam Stambler Brian Do Group 7 12/7/2010. Computer and Communication Networks : 14:332:423

Competing Tahoe Senders Adam Stambler Brian Do Group 7 12/7/2010. Computer and Communication Networks : 14:332:423 Competing Tahoe Senders Adam Stambler Brian Do Group 7 12/7/2010 Computer and Communication Networks : 14:332:423 1 Break Down of Contributions Adam Stambler Brian Do Algorithm Design 50 50 Coding 70 30

More information

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks

Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Reasons not to Parallelize TCP Connections for Fast Long-Distance Networks Zongsheng Zhang Go Hasegawa Masayuki Murata Osaka University Contents Introduction Analysis of parallel TCP mechanism Numerical

More information

Receiver-initiated Sending-rate Control based on Data Receive Rate for Ad Hoc Networks connected to Internet

Receiver-initiated Sending-rate Control based on Data Receive Rate for Ad Hoc Networks connected to Internet Receiver-initiated Sending-rate Control based on Data Receive Rate for Ad Hoc Networks connected to Internet Akihisa Kojima and Susumu Ishihara Graduate School of Engineering, Shizuoka University Graduate

More information

Switch Configuration message sent 1 (1, 0, 1) 2

Switch Configuration message sent 1 (1, 0, 1) 2 UNIVESITY COLLEGE LONON EPATMENT OF COMPUTE SCIENCE COMP00: Networked Systems Problem Set istributed: nd November 08 NOT ASSESSE, model answers released: 9th November 08 Instructions: This problem set

More information

******************************************************************* *******************************************************************

******************************************************************* ******************************************************************* ATM Forum Document Number: ATM_Forum/96-0518 Title: Performance of TCP over UBR and buffer requirements Abstract: We study TCP throughput and fairness over UBR for several buffer and maximum window sizes.

More information

Effects of Applying High-Speed Congestion Control Algorithms in Satellite Network

Effects of Applying High-Speed Congestion Control Algorithms in Satellite Network Effects of Applying High-Speed Congestion Control Algorithms in Satellite Network Xiuchao Wu, Mun Choon Chan, and A. L. Ananda School of Computing, National University of Singapore Computing 1, Law Link,

More information

CIS 632 / EEC 687 Mobile Computing

CIS 632 / EEC 687 Mobile Computing CIS 632 / EEC 687 Mobile Computing TCP in Mobile Networks Prof. Chansu Yu Contents Physical layer issues Communication frequency Signal propagation Modulation and Demodulation Channel access issues Multiple

More information

Router buffer re-sizing for short-lived TCP flows

Router buffer re-sizing for short-lived TCP flows Router buffer re-sizing for short-lived TCP flows Takeshi Tomioka Graduate School of Law Osaka University 1- Machikaneyama, Toyonaka, Osaka, 5-3, Japan Email: q5h@lawschool.osaka-u.ac.jp Go Hasegawa, Masayuki

More information

Tuning RED for Web Traffic

Tuning RED for Web Traffic Tuning RED for Web Traffic Mikkel Christiansen, Kevin Jeffay, David Ott, Donelson Smith UNC, Chapel Hill SIGCOMM 2000, Stockholm subsequently IEEE/ACM Transactions on Networking Vol. 9, No. 3 (June 2001)

More information

15-441: Computer Networks Homework 3

15-441: Computer Networks Homework 3 15-441: Computer Networks Homework 3 Assigned: Oct 29, 2013 Due: Nov 12, 2013 1:30 PM in class Name: Andrew ID: 1 TCP 1. Suppose an established TCP connection exists between sockets A and B. A third party,

More information

Improving the Robustness of TCP to Non-Congestion Events

Improving the Robustness of TCP to Non-Congestion Events Improving the Robustness of TCP to Non-Congestion Events Presented by : Sally Floyd floyd@acm.org For the Authors: Sumitha Bhandarkar A. L. Narasimha Reddy {sumitha,reddy}@ee.tamu.edu Problem Statement

More information

UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING

UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING UNIVERSITY OF TORONTO FACULTY OF APPLIED SCIENCE AND ENGINEERING ECE361 Computer Networks Midterm March 09, 2016, 6:15PM DURATION: 75 minutes Calculator Type: 2 (non-programmable calculators) Examiner:

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

User Datagram Protocol (UDP):

User Datagram Protocol (UDP): SFWR 4C03: Computer Networks and Computer Security Feb 2-5 2004 Lecturer: Kartik Krishnan Lectures 13-15 User Datagram Protocol (UDP): UDP is a connectionless transport layer protocol: each output operation

More information

Congestion / Flow Control in TCP

Congestion / Flow Control in TCP Congestion and Flow Control in 1 Flow Control and Congestion Control Flow control Sender avoids overflow of receiver buffer Congestion control All senders avoid overflow of intermediate network buffers

More information

Chapter 6. What happens at the Transport Layer? Services provided Transport protocols UDP TCP Flow control Congestion control

Chapter 6. What happens at the Transport Layer? Services provided Transport protocols UDP TCP Flow control Congestion control Chapter 6 What happens at the Transport Layer? Services provided Transport protocols UDP TCP Flow control Congestion control OSI Model Hybrid Model Software outside the operating system Software inside

More information