Packet Marking for Web traffic in Networks with RIO Routers
Marco Mellia, Politecnico di Torino, Torino, Italy; Ion Stoica, University of California, Berkeley, CA; Hui Zhang, Carnegie Mellon University, PA

Abstract: RIO routers, initially proposed in the context of Differentiated Services networks, distinguish between two classes of packets, In-profile and Out-profile, based on a one-bit marking in packet headers. Traditionally, packet marking is performed by ingress routers according to a user profile specified in terms of bandwidth. In this paper, we consider an alternative strategy in which packets are marked based on the state of individual TCP flows. This marking scheme can be implemented either at the end systems or at ingress routers (thus avoiding changes at end systems). In addition, we present a variant of our scheme that integrates Diffserv-style packet marking based on the user profile. Through extensive simulations, we demonstrate that the proposed scheme can significantly reduce the completion time of TCP flows; in particular, the completion time of short TCP flows is halved. These results are consistent over different TCP variants and in the presence of ECN.

I. INTRODUCTION

In today's Internet, routers process all packets uniformly. As a result, only one basic service is provided: best-effort datagram delivery. There have been a number of efforts to enhance the Internet to support QoS and service differentiation [], []. To support this, routers need to implement more sophisticated scheduling and buffer management policies that discriminate packets according to the service classes they belong to. While these more sophisticated router functionalities are designed mainly to support QoS and service differentiation, in this paper we study whether it is possible to exploit them to improve the performance of best-effort TCP traffic, and in particular of WWW traffic, which is the dominant type of traffic in today's Internet.
We consider a simple buffer management algorithm called RIO, which distinguishes between two classes of packets, In-profile (IN) and Out-profile (OUT), based on a one-bit marking in packet headers. RIO was proposed to support a type of Differentiated Services called Assured Service or Expected Capacity Service [], []. When the buffer starts to build up, a RIO router always drops OUT packets before IN ones. In this way, IN packets receive better service than OUT packets.

In this paper, we study the feasibility of improving the performance of TCP flows in a network with RIO routers by marking packets according to per-flow TCP state. The key observation is that TCP performance suffers significantly when it operates in the small window regime. This is because losses during the small window regime may cause retransmission timeouts (RTOs), which ultimately result in TCP entering the Slow Start phase. The objective of the TCP-aware marking algorithm is then to selectively mark packets in order to reduce the possibility of TCP entering this undesirable state. We propose a simple scheme and evaluate its performance by simulation. The proposed scheme can reduce the completion time of short-lived TCP flows by half. These gains are consistent over different TCP variants, ranging from TCP-RENO to the combination of Explicit Congestion Notification with TCP-SACK. The scheme can be easily implemented in end systems. In addition, we describe an implementation in which only ingress routers mark the packets. In this way, the marking algorithm can be implemented as a value-added service at the network edge, without changing the end systems.

The rest of the paper is organized as follows: the following two sections review the RIO buffer management scheme and the TCP congestion control algorithm. In Section IV we present a packet marking scheme that improves TCP performance under a wide range of scenarios. We then evaluate the proposed scheme through simulations in Section V.
In Section VI we discuss implementation and deployment issues. Finally, we summarize our findings in Section VII.

II. THE RIO ALGORITHM

The RIO algorithm distinguishes between two classes of packets, IN and OUT. While both types of packets share the same FIFO queue, they are dropped according to different policies. Intuitively, RIO can be viewed as the combination of two RED algorithms [] with different drop probability curves, such that OUT packets are dropped earlier than IN packets. Thus, two sets of RED-like parameters are used: {min_th_in, max_th_in, max_p_in, avg_in}, which are used to compute the dropping probability of IN packets, and {min_th_out, max_th_out, max_p_out, avg_total}, which are used to compute the dropping probability of OUT packets.

There are several points worth noting. First, since max_th_out and min_th_in are set such that max_th_out < min_th_in, a RIO router will first drop OUT packets when the queue starts to build up, then drop all OUT packets if the queue continues to build up, and finally drop IN packets as a last resort. Second, avg_in, which is used to compute the dropping probability of IN packets, accounts for IN packets only, whereas avg_total, which is used to compute the dropping probability of OUT packets, accounts for both IN and OUT packets. Note that if all packets are of the same type (i.e., either IN or OUT), then RIO reduces to RED.

Unless otherwise specified, in the case of RED we use the (min_th, max_th, max_p) parameters suggested in []. In the case of RIO we use the same set of parameters for the OUT packets, and a set with higher thresholds for the IN packets. Note that as long as only a small fraction of traffic is marked, the average queue size of IN packets will be virtually zero, which means that the actual values of the parameters used by RIO-IN will have little impact on the marked traffic. This allows us to study our marking algorithms independently of the RIO-IN parameters.
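As a concrete illustration, the two-curve drop decision can be sketched as follows. This is a minimal sketch: the threshold and max_p values below are illustrative, not the ones used in the paper, and the computation of the average queue sizes themselves is omitted.

```python
import random

def red_drop_prob(avg, min_th, max_th, max_p):
    """RED-style drop probability as a function of the average queue size."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)

def rio_should_drop(is_in, avg_in, avg_total,
                    in_params=(40, 70, 0.02),    # illustrative IN curve
                    out_params=(5, 15, 0.1)):    # illustrative OUT curve
    """RIO: IN packets are judged against avg_in with the IN curve,
    OUT packets against avg_total with the more aggressive OUT curve."""
    if is_in:
        p = red_drop_prob(avg_in, *in_params)
    else:
        p = red_drop_prob(avg_total, *out_params)
    return random.random() < p
```

Because max_th_out < min_th_in, the OUT curve saturates (all OUT packets dropped) before the IN curve even starts dropping, which is exactly the "OUT first, IN as a last resort" behavior described above.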
Finally, we assume a total buffer size of three times the min_th threshold.
(C) IEEE

III. TCP CONGESTION CONTROL

Due to the predominance of TCP traffic, the TCP congestion control mechanism plays a major role in the dynamics of today's Internet traffic. While there are many variants of TCP [], [], [7], [], they share a common design principle: the loss of a packet is interpreted by the source as a congestion notification. When such an event occurs, the source simply slows down. TCP infers that a packet is lost whenever one of two events occurs: a Retransmission Time Out (RTO), or the arrival of three duplicate ACKs. Of these two events, the RTO is the most undesirable one, as the timeout period is usually very large. There are two reasons for this. First, the computation of the timeout period takes into account the variance of the Round Trip Time (RTT), which can be quite high. Second, most of today's operating systems use a very coarse timer granularity, which in practice leads to a large minimum timeout period. In addition, once an RTO occurs, TCP also resets the congestion window to one and re-enters the Slow Start phase. As a result, RTO events can significantly affect TCP throughput. In contrast, three duplicate ACKs trigger Fast Retransmit and Fast Recovery, which is by far a more efficient way of detecting and handling congestion. Unfortunately, there are many cases in which Fast Retransmit cannot be triggered. As a result, the RTO remains the last resort to detect packet losses, which significantly affects TCP performance. In the next section we illustrate this point by presenting a common scenario that leads to an RTO: the small window regime.

A. Small window regime

TCP performance degrades greatly when the congestion window is small. This is commonly known as the small window regime. It can happen either when the data transfer is small or when the loss rate is relatively high. Previous studies have shown that these situations are quite common, due to the large fraction of short WWW transfers and the relatively large number of highly congested links in today's Internet [], [].

In the case of small transfers, the added latency due to an RTO can be significant. The retransmission timer is initialized to a fixed number of seconds, and then updated based on Round Trip Time (RTT) measurements using an exponential weighted moving average algorithm. Thus, if the first packet of a TCP connection is lost, the source has to wait for the entire initial timeout before retransmitting it. Note that even if the RTT estimation is accurate, the coarse timer granularity can force the sender to be idle for a time period comparable to the actual completion time.

To illustrate this, consider a scenario in which FTP flows and WWW-like traffic share a bottleneck link of Mbps with a propagation delay of ms. The router that manages the bottleneck link implements the RED buffer management scheme. Ten ingress routers are connected to it by Mbps links with ms of propagation delay.
Ten egress routers are connected to the other end of the bottleneck link. Between each ingress and egress pair we assume one continuously backlogged TCP flow. To simulate a WWW-like traffic pattern, a number of TCP-RENO sources (receivers) are connected to each ingress (egress) router. The flow arrival rates are modeled by Poisson processes. The flow lengths are drawn from the distribution shown in Table I. To derive this distribution we use recent AT&T Labs measurements of Internet traffic []. In particular, we divide the flows into equally sized groups, ordered from the shortest to the longest flow, and then compute the average flow length in each group. For simplicity, in our simulations we use a fixed packet size. Note that, as expected, a large percentage of these flows are short: almost % of flows last less than five packets. We run each experiment for seconds of simulated time, and we set the average flow arrival rates such that the offered load corresponds to 9% of the bottleneck link capacity.

TABLE I. AVERAGE FLOW LENGTH PER GROUP.

Figure shows the completion times for all the WWW-like sources, using different patterns to differentiate the flow lengths. For ease of visualization, flows are grouped in increasing order of their packet length.

Fig. Completion times for different flow lengths: TCP-RENO and RED buffer management.

The layers of points shown in the figure correspond to different completion times. For example, the completion time for flows with only one packet can be (a) below one second if no packet is lost; (b) several seconds if the first packet is lost; and (c) about 9 seconds if both the first packet and its retransmission are lost.
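These layers follow directly from the RTO computation discussed above. A minimal sketch of the EWMA-based estimator, assuming the classic Jacobson/BSD constants (the paper's exact initial value and timer granularity were lost in transcription, so the figures below are assumptions):

```python
class RtoEstimator:
    """EWMA-based RTO estimation (Jacobson-style), with exponential backoff.
    Constants are the classic gains (1/8 for SRTT, 1/4 for RTTVAR); the
    initial RTO and granularity values here are illustrative assumptions."""

    def __init__(self, initial_rto=3.0, granularity=0.5):
        self.srtt = None          # smoothed RTT estimate (seconds)
        self.rttvar = None        # smoothed RTT variation
        self.granularity = granularity
        self.rto = initial_rto    # conservative default before any sample

    def update(self, rtt_sample):
        """Fold a new RTT measurement into the timeout estimate."""
        if self.srtt is None:
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        # the coarse timer granularity imposes a floor on the timeout
        self.rto = max(self.srtt + 4 * self.rttvar, 2 * self.granularity)
        return self.rto

    def on_timeout(self):
        """Exponential backoff: each successive RTO doubles the wait."""
        self.rto *= 2
```

The backoff explains the layering: a flow losing its first packet waits one initial RTO, and losing the retransmission as well adds a doubled timeout on top of it.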
The results for slightly longer flows are similar, except that when the second or third packet is lost, the timer estimate is more accurate, allowing the source to retransmit the dropped packet faster. Moreover, for longer flows the Fast Recovery algorithm can be triggered, allowing the source to quickly retransmit the dropped packet.

IV. TCP-AWARE PACKET MARKING SCHEME

In this section we propose a generic TCP-aware packet marking scheme, designed to be used in a network in which routers implement the RIO buffer management scheme. The key idea is to use packet marking to prevent RTOs from occurring. To achieve this goal, we exploit the fact that with RIO, IN packets are delivered with very high probability. More precisely, our algorithm marks packets as IN in order to allow TCP to exit the undesirable states as fast as possible. In particular:
- mark the first several packets of the flow. This protects the first packet against loss, and allows TCP to safely exit the small window regime.
- mark several packets after an RTO occurs. The purpose is to make sure that TCP exits the small window regime and, if possible, the Slow Start phase.
- mark several packets after receiving three DUP-ACKs. The idea here is to allow TCP to exit the Fast Recovery phase before losing another packet.

We present two packet marking schemes. The first is tightly integrated with the TCP protocol and takes full advantage of the TCP state; the second is decoupled from the TCP state, and thus can be implemented in an edge node. The schemes differ in (1) how many marked packets a source is allowed to send when it starts transmission or when it receives a congestion notification event, and (2) which packets are marked. With each source we associate a status variable N that represents the number of IN packets that the source is allowed to send.
When the source starts transmission, N is initialized to a default value Ns. Every time the source detects a congestion notification event, N is increased by an additional number of IN packets Na. Finally, each time a marked packet is sent, N is decremented.
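A minimal sketch of the per-source budget logic just described (the class name and the default Ns value are illustrative, not prescribed by the paper):

```python
class MarkingBudget:
    """Per-source counter N of IN packets the source may still send."""

    def __init__(self, ns=4):
        # at flow start, N is initialized to the default value Ns
        self.n = ns

    def on_congestion_notification(self, na):
        # on an RTO or triple duplicate ACK, grant Na more IN packets
        self.n += na

    def next_packet_is_marked(self):
        # each marked packet sent decrements N; once exhausted, send OUT
        if self.n > 0:
            self.n -= 1
            return True
        return False
```

For example, a source with Ns = 2 marks its first two packets IN, sends the rest as OUT, and regains marking credit only after a congestion notification.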
A. PM: Packet Marking scheme integrated with TCP

We first propose a packet marking scheme tightly integrated with the TCP protocol. In particular, we assume that when a new TCP flow starts, the source is allowed to send up to Ns marked packets. Subsequently, the source is allowed to send up to Na = sstresh marked packets at the beginning of a Slow Start phase, and up to Na = cwnd packets at the beginning of a Fast Recovery phase. The intuition behind this scheme is the following. By allowing a source to send up to sstresh marked packets at the beginning of a Slow Start phase, we increase the probability that the source exits this phase and enters Congestion Avoidance. Similarly, allowing the source to send cwnd marked packets at the beginning of a Fast Recovery phase protects the entire window of packets during this phase.

Fig. Completion time for the different flow lengths: TCP-RENO and RIO buffer management with PM.

B. S-PM: Simplified Packet Marking scheme

The packet marking scheme described in the previous section has one major drawback: since it requires knowledge of the state variables sstresh and cwnd to perform packet marking, it has to be tightly integrated with the TCP algorithm. This would require changing the TCP implementation on every platform, which at a minimum would seriously slow down the pace at which such a scheme could be adopted. To avoid this problem, we propose a second scheme which does not need to know the values of the internal TCP state variables. The key change from the previous scheme is to use a constant value for Na (instead of a value that depends on sstresh and cwnd). We call the new scheme the Simplified Packet Marking scheme, or S-PM for short, and we discuss it in more detail in Section VI. For the sake of simplicity, here we assume that only data packets can be dropped by the network; ACKs are always received by the sender.
In fact, note that we could employ a symmetric scheme, in which the receiver always sends a marked ACK in response to a marked packet (see Section VI).

C. Algorithm interpretation

The network signals congestion to a TCP source either implicitly, by dropping a packet, or explicitly, by setting ECN bits. (Explicit Congestion Notification (ECN) [9] has been proposed to provide better support for TCP congestion control: the basic idea is to explicitly advertise congestion to end-hosts by using a bit in the IP header, instead of a packet drop.) Allowing a TCP source to send marked packets after it receives the congestion notification closes the feedback loop. From the network perspective, the marked packets mean that the TCP source has received the congestion notification and is now adapting its rate. By protecting the marked packets, the network tries to avoid punishing the flows that are already adapting, and instead signals congestion to other flows. The rule of allowing an initial number of marked packets has a similar interpretation. When a new connection enters the network, it does not have enough information about the level of congestion in the network. Marking the first packets of a flow therefore notifies the network that the new flow is too young and potentially fragile for congestion notification. After the initial small window regime has ended, the TCP flow stops sending IN packets, and allows the network to give feedback about congestion.

In summary, our packet marking scheme can be seen as part of a control loop established between the source and the network: using the marking scheme, the source can reply to the network congestion notification message; in turn, the network can use this information to choose the next flow to be notified. (Throughout this paper we use the same variable notation as found in the BSD source code: sstresh denotes the Slow Start Threshold, while cwnd denotes the current value of the Congestion Window.)

V. SIMULATION RESULTS

We present simulation results to assess the performance of the proposed algorithms. In the first set of simulations we consider the scenario presented in Section III-A. We repeat the simulation with the same parameters used to obtain the results plotted in Figure. The only difference is that now each TCP source employs PM, and the bottleneck router implements the RIO buffer management scheme. In addition, we set Ns to its default value and, according to the description of PM in Section IV-A, Na = sstresh after an RTO and Na = cwnd after entering the Fast Recovery state.

Figure plots the distribution of the flow completion times. By comparing the two figures, it can be immediately noticed that the layer corresponding to the first packet being dropped is no longer present. In addition, note that all flows that send five packets or fewer appear not to lose any packets. In fact, the first flows that lose packets are the ones that send eight packets (notice the layer at the first RTO value in this case). The reason is simply that a TCP source implementing PM always sends the first Ns packets marked as IN, and therefore the probability that these packets are dropped is negligible. In contrast, the following packets are marked as OUT, and as a result the probability that they are dropped is much higher. This explains why eight-packet flows are the first to incur losses. It is interesting to note that sending eight packets represents a worst case: dropping one of the last three packets will force an RTO, as the sender will receive at most two duplicate ACKs.

To gauge the effectiveness of our scheme we define a general metric, called Relative Gain, as the relative improvement obtained by using PM.
More precisely, the Relative Gain is defined as

    Gain = (F(ref) - F(x)) / F(ref)

where F represents the measured quantity (e.g., completion time, throughput), ref represents the reference system employing the RED buffer management discipline, and x represents the system employing the RIO buffer management discipline together with either the PM or the S-PM packet marking scheme.

A. Completion Time vs. Offered Load

We consider how the average completion time of a flow varies as a function of the offered load, and what improvements can be achieved by using PM. In doing this we consider four versions of TCP: RENO, NEWRENO, SACK, and RENO FS(), the last being
a modified version of RENO proposed in []. In addition, we run a set of simulations with TCP-SACK in a network which provides support for ECN [9].

Fig. Average completion time for WWW-like traffic versus the offered load.

Figure shows the average completion time versus the offered load. As a baseline comparison we plot the minimum completion time, evaluated by assuming that no packets are ever dropped and ignoring queueing delay. In all the experiments presented in this section, the confidence level is 9%. As expected, the completion time increases dramatically as the offered load approaches the link capacity. This is mainly due to packet losses that force TCP to scale down its transmission rate, and therefore increase the completion time. Note that as the number of losses increases, the probability of having several packets dropped in the same congestion window also increases. Ultimately, this forces TCP to enter the Slow Start phase, which significantly increases the completion time.

As shown in the figure, by using PM we are able to consistently reduce the completion time across all TCP versions (black dots). The top of the gain figure reports the Relative Gain in terms of completion time for the various TCP versions with and without PM, compared to a reference network in which each TCP source implements RENO and each router implements RED. There are two things worth noting. First, PM is effective in improving the performance of every TCP version. Second, these improvements are significantly larger than the improvements achieved by the more sophisticated versions TCP-NEWRENO and SACK over TCP-RENO.
Note that even using ECN does not improve the average completion time, as the dropping rate is still very high due to the high offered load and bursty TCP behavior. Moreover, the benefits of ECN are more visible for long-lived flows, where the high number of packets sent provides the network with a high rate of feedback to the sources. This is also confirmed by Table II, which reports the average completion times for different flow lengths at a network load of 9%. Note that by using PM we reduce the completion times for all flows. In the case of short flows we achieve a gain close to 7%, and long-lived flows also reach a substantial gain. Using ECN instead reduces only the completion time of the long-lived flows, while only slightly improving the performance of the short-lived ones. It must also be noted that RENO FS() guarantees better performance when the network load is relatively low, but it is also the one which performs worst when the network is heavily loaded. This is because sending a minimum of three packets can be a wrong decision when the network is heavily loaded, and can potentially cause instability. In contrast, the gain obtained using PM is always positive and increases as the network becomes heavily loaded.

Fig. Completion Time Relative Gain: with respect to TCP-RENO (top), and same TCP version with and without PM (bottom).

TABLE II. AVERAGE COMPLETION TIME PER GROUP AND SPEEDUP GAIN. (Columns: flow length [pkts]; completion time [s] for Normal, ECN, PM; gain [%] for ECN, PM.)

The bottom part of the gain figure plots the gain in completion time obtained for each TCP version with and without PM. What is important to note here is that by using PM we achieve similar improvements in all cases. This shows again that PM is equally effective in improving the performance of each TCP version.
Also note that the gain increases with the offered load as long as the offered load is smaller than 9%. However, when the offered load exceeds 9%, the gain starts to decrease. This is because in a highly loaded network, the number of marked packets generated due to packet losses can become large enough to overflow the buffer. As a result, the marked packets are no longer protected.

VI. IMPLEMENTATION ISSUES

In this section we consider three possible variants of our marking scheme, based on where the packet marking is performed: (a) integrated with the TCP protocol stack, (b) at the ingress router, or (c) at the source by a proxy module.

A. Integrating PM into the TCP protocol stack

In this case, the marking scheme is integrated in the TCP stack, as described in Section IV-A. The advantage of this scheme is that
the entire TCP state is exposed, and therefore it can be used to make optimal marking decisions. The problem with this approach is that it is hard to limit the offered fraction of IN packets based only on these state variables. There are at least two solutions that address this problem. The first is to use a signaling protocol to inform the source of how many IN packets it can send. The second is to have the ingress router re-mark some of the IN packets as OUT. While the second approach is simpler, as it does not require a signaling protocol, un-marking packets without any knowledge of the TCP state can significantly degrade performance.

Fig. Completion time of WWW-like sources versus the token rate T.

B. Deploying S-PM at ingress routers

In this case, we implement S-PM at the ingress routers, without any changes at the end-hosts. Note that this implementation is fully compatible with the Differentiated Services architecture. In particular, the ingress router has to (1) identify the beginning of a new flow, (2) mark the first packets of the flow, (3) identify packet losses in the network, and then (4) mark the following packets as IN. The flow identification operation can be done very efficiently using a prefix matching scheme or by snooping TCP signaling packets, while a packet loss can be identified by looking at the sequence numbers in the TCP header. Indeed, if the sequence number is not greater than the previous one, and the TCP payload is not empty, then the source is retransmitting old data. As a result, the router can conclude that the flow has entered either the Fast Recovery or the Slow Start phase. However, since the ingress router cannot know the internal TCP state, it cannot differentiate between the two phases.
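The sequence-number test described above might be sketched as follows. FlowTracker and its field names are hypothetical helpers for illustration, not the paper's implementation:

```python
def is_retransmission(seq, payload_len, highest_seq_seen):
    """A data packet whose sequence number does not advance past the
    highest one already observed is a retransmission of old data."""
    return payload_len > 0 and seq <= highest_seq_seen

class FlowTracker:
    """Per-flow record of the highest data sequence number observed."""

    def __init__(self):
        self.highest = {}

    def observe(self, flow_id, seq, payload_len):
        """Return True if this packet looks like a retransmission,
        i.e., the flow has entered Slow Start or Fast Recovery and
        should be granted Na more IN packets."""
        hi = self.highest.get(flow_id, -1)
        retx = is_retransmission(seq, payload_len, hi)
        if payload_len > 0 and seq > hi:
            self.highest[flow_id] = seq
        return retx
```

Note that pure ACKs (empty payload) are never classified as retransmissions, matching the condition that the TCP payload must be non-empty.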
As a result, we use the simplified version of the packet marking algorithm described in Section IV-B. Note that the same scheme can be used on the reverse path, by using the ACK sequence number. The values Ns and Na can be set by the network manager to provision resources. Each ingress node can have a budget of IN packets that it is allowed to spend, and it can distribute this budget among all the flows that are present. An S-PM scheme implemented in the network guarantees better performance to all end systems, whichever TCP version they are using, and also simplifies the control of congestion in the IN part of the network.

C. Deploying S-PM at end-hosts

An alternative to deploying S-PM at the ingress routers is to implement a simple module at the end-host which snoops the TCP packets and marks them according to the S-PM scheme. Since this approach does not require changes in the TCP stack, it is relatively simple to develop and deploy. However, note that, similar to the case in which PM is integrated into the TCP stack, the ingress router has to be able to un-mark packets if needed, in order to avoid congestion in the IN portion of the traffic. Since the performance of this scheme is basically identical to the one in which S-PM is fully implemented at the ingresses, in the following we present simulation results only for the latter.

Fig. Completion time of WWW-like sources versus the token rate T: % S-PM-aware and % best-effort flows.

D. Simulation Results with S-PM deployed at ingress

To compare the different solutions, we implemented two types of packet markers as queue objects in the ns simulator []. Both markers use a leaky bucket model to mark packets as IN. The token bucket depth is denoted by T_max and the token arrival rate by T. While in both cases tokens are shared among all connections, the two markers differ in the way the tokens are shared.
In particular, we have:
- Aggregate Marker (AM): the node does not perform per-flow marking; a packet that enters the queue is marked as IN as soon as there are tokens in the leaky bucket.
- Per Flow Marker (PFM): as described in Section VI-B, the marker is able to recognize new flows and packet drops on a per-flow basis. A flow i that has been granted IN packets can use them only if there are tokens in the aggregate leaky bucket, i.e., T > 0. Note that N(i) is decremented even if the packet cannot be marked as IN because no tokens are available.

Figure reports the average completion time in the same network scenario as the one described in Section III-A, with a WWW offered load equal to % of the bottleneck capacity. We assume a token bucket of depth T_max and a token arrival rate T that varies up to Mbps. This results in an aggregate rate of IN traffic on the bottleneck link of up to Mbps. When the PFM scheme is used, the completion time decreases as the IN load increases, up to a threshold, after which performance degrades. This is because, as the amount of IN traffic increases, the portion of the buffer dedicated to IN packets eventually becomes congested, which negates the benefit of sending marked packets. In turn, this causes OUT packets to be dropped with high probability. As a result, once a flow ceases to send marked packets, its subsequent packets are dropped with high probability. This is likely to cause TCP to lose multiple packets within the same window, which significantly affects its performance.

In contrast, when using AM the completion time increases with the aggregate token rate. This is not surprising: the longer a flow is, the larger its congestion window is likely to be; such a flow sends more packets per unit of time, and therefore has a higher probability of consuming IN tokens from the aggregate marker.
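The difference between the two markers can be sketched as follows. This is a simplified model with a shared token bucket and a per-flow grant counter; the class and parameter names are illustrative, not taken from the ns implementation:

```python
class TokenBucket:
    """Shared leaky-bucket profile: depth T_max, arrival rate T."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens = depth
        self.last = 0.0

    def take(self, now):
        # replenish according to elapsed time, capped at the depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def am_mark(bucket, now):
    """Aggregate Marker: mark IN whenever a token is available."""
    return bucket.take(now)

def pfm_mark(bucket, grant, now):
    """Per Flow Marker: the flow must hold a grant (N(i) > 0) AND a token
    must be available. The grant is consumed even when no token is left,
    matching the rule that N(i) is decremented regardless."""
    if grant["n"] <= 0:
        return False
    grant["n"] -= 1
    return bucket.take(now)
```

With AM, long flows with large windows tend to consume the tokens; with PFM, only flows holding a per-flow grant (new flows, or flows recovering from a loss) can spend them.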
This makes short-lived flows more sensitive to the higher dropping probability, which explains the significant increase in their average completion time.
TABLE III. COMPLETION TIME WITH DIFFERENT RTTS. (Columns: Node Id, RTT [ms], completion time [s] for Normal and PFM, gain [%].)

E. PM-aware and Best-Effort Flows

A natural question is what happens when traditional best-effort TCP flows compete with PM-aware flows. This scenario is likely in the initial deployment stages, in which only a subset of ingress routers and/or end-hosts will be PM-aware. To answer this question, we modify the bottleneck scenario such that only the first five ingress routers implement PFM; the last five routers are normal routers. As a result, only half of the traffic is S-PM-aware. Figure shows the completion time for the S-PM-aware and best-effort flows as a function of the token arrival rate at the PFM ingress routers. The dashed lines correspond to the scenario where PFM is used, while the solid lines show the results when AM is used. Note that when the token rate increases, the completion times of the S-PM-aware flows decrease irrespective of the marking scheme. In contrast, the completion times of the best-effort flows increase with the token rate. Further, note that when using AM the completion times of the best-effort flows increase very fast, whereas PFM does not exhibit this problem. This makes PFM the natural choice in a heterogeneous network where PM is not completely deployed, as it avoids starvation of the best-effort traffic.

F. Simulations with more complex topologies

To gauge the effects of different RTTs on TCP performance, we consider a topology similar to the one presented in Section III-A, where the propagation delay of ingress link i grows with i, i.e., the first ingress router has the smallest delay, while the last has the largest. The total aggregate IN-profile token rate is set to Mbps. The offered load is set to % of the bottleneck link capacity. Since PFM consistently outperforms AM, in this section we present only the results obtained using PFM. We report the simulation results in Table III.
When PFM is used, the average completion time gain for short flows is in the range %-%. Note that flows with a smaller RTT obtain better gains, as eliminating the RTOs has a much higher impact in these cases.

Table IV reports the average completion times measured on the parking-lot topology in Figure 7. The topology consists of five RIO routers connected by Mbps links with ms delay. Two ingress routers are connected to each RIO router by Mbps links with ms delay. One FTP source and a number of WWW-like traffic sources are connected to each ingress router. All the receivers are connected to the last router. Each ingress router has an IN token arrival rate of . Mbps, which corresponds to a total aggregate IN rate of Mbps on the last bottleneck link. The offered load of the WWW-like traffic on the last link is %. As shown in Table IV, when the ingress routers implement PFM, the average gain in the completion time is %-%. These results suggest that our packet-marking scheme will also be highly effective in more realistic scenarios.

Fig. 7. Parking-lot topology (Web and FTP sources attached through ingress routers to a chain of five RIO routers; all receivers attached to the last router).

TABLE IV: COMPLETION TIME ON A PARKING LOT TOPOLOGY (columns: node Id, completion time [s] under Normal and PFM, gain [%]).

VII. SUMMARY

In this paper, we study several TCP-aware packet-marking schemes in networks with RIO routers. By marking packets based on the state of the TCP congestion control algorithm, it is possible to significantly reduce the probability of TCP entering undesirable states, such as retransmission timeouts and the small-window regime. This directly translates into performance improvements for TCP. Through extensive simulation experiments, we show that the proposed scheme can reduce the completion times of short-lived TCP flows by half. We propose a strategy to implement the packet-marking scheme at ingress routers. This allows the marking scheme to be offered by network providers as a value-added service.
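The TCP-aware marking rule summarized above can be illustrated as follows. This is a sketch of the idea only: the predicate names and the small-window threshold are assumptions for illustration, not the exact rules evaluated in the paper.

```python
def tcp_aware_mark(cwnd, ssthresh, is_retransmission, small_window=4):
    """Mark IN the packets whose loss would push TCP into an undesirable
    state: retransmissions (a further loss forces an RTO) and packets sent
    in the small-window regime (losses there are typically recovered only
    by timeout). Everything else is sent OUT as ordinary best effort.
    Marking slow-start packets IN as well is an added assumption."""
    if is_retransmission:
        return "IN"
    if cwnd <= small_window:        # small-window regime: protect from drops
        return "IN"
    if cwnd < ssthresh:             # slow start (assumed protected here)
        return "IN"
    return "OUT"
```

Because RIO drops IN packets far less aggressively, protecting exactly these packets lowers the RTO probability for short flows, which is where the halved completion times come from.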
In addition, it can be integrated with the profile-based marking scheme, which is also implemented at ingress routers.