Using an Artificially Intelligent Congestion Avoidance Algorithm to Augment TCP Reno and TCP Vegas Performance

John Tobler, Department of Computer Sciences, University of Wisconsin-Madison
Stirling Martin, Department of Computer Sciences, University of Wisconsin-Madison

1 Abstract

By using an artificially intelligent congestion avoidance algorithm in TCP NewReno and TCP Vegas, we sought to improve throughput by avoiding packet losses due to congestion. The TCP stack uses an artificial neural network (ANN) to detect the current level of network congestion. The congestion window is reduced proportionally to the level of congestion predicted by the ANN at the beginning of each RTT. We compared AINewReno to NewReno throughput and loss characteristics over a series of ns-2 simulations. AINewReno maintained equivalent throughput and reduced packet loss by at least 26% versus NewReno. Interestingly, AIVegas did not achieve better throughput or reduce the number of packets lost when compared to standard Vegas. This results from the combination of the aggressive packet retransmission policy inherent in Vegas and the AI congestion avoidance scheme. While this work leaves room for future investigation, our AI congestion avoidance scheme shows promise for the use of AI in source-based congestion avoidance schemes.

2 Introduction

2.1 Overview

Because many common Internet applications, including Web browsers, e-mail programs, and file transfer utilities, use TCP as their transport protocol, the congestion control mechanism of TCP can have a large effect on the performance of the global Internet. The widely deployed TCP Reno and heavily scrutinized TCP Vegas congestion avoidance algorithms have weaknesses that tend to reduce goodput and throughput.

The most widely implemented version of TCP is Reno¹. The Reno congestion avoidance implementation is a reactive scheme that determines available network bandwidth by creating packet loss. Reno increases the congestion window linearly during congestion avoidance and reduces the congestion window when either three duplicate ACKs are received or a coarse-grained timeout occurs. Because Reno uses loss to determine the available bandwidth, half of the current congestion window's worth of data is expected to be lost during a round-trip time (RTT) when a timeout occurs. Reno makes no attempt to detect network congestion; as a result, retransmissions are commonplace and the goodput (i.e., the ratio of bytes transmitted excluding duplicates to total bytes transmitted) of Reno connections is reduced.

In an attempt to avoid the inefficiencies inherent in reactive congestion avoidance schemes, Brakmo et al. [2] proposed TCP Vegas to proactively sense network congestion. While many congestion avoidance algorithms have been proposed [8][13], Vegas uses an approach similar to Wang and Crowcroft's Tri-S scheme [12]. The Tri-S scheme is based on the idea that a flattening sending rate indicates network congestion. Vegas compares the current and expected throughput every RTT and adjusts the congestion window accordingly. The throughput calculations are based on the accurate measurement of a minimum RTT value for the connection--the baseRTT. Vegas measures the baseRTT at the beginning of the connection and re-adjusts the value if a given RTT is less than the current baseRTT. In addition, the window adjustment algorithm depends on two thresholds, α and β, which correspond to the number of extra buffers the connection is occupying in the network.
Varying these thresholds can alter the aggressiveness of the Vegas implementation. Vegas has been the subject of much scrutiny since the initial claims by Brakmo et al. [2] of 37 to 71 percent higher throughput than TCP Reno. Hengartner et al. [3] show that the congestion avoidance algorithm of Vegas contributes only marginally to the improved Vegas throughput. Ahn et al. [1] validated the initial results but found that Vegas does not receive its fair share of network bandwidth when competing with the more aggressive Reno. In addition, Mo et al. [10] identified two further problems with Vegas congestion avoidance: rerouting of packets in the middle of a connection can cause Vegas to misidentify network congestion and reduce connection throughput, and Vegas is susceptible to inducing persistent congestion. Both problems derive from inaccuracies in determining the baseRTT for a given connection.

¹ This includes Reno variants such as NewReno.
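As a point of reference for the modifications described later, the per-RTT Vegas window adjustment summarized above can be written roughly as follows. This is a simplified illustration of the published algorithm, not the ns-2 source; the variable names and the segment-based form of the thresholds are assumptions made for readability.

    def vegas_adjust_cwnd(cwnd, base_rtt, last_rtt, alpha, beta):
        """One Vegas congestion-avoidance adjustment per RTT (simplified).

        The gap between the expected rate (no queuing) and the actual rate,
        expressed as extra segments buffered in the network, is kept between
        the thresholds alpha and beta."""
        expected = cwnd / base_rtt              # rate with no queuing delay
        actual = cwnd / last_rtt                # rate observed over the last RTT
        extra = (expected - actual) * base_rtt  # estimated extra segments queued
        if extra < alpha:
            cwnd += 1                           # path underused: grow linearly
        elif extra > beta:
            cwnd -= 1                           # queues building: shrink linearly
        return cwnd                             # otherwise leave cwnd unchanged

Commonly cited settings keep alpha and beta at a few segments; raising them lets the connection occupy more router buffers and behave more aggressively, which is the tuning knob referred to above.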

For both Reno and Vegas, the congestion avoidance algorithms can reduce throughput. The Reno congestion avoidance algorithm does not attempt to detect network congestion, and thus many retransmissions result. If Reno were able to accurately detect network congestion, it could back off prior to losing packets. This would result in an increase in goodput for Reno connections. In addition, because fewer packets would be retransmitted, the throughput of the Reno connection would likely increase. Conversely, the congestion avoidance algorithm of Vegas is perhaps too conservative, resulting in reduced connection throughput, especially when competing with more aggressive Reno senders. If the congestion avoidance algorithm were more aggressive, the throughput of Vegas connections could improve.

We developed an artificially intelligent congestion detection system that can be used in both Reno and Vegas implementations. The algorithm is centered on an artificial neural network (ANN) that is fed current network measurements and outputs the current degree of network congestion. By implementing this algorithm in the widely deployed Reno variant NewReno, we reduced the number of retransmissions necessary and maintained throughput. Conversely, we were unable to increase the throughput of Vegas or reduce its number of retransmissions because of a conflict between the AI congestion avoidance algorithm and the aggressive retransmission algorithm inherent in Vegas.

2.2 Goals

By using an artificially intelligent congestion avoidance algorithm, we sought to achieve the following:

1. Improve the throughput of both NewReno and Vegas.
2. Ensure that the modified NewReno and Vegas stacks are at least as fair as TCP NewReno.
3. Minimally alter the efficiency of the NewReno and Vegas stacks.
4. Through careful feature selection (i.e., the measurements the ANN will use to determine congestion), eliminate the reliance of the Vegas congestion avoidance scheme on the baseRTT measurement and the thresholds α and β. If we can successfully train an ANN that does not rely upon the baseRTT to identify network congestion, we will likely eliminate the problems Mo et al. [10] outlined with respect to inaccurate baseRTT calculations.

3 Related Work

Beginning with Jacobson and Karels' [7] work, congestion avoidance has been a significant area of network research. Initially, TCP did not attempt to proactively detect congestion until Brakmo et al. [2] developed TCP Vegas. Vegas uses numerous heuristics to pace sending and to avoid inducing network congestion. To our knowledge, no artificially intelligent TCP congestion avoidance scheme has previously been investigated. In addition to TCP congestion avoidance, numerous studies have focused on moving the task of congestion avoidance from the end-points into the network infrastructure. Source-quench, RED [6], and ECN [5][11] are all router-based congestion avoidance schemes. While router-based congestion avoidance has shown promise, we prefer to investigate source-based schemes. By only requiring changes at the source, users reap the benefits of new technology simply by installing new software on their machines; router-based schemes most likely require network administrator intervention before users see any benefit.

4 Methods

4.1 ns-2 Simulator

We evaluated our implementation using the ns-2 simulator. Previous Vegas work had primarily been performed on x-sim, a simulator based on the x-kernel architecture [4].
We chose ns-2 because it has been widely accepted as the standard simulator for network research.

4.2 ANN Training

4.2.1 ANN Background

We used a multi-layered ANN to classify network congestion conditions. ANNs are supervised-learning classifiers that require training on a set of data with labeled (i.e., categorized) examples; this set of data is called the training set. The ANN training was done using backpropagation, the standard algorithm for training neural networks.

Backpropagation attempts to minimize the squared error between the network output values and the target values for those outputs. The algorithm searches a weight space (defined by all possible weight values for each arc in the network) for an error minimum. Because a non-linear, multi-layered network was used, the algorithm is not guaranteed to find the global minimum error; it may instead find a local minimum [9]. ANNs are considered slow learners because they take on the order of tens of hours to train. This was not important for us because the network was trained offline. Once trained, ANNs are very fast classifiers--on the order of 100 or so instructions for our purposes. Speed of classification is vitally important because classification must take place during the standard operation of the TCP stack. Because we used a fully connected, multi-layered ANN, the ANN can learn non-linear functions [9]. This means the ANN should be able to accurately learn, and quickly evaluate, the function relating network conditions to packet loss.

4.2.2 Problem Definition and Features

We defined the problem of AI congestion avoidance as follows:

Given: the values, measured once per RTT, of features describing the current network conditions.
Do: determine the percentage of the current congestion window (CWND) that can be transmitted before packets will be lost due to congestion.

In order for our approach to be successful, we need to describe network conditions with features that may elucidate network congestion. We chose the following features:

1. Percentage of packets that have been retransmitted as of the current RTT.
2. Percent change in CWND over each of the previous 10 RTTs.² These features capture the window size changes used by Jain's CARD approach [8].
3. Percent change in RTT over each of the previous 10 RTTs.
4. Percent change in throughput over each of the previous 10 RTTs.

² Because ns-2 uses a fixed packet size, we were able to use the change in CWND size to represent the change in the number of bytes sent per RTT. In a real TCP implementation, these features would likely need to be changed to the percent change in the number of bytes sent in a given RTT.

By looking back 10 RTTs for changes in throughput, we believed the ANN could establish whether the transmission is seeing reduced throughput due to congestion. By using the change-in-RTT and change-in-CWND features, the ANN should be able to determine the precise cause of the changes in throughput (i.e., a reduction in RTT and/or a reduction in CWND). It is important to note that an accurate baseRTT measurement is not required for our description of network conditions. In this way, TCP Vegas will not be susceptible to the problems described by Mo et al. [10]. In addition, percentages were used to describe the features so that specific values of RTT, CWND, throughput, and number of retransmitted packets would not be learned. This assists the ANN's ability to generalize and perform successfully for connections with different raw RTT, CWND, throughput, and retransmission values.

4.2.3 Training the ANN

To collect training data, we altered a TCP Vegas stack to increase the congestion window linearly during congestion avoidance. For each RTT, we randomly decreased the value of CWND to ensure we trained over many changes in CWND. For every packet transmitted in the congestion avoidance state, we recorded the network conditions represented by the features described in section 4.2.2. We categorized these packets by a boolean output value in which a 1 indicates the packet was retransmitted at some point and a 0 indicates no retransmission was necessary.
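A minimal sketch of how one per-RTT feature vector and the per-packet labels just described might be assembled. The record layout (a history of per-RTT (CWND, RTT, throughput) samples) and the helper names are illustrative assumptions, not the authors' code.

    def percent_change(new, old):
        """Percent change between two per-RTT measurements (0 if no baseline)."""
        return 0.0 if old == 0 else 100.0 * (new - old) / old

    def build_feature_vector(history, packets_sent, packets_retransmitted):
        """history: (cwnd, rtt, throughput) tuples for the last 11 RTTs, newest last.
        Returns the 31 features described above: the fraction of packets
        retransmitted so far, then the percent change in CWND, RTT, and
        throughput for each of the previous 10 RTTs."""
        features = [packets_retransmitted / max(packets_sent, 1)]
        for field in (0, 1, 2):                 # 0 = CWND, 1 = RTT, 2 = throughput
            for i in range(1, 11):              # ten RTT-to-RTT changes
                newer, older = history[-i], history[-i - 1]
                features.append(percent_change(newer[field], older[field]))
        return features

    # During data collection, every packet sent in the RTT shares this feature
    # vector and is labeled 1 if it was eventually retransmitted, 0 otherwise.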
Since the network measurements were made only once per RTT (i.e., all packets sent in a given RTT have the same feature values), we were able to compress our training data into examples that describe the network conditions for a given RTT and the percentage of the current CWND that was successfully transmitted before the first retransmission. We collected training data over three distinct collections of simulations constructed to be representative of low, high, and bursty network traffic conditions. Each collection consisted of examples generated during 120 simulations with the associated background traffic and variations in bandwidth, link delay, and transfer size.
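The compression of the per-packet records into one example per RTT could look like the sketch below; the only assumption is that the packets of an RTT are kept in send order together with their retransmission flags.

    def compress_rtt(features, retransmit_flags):
        """Collapse one RTT's packets into a single training example.

        retransmit_flags[i] is 1 if the i-th packet sent in this RTT was later
        retransmitted.  The example's output is the fraction of the congestion
        window transmitted before the first retransmission (1.0 if none)."""
        if 1 in retransmit_flags:
            output = retransmit_flags.index(1) / len(retransmit_flags)
        else:
            output = 1.0
        return features, output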

We created three training folds from the three simulation datasets using the leave-one-out method: train1 contains all datasets except the first, train2 contains all datasets except the second, and so on; the corresponding test set for each fold is the dataset that was left out. We trained an ANN over 10,000 epochs for each of the three training/test folds. The ANNs consisted of 31 input units and an equal number of hidden units. We used early stopping to avoid overfitting the training data by training too long. To decide when to stop training, we created a tuning set by randomly removing 10% of the training data. After each epoch we measured the current accuracy of the ANN's predictions on the tuning set; the arc weights from the epoch that performed best on the tuning set were used to classify the test set.

When classifying an example, the ANN can make two kinds of errors:

False back-off: the ANN suggests a back-off in CWND that is greater than the test example indicates is necessary.
False increase: the ANN suggests that a greater percentage of the current CWND can be sent without retransmission than the test example indicates.

A false increase may result in a packet being lost due to congestion. This loss will result in either a fast retransmission or a coarse-grained timeout, both of which can reduce connection throughput. A false back-off may also diminish the optimal throughput for a connection: the connection could send more data, but the CWND has been reduced. Thus, a well-trained ANN will minimize both false increase and false back-off errors in order to achieve high connection throughput.

Table 1 shows average error rates over the test examples from the three training/test folds. The results are divided into sub-groups based on the test example output value. Test examples with output values less than 1 represent congestion conditions in which the CWND should be reduced; test examples with an output value of 1 represent non-congestion conditions in which a full CWND of packets can be sent without the need for retransmission. Training with all of the features led to a very good average error rate of 5%. We characterized this ANN as aggressive because its overall false increase to back-off error ratio is 1.9:1. For congestion examples, this ratio increases as the degree of congestion increases. We are not overly concerned with examples indicating high levels of congestion (e.g., output values between 0.0 and 0.5), since this level of congestion may result in loss no matter what degree of CWND reduction is applied. However, this ANN is too aggressive for moderate to low congestion levels (i.e., output values above 0.5) and will likely not allow sufficient back-off in the face of congestion.

We hoped to increase accuracy and decrease the false increase to back-off error ratio, so we re-evaluated our features. Using the information gain ideas presented in Quinlan's ID3 decision tree building algorithm [9], we determined the features that best assist in identifying congestion. We divided the training examples into congestion and non-congestion groups as described above. Information gain is a measure of how well a given feature separates the training examples and is calculated as follows:

    Entropy(S) = -P(congestion) log2 P(congestion) - P(non-congestion) log2 P(non-congestion)    (Eq. 1)

    InfoGain(S, F, v) = Entropy(S) - ( |Sv_LT| / |S| * Entropy(Sv_LT) + |Sv_GT| / |S| * Entropy(Sv_GT) )    (Eq. 2)

where S is the set of all training examples; P(congestion) and P(non-congestion) are estimated by computing the fractions of examples in S with output values less than 1 and equal to 1, respectively; v is a value that separates at least one congestion example from one non-congestion example for a continuous feature F (e.g., deltaRTT1 <= 4%); Sv_LT is the subset of S for which feature F has values less than or equal to v; and Sv_GT is the subset of S for which feature F has values greater than v. In general, we only want to use the features with the most information, because non-informative features could obscure the function being learned.
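Eq. 1 and Eq. 2 translate directly into a short calculation. The sketch below assumes each training example is a (features, output) pair with output < 1 marking a congestion example, and is meant only to illustrate how candidate splits were scored, not to reproduce the authors' ID3 code.

    import math

    def entropy(examples):
        """Eq. 1: entropy of a set of (features, output) examples."""
        if not examples:
            return 0.0
        p_cong = sum(1 for _, out in examples if out < 1) / len(examples)
        p_non = 1.0 - p_cong
        return -sum(p * math.log2(p) for p in (p_cong, p_non) if p > 0)

    def info_gain(examples, feature_index, v):
        """Eq. 2: information gained by splitting on feature <= v versus > v."""
        low = [e for e in examples if e[0][feature_index] <= v]
        high = [e for e in examples if e[0][feature_index] > v]
        n = len(examples)
        return entropy(examples) - (len(low) / n * entropy(low) +
                                    len(high) / n * entropy(high))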

Table 1. ANN training error results. For each training set (All Features; Features Selected by ID3; All Features with congestion examples weighted 15x; All Features with congestion examples weighted 7x) the table reports, per sub-group of test-example output values (four congestion ranges below 1 and the non-congestion value of 1): average error (%), false increase error (%), false back-off error (%), the increase/back-off error ratio, and percent correct (%). All percentages are averaged over the three training/test folds. The output value is the percentage of the current CWND that can be sent before the first packet retransmission; the percent correct column counts test examples whose output matches the ANN output to within a small tolerance. [Numeric entries not recoverable from the source text.]

We implemented ID3 simply to pick the root node of a decision tree. We recorded the calculated information gain over all of the training data for every feature. We averaged the information gain for the CWND, RTT, and throughput features over all RTTs and over every pair of RTTs. The results are shown in Figure 1. Figure 1(a) shows the relative information contained in changes in CWND, RTT, and throughput. Changes in CWND contain the most information about whether a packet will need to be retransmitted in the next RTT. This is not surprising, considering that changes in CWND directly affect the number of packets on the network. While not shown in Figure 1, the fraction of packets retransmitted was more informative than any of the CWND features. Interestingly, Figure 1(b-d) shows that the information in the previous two RTTs is at least 25% more informative than the information in RTTs 3-10. Thus, we decided to eliminate the features describing changes in RTT, CWND, and throughput for RTTs 3-10. Again, we trained an ANN for each of the three training/test folds using only the selected features (Table 1). Surprisingly, the false increase/back-off error ratio increased by 20%, although the overall predictive accuracy remained the same. The ANNs became more aggressive than those trained with all of the features. We concluded that while the information gain for changes in CWND, RTT, and throughput in RTTs 3-10 is significantly lower than the information gain in RTTs 1-2, the additional information contained in these features allows the ANN to discriminate between congestion and non-congestion network conditions. Therefore, all features must be used to more accurately predict network congestion.

In an attempt to place more emphasis on avoiding false increases, we weighted the training examples that represent network congestion by a factor of 15. This weighting factor was selected because there were roughly 15 times as many non-congestion examples as congestion examples in the training data. We implemented the weighting factor by training on each congestion example fifteen consecutive times during a given epoch, so the ANN is trained on a roughly equal number of congestion and non-congestion examples. As Table 1 shows, the false increase to back-off error ratio drops dramatically with weighting. Unfortunately, this ANN may be too conservative because it is more likely to err by overestimating congestion levels. Ideally, the ANN would make an equal proportion of false increase and false back-off errors; such an ANN would likely predict network congestion accurately over a large sample of predictions. Therefore, we weighted congestion examples by a factor of 7, retrained, and achieved a desirable false increase to back-off ratio of 0.9:1.
Because of this balance between aggressiveness and conservativeness, we decided that a weighting factor of 7 was appropriate for our implementation.
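The training procedure described in this subsection -- backpropagation with a held-out tuning set for early stopping and congestion examples repeated to implement the weighting factor -- can be outlined as follows. The net object (with train_one, evaluate, and get_weights methods) is a hypothetical stand-in for the ANN; this is a sketch of the procedure, not the authors' training code.

    import random

    def train_with_early_stopping(net, examples, epochs=10000, weight=7, tune_frac=0.10):
        """Backpropagation with example weighting and early stopping.

        Congestion examples (output < 1) are presented weight times per epoch;
        the weights from the epoch with the best tuning-set accuracy are kept."""
        random.shuffle(examples)
        split = int(len(examples) * tune_frac)
        tune, train = examples[:split], examples[split:]
        best_acc, best_weights = -1.0, None
        for _ in range(epochs):
            for features, output in train:
                repeats = weight if output < 1 else 1   # weight congestion examples
                for _ in range(repeats):
                    net.train_one(features, output)     # one backpropagation update
            acc = net.evaluate(tune)                    # accuracy on the tuning set
            if acc > best_acc:                          # early stopping: keep the best
                best_acc, best_weights = acc, net.get_weights()
        return best_weights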

Finally, we compiled the training data into a single dataset and trained for 10,000 epochs with a weighting factor of 7 for congestion examples. After training, the network weights were stored and used to initialize the ANNs used for the TCP NewReno and Vegas stack implementations.

Figure 1. Average information gain for changes in CWND, RTT, and throughput. (a) Information gain for changes in CWND, RTT, and throughput over 10 RTTs, normalized by the information gain for CWND 1-10. (b) Information gain for changes in CWND over each pair of RTTs (CWND 1-2 through CWND 9-10), normalized by CWND 1-2. (c) Information gain for changes in RTT over each pair of RTTs, normalized by RTT 1-2. (d) Information gain for changes in throughput over each pair of RTTs, normalized by TP 1-2. [Plot data not recoverable from the source text.]

5 Implementation

We augmented both the TCP NewReno and TCP Vegas stacks for ns-2 simulation. Once per RTT, the stacks collect the current RTT, CWND, throughput, and retransmitted-packet information. When in the congestion avoidance state, the stack executes the following logic once per RTT (a sketch of this logic appears after the list):

1. Compile the RTT, CWND, and throughput data over the past 10 RTTs. If there are not 10 RTTs' worth of information, the change-in-RTT, change-in-CWND, and change-in-throughput features for the unspecified RTTs are assigned values of 0.
2. Calculate the current ratio of the number of retransmitted packets to packets sent.
3. Enter this information into an ANN that has been initialized with the weights established from the final ANN training described in section 4.2.3.
4. Multiply the current CWND by the output of the ANN. The output is a number between 0 and 1 representing the percentage of the current CWND that can be sent before a retransmission will be necessary.
5. If the CWND is less than 4, set the CWND to 4. This minimizes the number of coarse-grained timeouts by allowing enough packets to be sent to possibly generate a fast retransmit after a packet loss.
6. Linearly increase the CWND by 1/CWND for every ACK received during the RTT.

This algorithm is a modified hybrid of NewReno and Vegas. Like NewReno, the CWND always increases linearly while in congestion avoidance, but in the spirit of Vegas, the CWND can be reduced when congestion is detected.
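Steps 1-6 above might be realized roughly as in the sketch below. The ann_output callable stands in for the trained ANN initialized with the stored weights, history holds the per-RTT (CWND, RTT, throughput) samples, and build_feature_vector/percent_change are the helpers from the sketch in section 4.2.2; these names are illustrative assumptions rather than the ns-2 code.

    def ai_congestion_avoidance_step(cwnd, history, packets_sent, packets_retx, ann_output):
        """One per-RTT congestion-avoidance update for the AI-augmented stacks
        (steps 1-5 of the list above).  history must hold at least one sample."""
        # Step 1: pad missing history so the corresponding percent changes are 0.
        padded = list(history)
        while len(padded) < 11:
            padded.insert(0, padded[0])
        # Steps 1-2: assemble the 31-feature description of current conditions.
        features = build_feature_vector(padded, packets_sent, packets_retx)
        # Steps 3-4: the ANN returns the fraction of CWND judged safe to send.
        fraction = ann_output(features)
        cwnd = cwnd * fraction
        # Step 5: never let CWND fall below 4 segments.
        return max(4, cwnd)

    # Step 6 (increasing CWND by 1/CWND per ACK) is applied as ACKs arrive
    # during the RTT, exactly as in standard congestion avoidance.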

6 Results

6.1.1 Benchmark Simulations

Figure 2. Benchmark network topology. The legend identifies: an FTP or HTTP traffic generator, a TCP traffic sink, a simple drop-tail queue router, a 1.544 Mbps, 10 ms delay link (T1), and a 100 Mbps, 1 ms delay link. [Topology drawing not recoverable from the source text.]

In order to determine the success of our artificially intelligent NewReno (AINewReno) and Vegas (AIVegas) stack implementations, we ran benchmark simulations with TCP Vegas and TCP NewReno. We chose NewReno rather than Reno because this variant is widely deployed. The benchmark topology is shown in Figure 2 and was chosen for its simplicity. The senders are either FTP or HTTP traffic generators. The FTP senders all transmit 5 MB files using either a NewReno or Vegas TCP implementation. The HTTP senders add a bursty quality to the network traffic; they have exponentially distributed off times between page requests.

Three traffic models were benchmarked, designed to model as closely as possible real low, high, and bursty network traffic. The low traffic model consisted of 2 FTP senders and 30 HTTP senders; we believe most users are not transferring large files at any given time, and thus we chose a 15:1 ratio of Web users to FTP users. The high traffic simulations consisted of a 50% increase in both FTP and HTTP senders. We did not want huge amounts of network congestion that result in inevitable loss (i.e., where even at the lowest CWND sizes loss is likely to occur); the high traffic simulations were designed to generate loss, but to allow a more conservative TCP implementation to avoid this loss via intelligent management of the congestion window size. Finally, we designed a group of bursty simulations. While the low and high traffic models have a bursty quality because of the HTTP senders, the bursty simulations increased the bursty nature of the traffic by adding on/off behavior to the FTP senders.

The results of the benchmark simulations are shown in Table 2. Our benchmark simulations do not show improved throughput for Vegas over NewReno. The original Brakmo et al. [2] and subsequent studies [3][10] compare Vegas to Reno. NewReno has enhanced functionality that can keep the congestion window open even after a loss of consecutive packets. Since our simulations use drop-tail queues, packet loss will likely result in consecutive packets being lost, so we expect NewReno to handle such losses efficiently. The results show that NewReno is able to achieve high throughput in spite of lower goodput relative to Vegas. Vegas is able to avoid packet retransmissions, but its conservative approach to opening the CWND appears to harm throughput. It is important to note that Vegas conservatively opens the CWND in both slow-start and congestion avoidance, but for our purposes we hope that changing the congestion avoidance algorithm will result in improved throughput. The results also show that Vegas senders do not receive their fair share of network bandwidth when competing with aggressive NewReno senders: Vegas throughput is reduced by up to 35% when competing with NewReno senders rather than Vegas senders, and NewReno senders increase throughput by up to 50% when competing with Vegas rather than NewReno senders. Based on these simulations, we see opportunities to improve both NewReno and Vegas. NewReno retransmits many packets; if this could be reduced, the throughput for NewReno could be improved. In addition, Vegas is too conservative, and in spite of high goodput relative to NewReno, Vegas lags in throughput.
If Vegas could adopt a more aggressive congestion avoidance strategy, throughput could increase.

Table 2. Benchmark simulation results. For each sender/background-sender combination (NewReno/NewReno, NewReno/Vegas, Vegas/NewReno, Vegas/Vegas) under low, high, and bursty traffic, the table reports throughput (KB/s), throughput ratio vs. NewReno/NewReno, retransmissions (KB), goodput, and timeouts. All results are averaged over 5 ns-2 simulations, where low traffic is modeled by an average of 2 FTP and 30 HTTP senders, high traffic by an average of 3 FTP and 45 HTTP senders, and bursty traffic by low traffic with bursts of FTP senders starting and stopping. All HTTP senders send according to an exponential distribution, and all FTP senders send continuously using the specified TCP implementation. [Numeric entries not recoverable from the source text.]

6.1.2 AINewReno Results

We reran the benchmark simulations using our AINewReno stack, which implements the congestion avoidance algorithm described in section 5. It is important to note that the ANN used in the AINewReno implementation did not train on the network configuration used in these evaluation simulations. Tables 3 and 4 show the throughput and loss results of the AINewReno evaluation simulations. When comparing NewReno/NewReno to AINewReno/AINewReno, the results show that AINewReno achieves nearly the same throughput as NewReno. In fact, we see an increase in throughput under low traffic. This is likely because our ANN is able to detect moderate to low levels of congestion more accurately than higher levels of congestion (Table 1). Table 4 shows that under low traffic conditions, packet loss was reduced by 58%, which allowed the AINewReno implementation to improve throughput over NewReno. This trend is also observed in the high traffic simulations: although throughput does not increase for AINewReno, we still observed a 26% reduction in packet loss. Interestingly, AINewReno throughput was somewhat reduced in the bursty simulations even though we saw a 40% reduction in packet loss; the likely cause is over-zealous back-offs during a burst, which harm overall throughput. Overall, we view AINewReno as superior to NewReno because it achieves virtually equal throughput and reduces packet loss by at least 26%.

Table 3. AINewReno throughput results. For each sender/background-sender combination (NewReno/NewReno, AINewReno/NewReno, NewReno/AINewReno, AINewReno/AINewReno) under low, high, and bursty traffic, the table reports throughput (KB/s) and the throughput ratio vs. NewReno/NewReno. [Numeric entries not recoverable from the source text.]

Tables 3 and 4 also show that AINewReno is at least as fair as NewReno. NewReno senders achieve improved throughput when competing with AINewReno background traffic in all simulations except high traffic. We do not view the 4% reduction in NewReno throughput in the high traffic simulation as significant and attribute it to a small sample size. In addition, AINewReno does not lose a large amount of network bandwidth when competing with NewReno; the 22% reduction seen in the bursty case is likely attributable to over-zealous back-offs in the face of large bursts of congestion. It is important to note that Vegas senders saw at least a 26% reduction in throughput when competing with NewReno (Table 2). Thus we conclude that AINewReno is far superior to Vegas with respect to receiving its fair share of network bandwidth.

Table 4. AINewReno packet loss results. For each sender/background-sender combination (NewReno/NewReno, AINewReno/NewReno, NewReno/AINewReno, AINewReno/AINewReno) under low, high, and bursty traffic, the table reports retransmissions (KB), the retransmission ratio vs. NewReno/NewReno, goodput, and timeouts. All results in Tables 3 and 4 are averaged over 5 ns-2 simulations, where low traffic is modeled by an average of 2 FTP and 30 HTTP senders, high traffic by an average of 3 FTP and 45 HTTP senders, and bursty traffic by low traffic with bursts of FTP senders starting and stopping. All HTTP senders send according to an exponential distribution, and all FTP senders send continuously using the specified TCP implementation. [Numeric entries not recoverable from the source text.]

The congestion avoidance behavior of AINewReno versus NewReno can best be understood by tracing changes in CWND size (Figure 3). Initially, AINewReno keeps the CWND at a smaller size than NewReno and thus does not optimally use network bandwidth. Because AINewReno is more conservative than NewReno, it is able to avoid the retransmission-induced cut-downs of the CWND seen in the NewReno trace at 55.5 and 56.5 seconds. Because AINewReno has not induced as much packet loss as NewReno, it is able to use more bandwidth while NewReno is backing off. In this way, AINewReno achieves roughly the same throughput as NewReno while losing fewer packets.

Figure 3. NewReno vs. AINewReno CWND trace: CWND (packets) over time (seconds), with AINewReno back-offs marked. [Plot data not recoverable from the source text.]

6.1.3 AIVegas Results

Given the behavior of AINewReno, we anticipated high throughput and a marginal increase in packet loss for AIVegas when compared to Vegas. Surprisingly, throughput was reduced by up to 35% and the number of retransmissions increased by up to a factor of 34. Not only does AIVegas achieve poor throughput, it also appears to be more aggressive than NewReno: AIVegas reduces the throughput of Vegas senders by 4% more than NewReno does, averaged over all simulations. These results were particularly puzzling since Vegas is more conservative than NewReno in slow-start as well as congestion avoidance. The conservative Vegas slow-start algorithm remained unchanged in the AIVegas implementation, so the slow-start and congestion avoidance algorithms in AIVegas should still be less aggressive than NewReno.

We focused on the excessive number of retransmitted packets to explain these results. Vegas is much more aggressive than NewReno with respect to retransmitting packets. When a duplicate ACK is received, Vegas checks whether a fine-grained timeout has occurred; if so, the packet is immediately retransmitted. In the ns-2 Vegas implementation, if the packet has not previously been retransmitted, the congestion window is only reduced by 25% and subsequently increased by the current number of duplicate ACKs. Only if the packet has been retransmitted multiple times does the CWND get cut in half. During our simulations, the CWND was reduced by only 25% on over 90% of retransmissions. In this way, Vegas can keep the congestion window open in the face of a few losses. This aggressive retransmission behavior, combined with an aggressive congestion avoidance scheme, can lead to high levels of network congestion. While the AIVegas congestion avoidance algorithm backs off when it senses congestion, the accuracy of such detection actually gets worse at higher degrees of congestion (Table 1). Thus, if the AIVegas stack does not sufficiently back off during low to moderate levels of congestion, high levels of congestion may be reached that are not well detected by the ANN. This results in large numbers of packet retransmissions and reduced throughput. Thus, AIVegas was a dysfunctional stack because of its aggressive retransmission policy combined with inadequate back-offs in the AI congestion avoidance scheme.
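Read literally, the duplicate-ACK behavior described above amounts to something like the following sketch; this is a loose paraphrase of the text's description of the ns-2 Vegas code, with assumed names, not the actual implementation.

    def vegas_on_duplicate_ack(cwnd, dup_acks, fine_timeout_expired, already_retransmitted):
        """Simplified reaction of ns-2 Vegas to a duplicate ACK, as described above."""
        if fine_timeout_expired:
            # The lost segment is retransmitted immediately (fine-grained timeout).
            if not already_retransmitted:
                cwnd = cwnd * 0.75 + dup_acks   # cut by only 25%, then inflate
            else:
                cwnd = cwnd * 0.5               # repeated loss: halve the window
        return cwnd

Because the 25% reduction path was taken on over 90% of retransmissions in the simulations, the window stays largely open after a loss, which is what allows congestion to compound when the ANN under-reacts.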
7 Discussion

Our AINewReno implementation achieved most of the goals we set for our artificially intelligent congestion avoidance algorithm. We were unable to significantly alter throughput, but we were able to reduce the number of retransmitted packets by at least 26%. Thus we conclude that AINewReno is more network friendly than NewReno and performs better. Conversely, the AIVegas implementation failed to increase throughput or reduce the number of retransmissions. This results from the combination of the aggressive retransmit algorithm inherent in Vegas and the AI congestion avoidance scheme. Since Vegas is not widely deployed, we think these results hold little significance.

Although our AINewReno results are encouraging, we see opportunities to improve them. The following are future areas of investigation using ANN learners:

1. Our system contains a feedback loop in which we decide how much to change the CWND based partly on how much the CWND has changed in previous RTTs. We tried to eliminate this loop by not using the CWND or throughput features, but the performance of the implementation degraded significantly. Our information gain analysis indicated that CWND size appears to be vital to assessing network congestion conditions; a clever modification to the implementation would be necessary to eliminate this feedback loop.
2. We would like to train on more examples, collected on a real network rather than in the ns-2 simulator. While we had roughly 45,000 training examples, it may be necessary to have on the order of millions to ensure maximal accuracy. In addition, these examples should be collected from transmissions traversing disparate topologies, since router and link behavior are important factors when assessing the level of congestion.
3. Our algorithm was designed to be simple, but it has no mathematical backing. Perhaps a mathematically derived formula could better define the proper action to take given the level of network congestion.

AI implementations may be able to outperform less intelligent protocols like the NewReno congestion avoidance algorithm. Using supervised learning algorithms such as backpropagation with ANNs has a few drawbacks. The first is the need for large sets of training examples. Retraining may also be necessary if new links or routers are installed that greatly alter packet loss behavior. These issues may best be addressed by using AI learning techniques, such as reinforcement learning, that are able to learn over time. While there is opportunity to expand or improve upon our work, a foundation has been laid for further investigation.

8 Acknowledgements

Thanks to Joel Sommers for numerous tips on ns-2. Perhaps ns-2 will be compatible with the most current g++ or gcc compiler in the near future, easing the pain inflicted on grad students simply trying to compile the application.

9 References

[1] Ahn, J.S., P.B. Danzig, Z. Liu, and L. Yan. Evaluation of TCP Vegas: Emulation and Experiment. In Proceedings of ACM SIGCOMM '95, August 1995.
[2] Brakmo, L.S., S.W. O'Malley, and L. Peterson. TCP Vegas: New Techniques for Congestion Detection and Avoidance. In Proceedings of ACM SIGCOMM '94, 24-35, London, October 1994.
[3] Hengartner, U., J. Bolliger, and T. Gross. TCP Vegas Revisited. In Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[4] Hutchinson, N.C., and L.L. Peterson. The x-kernel: An Architecture for Implementing Network Protocols. IEEE Transactions on Software Engineering, 17(1):64-76, January 1991.
[5] Floyd, S. TCP and Explicit Congestion Notification. ACM Computer Communication Review, 24(5):10-23, October 1994.
[6] Floyd, S., and V. Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, 1(4), August 1993.
[7] Jacobson, V., and M.J. Karels. Congestion Avoidance and Control. In Proceedings of ACM SIGCOMM '88, August 1988.
[8] Jain, R. A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks. ACM Computer Communication Review, 19(5):56-71, October 1989.
[9] Mitchell, T. Machine Learning. McGraw-Hill, Boston, 1997.
[10] Mo, J., R.J. La, V. Anantharam, and J. Walrand. Analysis and Comparison of TCP Reno and Vegas. In Proceedings of IEEE INFOCOM '99, March 1999.
[11] Ramakrishnan, K.K., and R. Jain. A Binary Feedback Scheme for Congestion Avoidance in Computer Networks. ACM Transactions on Computer Systems, 8(2), 1990.
[12] Wang, Z., and J. Crowcroft. A New Congestion Control Scheme: Slow Start and Search (Tri-S). ACM Computer Communication Review, 21(1):32-43, January 1991.
[13] Wang, Z., and J. Crowcroft. Eliminating Periodic Packet Losses in the 4.3-Tahoe BSD TCP Congestion Control Algorithm. ACM Computer Communication Review, 22(2):9-16, April 1992.


RECHOKe: A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks > REPLACE THIS LINE WITH YOUR PAPER IDENTIFICATION NUMBER (DOUBLE-CLICK HERE TO EDIT) < : A Scheme for Detection, Control and Punishment of Malicious Flows in IP Networks Visvasuresh Victor Govindaswamy,

More information

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract:

Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: Title Problems of TCP in High Bandwidth-Delay Networks Syed Nusrat JJT University, Rajasthan, India Abstract: The Transmission Control Protocol (TCP) [J88] is the most popular transport layer protocol

More information

Performance Evaluation of TCP Westwood. Summary

Performance Evaluation of TCP Westwood. Summary Summary This project looks at a fairly new Transmission Control Protocol flavour, TCP Westwood and aims to investigate how this flavour of TCP differs from other flavours of the protocol, especially TCP

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data?

Congestion Control End Hosts. CSE 561 Lecture 7, Spring David Wetherall. How fast should the sender transmit data? Congestion Control End Hosts CSE 51 Lecture 7, Spring. David Wetherall Today s question How fast should the sender transmit data? Not tooslow Not toofast Just right Should not be faster than the receiver

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

TCP Veno: Solution to TCP over Wireless

TCP Veno: Solution to TCP over Wireless TCP Veno: Solution to TCP over Wireless Franklin FU Presented by Franklin Fu Asst Professor School of Computer Engineering Nanyang Technological University Singapore January 31, 2004, 5:00am Singapore

More information

TCP Congestion Control in Wired and Wireless networks

TCP Congestion Control in Wired and Wireless networks TCP Congestion Control in Wired and Wireless networks Mohamadreza Najiminaini (mna28@cs.sfu.ca) Term Project ENSC 835 Spring 2008 Supervised by Dr. Ljiljana Trajkovic School of Engineering and Science

More information

ISSN: International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 2, Issue 4, April 2013

ISSN: International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 2, Issue 4, April 2013 Balanced window size Allocation Mechanism for Congestion control of Transmission Control Protocol based on improved bandwidth Estimation. Dusmant Kumar Sahu 1, S.LaKshmiNarasimman2, G.Michale 3 1 P.G Scholar,

More information

TCP Revisited CONTACT INFORMATION: phone: fax: web:

TCP Revisited CONTACT INFORMATION: phone: fax: web: TCP Revisited CONTACT INFORMATION: phone: +1.301.527.1629 fax: +1.301.527.1690 email: whitepaper@hsc.com web: www.hsc.com PROPRIETARY NOTICE All rights reserved. This publication and its contents are proprietary

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

RD-TCP: Reorder Detecting TCP

RD-TCP: Reorder Detecting TCP RD-TCP: Reorder Detecting TCP Arjuna Sathiaseelan and Tomasz Radzik Department of Computer Science, King s College London, Strand, London WC2R 2LS {arjuna,radzik}@dcs.kcl.ac.uk Abstract. Numerous studies

More information

Flow and Congestion Control Marcos Vieira

Flow and Congestion Control Marcos Vieira Flow and Congestion Control 2014 Marcos Vieira Flow Control Part of TCP specification (even before 1988) Goal: not send more data than the receiver can handle Sliding window protocol Receiver uses window

More information

Applying TCP Congestion Control Schemes on NDN and their Implications for NDN Congestion Control. Shuo Yang

Applying TCP Congestion Control Schemes on NDN and their Implications for NDN Congestion Control. Shuo Yang Applying TCP Congestion Control Schemes on NDN and their Implications for NDN Congestion Control Shuo Yang Abstract While congestion control in TCP has been well studied, the future Internet architecture

More information

ANALYSIS OF TCP ALGORITHMS IN THE RELIABLE IEEE b LINK

ANALYSIS OF TCP ALGORITHMS IN THE RELIABLE IEEE b LINK ANALYSIS OF TCP ALGORITHMS IN THE RELIABLE IEEE 80.11b LINK Lukas Pavilanskas Department Of Telecommunications Vilnius Gediminas Technical University Naugarduko 41, Vilnius, LT-037, Lithuania E-mail: lukas.pavilanskas@el.vtu.lt

More information

THE TCP specification that specifies the first original

THE TCP specification that specifies the first original 1 Median Filtering Simulation of Bursty Traffic Auc Fai Chan, John Leis Faculty of Engineering and Surveying University of Southern Queensland Toowoomba Queensland 4350 Abstract The estimation of Retransmission

More information

CSC 4900 Computer Networks: TCP

CSC 4900 Computer Networks: TCP CSC 4900 Computer Networks: TCP Professor Henry Carter Fall 2017 Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

Flow Control. Flow control problem. Other considerations. Where?

Flow Control. Flow control problem. Other considerations. Where? Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

More information

Chapter III: Transport Layer

Chapter III: Transport Layer Chapter III: Transport Layer UG3 Computer Communications & Networks (COMN) Mahesh Marina mahesh@ed.ac.uk Slides thanks to Myungjin Lee and copyright of Kurose and Ross Principles of congestion control

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control What is Congestion The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

TCP Congestion Control

TCP Congestion Control What is Congestion TCP Congestion Control The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

CSCI-1680 Transport Layer II Data over TCP Rodrigo Fonseca

CSCI-1680 Transport Layer II Data over TCP Rodrigo Fonseca CSCI-1680 Transport Layer II Data over TCP Rodrigo Fonseca Based partly on lecture notes by David Mazières, Phil Levis, John Janno< Last Class CLOSED Passive open Close Close LISTEN Introduction to TCP

More information

A Bottleneck and Target Bandwidth Estimates-Based Congestion Control Algorithm for High BDP Networks

A Bottleneck and Target Bandwidth Estimates-Based Congestion Control Algorithm for High BDP Networks A Bottleneck and Target Bandwidth Estimates-Based Congestion Control Algorithm for High BDP Networks Tuan-Anh Le 1, Choong Seon Hong 2 Department of Computer Engineering, Kyung Hee University 1 Seocheon,

More information

Cloud e Datacenter Networking

Cloud e Datacenter Networking Cloud e Datacenter Networking Università degli Studi di Napoli Federico II Dipartimento di Ingegneria Elettrica e delle Tecnologie dell Informazione DIETI Laurea Magistrale in Ingegneria Informatica Prof.

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Analyzing the Receiver Window Modification Scheme of TCP Queues

Analyzing the Receiver Window Modification Scheme of TCP Queues Analyzing the Receiver Window Modification Scheme of TCP Queues Visvasuresh Victor Govindaswamy University of Texas at Arlington Texas, USA victor@uta.edu Gergely Záruba University of Texas at Arlington

More information

Chapter 3 Transport Layer

Chapter 3 Transport Layer Chapter 3 Transport Layer 1 Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable data transfer 3.5 Connection-oriented

More information

Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren

Lecture 21: Congestion Control CSE 123: Computer Networks Alex C. Snoeren Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren Lecture 21 Overview" How fast should a sending host transmit data? Not to fast, not to slow, just right Should not be faster than

More information

EVALUATING THE DIVERSE ALGORITHMS OF TRANSMISSION CONTROL PROTOCOL UNDER THE ENVIRONMENT OF NS-2

EVALUATING THE DIVERSE ALGORITHMS OF TRANSMISSION CONTROL PROTOCOL UNDER THE ENVIRONMENT OF NS-2 Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 6, June 2015, pg.157

More information

Lecture 4: Congestion Control

Lecture 4: Congestion Control Lecture 4: Congestion Control Overview Internet is a network of networks Narrow waist of IP: unreliable, best-effort datagram delivery Packet forwarding: input port to output port Routing protocols: computing

More information

A New Fair Window Algorithm for ECN Capable TCP (New-ECN)

A New Fair Window Algorithm for ECN Capable TCP (New-ECN) A New Fair Window Algorithm for ECN Capable TCP (New-ECN) Tilo Hamann Department of Digital Communication Systems Technical University of Hamburg-Harburg Hamburg, Germany t.hamann@tu-harburg.de Jean Walrand

More information

CS Transport. Outline. Window Flow Control. Window Flow Control

CS Transport. Outline. Window Flow Control. Window Flow Control CS 54 Outline indow Flow Control (Very brief) Review of TCP TCP throughput modeling TCP variants/enhancements Transport Dr. Chan Mun Choon School of Computing, National University of Singapore Oct 6, 005

More information

Lecture 15: TCP over wireless networks. Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday

Lecture 15: TCP over wireless networks. Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday Lecture 15: TCP over wireless networks Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday TCP - recap Transport layer TCP is the dominant protocol TCP provides in-order reliable byte stream abstraction

More information

The War Between Mice and Elephants

The War Between Mice and Elephants The War Between Mice and Elephants (by Liang Guo and Ibrahim Matta) Treating Short Connections fairly against Long Connections when they compete for Bandwidth. Advanced Computer Networks CS577 Fall 2013

More information

Problems and Solutions for the TCP Slow-Start Process

Problems and Solutions for the TCP Slow-Start Process Problems and Solutions for the TCP Slow-Start Process K.L. Eddie Law, Wing-Chung Hung The Edward S. Rogers Sr. Department of Electrical and Computer Engineering University of Toronto Abstract--In this

More information

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,

More information

Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification. Chunlei Liu Raj Jain

Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification. Chunlei Liu Raj Jain Buffer Requirements for Zero Loss Flow Control with Explicit Congestion Notification Chunlei Liu Raj Jain Department of Computer and Information Science The Ohio State University, Columbus, OH 432-277

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control Lecture material taken from Computer Networks A Systems Approach, Third Ed.,Peterson and Davie, Morgan Kaufmann, 2003. Computer Networks: TCP Congestion Control 1 TCP Congestion

More information

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich

Operating Systems and Networks. Network Lecture 10: Congestion Control. Adrian Perrig Network Security Group ETH Zürich Operating Systems and Networks Network Lecture 10: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks

Where we are in the Course. Topic. Nature of Congestion. Nature of Congestion (3) Nature of Congestion (2) Operating Systems and Networks Operating Systems and Networks Network Lecture 0: Congestion Control Adrian Perrig Network Security Group ETH Zürich Where we are in the Course More fun in the Transport Layer! The mystery of congestion

More information