Problems and Solutions for the TCP Slow-Start Process

K.L. Eddie Law, Wing-Chung Hung
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto

Abstract--In this paper, we investigate different performance issues regarding the existing Slow-Start mechanism in Transmission Control Protocol (TCP) flow control designs. The Slow-Start process is used whenever a new TCP connection is initiated. Its design is simple, and it has not been modified since 1988. Even though the Slow-Start mechanism is only one part of the TCP congestion control family, it can affect system performance quite significantly. In this paper, we outline the packet burst loss problem that may arise with the TCP Slow-Start mechanism. Whenever this problem occurs, a TCP Reno connection cannot proceed and relies only on the timeout recovery process, even though network resources are idle. We suggest two modified Slow-Start schemes that may be able to resolve this problem in TCP Reno.

Index terms--Transmission Control Protocol, TCP Reno, TCP Niagara, Slow-Start, Congestion Avoidance, Fast Retransmit, Fast Recovery, Window Bound, Rate of Change of Round-Trip Time, First Timeout Recovery

1 INTRODUCTION

The Transmission Control Protocol (TCP) specification [Postel81] stipulates the packet formats and methods of communication between two end hosts. We can design and modify the TCP flow and congestion control mechanisms as long as they retain all of the communication primitives defined in the specification. Currently, there is a de facto standard for TCP congestion control through the Internet Engineering Task Force (IETF) [Allman99]. This recommendation is commonly known as TCP Reno.

TCP in the Internet allows different connections to share network resources and provides connection reliability between applications at two end-systems. When there is too much network traffic, network congestion may occur; this may introduce instability to TCP traffic and may further instigate packet or connection drops. In order to provide connection stability, TCP regulates the traffic going through the network with several flow control mechanisms. There are four such mechanisms in the Reno algorithm: Slow-Start (SS), Congestion Avoidance (CA), Fast Retransmit, and Fast Recovery. Though the flow control in Reno is well designed in general, continuous research is being carried out on new end-to-end TCP flow and congestion controls. TCP Vegas, developed by Brakmo et al. [Brak94, Brak95], is one example. There is also an improved version of Reno that uses the concept of partial acknowledgements; this newer version is known as NewReno [Floyd99]. Usually, each of these versions differs in only one flow control mechanism within the whole family.

TCP uses sliding window flow control. A parameter, the congestion window (cwnd), controls the sliding window size; for each TCP connection there is one associated cwnd parameter at the sender side. The amount of network traffic that can be generated without receiving acknowledgements from the receiver is bounded above by the sliding window size, the cwnd, and the receive buffer.
With the advancement of technology, receive buffer sizes are increasing because of the low cost of memory. However, the design of TCP Reno provides only a loose bound on the cwnd. Consequently, TCP Reno takes a comparatively aggressive approach to delivering information through the network, and the performance of a connection will likely be affected by other system and network parameters that are not associated with the controls on the congestion window in the Reno algorithm. In TCP Reno, both the Slow-Start and Congestion Avoidance processes perform addition operations on the congestion window; only the Fast Recovery mechanism reduces the size of the congestion window.

In this paper, we intend to design a responsible and responsive Slow-Start process. We therefore investigate different shortfalls in the existing design and examine several alternative approaches that may improve the Slow-Start design. In the following, we examine which system factors affect the design of the Slow-Start process. Even though there are several different TCP flow control families, they all use the same Slow-Start process [Law02]. Therefore, if Slow-Start performs badly, all TCP flow control families that use it will fail to achieve good flow control performance. In order to verify the design performance, we use the OPNET software simulation toolkit for all simulations carried out in this paper.

In section 2, we describe the operations of the Slow-Start process and examine different design parameters that may relate to its performance. Current TCP protocol designs are only interested in information such as the congestion window and round-trip time; indeed, these provide only partial information regarding network congestion. In section 3, we propose several designs that could be considered as replacements for the existing Slow-Start. In section 4, we conclude the findings made in this paper.

2 SLOW-START IN TCP RENO

TCP is a self-clocking protocol. It uses acknowledgements as clocks to strobe new packets into the network. When there are no segments in transit, such as at the beginning of a connection or after a retransmission timeout, the sender does not expect to receive any acknowledgements to serve as a strobe. Slow-Start is the mechanism that increases the amount of data in transit between the source and the destination.

With this Slow-Start process, data segments should be spaced roughly uniformly over each round-trip time. The objective of Slow-Start is to ramp up the traffic flow without sending a huge amount of traffic at the beginning of the connection, which could lead to unnecessary network congestion. As mentioned, instead of sending all the information from its send buffer at once after establishing a connection, a host initiates a connection by sending only a small amount of information, that is, either one or two segments. On perceiving that there is sufficient bandwidth on the channel, Slow-Start increases the sending rate exponentially in time. In TCP Reno, it typically increases the cwnd by one Sender's Maximum Segment Size (SMSS) upon receiving an acknowledgement (ACK) from the receiver. Through this operation, the congestion window in Slow-Start expands exponentially, approximately doubling the cwnd parameter per round-trip time.

Apart from the cwnd parameter, there exists another parameter that relates the cwnd to the Slow-Start and Congestion Avoidance processes. This parameter is known as the Slow-Start threshold, ssthresh. The role of ssthresh is simple: it marks the one-way transition point from Slow-Start to Congestion Avoidance whenever the cwnd reaches the value of this threshold. A connection in Congestion Avoidance will never re-enter the Slow-Start phase unless a timeout occurs. Currently, only a timeout event or the Fast Retransmit and Fast Recovery processes will reduce the size of the ssthresh. Whenever a new connection starts, the ssthresh is preset to a default value, i.e., 64 KBytes. In TCP Reno, the cwnd can only be increased in the Slow-Start and Congestion Avoidance processes, and it increases linearly in time during the CA process. The Slow-Start process stops whenever the cwnd reaches the preset value of the ssthresh; the connection then relies on the other congestion control mechanisms that control the values of the cwnd and ssthresh.

With the advancement of switching fabrics, the huge bandwidth available on optical fiber, and the improved computing power at end systems, the status of a network may change rapidly at different time instants and at different nodes. This implies that presetting a default value of ssthresh may not be the correct design, even though the value changes subsequently, because there are always scenarios that can create unfavourable conditions for a newly started TCP connection. This paper examines several options in designing the Slow-Start process. For example, in one of the proposed solutions, we allow a TCP connection to switch back to the Slow-Start process from the Congestion Avoidance stage. A preset value for the Slow-Start threshold should not be considered a correct or complete design. TCP Reno allows modification of the ssthresh through the Fast Recovery mechanism, but that is a reactive approach and may happen too late, after numerous packet losses. We would like to remove the ssthresh parameter, or update it more actively during the initial phase of the connection. There are also many other related parameters that the Slow-Start process has ignored. In the following, we first show the shortfalls in the existing design through extensive simulations.
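Before turning to the simulations, the per-ACK window update performed by Slow-Start and Congestion Avoidance, as described above, can be summarized by the following sketch. This is an illustrative Python fragment only; the variable names and the byte-counting granularity are ours and are not taken from any particular TCP implementation.

    # Simplified per-ACK cwnd update in TCP Reno (illustrative sketch only).
    # Assumptions: cwnd and ssthresh are in bytes; each new ACK covers one
    # full-sized segment.
    SMSS = 1024           # Sender's Maximum Segment Size, 1 KByte as in the simulations
    ssthresh = 65535      # default Slow-Start threshold (64 KBytes)
    cwnd = SMSS           # initial congestion window: one segment

    def on_ack():
        """Update cwnd when a new (non-duplicate) ACK arrives."""
        global cwnd
        if cwnd < ssthresh:
            # Slow-Start: grow by one SMSS per ACK, i.e., roughly doubling per RTT.
            cwnd += SMSS
        else:
            # Congestion Avoidance: grow by about one SMSS per RTT (linear in time).
            cwnd += SMSS * SMSS // cwnd

    def on_timeout():
        """Retransmission timeout: shrink ssthresh and restart Slow-Start."""
        global cwnd, ssthresh
        ssthresh = max(cwnd // 2, 2 * SMSS)
        cwnd = SMSS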
Figure 1: Simulation model (dumbbell topology): client stations, Router A (buffer size b), the bottleneck link between routers A and B (one-way propagation delay t_prop, link rate R bits/sec), Router B, and the server.

The simulation model is a dumbbell topology, as shown in Figure 1. We can modify the delay and bandwidth on each of the links. Moreover, we can adjust the buffer size in router A to test packet-loss-related performance, as well as the receive buffer size at the server for each connection. In all testing scenarios, the SMSS is set to 1 KByte, the receive buffer for each connection at the server is set to approximately 64 TCP segments (65535 bytes), and all the access links for the clients and the server run at 10 Mbps with 5 msec delay. Traffic flows from different clients set up TCP connections with the server. All TCP connections carry FTP file transfers. Data is backlogged for each FTP connection; therefore, whenever a TCP connection has spare cwnd space and is allowed by the receive buffer, it sends data segments immediately. With the elasticity of TCP, all FTP connections attempt to seize as much bandwidth as possible at the bottleneck link. The server is designed to have sufficient processing power to absorb all arriving segments from all clients instantly. Therefore, the only bottleneck is the link between routers A and B.

In practice, the size of a receive buffer is usually not large at the moment. With the low cost of memory and the complexity of new programs, larger receive buffers are likely to become common soon. In the simulations, we set the size of the receive buffer to 64 KBytes. This is only a moderately large window size, but it allows the sender to push more segments into the network in a short burst. On the other hand, if there were infinite buffering in the network, all TCP flow control mechanisms would work flawlessly.

We investigate TCP performance under different packet loss conditions. Therefore, different critical buffer sizes are set in order to test the performance of the Slow-Start process in TCP Reno. When the critical buffer in router A is set to 30 segments and two stations running TCP Reno, with ssthresh at the default value of 64 KBytes, start sending FTP data at time 30 sec, serious bursty segment losses occur for both connections. Neither connection can proceed any further even though the network is idle. These idle connections rely only on timeout events to send information slowly.

This is an obvious design flaw in the Slow-Start process. In the following section, we investigate two different methods that can help solve these bursty segment loss and connection hanging problems.

Figure 2: ssthresh breakdown with two TCP Reno connections, t_prop = 100 msec, b = 30 segments, and R = 2 Mbps.

3 TWO CORRECTIVE DESIGNS

This event relates to the default setting of the ssthresh. If the value of ssthresh is small, a connection may take a long time to reach the channel capacity. On the other hand, if it is too large, it may instigate heavy burst loss and, subsequently, numerous timeout events, as shown. Indeed, failed TCP connections can easily be demonstrated when the resources available to a connection are a receive buffer of about the ssthresh and a critical network buffer of about ssthresh/2, together with a congested or low-speed bottleneck. We should be aware that, in practice, the bandwidth-limited bottleneck link may be at a different place from the critical buffer location. A preset value of ssthresh should not be considered a correct or complete design. We should reconsider whether to change the Slow-Start design completely, or to adjust the default value of this ssthresh dynamically.

From the simulation results in Figure 2, we note that the available network buffer size, the ssthresh, and the last advertised receive buffer size are important factors in TCP flow control performance. However, the available network buffer is difficult to estimate; its availability relates strongly to packet loss and to the speed of the connection, and it is not estimated in current TCP flow control families. Since there is a last advertised window field in the TCP header, the sender knows the available receive buffer size. When the receive window is comparatively smaller than the ssthresh, the allowable packet rate is bounded by this advertised window instead of by the cwnd. This can be observed through the design of TCP Vegas [Brak94, Brak95].

In the following, two methods are suggested to improve the quality of the Slow-Start process without requiring any modifications to the TCP protocol primitives. The two designs can work together or separately. The improvements require extra computing power at the sending hosts, but they are executable with today's technology.

3.1 Window Bound and Rate of Change of Round-Trip Time

The Slow-Start threshold is set at 64 KBytes, but with the window scaling option, the advertised window may become larger than this preset default value of ssthresh in the future. The result shown in Figure 2 demonstrates the ineffectiveness of this ssthresh. In this proposed design, we modify this ssthresh, or simply convert it to another parameter, the window bound (WIB), expressed in octets. Through the design of TCP Vegas, we notice that the correct size of the cwnd should mostly stay smaller than the receive buffer. In this situation, the advertised window provides good traffic flow because the required bandwidth is comparatively small; the connection then slowly consumes network buffer resources and does not introduce unnecessary bursty packet loss. In this scenario, each TCP flow operates as a friendly traffic stream, and the ssthresh breakdown problem shown in Figure 2 is not noticeable. Based on the TCP protocol specification [Postel81], TCP can only send information up to the maximum allowed by either the cwnd or the last advertised window.
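As a reminder of this last constraint, the amount of unacknowledged data a sender may have outstanding is bounded by both windows at once. A minimal sketch follows; the function and variable names are ours, chosen for illustration only.

    def usable_window(cwnd, advertised_window, bytes_in_flight):
        # The send window is the smaller of the congestion window and the
        # last advertised (receive) window; data already in flight uses up
        # part of it.
        effective_window = min(cwnd, advertised_window)
        return max(effective_window - bytes_in_flight, 0)

    # Example: once cwnd outgrows the 64-KByte receive buffer, the advertised
    # window (not cwnd) is what limits further transmissions.
    print(usable_window(cwnd=80_000, advertised_window=65_535, bytes_in_flight=50_000))  # 15535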
In this first proposed method, we introduce an upper bound on the congestion window during the Slow-Start phase. Based on many simulations, we would like to contain the value of the cwnd within one half of the maximum advertised window during the course of the connection. Therefore, we define:

WIB = (maximum advertised window received) / 2. (EQ1)

We also propose to provide dynamic transitions between the different flow control mechanisms. More specifically, we allow the process transition from Slow-Start to Congestion Avoidance, as well as from Congestion Avoidance back to Slow-Start. This behaviour does not occur in any existing TCP flow control family. In TCP Vegas, the authors examined the difference between the expected throughput rate and the actual throughput rate. At the University of Toronto, we are proposing a new TCP flow control family known as TCP Niagara; this new family changes the designs of almost all existing flow control mechanisms. Moreover, we introduce the concept of a smoothed average of the measured round-trip time (RTT). We define the following variables:

RTT: the last measured round-trip time, in seconds;
sartt_n: the smoothed average value of the round-trip time of a TCP connection at the nth returned ACK packet between the two end-systems, with a weighting parameter a;
c: the rate of change of the round-trip time with respect to its last moving average:

c = (RTT - sartt_{n-1}) / sartt_{n-1}. (EQ2)

We remove the threshold parameter, ssthresh, and introduce instead an upper bound variable, WIB, and a rate-of-change variable of the RTT, c. The variable c defines the transition point between the two flow control mechanisms, Slow-Start and Congestion Avoidance. Moreover, we define a constant K for this transition variable; currently, K is suggested to be at least 0.5. We further propose to calculate c based on the smoothed average of the RTT, sartt; this should avoid heavy fluctuations in c that could cause rapid swinging between the Slow-Start and Congestion Avoidance processes. In this design, every new TCP connection starts with the Slow-Start process. The operation details are as follows (a sketch of this transition test is given at the end of this subsection):

In Slow-Start,
o If c > K and cwnd < WIB, keep running Slow-Start;
o Otherwise, move to Congestion Avoidance.
In Congestion Avoidance,
o If c < K and cwnd < WIB, move to Slow-Start;
o Otherwise, keep running Congestion Avoidance.

Figure 3: Four TCP Niagara connections with WIB, K = 0.5, t_prop = 100 msec, b = 30 segments, and R = 2 Mbps.

In Figure 3, we kept the same stringent network conditions as in Figure 2, and we deployed the Congestion Avoidance process from TCP Niagara. The operation of TCP Niagara is similar to TCP Vegas, but it guarantees fairness between flows with similar propagation delays. In this result, the congestion window of each connection mostly moved between 20 and 30 KBytes. This indicated that an ssthresh set at 64 KBytes was not appropriate. It further indicated that when the advertised window was as large as the ssthresh, the effective window was not constrained properly, and that might cause the ssthresh breakdown. Hence, the advertised window may actually be a more important parameter for the stability of a TCP connection. In the simulations, we set K to 0.5 and WIB according to (EQ1), i.e., 32 KBytes. All four TCP flows operated properly in the simulations. Furthermore, network resources were shared evenly among the four connections between 50 and 55 seconds.
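The transition test above can be sketched as follows. This is an illustrative Python fragment under our assumptions: the formula for c follows our reconstruction of (EQ2), the update rule for sartt is assumed to be an exponentially weighted moving average with weight a (the text only names the parameter a), and the phase labels and function names are ours.

    K = 0.5        # transition constant suggested above
    a = 0.125      # assumed smoothing weight; the text only names the parameter a

    def update_rtt(sartt_prev, rtt_sample):
        # c per (EQ2): relative change of the new RTT sample against the last
        # smoothed average; sartt is then refreshed as an exponentially
        # weighted moving average (our assumption for the update rule).
        c = (rtt_sample - sartt_prev) / sartt_prev
        sartt_new = (1 - a) * sartt_prev + a * rtt_sample
        return c, sartt_new

    def next_phase(phase, c, cwnd, wib):
        # Transition rules exactly as listed above.
        if phase == "SLOW_START":
            return "SLOW_START" if (c > K and cwnd < wib) else "CONG_AVOID"
        return "SLOW_START" if (c < K and cwnd < wib) else "CONG_AVOID"

    # WIB per (EQ1): half the maximum advertised window received,
    # e.g., 32 KBytes for the 64-KByte receive buffer used in the simulations.
    wib = 65535 // 2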

3.2 First Timeout Recovery (FTR) Process

For the second corrective design, we describe an intuitive solution to the ssthresh breakdown. With the Slow-Start breakdown, a connection simply dangles because of burst loss and relies on timeout recovery to keep itself active; the network bandwidth is left idle after the connection hangs. However, we do gain knowledge about the network through the first timeout, and a connection can adjust itself based on this learned information. We therefore introduce a method known as the First Timeout Recovery (FTR) process. The design of FTR is simple and straightforward, but it can resolve the ssthresh breakdown immediately.

Figure 4: Two TCP Reno connections with First Timeout Recovery, t_prop = 100 msec, b = 10 segments, and R = 2 Mbps.

In this design, we keep the ssthresh parameter, but we propose a simple change to the setting of ssthresh after the first timeout event of a new TCP connection. The rules of the FTR are:
o Record the value of the ssthresh at the first timeout and reset it as ssthresh = ssthresh / 2.
o Resume the TCP connection from the Slow-Start phase with the reset ssthresh, and resend all packets with sequence numbers from the first packet that instigated the timeout event onward.
o Then return to the normal TCP flow control system.

The FTR worked well, as demonstrated in Figure 4. In the simulations, we retained the unfavorable network conditions for the two TCP Reno connections, but the network buffer in router A was changed to hold only 10 packets. As a result, the ssthresh is cut down to 16 KBytes with the FTR, and both connections switch to Congestion Avoidance earlier, after the first timeout. There are subsequent segment losses, but they are not bursty; these packet losses can be recovered through the Fast Retransmit and Fast Recovery processes in TCP Reno.

The concept of FTR is simple. However, a drawback of this design is that we may send many duplicated packets through the network; as a result, the goodput of the connection may be degraded. The design of FTR is similar to go-back-n flow control in a congestion situation. We should not deploy FTR if all other flow control mechanisms are working properly. More importantly, we only intend to use this method after the first timeout; on other timeout occasions, we rely on the normal timeout recovery process in TCP. The role of FTR is to provide an estimate related to the available network buffer, which is difficult to estimate through the available measurements. In the simulation result shown in Figure 4, the ssthresh is modified again through the Fast Recovery process, which may provide a more accurate setting of the ssthresh if the connection was in the Congestion Avoidance process before the packet loss. This is the main reason we only carry out FTR upon the first timeout.
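A sketch of the FTR behaviour on a retransmission timeout follows, under our reading of the rules above. The class, method, and callback names are illustrative and are not taken from any TCP implementation; the go-back-n resend is represented by an assumed resend(seq) callback.

    SMSS = 1024

    class FtrSender:
        """Illustrative sender state for the First Timeout Recovery rule."""

        def __init__(self, ssthresh=65535):
            self.ssthresh = ssthresh
            self.cwnd = SMSS
            self.first_timeout_seen = False

        def on_retransmission_timeout(self, oldest_unacked_seq, resend):
            # resend(seq) is assumed to retransmit every outstanding segment
            # from seq onward, in go-back-n fashion.
            if not self.first_timeout_seen:
                self.first_timeout_seen = True
                # FTR: record and halve ssthresh on the first timeout only,
                # then resume from Slow-Start and resend from the segment
                # that triggered the timeout onward.
                self.ssthresh //= 2
                self.cwnd = SMSS
                resend(oldest_unacked_seq)
            else:
                # Later timeouts fall back to the usual Reno recovery:
                # ssthresh tracks half of the current window before
                # restarting Slow-Start.
                self.ssthresh = max(self.cwnd // 2, 2 * SMSS)
                self.cwnd = SMSS
                resend(oldest_unacked_seq)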
4 CONCLUSIONS

We found that the original goal of the Slow-Start process, ramping up traffic rapidly, may instead harm system performance by hanging connections in long timeout recovery processes. This unfavorable condition certainly depends heavily on the system and network settings, but it is sufficient to show that there are insufficiencies in the existing Slow-Start process. We proposed two possible remedial solutions in this paper that were able to correct the problems indicated. The rate of change of the round-trip time allows a fast ramp-up phase without using the rigidly preset value of the ssthresh. However, we need to restrict this aggressive multiplicative increase, which can entail serious bursty packet loss; therefore, we introduce a window bound constraint for this dynamic rate-of-change operation. This bound is based on the available buffer size at the receiver. The other proposed approach is the First Timeout Recovery process. This is a reactive design that acts only after the first timeout event, but its goal is to provide a reasonable setting of the traditional ssthresh parameter that may relate to the achievable throughput of the bottleneck link in the network.

5 REFERENCES

[Allman99] M. Allman, V. Paxson, W. Stevens, "TCP Congestion Control," RFC 2581, IETF, April 1999.
[Brak94] L. Brakmo, S. O'Malley, L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM '94, pp. 24-35, Aug. 1994.
[Brak95] L. Brakmo, L. Peterson, "TCP Vegas: End-to-End Congestion Avoidance on a Global Internet," IEEE J. Selected Areas Commun., vol. 13, no. 8, pp. 1465-1480, Oct. 1995.
[Floyd99] S. Floyd, T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," RFC 2582, IETF, April 1999.
[Jacob88] V. Jacobson, "Congestion Avoidance and Control," in Proc. ACM SIGCOMM '88, pp. 314-329, Aug. 1988.
[Law02] K.L.E. Law, W.-C. Hung, "Problems Investigation in TCP Flow Control Mechanisms," to appear in Dynamics of Continuous, Discrete and Impulsive Systems.
[Postel81] J. Postel, "Transmission Control Protocol," RFC 793, IETF, September 1981.