Performance Evaluation of TCP Westwood


Summary

This project looks at a fairly new flavour of the Transmission Control Protocol, TCP Westwood, and aims to investigate how it differs from other flavours of the protocol, especially TCP Reno, which is currently the most commonly used version on the Internet. The purpose of the project is to discover, via simulation, how the changes which TCP Westwood makes in the TCP sender affect the transmission of TCP packets, and to evaluate its performance in comparison with TCP Reno, to see whether the changes constitute an improvement to the protocol. The project involved several stages:

- investigating the Transmission Control Protocol
- investigating the differences between the various flavours of TCP, especially TCP Westwood and TCP Reno
- learning how to use the network simulator ns-2
- designing simulation experiments
- writing simulation scripts for the ns simulator
- running these scripts and collating their results
- using these results to evaluate the performance of TCP Westwood
- comparing the findings with those of other researchers

Contents

Chapter 1 - Introduction
  1.1 Aims of the project
  1.2 Minimum Requirements
Chapter 2 - Background to the Problem
  2.1 Introduction
  2.2 TCP Basics
    2.2.1 Congestion Control
  2.3 TCP Flavours
    2.3.1 TCP Tahoe
    2.3.2 TCP Reno
    2.3.3 TCP Vegas
    2.3.4 TCP SACK
    2.3.5 TCP Westwood
    2.3.6 Comparison of TCP Flavours
Chapter 3 - Experiment Design
  3.1 Introduction
  3.2 Dumb-bell Topology
    3.2.1 Experiments to be conducted
  3.3 Sending Two Types of Traffic across a Network
  3.4 Lossy Links
  3.5 Summary
Chapter 4 - Implementation
  4.1 Introduction
  4.2 Simulation
  4.3 Discussion of the experiments
    4.3.1 TCP Traffic across a bottleneck link
    4.3.2 TCP Traffic across a lossy link
    4.3.3 TCP Traffic competing with UDP Traffic
  4.4 Comparison of results with findings of other research
Chapter 5 - Project Evaluation
  5.1 Introduction
  5.2 Experimental Conclusions
    5.2.1 Packets Dropped
    5.2.2 Throughput
  5.3 Success of the Project
  5.4 Possible Improvements
  5.5 Possible Extensions
Chapter 6 - Conclusion
References
Appendix A - Reflection on the Project Experience

Appendix B - Simulation Scripts
  B.1 TCP Traffic over a Bottleneck Link
  B.2 Mixed Traffic over a Bottleneck Link
  B.3 TCP Traffic over a Lossy Link
Appendix C - Perl Scripts for Data Extraction
Appendix D - Experiment Results
  D.1 Packets Dropped
  D.2 Average TCP Throughput
  D.3 Variation in TCP Throughput
  D.4 UDP Throughput
  D.5 Slow Start Threshold and Congestion Window

Table of Figures

2.2.1 The congestion control algorithm
2.2.2 Fast recovery
3.2 Dumb-bell topology

4.3 Experiment results
4.3.1 TCP Traffic across a bottleneck link
  4.3.1.1 Packets dropped when bottleneck bandwidth causes congestion
  4.3.1.2 Average throughput when bottleneck bandwidth causes congestion
  4.3.1.3 Variation in throughput when bottleneck bandwidth causes congestion
  4.3.1.4 Packets dropped when buffer size causes congestion
  4.3.1.5 Average throughput when buffer size causes congestion
  4.3.1.6 Variation in throughput when buffer size causes congestion
4.3.2 TCP Traffic across a lossy link
  4.3.2.1 Packets dropped with lossy link
  4.3.2.2 Average throughput with lossy link
  4.3.2.3 Variation in throughput with lossy link
  4.3.2.4 cwnd and ssthresh for TCP Reno
  4.3.2.5 cwnd and ssthresh for TCP Westwood
4.3.3 TCP competing with UDP
  4.3.3.1 Packets dropped when bottleneck bandwidth causes congestion
  4.3.3.2 Average throughput when bottleneck bandwidth causes congestion
  4.3.3.3 UDP throughput when bottleneck bandwidth causes congestion
  4.3.3.4 Packets dropped when buffer size causes congestion and TCP competing with UDP
  4.3.3.5 Average throughput when buffer size causes congestion and TCP competing with UDP
  4.3.3.6 UDP throughput when buffer size causes congestion

Appendix D: Experiment results
D.1 Packets dropped
  D.1.1 when bottleneck bandwidth causes congestion
  D.1.2 when buffer size causes congestion
  D.1.3 with lossy link
  D.1.4 when bottleneck bandwidth causes congestion and TCP competing with UDP
  D.1.5 when buffer size causes congestion and TCP competing with UDP
D.2 Average throughput
  D.2.1 when bottleneck bandwidth causes congestion
  D.2.2 when buffer size causes congestion
  D.2.3 with lossy link
  D.2.4 when bottleneck bandwidth causes congestion and TCP competing with UDP
  D.2.5 when buffer size causes congestion and TCP competing with UDP
D.3 Variation in throughput
  D.3.1 when bottleneck bandwidth causes congestion
  D.3.2 when buffer size causes congestion
  D.3.3 with lossy link
D.4 UDP throughput
  D.4.1 when bottleneck bandwidth causes congestion
  D.4.2 when buffer size causes congestion
D.5 cwnd and ssthresh
  D.5.1 cwnd and ssthresh for TCP Reno
  D.5.2 cwnd and ssthresh for TCP Westwood

Chapter 1. Introduction

1.1 Aims of the project

The aim of this project is to investigate a new flavour of the Transmission Control Protocol, TCP Westwood, and to evaluate its performance, comparing it with the currently predominant flavour, TCP Reno. TCP Westwood was recently proposed as a modification of TCP Reno designed to provide faster recovery (by not reducing the values of the congestion window and the slow start threshold too conservatively) and better congestion avoidance than Reno. TCP Westwood can be deployed in wired, wireless and mixed networks [5]. Westwood sets its slow start threshold in a completely different way to Reno, and this should cause the protocol to perform differently. Research literature (including [5], which reports observed throughput improvements of up to 550%) suggests that large improvements in performance occur when TCP Westwood is used. This makes the protocol interesting and worth further investigation. The purpose of this project is, therefore, to investigate this new flavour's enhancements and to see how they affect performance in various congestion situations. The method used to evaluate the performance of TCP Westwood in this project is simulation, using the network simulator ns-2.

1.2 Minimum Requirements

The project had the following minimum requirements:

- to gain an understanding of TCP
- to understand the differences between the various TCP flavours
- to understand how TCP Westwood is different from the other flavours and how these differences affect its performance
- to design simulation experiments to compare TCP Westwood with TCP Reno

The following tasks were possible extensions to these minimum requirements:

- to write and run simulation scripts to conduct the above experiments
- to use the results of these scripts to evaluate the performance of TCP Westwood
- to compare these results with those obtained by other researchers

The next chapter introduces the Transmission Control Protocol and explores the differences between some of its many flavours.

Chapter 2. Background to the Problem

2.1 Introduction

An understanding of how the various flavours of TCP differ, and how these differences affect their performance, is necessary for the purpose of evaluating the performance of TCP Westwood. TCP Westwood was recently proposed as a modification of TCP Reno designed to provide faster recovery (by not reducing the values of the congestion window and the slow start threshold too conservatively) and better congestion avoidance than Reno. Its performance is better than that of TCP Reno in both wired and wireless networks, and also in mixed networks which use both wired and wireless connections [5].

2.2 TCP Basics

TCP stands for Transmission Control Protocol. TCP is a reliable end-to-end protocol which adapts to network conditions to guarantee transmission of data. It was designed to provide reliable performance even when the network on which it runs is unreliable [18]. It is the Transmission Control Protocol, rather than the Internet Protocol (IP), which is responsible for guaranteeing the delivery of packets. This means that TCP needs to implement timers or some other mechanism to discern when packet loss occurs. It also needs to retransmit packets when they are lost. TCP uses a sliding window protocol to determine when packet loss has occurred. This works as follows:

- Whenever the sender transmits a segment, it also starts a timer.
- When the segment arrives at the receiver, the receiver sends back a segment (which may or may not contain data, depending on whether the receiver has any data it wishes to transmit) which includes an acknowledgement number identifying the next segment the receiver expects to receive.
- If the sender's timer expires before it receives the acknowledgement (ACK), the sender retransmits the segment. Otherwise (i.e. if the ACK is received before the timer expires) the segment has been received successfully and no further action is necessary.

This protocol is not foolproof. It is possible for segments to experience long delays in transmission. They eventually arrive at the receiver, but in the meantime the sender has timed out and retransmitted such packets, since it believes them to have been lost. It is also possible that segments arrive at the receiver out of order (it is the TCP receiver which is responsible for rearranging packets into the correct order once they have been received), causing some packets to be received but not acknowledged, since TCP acknowledges segments in order. Hence higher-numbered packets may be received but cannot be acknowledged until a lower-numbered segment, which has taken a different route through the network and been delayed, arrives. If this occurs, it is possible that the sender will time out for these higher-numbered segments, since the delay in acknowledgement is so great. The sender will then retransmit all such segments, even though this is unnecessary [18]. TCP Selective Acknowledgements (see section 2.3.4) attempts to address this problem.

The size of the segments which are transmitted is decided by the TCP software. However, each segment, including its header, must fit into 65,515 bytes, which is the size of the IP payload. The size of segments may also be constrained by the segment size which the network over which they are being transmitted can support, and by the amount of data the receiver can cope with. These considerations aside, TCP may put data from more than one write operation into one segment. Similarly, it may split the data from one write and put it into several separate packets. No matter what size the segment is, every TCP segment has a header of at least 20 bytes. It may also include data, although it is permissible for a segment to consist solely of a header [18].

When an application passes data to TCP to be transmitted, the TCP software may decide to transmit it immediately, or it may buffer the data. The application can override this decision to buffer in the case of urgent data, by using a PUSH flag [18].

A TCP connection requires both the sender and the receiver to create TCP sockets and to explicitly establish a connection between these sockets. This connection is full duplex, i.e. data can travel in both directions at the same time [18].

TCP uses acknowledgements and for this reason does not support multicasting or broadcasting, since the large number of acknowledgement packets which would be generated in such circumstances could overwhelm the sender [18].

2.2.1 Congestion Control

TCP congestion control uses an end-to-end approach, since TCP receives no feedback about the state of the network from the IP layer. The TCP sender must therefore monitor the network to infer when congestion is occurring. For example, when packet loss occurs, the TCP sender knows that there is congestion. Packet loss can be identified in two ways. Firstly, the TCP sender may time out. Secondly, the sender may receive three duplicate acknowledgements for the same segment of data. This indicates that the next segment in the sequence has been lost [13].

The TCP sender uses slow start and congestion avoidance algorithms to control the amount of data it transmits. These algorithms require a variable known as the congestion window (cwnd), which limits the amount of data the sender can send without the receipt of an ACK. Another variable, the slow start threshold (ssthresh), is also needed. The decision regarding which algorithm (slow start or congestion avoidance) to use at any given time is based on the value of ssthresh [1]. The decision criteria are as follows:

- when cwnd < ssthresh, slow start should be used
- when cwnd > ssthresh, congestion avoidance should be used [17]

These two algorithms are described in greater detail below, and the way in which they affect the size of the congestion window is depicted in fig 2.2.1. TCP uses an additive-increase multiplicative-decrease (AIMD) algorithm to control congestion [11].

When a TCP connection is initialised, the congestion window is set to the size of the maximum segment permissible [1]. The slow start threshold variable is initially 64KB [18]. This initial period is known as slow start, since the transmission begins very slowly. However, during slow start, the transmission rate increases exponentially, since it is likely that the sender is greatly under-using the available bandwidth. For every ACK received before its timer expires, the sender increases its congestion window by one segment, so that cwnd effectively doubles every round-trip time [1]. This exponential growth occurs until cwnd becomes equal to ssthresh, at which point the sender begins to increase its cwnd linearly, by roughly one segment per round-trip time. This is the congestion avoidance stage [18].
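To make the window dynamics concrete, the following toy sketch in Tcl (the scripting language used for the ns-2 simulations later in this project) shows the per-ACK window update. It is an illustration of the textbook algorithm with invented variable names, not code from any TCP implementation:

```tcl
# Toy sketch of the textbook window growth (not ns-2 code).
# cwnd and ssthresh are measured in segments; all names are illustrative.
set cwnd 1.0       ;# one maximum segment at connection start
set ssthresh 64.0  ;# some initial threshold, expressed here in segments

proc on_ack_received {} {
    global cwnd ssthresh
    if {$cwnd < $ssthresh} {
        # slow start: one extra segment per ACK, so cwnd doubles per RTT
        set cwnd [expr {$cwnd + 1}]
    } else {
        # congestion avoidance: roughly one extra segment per RTT
        set cwnd [expr {$cwnd + 1.0 / $cwnd}]
    }
}
```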

This linear growth continues until a loss event occurs. The way in which TCP reacts depends on the type of loss event [18]. If a timeout occurs, the sender resets ssthresh to be equal to half the value of cwnd at the time of the timeout. The variable cwnd is also reset to one maximum segment, and the slow start phase is re-entered [18]. This is demonstrated in fig 2.2.1, below.

[graph: cwnd (Kb) against transmission number] fig 2.2.1 - the congestion control algorithm

At the start of fig 2.2.1, the slow start threshold is set to 16Kb. When transmission begins, the amount of data sent in each transmission is doubled until this threshold value is reached; this occurs at the fourth transmission in the graph. At this point, transmission begins to increase linearly. At transmission 8, when the congestion window has reached a value of 20Kb, a timeout occurs and the slow start threshold is reset to half the value of the congestion window (1/2 * 20 = 10), while the congestion window itself is reset to 1Kb. At this point, slow start begins again until the new threshold value is reached (cwnd becomes 10Kb at transmission 13), after which transmission continues to grow linearly.

If the sender receives triple duplicate acknowledgements, it resets ssthresh to be equal to half the value of cwnd at the time of the receipt of the ACKs, and resets cwnd to be equal to the new value of ssthresh. This is known as fast recovery [17]. As well as fast recovery, TCP uses a fast retransmit algorithm when triple duplicate acknowledgements occur. When the third duplicate acknowledgement arrives at the sender, the sender takes this as an indication that a packet has been lost and immediately retransmits it, without waiting for a timeout [1].
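In the same illustrative Tcl style as the sketch above, Reno's two loss reactions can be summarised as follows; again this is a sketch of the textbook algorithm with invented procedure names, not simulator code:

```tcl
# Toy sketch of Reno's reaction to the two loss events (not ns-2 code).
proc on_timeout {} {
    global cwnd ssthresh
    set ssthresh [expr {$cwnd / 2.0}]   ;# half the window at the timeout
    set cwnd 1.0                        ;# one segment; slow start restarts
}
proc on_triple_dup_ack {} {
    global cwnd ssthresh
    set ssthresh [expr {$cwnd / 2.0}]
    set cwnd $ssthresh                  ;# fast recovery: stay in congestion avoidance
}
```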

Fig 2.2.2, below, demonstrates the fast recovery mechanism.

[graph: cwnd (Kb) against transmission number] fig 2.2.2 - fast recovery

As in fig 2.2.1, ssthresh is valued at 16Kb at the start of the graph in fig 2.2.2. Once again there is exponential growth of cwnd until this threshold is reached, at which point linear growth begins. The difference is that, at the 8th transmission, triple duplicate acknowledgements are received by the sender. At this point, cwnd is halved to become 10Kb, and ssthresh is also reset to 10Kb. Linear growth then continues, since the value of cwnd is already equal to the value of ssthresh.

2.3 TCP Flavours

There are many different flavours of the Transmission Control Protocol. The differences between the flavours mostly lie in the congestion control algorithms they implement. This section describes some of the flavours in use today: Tahoe, Reno, Vegas, SACK and Westwood.

2.3.1 TCP Tahoe

TCP Tahoe was the predecessor to TCP Reno. In the case of either type of loss event, Tahoe cut its congestion window to 1 packet and began the slow start algorithm [13]. Fig 2.2.1 above therefore depicts the behaviour of cwnd for TCP Tahoe for both types of loss event.

2.3.2 TCP Reno

Reno is the TCP flavour predominantly in use on the Internet at this time. Its congestion control mechanism is the additive-increase multiplicative-decrease (AIMD) algorithm explained in section 2.2.1 and depicted in figs 2.2.1 and 2.2.2. Reno improved upon Tahoe by introducing the fast recovery mechanism. The AIMD algorithm means that TCP Reno overreacts to loss events [11].

2.3.3 TCP Vegas

TCP Vegas attempts to prevent congestion from occurring in the network at all, by detecting network congestion before packets are lost. When it detects congestion, it lowers its transmission rate in order to reduce congestion and thus prevent packets from being lost. Vegas predicts packet loss by monitoring the round trip time (RTT) of packets: the longer the round trip time observed, the more congested the network is [13]. Unlike Reno, which uses multiplicative decrease of cwnd when a loss event occurs, Vegas uses linear decrease. The reason this is feasible is that Vegas performs its decrease before loss events occur [15]. Vegas can be implemented alongside any other flavour of TCP on a network; the differences all occur at the TCP sender, while the receiver is unchanged [3].

Although Vegas can interoperate with TCP Reno, its performance is affected by Reno in the case of a bottleneck in the network [12]. Since Vegas detects congestion earlier than Reno (which only becomes aware of congestion when packets are actually lost) and reduces its transmissions, it sacrifices some of the bandwidth available to it. Reno, however, continues to increase its congestion window so long as no packet loss occurs and, in this instance, will take over the bandwidth which Vegas has given up. Although Vegas is attempting to reduce congestion, Reno will continue to increase congestion by increasing its transmissions. In the presence of a bottleneck, Reno is not fair towards the Vegas implementation. Vegas is also unable to obtain a fair share of the available bandwidth in the presence of TCP Westwood. Vegas itself is fair [11].

Brakmo et al [3] found that TCP Vegas achieved between 40% and 70% better throughput than TCP Reno, thanks to its more efficient use of the bandwidth available to it. They also found that the amount of data Vegas retransmitted was between one-half and one-fifth of the amount retransmitted by Reno, showing that less packet loss occurred.

2.3.4 TCP SACK

TCP Selective Acknowledgements, or TCP SACK, attempts to avoid the problems that can occur due to data packets arriving out of order (as explained in section 2.2). It also attempts to improve performance when multiple packets from one window of data are lost [14]. SACK works by the receiver acknowledging all packets it has received, not just those received in order. This means that the sender knows exactly which packets have been lost, and need not retransmit those packets which have been correctly received [14]. Providing this extra information to the TCP sender makes the TCP implementation more complex [15].

2.3.5 TCP Westwood

TCP Westwood was recently proposed as a modification of TCP Reno designed to provide faster recovery (by not reducing the values of the congestion window and the slow start threshold too conservatively) and better congestion avoidance than Reno. Its performance is better than that of TCP Reno in both wired and wireless networks, and also in mixed networks which use both wired and wireless connections [5].

TCP Westwood monitors the rate at which the sender receives ACKs and uses this to estimate the available bandwidth of the connection being used. This measurement is made continuously. The sender can then use this estimate of the bandwidth to determine the new size of its congestion window and slow start threshold when loss events occur. This is known as faster recovery [5].

The congestion control mechanism of TCP Westwood is an additive-increase adaptive-decrease (AIAD) algorithm. When ACK packets are received, the sender increases its congestion window in the same way as TCP Reno increases its value of cwnd. In other words, the value of cwnd grows exponentially until ssthresh is reached, at which point it begins to grow linearly. The difference between Reno and Westwood is in the way they react to loss events. TCP Westwood computes its new cwnd and ssthresh values based on the bandwidth available to it, thus utilising the bandwidth more efficiently and reducing the rate of sending as little as possible [11]. If there are long periods of time during which no acknowledgements are received, then the estimate of the bandwidth is decreased. Duplicate ACKs cause the bandwidth estimate to be increased in the same way as first ACKs, since they show that a packet has got through [5]. The following equation shows how the bandwidth sample b_k is calculated upon receipt of an ACK at time t_k:

    b_k = d_k / (t_k - t_{k-1})

where d_k is the number of bytes acknowledged at time t_k, and t_{k-1} is the time at which the previous ACK was received by the sender [5].

TCP Westwood, like Reno, is unable to tell the difference between losses caused by congestion and random losses, i.e. those caused by a lossy link [19].

Grieco and Mascolo [11] found that TCP Westwood is fair when co-existing with TCP Reno. Not only is it fair, it is actually fairer than Reno in allocating bandwidth. However, it is not fair towards Vegas, which is unable to obtain a fair amount of bandwidth when co-existing with either Reno or Westwood. They also found that TCP Westwood provides better goodput than TCP Reno. Casetti et al [5] found that TCP Westwood greatly improves performance over mixed wired and wireless networks; they observed throughput improvements of up to 550% over TCP Reno.

Gerla et al [9] investigated how the use of multiple paths affects TCP transmissions. If packets are sent over different paths, it is likely that they will arrive out of order at the TCP receiver, causing duplicate acknowledgements to be generated. It was found that TCP Reno misinterprets these duplicate acknowledgements as a sign of network congestion, whereas TCP Westwood is not sensitive to them, since it uses an estimate of the available bandwidth in order to detect congestion.

TCP Westwood has been further developed into several new flavours, including Westwood+ [8] and Paced-Westwood [16]. These flavours are outside the scope of this project.
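Purely as an illustration, the following Tcl sketch combines the bandwidth sample above with the rule, shown in the comparison table of section 2.3.6 below, by which Westwood sets ssthresh after a loss (ssthresh = BWE * RTT_min / segment size). The 0.9/0.1 smoothing is an assumed placeholder for Westwood's low-pass filter [5], and all variable names are invented; this is not the ns-2 implementation:

```tcl
# Toy sketch of TCP Westwood's faster recovery (not the ns-2 implementation).
# bwe: bandwidth estimate (bytes/s); rtt_min: minimum observed RTT (s);
# seg_size: segment size (bytes). All names are illustrative.
proc on_ack {d_k t_k t_km1} {
    global bwe
    set sample [expr {double($d_k) / ($t_k - $t_km1)}]  ;# b_k = d_k / (t_k - t_{k-1})
    set bwe [expr {0.9 * $bwe + 0.1 * $sample}]         ;# assumed smoothing filter
}
proc on_loss_event {is_timeout} {
    global cwnd ssthresh bwe rtt_min seg_size
    set ssthresh [expr {$bwe * $rtt_min / $seg_size}]   ;# window that fills the pipe
    if {$is_timeout} {
        set cwnd 1.0          ;# timeout: slow start restarts
    } else {
        set cwnd $ssthresh    ;# 3 duplicate ACKs: resume at the estimated pipe size
    }
}
```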

2.3.6 Comparison of TCP Flavours

The table below shows how three TCP flavours (Reno, Vegas and Westwood) react when each of the two types of packet loss event occurs.

TCP Flavour | cwnd (3 dup ACKs)            | ssthresh (3 dup ACKs)        | cwnd (timeout) | ssthresh (timeout)
Reno        | ssthresh                     | 1/2 current cwnd             | 1 packet       | 1/2 current cwnd
Vegas       | ssthresh                     | 1/2 current cwnd             | 1 packet       | 1/2 current cwnd
Westwood    | BWE * RTT_min / segment size | BWE * RTT_min / segment size | 1 packet       | BWE * RTT_min / segment size

Abbreviations: cwnd - congestion window; ssthresh - slow start threshold; BWE - bandwidth estimate; RTT_min - minimum round trip time.

Note that although TCP Vegas reacts to loss events in exactly the same way as TCP Reno, these loss events occur less often with TCP Vegas due to its congestion avoidance mechanism, as described in section 2.3.3.

In the next chapter, the experimental design is presented.

Chapter 3. Experiment Design

3.1 Introduction

Since the improvements made in TCP Westwood are modifications to the congestion control mechanism of the protocol, the experiments used to evaluate its performance should simulate congestion situations within a network. Each congestion situation investigated must also be simulated using TCP Reno, in order to compare the two flavours of the protocol and to establish whether Westwood does, indeed, provide an improvement. There are various factors which contribute to congestion in a network. These include the number of data senders, the protocol the senders implement, the rate at which the senders transmit their data, the bandwidth available, the size of the buffers available at each router in the network, and the type of data (TCP, UDP etc.) being sent.

3.2 Dumb-bell Topology

The basic network topology to be used in the performance evaluation experiments is the dumb-bell topology, illustrated in fig 3.2 below.

[diagram: TCP senders - Router A - bottleneck link - Router B - TCP receivers] fig 3.2 - dumb-bell topology

The parameters of this simple topology can be varied in several ways in order to create congestion situations which allow the performance of TCP Westwood to be investigated. For each situation, the topology must be implemented twice: once with all senders implementing TCP Reno, and once with all senders implementing TCP Westwood. In each experiment, only one parameter should be varied. All other parameters should remain the same throughout the experiment, to ensure that the cause of the congestion is known. It is also important that the length of each simulation is sensible. If the duration is too short, not enough data will be gathered and few events will occur in the simulation. If the duration is too long, the trace files generated will be so huge that the post-processing of the data will take an unreasonable amount of time.

3.2.1 Experiments to be conducted with the dumb-bell topology

Using the topology depicted in fig 3.2 above, the following means of creating a congestion situation in the network will be investigated:

- varying the bandwidth of the bottleneck link
- varying the size of the buffer at Router A

3.3 Sending two types of traffic across a network

The User Datagram Protocol (UDP) is a connectionless protocol which, unlike TCP, does not implement any flow control or error control [18]. This means that when congestion occurs in a network across which UDP senders are transmitting data, the senders do not modify their behaviour in any way. By failing to respond to congestion situations, UDP senders can make congestion in a network worse. Further problems arise when UDP and TCP both run across the same network, since TCP reduces its transmission rate in the case of congestion while UDP uses any available bandwidth. As the TCP sender reduces its transmission, the UDP sender will take the bandwidth surrendered by the TCP sender.

TCP Reno may overreact in reducing its congestion window and slow start threshold when congestion occurs. TCP Westwood, however, alters its congestion window and slow start threshold according to the bandwidth it perceives to be available [5]. This may allow TCP Westwood to perform better when competing for bandwidth with a UDP sender, since it will relinquish less of the available bandwidth when congestion occurs, and this possibility should be investigated. To this end, the experiments described in section 3.2.1 above should be conducted a second time, with two of the senders and receivers in the dumb-bell topology implementing UDP rather than TCP.

3.4 Lossy Links

During transmission, packet loss is not always due to congestion in the network. The presence of a lossy link in the network, that is, one with physical defects which interfere with transmission, may also cause packets to be dropped. Both Reno and Westwood are unable to tell the difference between losses caused by congestion and random losses, i.e. those caused by a lossy link [19]. Since Westwood reacts to packet loss differently to Reno, experiments in which the bottleneck link is lossy should be conducted, to see whether TCP Westwood can provide an improvement in performance in these situations.

3.5 Summary

Each experiment will involve various parameters. The following parameters will remain constant in every experiment:

- packet size
- bandwidth of access links
- delay of access links
- delay of the bottleneck link

Overall, the number of senders and receivers will also remain constant, although in certain experiments some of these nodes may implement UDP rather than TCP, while in other experiments only TCP will be implemented. According to the experiment being conducted, the following parameters will be varied:

- bandwidth of bottleneck link
- buffer size at router A
- probability of loss on bottleneck link

The next chapter explains the implementation of the experiments.

Chapter 4. Implementation

4.1 Introduction

This chapter begins with a brief discussion of why simulation was considered the appropriate methodology for the performance evaluation. It then goes on to discuss the simulation experiments which were conducted and the results obtained. The graphs which depict the results for each experiment are provided in the appropriate sections of this chapter. They can also be found in Appendix D, grouped according to the data they represent, in order to enable an easier comparison of the way each performance metric is affected. The conclusions which can be drawn from the results provided here can be found in chapter 6.

4.2 Simulation

There are various methods available for evaluating the performance of a protocol. These include direct experimentation on a network, mathematical modelling and simulation. Direct experimentation may yield results which other methods miss; however, it is expensive and complex [4]. The downside to mathematical modelling is that it requires assumptions to be made, which may mean that the models do not accurately reflect reality. Simulation was chosen for this investigation because it offers a good compromise and is easier and cheaper than the other methods. Although other simulators such as YATS (Yet Another Tiny Simulator) [2] exist, the network simulator ns-2 was used for the performance evaluation. This simulator was chosen because it supports a wide range of protocols [4], including TCP Westwood, which is essential for the purposes of this project and which other simulators do not support. In order to obtain data from the simulator, scripts (see Appendix B) were written which created trace files. These trace files were then post-processed using two Perl scripts (see Appendix C) to extract the required data. In the writing of ns scripts, two tutorials [6, 10] were useful, as was the ns manual [7].
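The post-processing in this project was done with Perl (Appendix C). Purely to illustrate the kind of extraction involved, a small Tcl sketch that counts dropped packets is shown below; it assumes the standard ns-2 trace format, in which the first field of each line is an event code ('+' enqueue, '-' dequeue, 'r' receive, 'd' drop):

```tcl
# Illustrative sketch: count dropped packets ('d' events) in an ns-2 trace.
set f [open out.tr r]
set drops 0
while {[gets $f line] >= 0} {
    # field 0 of a standard ns-2 trace line is the event code
    if {[lindex $line 0] eq "d"} {
        incr drops
    }
}
close $f
puts "packets dropped: $drops"
```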

4.3 Discussion of the Experiments

Five experiments were conducted for the purposes of performance evaluation, and these are described in detail below. Each experiment used the dumb-bell topology (see fig 3.2, section 3.2) and investigated the throughput (the amount of data received per second by the TCP receivers) and the number of packets lost when a given parameter was altered in the network. In the experiments which involved only TCP traffic, the throughput was looked at in two ways: firstly, the average throughput across the TCP receivers was calculated; and secondly, the variation in the throughput was calculated, since throughput was not always equally shared among the TCP receivers. Throughput was studied since it is a good indicator of performance across a network: a large throughput indicates that a large amount of data is being correctly transmitted and received, and that this is taking place quickly. The greater the throughput, the better the performance of the protocol being used. The number of packets dropped was chosen as a performance metric since the loss of packets indicates that a congestion situation has occurred, and it is the congestion control mechanism which differs between TCP Reno and TCP Westwood and thus creates the potential for improvement. When TCP is used, any packets which are dropped must be retransmitted, which may affect performance by increasing the volume of traffic to be sent.

In each experiment, the access links between the senders and router A, and between router B and the receivers, each had a bandwidth of 100Mb and a delay of 1ms (see fig 3.2). The size of all packets transmitted was 2000 bytes. The parameters of the bottleneck link between routers A and B, and the size of the buffer at A, were dependent on the experiment being conducted. Each simulation was run several times to test the scripts when they were initially written. The results which follow were obtained from one run of each experiment; ideally, simulations would be run several times and the results averaged (see section 5.4). Some of the experiments provided results which appeared unusual (see fig 4.3.1.6, where the throughput variation for TCP Reno increases greatly when the buffer size is 40, for example), and these experiments were repeated to ensure the results were correct.
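The full simulation scripts are given in Appendix B. The fragment below is a hedged sketch of how the dumb-bell topology with the constant parameters above might be built in ns-2 OTcl; it is illustrative rather than a copy of the project's scripts, and the class name of the Westwood agent in particular depends on the Westwood patch installed, so it is indicated only in a comment:

```tcl
# Sketch of the dumb-bell scenario in ns-2 OTcl (cf. Appendix B for the real scripts).
set ns [new Simulator]
set tf [open out.tr w]
$ns trace-all $tf

set rA [$ns node]                          ;# Router A
set rB [$ns node]                          ;# Router B
$ns duplex-link $rA $rB 5Mb 40ms DropTail  ;# bottleneck link
$ns queue-limit $rA $rB 50                 ;# buffer size at Router A

for {set i 0} {$i < 4} {incr i} {
    set src($i) [$ns node]
    set dst($i) [$ns node]
    $ns duplex-link $src($i) $rA 100Mb 1ms DropTail   ;# access links
    $ns duplex-link $rB $dst($i) 100Mb 1ms DropTail

    # Reno run shown; a Westwood run would substitute the agent class
    # provided by the installed Westwood patch.
    set tcp($i) [new Agent/TCP/Reno]
    $tcp($i) set packetSize_ 2000
    set sink($i) [new Agent/TCPSink]
    $ns attach-agent $src($i) $tcp($i)
    $ns attach-agent $dst($i) $sink($i)
    $ns connect $tcp($i) $sink($i)

    set ftp($i) [new Application/FTP]
    $ftp($i) attach-agent $tcp($i)
    $ns at 0.0 "$ftp($i) start"
}

proc finish {} {
    global ns tf
    $ns flush-trace
    close $tf
    exit 0
}
$ns at 40.0 "finish"
$ns run
```

In the Westwood runs only the agent line changes; everything else in the scenario stays fixed, which matches the design principle of varying one parameter at a time (section 3.2).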

4.3.1 TCP Traffic across a bottleneck link

For the purposes of the two experiments described in this section, a dumb-bell topology with four TCP senders and four TCP receivers was used. The bottleneck link had a delay of 40ms. Each simulation ran for 40s, and each sender transmitted File Transfer Protocol (FTP) packets throughout the simulation. The first experiment used a buffer size of 50 at router A, and the parameter which was varied was the bandwidth of the bottleneck link. This was varied from 0.5Mb to 5Mb in 0.5Mb increments. The results can be seen in figs 4.3.1.1, 4.3.1.2 and 4.3.1.3 below.

[graph: packets dropped against bandwidth (Mb), Reno and Westwood] fig 4.3.1.1 - packets dropped when bottleneck bandwidth causes congestion

[graph: throughput (Kbps) against bandwidth (Mb), Reno and Westwood] fig 4.3.1.2 - average throughput when bottleneck bandwidth causes congestion

[graph: variation in throughput (Kbps) against bandwidth (Mb), Reno and Westwood] fig 4.3.1.3 - variation in throughput when bottleneck bandwidth causes congestion

Fig 4.3.1.1 shows that the number of packets dropped when using TCP Westwood tends to be fairly high regardless of the bandwidth, while the number of packets dropped when using Reno falls as the bandwidth increases and congestion thus decreases. Based on this performance metric, Reno tends to (although does not always) outperform TCP Westwood. Fig 4.3.1.2 shows that in this scenario TCP Westwood provided only a very slight improvement over Reno in terms of average throughput across the four receivers. However, in general the throughput was shared more fairly among the receivers when TCP Westwood was used, as demonstrated by fig 4.3.1.3, which shows a smaller variation in the throughput at the receivers with Westwood for most of the simulations.

The second experiment kept the bottleneck bandwidth constant at 5Mb, and varied the buffer size at router A from 10 to 70 in increments of 10. The results can be seen in figs 4.3.1.4, 4.3.1.5 and 4.3.1.6.

[graph: packets dropped against buffer size, Reno and Westwood] fig 4.3.1.4 - packets dropped when buffer size causes congestion

[graph: throughput (Kbps) against buffer size, Reno and Westwood] fig 4.3.1.5 - average throughput when buffer size causes congestion

[graph: variation in throughput (Kbps) against buffer size, Reno and Westwood] fig 4.3.1.6 - variation in throughput when buffer size causes congestion

This second experiment provided results very similar to the first. In this case, TCP Westwood always caused more packets to be dropped than TCP Reno, although the number of packets dropped by Westwood did decrease as the buffer size increased, as would be expected, since the larger buffer was able to store the packets the TCP Westwood senders were transmitting (see fig 4.3.1.4). Again, as fig 4.3.1.5 shows, average throughput was generally slightly higher with TCP Westwood, although this was not always the case. In this experiment, however, TCP Westwood shows a marked improvement over TCP Reno when the variation in throughput is examined, demonstrating a much greater fairness than Reno provides.

4.3.2 TCP Traffic across a lossy link

For the experiment described in this section, a dumb-bell topology with four TCP senders and four TCP receivers was again used. Each simulation ran for 40s, and each sender transmitted FTP packets throughout the simulation. Routers A and B each had a buffer size of 50, and the bottleneck link had a bandwidth of 5Mb and a delay of 40ms. The bottleneck link was a lossy link, whose probability of loss was varied from 0% to 0.5% in increments of 0.1%.
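A lossy bottleneck of this kind is typically modelled in ns-2 by attaching an error model to the link. The sketch below extends the topology fragment of section 4.3 (reusing $ns, $rA and $rB) and assumes the standard ErrorModel class is used; the project's actual script is in Appendix B.3:

```tcl
# Sketch: attach a uniform packet error model to the bottleneck link.
set em [new ErrorModel]
$em unit pkt                             ;# drop whole packets
$em set rate_ 0.005                      ;# 0.5% loss probability
$em ranvar [new RandomVariable/Uniform]
$ns lossmodel $em $rA $rB                ;# insert on the Router A -> Router B link
```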

The results can be seen in figs 4.3.2.1, 4.3.2.2 and 4.3.2.3.

[graph: packets dropped against probability of loss (%), Reno and Westwood] fig 4.3.2.1 - packets dropped with lossy link

[graph: throughput (Kbps) against probability of loss (%), Reno and Westwood] fig 4.3.2.2 - average throughput with lossy link

[graph: variation in throughput (Kbps) against probability of loss (%), Reno and Westwood] fig 4.3.2.3 - variation in throughput with lossy link

Fig 4.3.2.1 shows that, no matter what the probability of loss across the lossy link might be, TCP Westwood always causes more packets to be dropped than TCP Reno. This seems to indicate that it is not just the lossy link which is causing packet loss (see section 5.2.1 for a discussion of this). Fig 4.3.2.2, showing the average throughput, appears to be inconclusive: in some situations Westwood provides improvements of up to 0.5Kbps, while in others Reno provides similar improvements over Westwood. With this experiment, a comparison of figs 4.3.2.2 and 4.3.2.3 is useful, since it can be seen that in each case where Reno provided an improvement in average throughput, TCP Westwood was actually providing a fairer distribution of that throughput amongst the receivers. That is, Reno was causing one or two receivers to enjoy large throughputs at the expense of the other nodes, while Westwood distributed the throughput more fairly. Although this lowered the overall throughput slightly, each receiver obtained a fairly large throughput.

An extra experiment using a lossy link was conducted to demonstrate how the two TCP flavours vary their slow start thresholds and congestion windows when loss events occur. This experiment used only one TCP sender and one receiver. The access links had a bandwidth of 100Mb and a delay of 1ms. The bottleneck link had a bandwidth of 5Mb, a delay of 40ms and a probability of loss of 0.5%. The results of this experiment can be seen in figs 4.3.2.4 and 4.3.2.5.

[graph: cwnd and ssthresh over time] fig 4.3.2.4 - cwnd and ssthresh for TCP Reno

[graph: cwnd and ssthresh over time] fig 4.3.2.5 - cwnd and ssthresh for TCP Westwood

A comparison of these two graphs shows that the two TCP flavours set their slow start thresholds very differently. In fig 4.3.2.4, which depicts the threshold and the congestion window as set by TCP Reno, the threshold is relatively low for much of the simulation's duration. In contrast, fig 4.3.2.5, which shows TCP Westwood's slow start threshold and congestion window values, shows clear evidence of the faster recovery of TCP Westwood: the slow start threshold is set equal to the estimate of the available bandwidth after a timeout. This leads to a much higher slow start threshold being implemented by TCP Westwood, and this is clearly visible in fig 4.3.2.5. It should be noted that fig 4.3.2.4 shows no timeout events, whereas fig 4.3.2.5 shows two, where the value of cwnd suddenly drops. Despite the fact that timeouts have occurred with TCP Westwood, and not with TCP Reno, Westwood still implements a greater ssthresh.

4.3.3 TCP Traffic competing with UDP traffic

The two experiments described in this section used a dumb-bell topology with two TCP senders, two UDP senders, two TCP receivers and two UDP receivers. Each simulation ran for 40s; both TCP senders transmitted FTP packets throughout the simulation, while both UDP senders transmitted Constant Bit Rate (CBR) packets throughout. For these experiments, the total throughput for the UDP receivers was calculated along with the average throughput for the TCP receivers. The two experiments conducted with UDP competing for bandwidth used the same network parameters as the two experiments described in section 4.3.1. That is, one experiment investigated the effects of altering the bottleneck bandwidth, while the other investigated the alteration of the buffer size.
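The UDP flows would be set up along the following lines (the project's actual script is in Appendix B.2); $udpSrc and $udpDst stand for one sender/receiver pair of the dumb-bell, and the CBR rate shown is an illustrative assumption, not a value taken from the report:

```tcl
# Sketch: one UDP flow with constant-bit-rate traffic competing with TCP.
set udp [new Agent/UDP]
set null [new Agent/Null]
$ns attach-agent $udpSrc $udp
$ns attach-agent $udpDst $null
$ns connect $udp $null

set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set packetSize_ 2000
$cbr set rate_ 1Mb        ;# illustrative rate, not a value from the report
$ns at 0.0 "$cbr start"
```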

The results of the first experiment involving UDP, in which the bottleneck bandwidth was altered from 0.5Mb to 5Mb in 0.5Mb increments to create a congestion situation, can be observed in figs 4.3.3.1, 4.3.3.2 and 4.3.3.3 below.

[graph: packets dropped against bandwidth of bottleneck link (Mb), Reno and Westwood] fig 4.3.3.1 - packets dropped when bottleneck bandwidth causes congestion and TCP competing with UDP

[graph: throughput (Kbps) against bandwidth (Mb), Reno and Westwood] fig 4.3.3.2 - average TCP throughput when bottleneck bandwidth causes congestion and TCP competing with UDP

[graph: throughput (Kbps) against bandwidth (Mb), UDP competing with Reno and with Westwood] fig 4.3.3.3 - UDP throughput when bottleneck bandwidth causes congestion

Yet again, fig 4.3.3.1 shows that TCP Reno causes fewer packets to be dropped than does TCP Westwood. However, in this experiment TCP Westwood shows, in fig 4.3.3.2, a more notable increase in average throughput compared to TCP Reno, and this average throughput is higher for TCP Westwood in all cases. Fig 4.3.3.3 shows the total UDP throughput when the protocol is competing with each flavour of TCP. It can be seen that TCP Westwood reduces the UDP throughput slightly, thus ensuring a friendlier distribution of the bandwidth and showing that TCP Westwood can compete more effectively with UDP than can Reno.

The results of the second experiment, in which the buffer size was altered from 10 to 70 in increments of 10 to create congestion, can be seen in figs 4.3.3.4, 4.3.3.5 and 4.3.3.6.

[graph: packets dropped against buffer size, Reno and Westwood] fig 4.3.3.4 - packets dropped when buffer size causes congestion and TCP competing with UDP

[graph: throughput (Kbps) against buffer size, Reno and Westwood] fig 4.3.3.5 - average TCP throughput when buffer size causes congestion and TCP competing with UDP

[graph: throughput (Kbps) against buffer size, UDP competing with Reno and with Westwood] fig 4.3.3.6 - UDP throughput when buffer size causes congestion

Fig 4.3.3.4 shows TCP Westwood still causing more packets to be dropped than TCP Reno. Fig 4.3.3.5, however, shows the greatest improvement seen in the average throughput when TCP Westwood is used. Fig 4.3.3.6 confirms these findings, showing a marked reduction in UDP throughput when TCP Westwood is competing for bandwidth, compared with TCP Reno, which struggles against UDP.

4.4 Comparison of results with findings of other research

Comparing the findings of the above experiments with those reported in the literature on TCP Westwood provides a basis for examining the validity of the results obtained. This will be useful in evaluating the success of the project in the next chapter. Much of the literature [5, 9, 11] has looked at scenarios, such as wireless or satellite links, not studied in this project, and thus their findings are not directly comparable with the results discussed here. Their findings were mentioned, however, in section 2.3.5. Zanella et al [19] found that the use of TCP Westwood always led to higher average throughput than did Reno. This clearly undermines some of the results discussed in section 4.3, since Westwood was found to achieve lower average throughput than Reno in some situations. Casetti et al [5] investigated the fairness of TCP Westwood and showed it to be as good as, if not better than, the fairness of TCP Reno. This supports the results discussed above, whereby the variation in throughput at the receivers was generally lower (and thus fairer) when TCP Westwood was used. This was also shown to be the case by Grieco and Mascolo [11].

In the next chapter, conclusions are drawn from these results, and the project is evaluated.

Chapter 5. Project Evaluation

5.1 Introduction

This chapter looks at the way in which the project was conducted and evaluates the level of success which was achieved. It also aims to explain how the project could have been improved, and to look at ways in which the project could be further developed in the future.

5.2 Experimental Conclusions

The performance evaluation conducted here is based upon the results obtained through experimentation and discussed in chapter 4. It should be borne in mind that these results did not always agree with the literature on the topic. Sections 2.3.5 and 4.4 provide more detail on the findings reported in the literature and the way in which the performance of TCP Westwood was evaluated in those papers.

5.2.1 Packets Dropped

It has been seen that TCP Westwood usually causes more packets to be dropped than TCP Reno in a congestion situation. This indicates that TCP Westwood, depending on the limiting factor in the network, is causing buffer capacity to be reached more quickly and more often than TCP Reno, or is causing the bottleneck bandwidth to be fully utilised more quickly than TCP Reno. Given the way in which the protocol's congestion control mechanism has been altered, this is to be expected. While TCP Reno has what is known as fast recovery, TCP Westwood has faster recovery. That is, Westwood resets its slow start threshold using its estimate of the available bandwidth after a loss event. This means that TCP Westwood generally sets a much higher threshold than does TCP Reno (see figs D.5.1 and D.5.2 in Appendix D), and so saturation of the available bandwidth is reached much more quickly. This would account for more packets being dropped, since Westwood does not reduce its transmission rate as much as Reno.

5.2.2 Throughput

In the majority of cases, a higher throughput was achieved when TCP Westwood was used than when TCP Reno was implemented. However, in some instances (see figs D.2.2 and D.2.3 in Appendix D) Reno did provide a slightly higher throughput. The experiments in which Westwood always outperformed Reno were those involving competition between TCP and UDP for the available bandwidth. This improvement was especially noticeable when the size of the buffer at the router caused the congestion situation (see fig D.2.5). The variation in the throughput was also investigated. Although Reno occasionally outperformed Westwood, in general TCP Westwood provided a fairer distribution of the available bandwidth, so that each receiver had a similar throughput. In every instance of Reno providing a higher average throughput than Westwood, a lower variation in throughput amongst the receivers, and thus a fairer distribution, was obtained by TCP Westwood.

In those experiments involving UDP, the overall UDP throughput was also investigated. The findings here reinforced the observations made earlier, whereby TCP Westwood always provided a higher TCP throughput in the presence of UDP transmissions. In both UDP experiments, the UDP throughput was lower in the presence of TCP Westwood than in the presence of TCP Reno. This indicates that Westwood is better able to compete with UDP, due to its faster recovery mechanism. By not backing off as much as Reno after loss events occur, TCP Westwood sacrifices less of its bandwidth to UDP and thus achieves a better throughput.

5.3 Success of the Project

The project has been successful in that it has fulfilled the objectives which were originally stated. That is, the Transmission Control Protocol and its various flavours have been investigated; simulation experiments have been designed; scripts to conduct these experiments using the network simulator ns-2 have been written and run; the results obtained have been collated and discussed; and, finally, the results have been used to compare the performance of TCP Westwood with that of TCP Reno, and thus to evaluate the new flavour.

However, although the overall aims have been achieved, this has not necessarily been done in the optimum manner. As discussed in section 4.4, the results obtained did not always agree with the results of other researchers. As already mentioned, however, much of the other research studied investigated different scenarios to those investigated here, and thus direct comparison may not be conclusive in establishing the validity of the experiments' results. Had time permitted, the results could have been improved and made more reliable by running each experiment multiple times and conducting statistical analyses of the results.

5.4 Possible Improvements

The data extracted from the trace files generated by the simulation scripts were used to investigate two performance metrics: throughput and the number of packets dropped. There are, however, various other performance metrics, such as the number of packets retransmitted, the goodput (throughput with retransmitted packets subtracted) and the packet delay in the network. Since it has been shown in chapter 4 that TCP Westwood does not provide an improvement in the number of packets dropped but does, sometimes, provide an improvement in the throughput, it would have been useful to investigate one or more of these other performance metrics to give a better overall picture of the way in which using TCP Westwood affects the transmission of packets.

As mentioned in section 5.3, it would have been useful to conduct each experiment many times. The mean of the results could then have been calculated, along with statistical measures such as the standard deviation. This would have made the results more robust and more reliable, by reducing the effect of any faulty results.

5.5 Possible Extensions

The experiments conducted for the purposes of this investigation used a single, very simple topology. Real-world situations in which TCP is used are likely to involve much more complex topologies than the dumb-bell topology. Such topologies could be an area for further investigation into the differences in performance between TCP Westwood and Reno.

The experiments have also been limited in the transmission media simulated: all the links in the dumb-bell topology were physical links. Modern computer networks may involve wireless or satellite links in addition to, or instead of, fibre connections. Such wireless and mixed-media networks have been investigated in the literature [5, 9, 11], and TCP Westwood has been shown to provide significant improvements in throughput and goodput in such scenarios. A study of Westwood's performance over different transmission media would provide an interesting and possibly insightful extension to this project.