SIMULATION STUDY OF AQM SCHEMES OVER LARGE SCALE NETWORKS IN DATA CENTERS USING NS-3


SIMULATION STUDY OF AQM SCHEMES OVER LARGE SCALE NETWORKS IN DATA CENTERS USING NS-3

A Master's Project submitted to the Graduate Faculty of the Louisiana State University, Agricultural and Mechanical College, in partial fulfillment of the requirements for the degree of Master of Science in System Science in the Division of Computer Science and Engineering, School of Electrical Engineering and Computer Sciences.

By Onakome Utejiro Okah-Avae, March 2014

Dedication

This project is dedicated to my late mum, who passed away exactly 10 years ago. I love you, mum; even in death, you still saw your son through graduate school.

Acknowledgments

I would like to give thanks to the Most High God, who brought me here and saw me through this great journey. I would like to thank my sister Okame Okah-Avae for encouraging me to further my studies and for all the things she did for me during my journey here. God bless you. I am grateful to my supervisor and committee chairman, Dr. Seung-Jong Park, who taught me a lot and impacted my life in so many positive ways that I will always be grateful for. I would like to thank Dr. Jianchua Chen and Dr. Feng Chen for being part of my examination committee. I would like to thank Lin Xue, Chui-Hui and Praveen for all the ways they taught me and all the knowledge they shared with me. I would like to thank my remaining family and friends for all the prayers and moral support you gave me. Lastly, I would like to thank the contributors to open source; without you, a lot of knowledge would not be shared today.

TABLE OF CONTENTS

Dedication
Acknowledgments
Table of Contents
Table of Figures
Abstract

Chapter 1: Introduction
    1.1 Introduction to Congestion in Data Center Networks
    1.2 Objective

Chapter 2: Background
    2.1 Congestion Control
        Network-Assisted Congestion Control
        End-to-End Congestion Control
    2.2 Data Center Networks
        Congestion in Data Centers
            TCP Incast Problem
            TCP Outcast Problem
        Congestion Control in Data Centers
            ICTCP
            DCTCP
    2.3 Scheduling
        Work-Conserving Scheduling
            First-Come-First-Serve
            Fair Queuing
            Approximated Fair Queuing
            Stochastic Fair Queuing
        Non-Work-Conserving Scheduling
            Burst Scheduling

Chapter 3: Related Works
    3.1 Queue Management Algorithms
        Droptail
    3.2 Active Queue Management Algorithms
        Random Early Detection
        Controlled Delay
        Fair Queuing Controlled Delay
        Approximated Fair Dropping
        Approximated Fair Control-Delay

Chapter 4: Simulation Implementation
    4.1 Network Simulator (NS-3)
    4.2 Data Center Architecture
        Node Set Up
        Link Set Up
        On-Off Traffic
        Routing
    4.3 Environment Set Up
    4.4 Simulation Scenarios

Chapter 5: Simulation Results
    5.1 Droptail
        Many-to-One
        Regular Traffic
    5.2 RED
        Many-to-One
        Regular Traffic
    5.3 CoDel
        Many-to-One
        Regular Traffic
    5.4 SFQ
        Many-to-One
        Regular Traffic
    5.5 Comparing AQM Schemes
        Aggregate Throughput
        Latency

Chapter 6: Conclusion and Future Works
    6.1 Conclusion
    6.2 Future Works

References
Bio
Appendix

TABLE OF FIGURES

Figure 1: A data center owned by Microsoft in Chicago
Figure 2: TCP Reno congestion control
Figure 3: Data center architecture (k-ary fat tree) used in this simulation
Figure 4: Many-to-one scenario
Figure 5: Regular traffic
Figure 6: Droptail throughput, 16 hosts with Nix-Vector routing
Figure 7: Droptail throughput, 16 hosts with Global routing with ECMP
Figure 8: Droptail throughput, 1024 hosts with Global routing with ECMP
Figure 9: Droptail throughput, 16 hosts with Nix-Vector routing with regular traffic
Figure 10: RED throughput, 16 hosts with Nix-Vector routing
Figure 11: RED throughput, 16 hosts with Global routing with ECMP
Figure 12: RED throughput, 1024 hosts with Global routing with ECMP
Figure 13: RED throughput, 16 hosts with Nix-Vector routing with regular traffic
Figure 14: CoDel throughput, 16 hosts with Nix-Vector routing
Figure 15: CoDel throughput, 16 hosts with Global routing with ECMP
Figure 16: CoDel throughput, 1024 hosts with Global routing with ECMP
Figure 17: CoDel throughput, 16 hosts with Nix-Vector routing with regular traffic
Figure 18: SFQ throughput, 16 hosts with Nix-Vector routing
Figure 19: SFQ throughput, 16 hosts with Global routing with ECMP
Figure 20: SFQ throughput, 1024 hosts with Global routing with ECMP
Figure 21: SFQ throughput, 16 hosts with Nix-Vector routing with regular traffic
Figure 22: Aggregate throughput of AQM schemes in many-to-one
Figure 23: Latency of the AQM schemes simulated

Abstract

Today's data center architectures consist of thousands of computers connected through many interconnecting switches to form large-scale networks with a tree topology. Many of these computers communicate concurrently, so congestion can arise. When TCP packets are sent, each flow is meant to be treated fairly during congestion. Most switches use drop-tail queuing, but studies have shown that this scheme cannot guarantee fairness, and as queues fill up, latency increases. The resulting packet drops reduce the throughput of the affected flows. Research is ongoing to find an Active Queue Management (AQM) scheme that solves this fairness issue, although fairness is not the only important property; latency matters as well. In this project, I evaluate the behavior of different AQM schemes in large data center networks by simulating each scheme with the open source discrete-event network simulator NS-3. The fairness, aggregate throughput and latency of each scheme are obtained. To the best of our knowledge, this is the first time these AQM schemes are evaluated and compared in NS-3 simulations of large-scale data center networks.

Chapter 1
Introduction

1.1. Introduction to Congestion in Data Center Networks

Congestion occurs in different types of networks, be it LAN, WAN or MAN. As long as data is transferred between hosts in a network, congestion can happen. Data center networks are no exception, as data is constantly transmitted between servers or to outside sources. Many of today's data centers have (tens to hundreds of) thousands of hosts interconnected in one location and communicating (see Fig. 1). With this many hosts, network congestion is bound to occur. There are methods to reduce congestion, either by congestion control or by congestion avoidance. Some transport layer protocols have their own congestion control mechanisms, but in a data center that mechanism cannot be relied on as the only source of congestion control. Routers and switches (from here on, "switches" will be used for both) play a huge part in congestion control in data centers, as some switches drop received packets to signal the sending host (sender) to reduce its transmission rate. Due to the large amount of data transmission that can occur in such an environment, a switch should be able to handle as many packet flows as possible, have a large enough buffer (memory allocated to hold packets waiting to be forwarded out of the switch) for incoming packets to queue in, a good scheduling algorithm (to determine how resources or bandwidth are allocated to the packet flows in the queue), and a good queue management process that determines how packets in a queue are dropped when the buffer occupancy passes a set threshold.

A flow is created when a sender transmits data to a destination; the set of packets sent in such an exchange is called a flow. When a new scheduling algorithm or a new queue management technique is created, it cannot simply be introduced into the data center network to test its capabilities. It is best for such an application to be tested in an environment that does not use as many resources as the live network.

Fig. 1: A data center owned by Microsoft in Chicago [1].

A network simulation is used to mimic the behavior of computer networks in live environments. It is better to introduce a new application in such an environment before testing it on live networks that are already in use by live hosts, because if there is an issue, the affected switches would have downtime, which would probably cause network congestion at other switches. Simulators are just application software which produce results that are close to, but not guaranteed to match, those of the live systems; nonetheless, the use of simulators is cost effective and reduces the need to have a test bed data center just to test new protocols or applications.

A case study is how new protocols are presently tested at LSU: the CRON test bed is used to test new protocols introduced during research at the institution. CRON is an emulation test bed that uses the hardware at LSU data centers to emulate a computer network. Due to the limited number of nodes, there is a limit to the type and size of networks that can be emulated there. Jobs also depend on the availability of the systems, so some runs have to be scheduled for later times or users have to wait for systems to become available. The simulation I propose solves this problem for simulating AQM schemes in large-scale data center networks.

1.2. Objective

The objective of this study is as follows: simulate large-scale data center networks, with the number of hosts ranging from 16 to 3456, using the network simulator NS-3. This simulation can be used to study AQMs and scheduling algorithms. The hosts are interconnected with switches in a fat-tree topology. The simulation generates a report that can be used to calculate end-to-end throughput (which can be used to analyze fairness), aggregate throughput and latency.

Chapter 2
Background

2.1. Congestion Control

Network congestion occurs when more packets are being transmitted than the network links can handle. This leads to large queuing delays, packet loss and a reduction in throughput. With the ever-increasing rate of data transmission in networks, congestion needs to be avoided or managed whenever it arises. As available bandwidth increases, the demand for data transfer also increases because more people gain access to networks; therefore an increase in network link capacity alone does not solve the issue. End-to-end delay, which is the total nodal delay from a sender to a receiver, is also one of the indicators of network congestion. Nodal delay is the sum of the processing, queuing, transmission and propagation delays. There are two methods of congestion control: network-assisted and end-to-end congestion control. In end-to-end congestion control, the transport layer, which is only present on host systems, manages congestion; the transport layer protocol controls congestion based on observation of the rate of packets sent or received at this layer. Network-assisted congestion control, on the other hand, is handled by the network layer, which is available on layer-three devices as well as hosts. The IP layer handles congestion by transmitting information to neighboring nodes or by dropping packets.

Network-Assisted Congestion Control

ATM ABR is an example of network-assisted congestion control. In ATM, a virtual circuit is created along the path between the sender and receiver, whereby all switches between them maintain state (reserving the connection for the transmission taking place). This allows the switches to monitor the behavior of each sender and to signal a sender to reduce its transmission rate when the switch becomes congested. ABR can also speed up the transmission rate if no congestion is detected, and it can signal the sender to reduce the transmission rate to a set minimum when there is congestion.

End-to-End Congestion Control

The transport layer protocol TCP has its own congestion control, which regulates the rate of transmission between a sender and a receiver. The sender uses end-to-end congestion control: if it perceives congestion in the network, either through the non-reception of an acknowledgment or through the round-trip time (RTT), it reduces its sending rate. TCP uses the congestion window (cwnd) to control its transmission rate. When a sender receives an acknowledgment (ACK) for a packet on time, it assumes that the network is congestion free and adjusts the size of the cwnd based on this perceived information. The sender sets its retransmission timer based on the exponential weighted moving average (EWMA) values it calculates during transmission: EstimatedRTT is a weighted average of the SampleRTT values measured by TCP. If the sender does not receive an ACK, TCP assumes that the network is congested, resends the packet and lowers its cwnd. By doing this, TCP reduces the number of packets present in the network when it detects congestion.
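For reference, the RTT estimator mentioned above has the following standard form (RFC 6298); the constants shown are the textbook defaults and are assumptions here, since the simulated TCP stack may use different values:

    EstimatedRTT = (1 - a) * EstimatedRTT + a * SampleRTT              (a = 1/8)
    DevRTT       = (1 - b) * DevRTT + b * |SampleRTT - EstimatedRTT|   (b = 1/4)
    RTO          = EstimatedRTT + 4 * DevRTT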

Fig. 2 shows the behavior of TCP Reno when using its cwnd to control its rate of transmission. Transmission starts in the slow-start state with only one packet; each time the sent packets are acknowledged, the sender roughly doubles its rate of transmission, until the cwnd reaches a preset threshold called the slow-start threshold (ssthresh).

Fig. 2: TCP Reno Congestion Control [2]

After the cwnd exceeds this threshold, TCP enters congestion avoidance: as ACKs are received, the cwnd is incremented by only one segment per RTT. The ssthresh is TCP's mechanism for congestion avoidance; it marks the window size beyond which TCP assumes congestion may occur. If an ACK is not received on time (a timeout), TCP assumes that there is congestion; it sets ssthresh to half of the current cwnd and falls back to slow start with a cwnd of one segment, even though a timeout does not necessarily mean the packet was lost. If TCP instead receives three ACKs with the same acknowledgment number (duplicate ACKs) even though subsequent segments have been sent out, this is taken as evidence that a packet has been lost; in that case TCP Reno sets ssthresh to half of the current cwnd and continues from that halved window rather than returning to slow start. This is TCP's way of avoiding congestion.
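A minimal sketch of the Reno window dynamics just described is shown below; the window is counted in whole segments, fast-recovery details are omitted, and this is an illustration only, not the TCP implementation used in NS-3.

// Simplified TCP Reno congestion-window logic (window in whole segments).
// Illustrative sketch, not the NS-3 TCP code.
struct RenoWindow
{
  double cwnd = 1.0;        // congestion window, in segments
  double ssthresh = 64.0;   // slow-start threshold, in segments

  void OnNewAck ()
  {
    if (cwnd < ssthresh)
      cwnd += 1.0;          // slow start: roughly doubles cwnd per RTT
    else
      cwnd += 1.0 / cwnd;   // congestion avoidance: +1 segment per RTT
  }

  void OnTripleDupAck ()    // loss inferred from three duplicate ACKs
  {
    ssthresh = cwnd / 2.0;
    cwnd = ssthresh;        // Reno: continue from the halved window
  }

  void OnTimeout ()         // retransmission timeout
  {
    ssthresh = cwnd / 2.0;
    cwnd = 1.0;             // fall back to slow start
  }
};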

2.2. Data Center Networks

Modern data centers consist of a large number of servers which are internetworked, with some of them distributed across different geographical locations. These servers communicate among themselves as well as with outside networks, so network congestion needs to be kept minimal for ease of communication. Depending on the frequency of data transfer in a data center, network congestion can occur. Data centers belonging to companies like Google or Facebook, which operate a large number of data centers around the world, experience network congestion in these centers. If many users demand data from a particular geographical data center, the network may become congested, which lengthens the response time the users experience. Cloud computing also makes use of data centers that are accessed by users (both paying and non-paying) consuming a service hosted on that data center; if there is network congestion, the business is affected. Due to the interconnection in data centers, switches play a major role in network congestion. The delays at each switch traversed by a packet as it travels from the sender to the receiver contribute to the end-to-end delay. This interconnection also gives data center networks alternate routes between hosts, which allows packets to be routed over multiple paths, as in multipath TCP (MPTCP) [3] and equal-cost multi-path (ECMP) [4]; and if there is network congestion or a link failure, switches are able to choose alternate routes to forward packets. MPTCP is an extension of the TCP protocol that allows TCP to use multiple paths (e.g. WiFi and Ethernet) to send data between hosts, maximizing the use of resources and improving throughput, although each individual flow is sent over a single medium.

ECMP is based on there being multiple routes from one node to a destination node and is implemented in switches. If there is more than one route in the routing table to a destination, the switch can send different flows going to the same destination over these different routes. This helps load-balance the network traffic.

Congestion in Data Centers

There are a number of issues caused by network congestion that have been observed in data center networks; examples are the TCP Incast problem [5] and the TCP Outcast problem [6].

TCP Incast Problem

TCP Incast occurs in data center networks with many-to-one traffic that use distributed storage or computing such as MapReduce [7], a programming model used to process large data sets with parallel and distributed algorithms, whereby many servers send data to a single receiver. It starts with the receiver requesting data from the other servers; the senders respond simultaneously, causing network congestion at the switch leading to the receiver. This causes packet losses that require the senders to retransmit dropped packets after the non-receipt of ACKs. The process can repeat as packets are retransmitted, which can cause a retransmission timeout (RTO) when TCP cannot recover using the fast retransmission algorithm. An RTO occurs when a sender has too many packets outstanding without an ACK, causing it to stop sending for a while (usually a second, and no less than 200 ms).

This causes link utilization to drop and throughput to collapse, which degrades the performance of the applications. In addition, during a timeout no packets are sent, so resources are wasted.

TCP Outcast Problem

TCP Outcast also occurs with many-to-one traffic, but this problem concerns the fairness between a set of large data flows and small data flows. It is observed that in this scenario, when packets of small flows compete with large flows to join queues in the limited buffer space of a switch, the packets of the small flows are dropped. The port receiving these small flows is said to enter a port blackout. This causes the throughput of some flows to be drastically smaller than that of others.

Congestion Control in Data Centers

The TCP congestion control protocol is also used in data centers, but due to the availability of alternate routes between hosts, research has been ongoing to modify the present TCP standards to take advantage of this ability. Extended TCP protocols such as ICTCP [8] and DCTCP [9] have been proposed to solve congestion control as well as to fully utilize the available bandwidth.

ICTCP

Incast Congestion Control TCP (ICTCP) was proposed in 2010 by researchers at Microsoft to solve the TCP Incast problem. ICTCP uses congestion avoidance to control congestion: the receiver is responsible for detecting congestion and avoiding it by estimating the probability of congestion occurring. The available bandwidth on the receiver side signals the receiver about the possibility of congestion, triggering congestion control. The receiver estimates whether the available bandwidth is enough to increase its receive window and, if so, increases it. Another method of controlling the TCP Incast problem is to decrease the RTO value to a smaller time.

DCTCP

Data Center TCP (DCTCP) is another Microsoft research effort; it uses the ECN (Explicit Congestion Notification) [10] feature to estimate congestion on the network. ECN uses an end-to-end-like notification method to control congestion by allowing switches to leave marks on the IP header to notify senders about pending congestion, in response to which a sender can reduce its transmission rate. DCTCP extracts multi-bit feedback about congestion in the network from the single-bit stream of ECN marks: the sender uses the fraction of marked packets, echoed by the receiver, to estimate the level of congestion in the network. This protocol restores some fairness and so mitigates the TCP Outcast problem in data centers. The protocol is designed explicitly for data center networks.
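A rough sketch of the estimator described in the DCTCP paper [9] is shown below; the weight g and the per-window update follow that paper, while the structure of the code itself is purely illustrative and not the implementation used in any simulator.

// Illustrative DCTCP-style sender reaction to ECN marks (per [9]).
// Not the code used in this simulation.
struct DctcpState
{
  double alpha = 0.0;         // running estimate of the marked fraction
  double g = 1.0 / 16.0;      // weight of the newest observation

  // Called once per window of data: 'marked' and 'total' count the
  // ECN-marked and total acknowledged packets in that window.
  void UpdateAlpha (unsigned marked, unsigned total)
  {
    double f = (total > 0) ? static_cast<double> (marked) / total : 0.0;
    alpha = (1.0 - g) * alpha + g * f;
  }

  // Window reduction on congestion: cwnd <- cwnd * (1 - alpha / 2),
  // i.e. a small cut under light marking and a Reno-like halving when
  // every packet is marked.
  double ReduceWindow (double cwnd) const
  {
    return cwnd * (1.0 - alpha / 2.0);
  }
};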

2.3. Scheduling

Switches play a useful role in both congestion control and congestion avoidance. When a switch receives a packet, it notes the source port, checks the packet header to determine the source and destination IP addresses, and uses this information to place the packet in the queue leading to the output port toward the destination. Queues, or buffers, are used when the arrival rate of packets entering the switch is greater than the rate of packets leaving the switch; the queue stores packets until they are forwarded out of the switch. When a packet reaches the front of the queue, it is forwarded to the destination node, or to the next switch on the way to its destination, using the bandwidth allocated to it by the scheduler. Another use of buffers arises when the incoming bandwidth is much higher than the bandwidth of the outgoing port: buffers equalize the flow so that the outgoing port is not overwhelmed by transmissions that are too fast for it to process. A good switch in a data center network should be able to route and forward packets as soon as possible after they are received. Such switches can be equipped with enough memory to hold packets in their queues while other packets are being processed and forwarded. Another feature that can increase the efficiency and fairness of such switches is the scheduler. A scheduler determines how packets are placed in the queue and which packets are allocated resources such as bandwidth. There are two types of scheduling: work-conserving and non-work-conserving scheduling.

Work-Conserving Scheduling

In work-conserving scheduling, the scheduler is never idle as long as there is a packet in the queue waiting to be forwarded. There are a number of work-conserving schedulers, such as FCFS, fair queuing, approximated fair queuing [11] and stochastic fair queuing [12].

First-Come-First-Serve

FCFS is the simplest form of scheduling: packets are forwarded to the queue at the output port in the order in which they entered through the input port, and they are serviced on a first-come-first-serve basis. As packets arrive, the scheduler allocates all of the bandwidth to the packet at the head of the queue and transmits it out the output port. This scheduler seems logical, but it does not prioritize packets; important packets, or flows whose deadline to reach the destination is near, are not given priority in the allocation of bandwidth. This can cause unnecessary retransmissions by the senders, because even though these packets are received, they arrive after the set RTT has expired.

Fair Queuing

Fair queuing is one of the widely used concepts in scheduling. It uses round robin to give each flow in the queue a fair share of the bandwidth. Every packet of the same flow must be mapped to the same queue, which requires numerous memory references when bandwidth is being allocated fairly among flows.

When the output port is available, the scheduler picks packets from each flow and sends them to the port one after the other. If there are x flows in the queue, each flow is allocated 1/x of the available bandwidth, so each flow is allowed to send an equal share of packets. This scheduler is unfair to large flows, because smaller flows send out all their packets before the large flows can send out theirs.

Approximated Fair Queuing

AFQ is a variation of fair queuing which sorts the packets in the queue according to a deadline specified by the sender. It picks the flow whose deadline is closest to expiring and sends out as many of its packets as the bandwidth allocated to that flow allows.

Stochastic Fair Queuing

In SFQ, as packets enter the switch, packets of the same flow are placed into hash buckets based on their source port and their source and destination addresses. A hash function maps each source-destination pair to a queue index, so each hash bucket represents a unique flow. This removes the per-packet flow mapping needed in fair queuing; because packets are simply hashed into buckets, memory is conserved. When dequeuing (exiting the queue), the scheduler picks packets from each hash bucket in a round-robin fashion. In SFQ, packets may leave in a different order from the one in which the sender transmitted them.
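A minimal sketch of this bucket selection and round-robin dequeue is shown below; the hash function, the header fields used and the class layout are illustrative assumptions, not the SFQ implementation in the patched NS-3.

// Illustrative stochastic fair queuing: packets of the same flow land in
// the same hash bucket, and the dequeuer visits the buckets round-robin.
#include <cstdint>
#include <queue>
#include <vector>

struct Packet { uint32_t srcAddr; uint32_t dstAddr; uint16_t srcPort; uint16_t dstPort; };

class SfqSketch
{
public:
  explicit SfqSketch (size_t nBuckets) : m_buckets (nBuckets), m_next (0) {}

  void Enqueue (const Packet &p)
  {
    size_t idx = Hash (p) % m_buckets.size ();   // flow -> bucket index
    m_buckets[idx].push (p);
  }

  bool Dequeue (Packet &out)                     // round-robin over buckets
  {
    for (size_t i = 0; i < m_buckets.size (); ++i)
      {
        size_t idx = (m_next + i) % m_buckets.size ();
        if (!m_buckets[idx].empty ())
          {
            out = m_buckets[idx].front ();
            m_buckets[idx].pop ();
            m_next = (idx + 1) % m_buckets.size ();
            return true;
          }
      }
    return false;                                // all buckets empty
  }

private:
  static size_t Hash (const Packet &p)           // simple example hash
  {
    return (static_cast<size_t> (p.srcAddr) * 59u) ^ p.dstAddr
           ^ (static_cast<size_t> (p.srcPort) << 16) ^ p.dstPort;
  }
  std::vector<std::queue<Packet> > m_buckets;
  size_t m_next;
};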

Non-Work-Conserving Scheduling

Non-work-conserving scheduling is used to reduce burstiness and delay variation. In this type of scheduling, the scheduler may stay idle even when there are packets waiting to be forwarded in the switch's queue. This increases the end-to-end delay, wastes some bandwidth and reduces throughput, and the queuing delay is larger than with work-conserving scheduling; on the other hand, the buffer requirement becomes independent of the network depth. An example of this type of scheduling is burst scheduling [13].

Burst Scheduling

Burst scheduling was created for the transmission of multimedia data. When packets arrive at the switch, a source regulator ensures that the packets of a flow satisfy a flow specification. The packets of the same flow are kept in a queue, sorted from the first packet at the front to the last packet of that flow; during this time the scheduler is idle. When the flow's packets are complete, the flow is transmitted in a burst out through the output port to its destination. The scheduler selects the next flow to be transmitted based on the packet with the smallest virtual clock.

Chapter 3
Related Works

3.1. Queue Management Algorithms

Packets entering a switch join a queue; if the rate of packets joining the queue (enqueue) is larger than the rate of packets leaving the queue (dequeue), the buffer space in the switch fills up. This can cause a phenomenon called bufferbloat [14], whereby large and frequently full buffers inside the network cause high latency and poor network performance. If a threshold number of packets in the queue is reached, packets need to be dropped, either those trying to enter the queue or some already in it. This process is called queue management or buffer management.

Droptail

Droptail is the simplest form of queue management, and most commercial switches use it. Packets enter the buffer in FCFS order, and once the buffer space of the switch fills up, new packets arriving at the tail end are dropped. Bandwidth is allocated to the packet at the head of the queue, which has been placed at the output port by the scheduler.

This process seems logical, but it does not prioritize packets; important packets, or flows whose deadline to reach the destination is near, are not given priority in the allocation of bandwidth. This can cause unnecessary retransmissions by the senders: even though the packets are eventually received, they arrive after the set RTT. It is one of the primary causes of TCP Incast and Outcast, whereby packets from different flows are dropped, or packets are retransmitted even though they were delivered or an ACK was received, because the receiver did not get them before the RTT expired. Droptail can also lead to global synchronization of TCP connections: during congestion all senders reduce their transmission rate at the same time when packets are lost, which then causes all senders to increase their transmission rate at the same time, so the buffer space fills up again quickly.

3.2. Active Queue Management Algorithms

Due to the many issues arising with droptail queues, active queue management algorithms (e.g. RED [15], CoDel [16], fq_codel [17], AFD [18], AFCD [19]) were created to solve these problems.

Random Early Detection (RED)

RED is an AQM algorithm that was created over two decades ago to solve the unfairness, global synchronization and bufferbloat problems raised by droptail, and it is used for congestion avoidance in switches.

RED monitors the average queue size in the switch, and once a preset threshold on the number of queued packets has been surpassed, it starts to mark or drop incoming packets with a probability that depends on the average queue size; whether a packet is dropped or marked depends on the transport layer protocol in use. Dropping starts before the queue fills up, so bufferbloat does not occur. RED chooses packets to mark, notifying the senders of congestion; this causes individual senders to decrease their cwnd at different times, so the global synchronization problem is solved. RED tolerates bursts of packets (the sudden arrival of a large flow of packets) while keeping the average queue size low, preserving high throughput as well as low latency. If the buffer space becomes full, the drop probability is set to 1 and all incoming packets are dropped.
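A simplified sketch of this drop decision is given below. The minimum threshold, maximum threshold and queue limit mirror the values used for the RED runs in Chapter 5, while the queue weight and maximum drop probability are classic RED defaults assumed here for illustration; the sketch omits ECN marking and the idle-time correction of the real algorithm.

// Illustrative RED drop decision: an EWMA of the queue length is kept,
// and arriving packets are dropped with a probability that grows
// linearly between the two thresholds.  Simplified sketch only.
#include <cstdlib>

struct RedSketch
{
  double avg   = 0.0;    // EWMA of the queue length (packets)
  double wq    = 0.002;  // queue weight for the EWMA (assumed default)
  double minTh = 200.0;  // minimum threshold (packets)
  double maxTh = 250.0;  // maximum threshold (packets)
  double maxP  = 0.1;    // maximum drop probability (assumed default)
  double limit = 350.0;  // hard queue limit (packets)

  // Returns true if the arriving packet should be dropped.
  bool OnArrival (double currentQueueLen)
  {
    avg = (1.0 - wq) * avg + wq * currentQueueLen;
    if (currentQueueLen >= limit) return true;     // buffer full
    if (avg < minTh) return false;                 // no congestion
    if (avg >= maxTh) return true;                 // heavy congestion
    double p = maxP * (avg - minTh) / (maxTh - minTh);
    return (std::rand () / (double) RAND_MAX) < p; // probabilistic drop
  }
};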

Controlled Delay (CoDel)

RED has the problem that its parameters need to be tuned for different network bandwidths in order to work well; CoDel solves this. It works well over a wide range of bandwidths without needing its settings to be modified. CoDel does not use the size of the queue to manage the queue; it uses the time packets spend in the buffer, called the queue (sojourn) time. It defines a queue time called the target and does not drop packets while the queue time is below this target. CoDel also has a time called the interval, which is on the order of the worst-case RTT of the connections sharing the queue. If packets exceed the target for at least an interval, packets start to be dropped. Whenever a packet is dropped, the control law sets the next drop time, which decreases in inverse proportion to the square root of the number of drops since the queue entered the dropping state. CoDel distinguishes periods of good queue and bad queue: a good queue drains in under one RTT, while a bad queue has delays and persists over several RTTs. CoDel treats the two differently; during good queue it does not need to drop packets, but during bad queue it has to react by dropping them. CoDel can also make use of ECN to mark packets and inform the sender to reduce its transmission rate.
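The dropping state and control law just described can be sketched as follows; the 5 ms target and 100 ms interval are the defaults suggested by the CoDel authors and are assumptions here, and the sketch leaves out several details of the real state machine (such as how the first drop is timed and how the drop count is carried over between dropping periods).

// Very rough CoDel dropping-state sketch; not the patched NS-3 code.
#include <cmath>

struct CoDelSketch
{
  double target   = 0.005;  // 5 ms sojourn-time target (assumed default)
  double interval = 0.100;  // 100 ms interval (assumed default)
  bool dropping   = false;  // are we in the dropping state?
  unsigned count  = 0;      // drops since entering the dropping state
  double nextDrop = 0.0;    // time of the next scheduled drop

  // Called at dequeue time; returns true if the packet should be dropped.
  bool OnDequeue (double now, double sojournTime)
  {
    if (sojournTime < target)          // good queue: leave dropping state
      {
        dropping = false;
        count = 0;
        return false;
      }
    if (!dropping)                     // sojourn time exceeded the target:
      {                                // start dropping after one interval
        dropping = true;
        count = 1;
        nextDrop = now + interval;
        return false;
      }
    if (now >= nextDrop)               // control law: drop and reschedule
      {
        ++count;
        nextDrop = now + interval / std::sqrt ((double) count);
        return true;
      }
    return false;
  }
};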

Fair Queuing Controlled Delay (fq_codel)

CoDel has fairness issues, so a modified version was recently implemented to add fair queuing to CoDel. It mixes the properties of the two algorithms: fq_codel uses the stochastic technique to place packets of the same flow into hash buckets, but it does not reorder packets, because CoDel services each queue in FCFS order. Flows are linked into two round-robin lists, which gives priority to new flows over older flows, and packets are chosen in round-robin fashion to be forwarded out of the switch. As in CoDel, packets are not dropped while the target is not exceeded; once it has been exceeded, packets are dropped according to the target and interval.

Approximate Fair Dropping (AFD)

AFD uses extra information in its queue management: it keeps a shadow buffer that is used to sample incoming packets and a flow table that counts the number of packets each flow has in the shadow buffer. AFD uses a FCFS queue with probabilistic drop-on-arrival. It uses past measurements of the queue size and the recent history of a flow to estimate the flow's current sending rate, and it drops packets in order to limit the throughput of that flow to its fair share. The amount of traffic from a flow that is eligible to be dropped during an interval is used to estimate the flow's rate and the fair-share rate.

Approximate Fair Control-Delay (AfCoDel)

This is a new AQM that was recently implemented by a Ph.D. student here at LSU, Lin Xue. It combines the enqueuing part of AFD with the dequeuing of CoDel, producing a more efficient algorithm that allocates bandwidth fairly and has low latency, which in turn increases throughput. The shadow buffer and flow table are used to sample incoming flows and count the number of packets each flow has in the queue. During dequeuing, the target and interval determine how packets leave the output port or are dropped.

Chapter 4
Simulation Implementation

4.1. Network Simulator (NS-3)

NS-3 [20] is an open source discrete-event network simulator targeted at research and educational use. It is the successor of NS-2, a network simulator written in the programming languages OTcl and C++. NS-3 improves on the limitations of NS-2 (e.g. in the ability to assign IP addresses and the ease of adding new protocols) and is not backward compatible with NS-2. NS-3 is written in C++ and Python, and some of the C++ source code was migrated from NS-2. It uses the Python-based build framework WAF for configuring, compiling and installing applications; NS-3 scripts are written in C++ and executed with WAF. An extension of NS-3 called Direct Code Execution (NS-3-DCE) was implemented to let NS-3 simulate protocols kept in the Linux kernel without modifying the kernel files. As of today, this extension is not fully functional, so AQMs and schedulers cannot be simulated with it. For this simulation, a patched version of NS-3 [21] with additional schedulers and AQM schemes implemented was used; NS-3 originally comes with only the Droptail and RED queues.

4.2. Data Center Architecture

The data center architecture used in this study is the k-ary fat tree [22], with three levels of switches (edge, aggregate and core). There are k pods and (k/2)^2 core switches; each pod contains two layers of k/2 switches, and each switch in the lowest layer is connected to k/2 hosts. Fig. 3 shows the data center network architecture used.

Fig. 3: Data center architecture (k-ary fat tree) used in this simulation. The architecture used in this study was taken from [23].

Node Set Up

Nodes are set up in NS-3 using node containers, with each type of device set up in its own container; i.e. devices of different types (e.g. core and aggregate switches) are placed in different containers. The number of created nodes depends on the chosen value of k. A node container creates nodes of the same type and makes it easier to assign attributes to all of them; unlike NS-2, where an iterative statement was needed, in NS-3 the attributes are simply assigned to the container. The minimum RTO (MinRTO) of each node is set to 1 ms.
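A minimal sketch of this node creation for a given k is shown below; the container names, the per-role sizing and the stack installation are illustrative assumptions rather than the exact structure of the simulation script.

// Illustrative node set-up for a k-ary fat tree in NS-3: one container
// per device role, sized from k.  Names are examples only.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"

using namespace ns3;

void BuildNodes (uint32_t k)
{
  uint32_t numPods = k;
  uint32_t numCore = (k / 2) * (k / 2);          // (k/2)^2 core switches
  uint32_t perPodSwitches = k / 2;               // switches per layer in a pod

  NodeContainer core, agg, edge, hosts;
  core.Create (numCore);
  agg.Create (numPods * perPodSwitches);         // aggregation layer
  edge.Create (numPods * perPodSwitches);        // edge layer
  hosts.Create (numPods * perPodSwitches * (k / 2));   // k^3/4 hosts

  InternetStackHelper stack;                     // install TCP/IP on all nodes
  stack.Install (core);
  stack.Install (agg);
  stack.Install (edge);
  stack.Install (hosts);
}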

Link Set Up

Point-to-point links are used to connect the devices together with the help of the NS-3 helper PointToPointHelper; these links are given a delay of 100 ms and a bandwidth of 1 Gbps. This helper installs all the features of a point-to-point link on the devices specified, and the specific AQM or scheduler being tested is installed on the link. A NetDeviceContainer is used to connect the devices of each layer with the layer above or below, and this NetDeviceContainer is assigned an IP address and subnet. For the connection of edge switches and hosts, the edges are connected to the two-dimensional array of hosts through point-to-point links, and each host (left and right) is connected to its edge switch:

NetDeviceContainer link2 = p2p.Install (host[i][j].Get(h), edge[i].Get(j));
hostsw[i][j].Add (link2.Get(0));
hostsw[i][j].Add (link2.Get(1));
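The helper configuration implied above might look roughly like the following, continuing the fragment just shown and reusing its host/edge containers; the queue class and attribute names ("ns3::DropTailQueue", "ns3::RedQueue" and their attributes) vary between NS-3 versions and in the patched tree used here, so treat them as placeholders rather than the exact calls from the script.

// Illustrative link set-up with a queue discipline attached; class and
// attribute names are placeholders and differ across NS-3 versions.
PointToPointHelper p2p;
p2p.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));
p2p.SetChannelAttribute ("Delay", StringValue ("100ms"));

// Select the queue under test, e.g. droptail or RED.
p2p.SetQueue ("ns3::DropTailQueue",
              "MaxPackets", UintegerValue (350));
// p2p.SetQueue ("ns3::RedQueue",
//               "MinTh", DoubleValue (200),
//               "MaxTh", DoubleValue (250),
//               "QueueLimit", UintegerValue (350));

// Connect one host to its edge switch and assign addresses to the pair.
NetDeviceContainer link = p2p.Install (host[i][j].Get(h), edge[i].Get(j));
Ipv4AddressHelper address;
address.SetBase ("10.0.0.0", "255.255.255.0");   // example subnet
Ipv4InterfaceContainer ifaces = address.Assign (link);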

On-Off Traffic

NS-3 uses OnOffHelper to generate unicast traffic between two hosts; this helper installs all the features of an OnOff application, which normally alternates between on and off periods. For this simulation, the OnTime and OffTime are left unset so that this on/off pattern is not produced. The packet size is set to 1024 bytes (1 KB), the transmission rate is set to 1 Gbps and the maximum number of bytes is set to unlimited. These parameters are installed on all sending hosts, while a TCP sink is installed on the receiver nodes. TcpSocketFactory is used to generate the TCP traffic in this simulation.

Routing

Two different routing protocols were used during the simulations to realize the different scenarios: the Nix-Vector routing protocol and the Global routing protocol with ECMP. The Nix-Vector routing protocol includes the routing path in the packet header: the first transmission by the sender establishes the path, and all subsequent packets carry that path in their headers. It is intended for large-scale network topologies, which is why it was used in this simulation. Nix-Vector in NS-3 has some drawbacks, such as not adapting to link failures and not working with OpenFlow switches. To use it, Ipv4NixVectorHelper has to be added to the list of routing protocols. In order to use ECMP in NS-3, Global routing was used; in this case, flows heading to the same destination can be split between equal-cost paths, load balancing the network and ultimately producing higher throughput.

Config::SetDefault ("ns3::Ipv4GlobalRouting::RandomEcmpRouting", BooleanValue (true));
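Putting these pieces together, a hedged sketch of the application and routing set-up is shown below; the port number, the receiverNode/receiverAddress/senderNodes placeholders and the start/stop times (2 s of traffic, simulation stopped at 2.5 s, as described in the next section) are assumptions based on the text, not lines taken from the actual script.

// Illustrative traffic and routing configuration; placeholder names are
// assumptions, not the exact script used in this study.
uint16_t port = 5000;                          // assumed port number

// Receiver: a TCP sink on the chosen destination host.
PacketSinkHelper sink ("ns3::TcpSocketFactory",
                       InetSocketAddress (Ipv4Address::GetAny (), port));
ApplicationContainer sinkApp = sink.Install (receiverNode);    // placeholder node

// Senders: OnOff applications generating TCP traffic toward the sink.
OnOffHelper onoff ("ns3::TcpSocketFactory",
                   InetSocketAddress (receiverAddress, port)); // placeholder address
onoff.SetAttribute ("PacketSize", UintegerValue (1024));
onoff.SetAttribute ("DataRate", StringValue ("1Gbps"));
onoff.SetAttribute ("MaxBytes", UintegerValue (0));            // 0 = unlimited
ApplicationContainer senderApps = onoff.Install (senderNodes); // placeholder container

sinkApp.Start (Seconds (0.0));
senderApps.Start (Seconds (0.0));
senderApps.Stop (Seconds (2.0));               // traffic runs for 2 seconds
Simulator::Stop (Seconds (2.5));               // simulation ends at 2.5 seconds

// Routing option 1: Nix-Vector, set as the routing helper before the
// internet stack is installed on the nodes.
// Ipv4NixVectorHelper nixRouting;
// InternetStackHelper stack;
// stack.SetRoutingHelper (nixRouting);

// Routing option 2: global routing with random ECMP, enabled by the
// Config::SetDefault call shown above and then:
Ipv4GlobalRoutingHelper::PopulateRoutingTables ();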

4.3. Environment Set Up

The simulations were run on the Ubuntu operating system in a virtual machine with a 15 GB hard disk, 4 GB of memory and 3 processors allocated to it. All simulations were run for 2 seconds, and the simulation was stopped at 2.5 seconds.

4.4. Simulation Scenarios

Because not all AQMs are available in NS-3, only four of them were used in this study, namely:

CoDel
Droptail
RED
SFQ

To properly simulate the behavior of a real large-scale data center network, certain scenarios were simulated to mimic such networks: a many-to-one scenario (e.g. as in MapReduce), where many servers send packets to only one receiver, and regular traffic, where servers simply generate random traffic, with one server sending to another server, one server receiving from numerous servers, or one server sending to numerous receivers. Different numbers of hosts were used in the different scenarios: 16 and 1024 servers in many-to-one, and 16 in regular traffic.

To generate the regular traffic, a random generator was used to select the address of the sink node (receiver), while another random draw selected the sender; a loop ensures that the receiver is not the same as the sender.

Receiver script:

podrand = rand() % num_pod;
swrand = rand() % num_edge;
hostrand = rand() % num_host;
hostrand = hostrand + 2;                                  // offset used in the original script
char *add = tostring (10, podrand, swrand, hostrand);     // thesis helper that formats the dotted address

Sender script:

// redraw the sender until it differs from the receiver chosen above
do
  {
    rand1 = rand() % num_pod;
    rand2 = rand() % num_edge;
    rand3 = rand() % num_host;
  }
while (rand1 == podrand && rand2 == swrand && (rand3 + 2) == hostrand);

Fig. 4: Many-to-one scenario [6].

Fig. 5: Regular traffic [6].

Chapter 5
Simulation Results

5.1. Droptail

Many-to-One

In the simulation of droptail queues with many-to-one traffic, three scenarios were tested. In the case of 16 hosts with Nix-Vector routing, only node two (server2), which is the neighboring node of the receiver, has a high throughput. All other nodes experience very low throughput due to the congestion between edge0 and agg0, which causes packets to be dropped at these two switches. All packets from every server (except server2) pass through these two switches. As the cwnd and the size of the flows increase, these switches' queues fill up, and when they reach full capacity, packets are dropped. This causes the servers to restart transmission from slow start (a cwnd of 1). If this continues, the servers go into timeout. While all this is happening at servers 3-16, server2 hardly experiences any congestion except at edge0, so it might not go into timeout. The difference between the throughput of server2 and the second-largest throughput among the other servers is large. These throughputs can be seen in Fig. 6: Flow 2 has a throughput of Kbps, and the other flows range between Kbps and Kbps. It can be seen from Fig. 6 that this scenario suffers from unfairness.

Fig. 6: Droptail throughput, 16 hosts with Nix-Vector routing.

ECMP with Global routing was then used with the same number of hosts, and the throughput improved significantly at server10, with the throughput at server2 also increasing, thereby decreasing the difference between the largest and second-largest throughput. ECMP has this effect because flows are allowed to take alternate paths to server1, so the congestion between edge0 and agg0 is split in two, which balances the load within the network. Fig. 7 shows the throughput for this scenario; it can be seen that the throughput of server one increases to Kbps, while the lowest throughput is Kbps. For this simulation, the default maximum number of packets in the queue is changed to 350. This scenario, too, suffers from unfairness.

Fig. 7: Droptail throughput, 16 hosts with Global routing with ECMP.

A further simulation using ECMP was run with 1024 hosts, 1023 of which send data to a single server. Fig. 8 shows the throughput, which increases sharply from flow 1 to flow 8 and then falls sharply.

Fig. 8: Droptail throughput, 1024 hosts with Global routing with ECMP.

An oscillating rise and fall of throughput is seen among the other flows. Eight hundred and seven of these servers did not achieve any throughput (0 Kbps). The largest throughput is Kbps.

Regular Traffic

The Nix-Vector routing protocol was used to simulate this scenario with 16 hosts generating 18 flows. Unfairness is also seen here, but the difference between the highest and second-highest throughput is quite small. As seen in Fig. 9, flow 9, which runs between server5 and server9, has the highest throughput of Kbps, while flow 15 (between server13 and server10) has the lowest throughput of Kbps.

Fig. 9: Droptail throughput, 16 hosts with Nix-Vector routing with regular traffic.

5.2. RED

Many-to-One

In the RED simulations, the queue limit is set to 350 packets, the minimum threshold to 200 and the maximum threshold to 250. In the simulation with Nix-Vector routing, fairness is not attained, as shown by the throughput displayed in Fig. 10. The farther the senders are from the receiver, the lower their throughput, but from the fifth flow onward the throughput is almost constant. Although these throughputs are fairer than those of droptail and CoDel, they are not as fair as those of the stochastic fair queue seen later. The highest throughput is Kbps on flow 1 and the smallest is Kbps on flow 13.

Fig. 10: RED throughput, 16 hosts with Nix-Vector routing.

Using ECMP and Global routing to simulate RED, unfairness in the throughput is seen in Fig. 11. The first three flows have high throughputs, after which the throughput falls. The throughput ranges from 32,000 Kbps to 55,100 Kbps, with the highest throughput being Kbps and the lowest Kbps.

Fig. 11: RED throughput, 16 hosts with Global routing with ECMP.

Using 1024 hosts, the throughput of 969 servers is equal to 0 Kbps; among the remaining 55 servers, the highest throughput is Kbps, as shown in Fig. 12.

Regular Traffic

In the regular-traffic simulation, RED comes closest to fairness among the schemes simulated in this study, with throughputs ranging between 80,000 Kbps and 320,000 Kbps.

The largest throughput is Kbps, in both flow 7 and flow 10, while the smallest is in flow 5 with Kbps, as shown in Fig. 13.

Fig. 12: RED throughput, 1024 hosts with Global routing with ECMP.

Fig. 13: RED throughput, 16 hosts with Nix-Vector routing with regular traffic.

5.3. CoDel

Many-to-One

CoDel was simulated using the patched NS-3. When simulating 16 hosts with Nix-Vector routing, Fig. 14 shows that there is unfairness, with the largest throughput at Kbps and the lowest throughput at Kbps.

Fig. 14: CoDel throughput, 16 hosts with Nix-Vector routing.

In the case of Global routing with ECMP with 16 hosts, the situation is similar to droptail, where only two flows have high throughput. It can be seen from Fig. 15 that there is an exponential fall in throughput at flow 2, which decreases to Kbps.

Fig. 15: CoDel throughput, 16 hosts with Global routing with ECMP.

The largest throughput, on flow 1, is Kbps, and the smallest is Kbps on flow 15. This simulation once again shows the unfairness of CoDel. In the experiment consisting of 1024 servers, each of the 1023 senders achieves more than 0 Kbps of throughput, but this still does not ensure fairness in the network, as seen in Fig. 16.

Fig. 16: CoDel throughput, 1024 hosts with Global routing with ECMP.

The maximum throughput achieved in this simulation is Kbps in flow 4 and the minimum is 27.08 Kbps.

Regular Traffic

CoDel's behavior in the regular-traffic scenario used in this study can be seen in Fig. 17. Unfairness is present in this scenario as well, with more than half of the flows not reaching 15,000 Kbps, while seven of them had a throughput of more than 100,000 Kbps.

Fig. 17: CoDel throughput, 16 hosts with Nix-Vector routing with regular traffic.

The highest throughput is Kbps and the lowest is Kbps.

5.4. SFQ

Many-to-One

With Nix-Vector routing, SFQ comes closest to fairness, even though it is not exactly fair. From Fig. 18 it can be seen that the throughput difference between all the flows is not up to 12,000 Kbps.

Fig. 18: SFQ throughput, 16 hosts with Nix-Vector routing.

The largest throughput obtained is Kbps and the lowest is Kbps. This scenario therefore shows that SFQ is fairer than all the other queues used in this simulation. Using ECMP and Global routing, Fig. 19 shows that SFQ again looks close to fair: the difference between the highest and lowest throughput does not exceed 10,000 Kbps, which is better than the other AQM schemes simulated. The largest throughput is on flow 6, which attained Kbps, and the lowest is on flow 14, with Kbps.

Fig. 19: SFQ throughput, 16 hosts with Global routing with ECMP.

In the case of 1024 hosts, SFQ does not exhibit fairness, as most of its flows do not have any throughput. The largest throughput is Kbps, on flow 1. This is shown in Fig. 20.

Fig. 20: SFQ throughput, 1024 hosts with Global routing with ECMP.

The number of flows with 0 Kbps throughput is 969, just as in the case of RED.

Regular Traffic

In this scenario, SFQ also does not display fairness, with the last 9 flows not reaching 15,000 Kbps of throughput, while the first 9 flows have more than 100,000 Kbps, as seen in Fig. 21.

Fig. 21: SFQ throughput, 16 hosts with Nix-Vector routing with regular traffic.

As these simulations show, SFQ has the best throughput fairness in the many-to-one scenarios.

5.5. Comparing AQM Schemes

Aggregate Throughput

Aggregate throughput is the total throughput attained in the network. In these simulations, each flow's throughput is added up in order to compare the schemes.
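One way to compute the per-flow and aggregate numbers from the receiving applications is sketched below; it assumes each receiver is a PacketSink and a 2-second traffic run, and it is only an illustration of the calculation, not the exact post-processing used to produce the figures.

// Illustrative per-flow and aggregate throughput calculation from the
// bytes received by each TCP sink; assumes PacketSink receivers.
double runTime = 2.0;                  // seconds of traffic
double aggregateKbps = 0.0;
for (uint32_t i = 0; i < sinkApps.GetN (); ++i)
  {
    Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApps.Get (i));
    double flowKbps = sink->GetTotalRx () * 8.0 / runTime / 1000.0;
    aggregateKbps += flowKbps;         // sum over all flows (Fig. 22)
  }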

Comparing the aggregate throughput, as shown in Fig. 22, droptail clearly has the highest aggregate throughput and SFQ the worst. It can be concluded that while SFQ offers some fairness, it lacks throughput; droptail, on the other hand, has high throughput but, as said earlier, is not a fair queue management technique. Droptail achieves a throughput of Kbps while SFQ achieves Kbps.

Fig. 22: Aggregate throughput of the AQM schemes in many-to-one.

Latency

Latency refers to any kind of delay in the network. Latency is very important in a data center network: the lower the latency, the better the performance of the network. The latency of the simulated AQM schemes is compared in Fig. 23. CoDel has a very low latency, which is one of its advantages. SFQ has the highest (worst) latency among the simulated AQMs, with a value of ns. The difference between the SFQ and RED latencies is minute. This is probably due to the processing


More information

Data center Networking: New advances and Challenges (Ethernet) Anupam Jagdish Chomal Principal Software Engineer DellEMC Isilon

Data center Networking: New advances and Challenges (Ethernet) Anupam Jagdish Chomal Principal Software Engineer DellEMC Isilon Data center Networking: New advances and Challenges (Ethernet) Anupam Jagdish Chomal Principal Software Engineer DellEMC Isilon Bitcoin mining Contd Main reason for bitcoin mines at Iceland is the natural

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks George Porter Thanks: Amin Vahdat, Dina Katabi and Alex C. Snoeren Lecture 14 Overview" TCP congestion control review Dukkipati

More information

Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren

Lecture 21: Congestion Control CSE 123: Computer Networks Alex C. Snoeren Lecture 21: Congestion Control" CSE 123: Computer Networks Alex C. Snoeren Lecture 21 Overview" How fast should a sending host transmit data? Not to fast, not to slow, just right Should not be faster than

More information

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014

Congestion Control In The Internet Part 2: How it is implemented in TCP. JY Le Boudec 2014 1 Congestion Control In The Internet Part 2: How it is implemented in TCP JY Le Boudec 2014 Contents 1. Congestion control in TCP 2. The fairness of TCP 3. The loss throughput formula 4. Explicit Congestion

More information

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel

Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Quality of Service Mechanism for MANET using Linux Semra Gulder, Mathieu Déziel Semra.gulder@crc.ca, mathieu.deziel@crc.ca Abstract: This paper describes a QoS mechanism suitable for Mobile Ad Hoc Networks

More information

General comments on candidates' performance

General comments on candidates' performance BCS THE CHARTERED INSTITUTE FOR IT BCS Higher Education Qualifications BCS Level 5 Diploma in IT April 2018 Sitting EXAMINERS' REPORT Computer Networks General comments on candidates' performance For the

More information

Communication Networks

Communication Networks Communication Networks Spring 2018 Laurent Vanbever nsg.ee.ethz.ch ETH Zürich (D-ITET) April 30 2018 Materials inspired from Scott Shenker & Jennifer Rexford Last week on Communication Networks We started

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Network Management & Monitoring

Network Management & Monitoring Network Management & Monitoring Network Delay These materials are licensed under the Creative Commons Attribution-Noncommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/) End-to-end

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

image 3.8 KB Figure 1.6: Example Web Page

image 3.8 KB Figure 1.6: Example Web Page image. KB image 1 KB Figure 1.: Example Web Page and is buffered at a router, it must wait for all previously queued packets to be transmitted first. The longer the queue (i.e., the more packets in the

More information

DIBS: Just-in-time congestion mitigation for Data Centers

DIBS: Just-in-time congestion mitigation for Data Centers DIBS: Just-in-time congestion mitigation for Data Centers Kyriakos Zarifis, Rui Miao, Matt Calder, Ethan Katz-Bassett, Minlan Yu, Jitendra Padhye University of Southern California Microsoft Research Summary

More information

Chapter III. congestion situation in Highspeed Networks

Chapter III. congestion situation in Highspeed Networks Chapter III Proposed model for improving the congestion situation in Highspeed Networks TCP has been the most used transport protocol for the Internet for over two decades. The scale of the Internet and

More information

Multiple unconnected networks

Multiple unconnected networks TCP/IP Life in the Early 1970s Multiple unconnected networks ARPAnet Data-over-cable Packet satellite (Aloha) Packet radio ARPAnet satellite net Differences Across Packet-Switched Networks Addressing Maximum

More information

Queuing. Congestion Control and Resource Allocation. Resource Allocation Evaluation Criteria. Resource allocation Drop disciplines Queuing disciplines

Queuing. Congestion Control and Resource Allocation. Resource Allocation Evaluation Criteria. Resource allocation Drop disciplines Queuing disciplines Resource allocation Drop disciplines Queuing disciplines Queuing 1 Congestion Control and Resource Allocation Handle congestion if and when it happens TCP Congestion Control Allocate resources to avoid

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Flow and Congestion Control

Flow and Congestion Control CE443 Computer Networks Flow and Congestion Control Behnam Momeni Computer Engineering Department Sharif University of Technology Acknowledgments: Lecture slides are from Computer networks course thought

More information

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am

Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am 15-744 Computer Networks Spring 2017 Homework 2 Due by 3/2/2017, 10:30am (please submit through e-mail to zhuoc@cs.cmu.edu and srini@cs.cmu.edu) Name: A Congestion Control 1. At time t, a TCP connection

More information

Congestion Avoidance

Congestion Avoidance COMP 631: NETWORKED & DISTRIBUTED SYSTEMS Congestion Avoidance Jasleen Kaur Fall 2016 1 Avoiding Congestion: Strategies TCP s strategy: congestion control Ø Control congestion once it occurs Repeatedly

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

Recap. More TCP. Congestion avoidance. TCP timers. TCP lifeline. Application Presentation Session Transport Network Data Link Physical

Recap. More TCP. Congestion avoidance. TCP timers. TCP lifeline. Application Presentation Session Transport Network Data Link Physical Recap ½ congestion window ½ congestion window More TCP Congestion avoidance TCP timers TCP lifeline Application Presentation Session Transport Network Data Link Physical 1 Congestion Control vs Avoidance

More information

On the Transition to a Low Latency TCP/IP Internet

On the Transition to a Low Latency TCP/IP Internet On the Transition to a Low Latency TCP/IP Internet Bartek Wydrowski and Moshe Zukerman ARC Special Research Centre for Ultra-Broadband Information Networks, EEE Department, The University of Melbourne,

More information

Performance Analysis of TCP Variants

Performance Analysis of TCP Variants 102 Performance Analysis of TCP Variants Abhishek Sawarkar Northeastern University, MA 02115 Himanshu Saraswat PES MCOE,Pune-411005 Abstract The widely used TCP protocol was developed to provide reliable

More information

Network Working Group Request for Comments: 1046 ISI February A Queuing Algorithm to Provide Type-of-Service for IP Links

Network Working Group Request for Comments: 1046 ISI February A Queuing Algorithm to Provide Type-of-Service for IP Links Network Working Group Request for Comments: 1046 W. Prue J. Postel ISI February 1988 A Queuing Algorithm to Provide Type-of-Service for IP Links Status of this Memo This memo is intended to explore how

More information

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste

Outline Computer Networking. TCP slow start. TCP modeling. TCP details AIMD. Congestion Avoidance. Lecture 18 TCP Performance Peter Steenkiste Outline 15-441 Computer Networking Lecture 18 TCP Performance Peter Steenkiste Fall 2010 www.cs.cmu.edu/~prs/15-441-f10 TCP congestion avoidance TCP slow start TCP modeling TCP details 2 AIMD Distributed,

More information

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED

TCP Congestion Control : Computer Networking. Introduction to TCP. Key Things You Should Know Already. Congestion Control RED TCP Congestion Control 15-744: Computer Networking L-4 TCP Congestion Control RED Assigned Reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [TFRC] Equation-Based Congestion Control

More information

DeTail Reducing the Tail of Flow Completion Times in Datacenter Networks. David Zats, Tathagata Das, Prashanth Mohan, Dhruba Borthakur, Randy Katz

DeTail Reducing the Tail of Flow Completion Times in Datacenter Networks. David Zats, Tathagata Das, Prashanth Mohan, Dhruba Borthakur, Randy Katz DeTail Reducing the Tail of Flow Completion Times in Datacenter Networks David Zats, Tathagata Das, Prashanth Mohan, Dhruba Borthakur, Randy Katz 1 A Typical Facebook Page Modern pages have many components

More information

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University

Congestion Control. Daniel Zappala. CS 460 Computer Networking Brigham Young University Congestion Control Daniel Zappala CS 460 Computer Networking Brigham Young University 2/25 Congestion Control how do you send as fast as possible, without overwhelming the network? challenges the fastest

More information

Unit 2 Packet Switching Networks - II

Unit 2 Packet Switching Networks - II Unit 2 Packet Switching Networks - II Dijkstra Algorithm: Finding shortest path Algorithm for finding shortest paths N: set of nodes for which shortest path already found Initialization: (Start with source

More information

Lecture 14: Congestion Control"

Lecture 14: Congestion Control Lecture 14: Congestion Control" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Amin Vahdat, Dina Katabi Lecture 14 Overview" TCP congestion control review XCP Overview 2 Congestion Control

More information

15-744: Computer Networking. Overview. Queuing Disciplines. TCP & Routers. L-6 TCP & Routers

15-744: Computer Networking. Overview. Queuing Disciplines. TCP & Routers. L-6 TCP & Routers TCP & Routers 15-744: Computer Networking RED XCP Assigned reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [KHR02] Congestion Control for High Bandwidth-Delay Product Networks L-6

More information

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services

Overview Computer Networking What is QoS? Queuing discipline and scheduling. Traffic Enforcement. Integrated services Overview 15-441 15-441 Computer Networking 15-641 Lecture 19 Queue Management and Quality of Service Peter Steenkiste Fall 2016 www.cs.cmu.edu/~prs/15-441-f16 What is QoS? Queuing discipline and scheduling

More information

Lecture 15: Datacenter TCP"

Lecture 15: Datacenter TCP Lecture 15: Datacenter TCP" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Mohammad Alizadeh Lecture 15 Overview" Datacenter workload discussion DC-TCP Overview 2 Datacenter Review"

More information

Request for Comments: S. Floyd ICSI K. Ramakrishnan AT&T Labs Research June 2009

Request for Comments: S. Floyd ICSI K. Ramakrishnan AT&T Labs Research June 2009 Network Working Group Request for Comments: 5562 Category: Experimental A. Kuzmanovic A. Mondal Northwestern University S. Floyd ICSI K. Ramakrishnan AT&T Labs Research June 2009 Adding Explicit Congestion

More information

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs

Congestion Control for High Bandwidth-delay Product Networks. Dina Katabi, Mark Handley, Charlie Rohrs Congestion Control for High Bandwidth-delay Product Networks Dina Katabi, Mark Handley, Charlie Rohrs Outline Introduction What s wrong with TCP? Idea of Efficiency vs. Fairness XCP, what is it? Is it

More information

Lecture 15: TCP over wireless networks. Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday

Lecture 15: TCP over wireless networks. Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday Lecture 15: TCP over wireless networks Mythili Vutukuru CS 653 Spring 2014 March 13, Thursday TCP - recap Transport layer TCP is the dominant protocol TCP provides in-order reliable byte stream abstraction

More information

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers

Overview. TCP & router queuing Computer Networking. TCP details. Workloads. TCP Performance. TCP Performance. Lecture 10 TCP & Routers Overview 15-441 Computer Networking TCP & router queuing Lecture 10 TCP & Routers TCP details Workloads Lecture 10: 09-30-2002 2 TCP Performance TCP Performance Can TCP saturate a link? Congestion control

More information

Reliable Transport II: TCP and Congestion Control

Reliable Transport II: TCP and Congestion Control Reliable Transport II: TCP and Congestion Control Stefano Vissicchio UCL Computer Science COMP0023 Recap: Last Lecture Transport Concepts Layering context Transport goals Transport mechanisms and design

More information

CSE/EE 461 Lecture 16 TCP Congestion Control. TCP Congestion Control

CSE/EE 461 Lecture 16 TCP Congestion Control. TCP Congestion Control CSE/EE Lecture TCP Congestion Control Tom Anderson tom@cs.washington.edu Peterson, Chapter TCP Congestion Control Goal: efficiently and fairly allocate network bandwidth Robust RTT estimation Additive

More information

Packet Scheduling in Data Centers. Lecture 17, Computer Networks (198:552)

Packet Scheduling in Data Centers. Lecture 17, Computer Networks (198:552) Packet Scheduling in Data Centers Lecture 17, Computer Networks (198:552) Datacenter transport Goal: Complete flows quickly / meet deadlines Short flows (e.g., query, coordination) Large flows (e.g., data

More information

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985

Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 Network Working Group Request for Comments: 969 David D. Clark Mark L. Lambert Lixia Zhang M. I. T. Laboratory for Computer Science December 1985 1. STATUS OF THIS MEMO This RFC suggests a proposed protocol

More information

Transport layer issues

Transport layer issues Transport layer issues Dmitrij Lagutin, dlagutin@cc.hut.fi T-79.5401 Special Course in Mobility Management: Ad hoc networks, 28.3.2007 Contents Issues in designing a transport layer protocol for ad hoc

More information

Promoting the Use of End-to-End Congestion Control in the Internet

Promoting the Use of End-to-End Congestion Control in the Internet Promoting the Use of End-to-End Congestion Control in the Internet Sally Floyd and Kevin Fall IEEE/ACM Transactions on Networking May 1999 ACN: TCP Friendly 1 Outline The problem of Unresponsive Flows

More information

Data Center TCP (DCTCP)

Data Center TCP (DCTCP) Data Center TCP (DCTCP) Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan Microsoft Research Stanford University 1

More information

TCP Congestion Control

TCP Congestion Control TCP Congestion Control What is Congestion The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

TCP Congestion Control

TCP Congestion Control What is Congestion TCP Congestion Control The number of packets transmitted on the network is greater than the capacity of the network Causes router buffers (finite size) to fill up packets start getting

More information

ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL

ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL ROBUST TCP: AN IMPROVEMENT ON TCP PROTOCOL SEIFEDDINE KADRY 1, ISSA KAMAR 1, ALI KALAKECH 2, MOHAMAD SMAILI 1 1 Lebanese University - Faculty of Science, Lebanon 1 Lebanese University - Faculty of Business,

More information

CS457 Transport Protocols. CS 457 Fall 2014

CS457 Transport Protocols. CS 457 Fall 2014 CS457 Transport Protocols CS 457 Fall 2014 Topics Principles underlying transport-layer services Demultiplexing Detecting corruption Reliable delivery Flow control Transport-layer protocols User Datagram

More information

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online):

IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): IJSRD - International Journal for Scientific Research & Development Vol. 2, Issue 03, 2014 ISSN (online): 2321-0613 Performance Evaluation of TCP in the Presence of in Heterogeneous Networks by using Network

More information

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies

Priority Traffic CSCD 433/533. Advanced Networks Spring Lecture 21 Congestion Control and Queuing Strategies CSCD 433/533 Priority Traffic Advanced Networks Spring 2016 Lecture 21 Congestion Control and Queuing Strategies 1 Topics Congestion Control and Resource Allocation Flows Types of Mechanisms Evaluation

More information

CS551 Router Queue Management

CS551 Router Queue Management CS551 Router Queue Management Bill Cheng http://merlot.usc.edu/cs551-f12 1 Congestion Control vs. Resource Allocation Network s key role is to allocate its transmission resources to users or applications

More information

Congestion Collapse in the 1980s

Congestion Collapse in the 1980s Congestion Collapse Congestion Collapse in the 1980s Early TCP used fixed size window (e.g., 8 packets) Initially fine for reliability But something happened as the ARPANET grew Links stayed busy but transfer

More information

CS 268: Computer Networking

CS 268: Computer Networking CS 268: Computer Networking L-6 Router Congestion Control TCP & Routers RED XCP Assigned reading [FJ93] Random Early Detection Gateways for Congestion Avoidance [KHR02] Congestion Control for High Bandwidth-Delay

More information

WarpTCP WHITE PAPER. Technology Overview. networks. -Improving the way the world connects -

WarpTCP WHITE PAPER. Technology Overview. networks. -Improving the way the world connects - WarpTCP WHITE PAPER Technology Overview -Improving the way the world connects - WarpTCP - Attacking the Root Cause TCP throughput reduction is often the bottleneck that causes data to move at slow speed.

More information

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control

Chapter 3 outline. 3.5 Connection-oriented transport: TCP. 3.6 Principles of congestion control 3.7 TCP congestion control Chapter 3 outline 3.1 Transport-layer services 3.2 Multiplexing and demultiplexing 3.3 Connectionless transport: UDP 3.4 Principles of reliable data transfer 3.5 Connection-oriented transport: TCP segment

More information

EXPERIENCES EVALUATING DCTCP. Lawrence Brakmo, Boris Burkov, Greg Leclercq and Murat Mugan Facebook

EXPERIENCES EVALUATING DCTCP. Lawrence Brakmo, Boris Burkov, Greg Leclercq and Murat Mugan Facebook EXPERIENCES EVALUATING DCTCP Lawrence Brakmo, Boris Burkov, Greg Leclercq and Murat Mugan Facebook INTRODUCTION Standard TCP congestion control, which only reacts to packet losses has many problems Can

More information

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols Chapter II Protocols for High Speed Networks 2.1 Need for alternative Protocols As the conventional TCP suffers from poor performance on high bandwidth delay product links [47] meant for supporting transmission

More information

TCP and BBR. Geoff Huston APNIC

TCP and BBR. Geoff Huston APNIC TCP and BBR Geoff Huston APNIC Computer Networking is all about moving data The way in which data movement is controlled is a key characteristic of the network architecture The Internet protocol passed

More information

Flow Control. Flow control problem. Other considerations. Where?

Flow Control. Flow control problem. Other considerations. Where? Flow control problem Flow Control An Engineering Approach to Computer Networking Consider file transfer Sender sends a stream of packets representing fragments of a file Sender should try to match rate

More information

CSC 8560 Computer Networks: TCP

CSC 8560 Computer Networks: TCP CSC 8560 Computer Networks: TCP Professor Henry Carter Fall 2017 Project 2: mymusic You will be building an application that allows you to synchronize your music across machines. The details of which are

More information

CS644 Advanced Networks

CS644 Advanced Networks What we know so far CS644 Advanced Networks Lecture 6 Beyond TCP Congestion Control Andreas Terzis TCP Congestion control based on AIMD window adjustment [Jac88] Saved Internet from congestion collapse

More information

Cloud e Datacenter Networking

Cloud e Datacenter Networking Cloud e Datacenter Networking Università degli Studi di Napoli Federico II Dipartimento di Ingegneria Elettrica e delle Tecnologie dell Informazione DIETI Laurea Magistrale in Ingegneria Informatica Prof.

More information

Flow and Congestion Control (Hosts)

Flow and Congestion Control (Hosts) Flow and Congestion Control (Hosts) 14-740: Fundamentals of Computer Networks Bill Nace Material from Computer Networking: A Top Down Approach, 6 th edition. J.F. Kurose and K.W. Ross traceroute Flow Control

More information

PLEASE READ CAREFULLY BEFORE YOU START

PLEASE READ CAREFULLY BEFORE YOU START MIDTERM EXAMINATION #2 NETWORKING CONCEPTS 03-60-367-01 U N I V E R S I T Y O F W I N D S O R - S c h o o l o f C o m p u t e r S c i e n c e Fall 2011 Question Paper NOTE: Students may take this question

More information

Network Performance: Queuing

Network Performance: Queuing Network Performance: Queuing EE 122: Intro to Communication Networks Fall 2006 (MW 4-5:30 in Donner 155) Vern Paxson TAs: Dilip Antony Joseph and Sukun Kim http://inst.eecs.berkeley.edu/~ee122/ Materials

More information