Computer Networks: Congestion Control. What's the hold up? (1/5/2015)

Course Reference Model
- 7 Application: provides functions needed by users
- 4 Transport: provides end-to-end delivery
- 3 Network: sends packets over multiple links
- 2 Link: sends frames over one or more links
- 1 Physical: sends bits as signals
(Zhang, Xinyu, School of Software, East China Normal University, Fall 2014)

Topic
- Understanding congestion, a traffic jam in the network
- Later we will learn how to control it
- Congestion: what's the hold up?

Nature of Congestion
- Routers/switches have internal buffering for contention
- Simplified view of per-port output queues
- Typically FIFO (First In, First Out); discard when full
[Figure: router with input buffers, a switching fabric, and FIFO output buffers holding queued packets]

Nature of Congestion (2)
- Queues help by absorbing bursts when the input rate > output rate
- But if input rate > output rate persistently, the queue will overflow
- This is congestion
- Congestion is a function of the traffic patterns; it can occur even if every link has the same capacity

Effects of Congestion
- What happens to performance as we increase the load?

Effects of Congestion (2)
- What happens to performance as we increase the load?

Effects of Congestion (3)
- As offered load rises, congestion occurs as queues begin to fill:
  - Delay and loss rise sharply with more load
  - Throughput falls below load (due to loss)
  - Goodput may fall below throughput (due to spurious retransmissions)
- None of the above is good!
- Want to operate the network just before the onset of congestion

Bandwidth Allocation
- An important task for the network is to allocate its capacity to senders
- A good allocation is efficient and fair
- Efficient means most capacity is used, but there is no congestion
- Fair means every sender gets a reasonable share of the network

Bandwidth Allocation (2)
- Why is it hard? (Just split equally!)
  - The number of senders and their offered load is constantly changing
  - Senders may lack capacity in different parts of the network
  - The network is distributed; no single party has an overall picture of its state
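As a quick illustration of the overflow behavior in "Nature of Congestion (2)", here is a minimal sketch (my own toy model, not course material; the function name, rates, and buffer size are arbitrary) of a drop-tail FIFO queue: while the input rate exceeds the output rate the queue absorbs the excess, and once the buffer is full further packets are discarded.

```python
# Toy drop-tail FIFO queue: arrivals per tick vs. a fixed output (service) rate.
# Illustrative sketch only; the rates and buffer size are arbitrary choices.

def simulate_queue(arrivals, service_rate, capacity):
    """Return (queue length per tick, total drops) for a FIFO drop-tail queue."""
    queue, drops, lengths = 0, 0, []
    for burst in arrivals:
        queue += burst                         # packets arrive this tick
        if queue > capacity:                   # buffer full: discard the excess
            drops += queue - capacity
            queue = capacity
        queue = max(0, queue - service_rate)   # output link drains the queue
        lengths.append(queue)
    return lengths, drops

# Input rate (2 packets/tick) persistently above output rate (1 packet/tick):
# the queue grows until the buffer fills, after which packets are lost every tick.
lengths, drops = simulate_queue(arrivals=[2] * 20, service_rate=1, capacity=10)
print(lengths, drops)
```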

Topic
- Fairness of bandwidth allocation
- What's a fair bandwidth allocation? The max-min fair allocation

Recall
- We want a good bandwidth allocation to be fair and efficient
- Now we learn what fair means
- Caveat: in practice, efficiency is more important than fairness

Efficiency vs. Fairness
- Cannot always have both!
- Example: network with traffic A→B, B→C, and A→C
- How much traffic can we carry?
[Figure: chain of nodes A, B, C; each link has a capacity of 1 unit]

Efficiency vs. Fairness (2)
- If we care about fairness:
  - Give equal bandwidth to each flow
  - A→B: ½ unit, B→C: ½, and A→C: ½
  - Total traffic carried is 1½ units

Efficiency vs. Fairness (3)
- If we care about efficiency:
  - Maximize the total traffic in the network
  - A→B: 1 unit, B→C: 1, and A→C: 0
  - Total traffic rises to 2 units!
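The trade-off can be checked mechanically. The sketch below is my own illustration (the 0.1 step size and names are arbitrary): it enumerates allocations on the A-B-C chain, where the A→C flow consumes capacity on both one-unit links, and confirms that the allocation maximizing total traffic starves A→C, while the equal allocation carries only 1½ units.

```python
from itertools import product

# Links A-B and B-C each have capacity 1; flow A->C uses both links.
def feasible(ab, bc, ac):
    return ab + ac <= 1 + 1e-9 and bc + ac <= 1 + 1e-9

# Enumerate rates in steps of 0.1 and pick the allocation with the largest total.
rates = [i / 10 for i in range(11)]
best = max((r for r in product(rates, repeat=3) if feasible(*r)), key=sum)

print("most efficient (A->B, B->C, A->C):", best, "total:", sum(best))  # (1.0, 1.0, 0.0), 2.0
print("equal shares total:", 3 * 0.5)                                   # 1.5, but no flow is starved
```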

The Slippery Notion of Fairness
- Why is equal per flow fair anyway?
  - A→C uses more network resources (two links) than A→B or B→C
  - Host A sends two flows, B sends one
- Not productive to seek exact fairness
- More important to avoid starvation
- Equal per flow is good enough

Generalizing Equal per Flow
- The bottleneck for a flow of traffic is the link that limits its bandwidth
  - Where congestion occurs for the flow
  - For A→C, link A-B is the bottleneck
[Figure: chain A-B-C in which link A-B has much less capacity than link B-C, so A-B is the bottleneck]

Generalizing Equal per Flow (2)
- Flows may have different bottlenecks
  - For A→C, link A-B is the bottleneck
  - For B→C, link B-C is the bottleneck
- Can no longer divide links equally

Max-Min Fairness
- Intuitively, flows bottlenecked on a link get an equal share of that link
- A max-min fair allocation is one in which increasing the rate of one flow would decrease the rate of a smaller flow
  - This maximizes the minimum flow

Max-Min Fairness (2)
- To find it for a given network, imagine pouring water into the network:
  1. Start with all flows at rate 0
  2. Increase the flows until there is a new bottleneck in the network
  3. Hold fixed the rate of the flows that are bottlenecked
  4. Go to step 2 for any remaining flows

Max-Min Example
- Example: a network with 4 flows; links have equal bandwidth
- What is the max-min fair allocation?

Max-Min Example (2)
- When the rate reaches 1/3, flows B, C, and D bottleneck on link R4-R5
- Fix B, C, and D; continue to increase A

Max-Min Example (3)
- When the rate reaches 2/3, flow A bottlenecks on link R2-R3. Done.

Max-Min Example (4)
- End with A = 2/3; B, C, D = 1/3; and links R2-R3 and R4-R5 full
- Other links have extra capacity that can't be used

Adapting over Time
- The allocation changes as flows start and stop
[Figure: flow rates changing over time]

Adapting over Time (2)
- Flow 1 slows when Flow 2 starts
- Flow 3's limit is elsewhere
- Flow 1 speeds up when Flow 2 stops
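The water-filling recipe in "Max-Min Fairness (2)" translates directly into a few lines of code. This is an illustrative sketch, not course material: the topology is written as plain dictionaries, and the paths for the four-flow example are my reading of the figure (flow B crossing both saturated links), so treat the link names as placeholders.

```python
def max_min_fair(capacity, paths):
    """Water-filling: grow all flows together until some link saturates,
    freeze the flows crossing it, then repeat with the remaining flows.

    capacity: dict link -> capacity; paths: dict flow -> list of links it uses.
    """
    rate = {flow: 0.0 for flow in paths}
    frozen = set()
    while len(frozen) < len(paths):
        # For each link, how much every unfrozen flow on it can still grow.
        headroom = {}
        for link, cap in capacity.items():
            active = [f for f in paths if link in paths[f] and f not in frozen]
            if active:
                used = sum(rate[f] for f in paths if link in paths[f])
                headroom[link] = (cap - used) / len(active)
        bottleneck = min(headroom, key=headroom.get)   # next link to fill up
        grow = headroom[bottleneck]
        for f in paths:
            if f not in frozen:
                rate[f] += grow
        frozen |= {f for f in paths if bottleneck in paths[f]}
    return rate

# Assumed rendering of the slides' four-flow example: B, C, and D share link
# R4-R5 and bottleneck at 1/3 each; A then grows to 2/3 and fills link R2-R3.
capacity = {"R2-R3": 1.0, "R4-R5": 1.0}
paths = {"A": ["R2-R3"], "B": ["R2-R3", "R4-R5"], "C": ["R4-R5"], "D": ["R4-R5"]}
print(max_min_fair(capacity, paths))   # A: ~0.667, B/C/D: ~0.333
```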

Additive Increase Multiplicative Decrease (AIMD)

Topic
- Bandwidth allocation models
- The Additive Increase Multiplicative Decrease (AIMD) control law
- The AIMD sawtooth

Bandwidth Allocation Models
- Open loop versus closed loop
  - Open: reserve bandwidth before use
  - Closed: use feedback to adjust rates
- Host versus network support
  - Who sets/enforces allocations?
- Window versus rate based
  - How is the allocation expressed?
- TCP is closed loop, host-driven, and window-based

Bandwidth Allocation Models (2)
- We'll look at closed-loop, host-driven, window-based allocation too
- The network layer returns feedback on the current allocation to senders
  - At the least, it tells them whether there is congestion
- The transport layer adjusts the sender's behavior via its window in response
- How senders adapt is a control law

Additive Increase Multiplicative Decrease
- AIMD is a control law hosts can use to reach a good allocation
  - Hosts additively increase their rate while the network is not congested
  - Hosts multiplicatively decrease their rate when congestion occurs
- Used by TCP
- Let's explore the AIMD game

AIMD Game
- Hosts 1 and 2 share a bottleneck
  - But they do not talk to each other directly
- The router provides binary feedback
  - It tells the hosts whether the network is congested
[Figure: Host 1 and Host 2 connected through a shared bottleneck router to the rest of the network]

AIMD Game (2)
- Each point is a possible allocation
[Figure: plane of Host 1 rate vs. Host 2 rate, showing the fair line, the efficient line, the congested region, and the optimal allocation]

AIMD Game (3)
- AI and MD move the allocation
[Figure: additive increase moves the allocation diagonally (both hosts gain equally); multiplicative decrease moves it proportionally toward the origin]

AIMD Game (4)
- Play the game!
[Figure: phase plot with the fair line y=x, the efficient line x+y=1, the congested region, the optimal allocation, and a starting point]

AIMD Game (5)
- Always converges to a good allocation!
[Figure: the AIMD trajectory from the starting point homes in on the intersection of the fair and efficient lines]

AIMD Sawtooth
- Produces a sawtooth pattern over time for the rate of each host
- This is the TCP sawtooth (later)
[Figure: Host 1's or 2's rate over time, rising linearly (additive increase) and dropping sharply (multiplicative decrease)]

AIMD Properties
- Converges to an allocation that is efficient and fair when hosts run it
  - Holds for more general topologies
- Other increase/decrease control laws do not! (Try MIAD, MIMD, AIAD)
- Requires only binary feedback from the network
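The convergence claim in "AIMD Game (5)" and "AIMD Properties" is easy to check with a short simulation. The sketch below is my own illustration (the capacity, increment, and decrease factor are arbitrary choices): two hosts sharing one bottleneck add a small constant while the router reports no congestion and halve their rates when it does, and from any starting point their rates equalize while tracing the sawtooth.

```python
def aimd(rate1, rate2, capacity=1.0, add=0.01, decrease=0.5, rounds=2000):
    """Two hosts, one bottleneck, binary congestion feedback each round."""
    for _ in range(rounds):
        if rate1 + rate2 > capacity:    # congested: multiplicative decrease
            rate1 *= decrease
            rate2 *= decrease
        else:                           # not congested: additive increase
            rate1 += add
            rate2 += add
    return rate1, rate2

# Start from a very unfair allocation; AIMD still drives the rates together
# (MD shrinks the gap, AI preserves it), ending on the sawtooth near fairness.
print(aimd(0.90, 0.05))   # the two rates come out nearly equal
# Swapping in MIMD or AIAD instead does not equalize the rates.
```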

TCP ACK Clocking

Topic
- The self-clocking behavior of sliding windows, and how it is used by TCP
- The ACK clock

Sliding Window ACK Clock
- Each in-order ACK advances the sliding window and lets a new segment enter the network
- ACKs clock data segments. Tick tock!
[Figure: a window of data segments flowing toward the receiver while the corresponding ACKs return to the sender, each ACK releasing the next segment]

Benefit of ACK Clocking
- Consider what happens when a sender injects a burst of segments into the network
[Figure: fast link, queue building at a slow (bottleneck) link, fast link]

Benefit of ACK Clocking (2)
- The segments are buffered and spread out on the slow link
[Figure: segments spread out after the slow (bottleneck) link]

Benefit of ACK Clocking (3)
- ACKs maintain the spread back to the original sender

Benefit of ACK Clocking (4)
- The sender clocks new segments with the spread
- It is now sending at the bottleneck link rate without queuing!
[Figure: segments stay spread out; the queue no longer builds at the slow link]

Benefit of ACK Clocking (5)
- This helps the network run with low levels of loss and delay!
- The network has smoothed out the burst of data segments
- The ACK clock transfers this smooth timing back to the sender
- Subsequent data segments are not sent in bursts, so they do not queue up in the network
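The spacing argument in the "Benefit of ACK Clocking" slides can be seen in a small timeline computation. This is my own toy model (the service time and burst size are arbitrary): a back-to-back burst is queued at the slow bottleneck, which releases segments one service time apart; since the ACKs echo that spacing back, segments clocked out one per ACK arrive at the bottleneck already paced and build no queue.

```python
# Toy timeline of a burst crossing a slow bottleneck link.
service = 1.0                                   # bottleneck time per segment (arbitrary)
send_times = [i * 0.01 for i in range(5)]       # five segments sent almost back to back

finish, out_times = 0.0, []
for t in send_times:
    finish = max(finish, t) + service           # wait for the link, then transmit
    out_times.append(finish)

print(out_times)                                # [1.0, 2.0, 3.0, 4.0, 5.0]
gaps = [round(b - a, 2) for a, b in zip(out_times, out_times[1:])]
print(gaps)                                     # [1.0, 1.0, 1.0]: spaced at the bottleneck
# rate. ACKs return with this same spacing, so new segments sent one per ACK
# reach the bottleneck one service time apart and never queue.
```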

TCP Slow Start

Topic
- How TCP implements AIMD, part 1
- Slow start is a component of the AI portion of AIMD

Slow-Start Solution
- Start by doubling cwnd every RTT
  - Exponential growth (1, 2, 4, 8, 16, ...)
- Start slow, quickly reach large values
[Figure: window (cwnd) versus time, comparing a fixed window, slow-start, and slow-start followed by AI]

Slow-Start (Doubling) Timeline
- Increment cwnd by 1 packet for each ACK

Additive Increase Timeline
- Increment cwnd by 1 packet every cwnd ACKs (or 1 RTT)

TCP Tahoe (Implementation)
- Initial slow-start (doubling) phase
  - Start with cwnd = 1 (or a small value)
  - cwnd += 1 packet per ACK
- Later, additive increase phase
  - cwnd += 1/cwnd packets per ACK
  - Roughly adds 1 packet per RTT
- Switching threshold (initially infinity)
  - Switch to AI when cwnd > ssthresh
  - Set ssthresh = cwnd/2 after loss
  - Begin with slow start after a timeout
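As a compact restatement of the rules above, here is a minimal sketch of the Tahoe window update (an illustration only, not code from any real TCP stack; the class and method names are mine). It makes the arithmetic concrete: +1 per ACK during slow start doubles cwnd every RTT, +1/cwnd per ACK afterwards adds about one packet per RTT, and a timeout halves ssthresh and restarts slow start.

```python
class TahoeCwnd:
    """Congestion window bookkeeping following the TCP Tahoe rules on the slide."""

    def __init__(self):
        self.cwnd = 1.0                   # start with one packet (or a small value)
        self.ssthresh = float("inf")      # switching threshold, initially infinity

    def on_ack(self):
        if self.cwnd < self.ssthresh:     # slow-start (doubling) phase
            self.cwnd += 1.0              # +1 packet per ACK -> doubles each RTT
        else:                             # additive increase phase
            self.cwnd += 1.0 / self.cwnd  # +1/cwnd per ACK -> ~ +1 packet per RTT

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2     # remember half the window at the loss
        self.cwnd = 1.0                   # and begin again with slow start

# Each RTT delivers roughly cwnd ACKs, so cwnd doubles per RTT in slow start.
tcp = TahoeCwnd()
for rtt in range(5):
    for _ in range(int(tcp.cwnd)):
        tcp.on_ack()
    print(rtt + 1, tcp.cwnd)              # 2, 4, 8, 16, 32
```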