Congestion Control In the Network


Congestion Control In the Network. Brighten Godfrey, CS 598pbg, September 9, 2010. Slides courtesy Ion Stoica, with adaptation by Brighten Godfrey.

Today Fair queueing XCP Announcements

Problem: no isolation between flows. No protection: if a flow misbehaves, it will hurt the other flows. Example: 1 UDP flow (sending at 10 Mbps) and 31 TCPs sharing a 10 Mbps link. [Figure: per-flow throughput (Mbps) vs. flow number under RED; the UDP flow captures nearly the entire link while the TCP flows get almost nothing.]

A first solution: round-robin among different flows [Nagle 87], with one queue per flow. [Figure: per-flow queues feeding a single output link.]

Round-Robin Discussion. Advantages: protection among flows - misbehaving flows will not affect the performance of well-behaving flows - FIFO does not have this property. Disadvantages: - more complex than FIFO: per-flow queue/state - biased toward large packets: a flow receives service proportional to the number of packets it sends, not bytes (when is this bad?)
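
To make the round-robin scheduler concrete, here is a minimal Python sketch (not from the slides; class and variable names are illustrative) of per-flow queues served one packet per flow per round:

```python
from collections import defaultdict, deque

class RoundRobinScheduler:
    """One queue per flow [Nagle 87]; serve one packet per active flow per round."""

    def __init__(self):
        self.queues = defaultdict(deque)   # flow_id -> queue of packets
        self.active = deque()              # flows that currently have packets queued

    def enqueue(self, flow_id, packet):
        if not self.queues[flow_id]:
            self.active.append(flow_id)    # flow just became active
        self.queues[flow_id].append(packet)

    def dequeue(self):
        """Return the next packet to transmit, cycling over active flows."""
        if not self.active:
            return None
        flow_id = self.active.popleft()
        packet = self.queues[flow_id].popleft()
        if self.queues[flow_id]:
            self.active.append(flow_id)    # still backlogged: back of the ring
        return packet
```

Because service is counted in packets rather than bits, a flow sending large packets gets more bytes per round, which is exactly the bias noted above.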

Fair Queueing (FQ) [DKS 89]. Define a fluid flow system: a system in which flows are served bit-by-bit - i.e., bit-by-bit round robin. Advantages: - each flow will receive exactly its max-min fair rate - ...and exactly its fair per-packet delay

Definition of fairness: Max-Min Fairness. If the link is congested, compute f such that the allocations min(r_i, f) fill the link capacity. [Figure: three flows with arrival rates 8, 6, and 2 share a link of capacity 10 and receive 4, 4, and 2 respectively.] f = 4: min(8, 4) = 4, min(6, 4) = 4, min(2, 4) = 2.

Max-Min Fairness computation. Denote: C = link capacity, N = number of flows, r_i = arrival rate of flow i. Max-min fair rate f computation: 1. compute C/N 2. if there are flows i such that r_i <= C/N, remove them and update C (subtracting their rates) and N 3. if not, f = C/N; terminate 4. go to 1. A flow can receive at most the fair rate, i.e., min(f, r_i).

Example: C = 10; r_1 = 8, r_2 = 6, r_3 = 2; N = 3. C/3 = 3.33, so flow 3 (r_3 = 2) is satisfied: C = C - r_3 = 8; N = 2. C/2 = 4; f = 4. [Figure: the three flows share the 10-capacity link and receive 4, 4, and 2.] f = 4: min(8, 4) = 4, min(6, 4) = 4, min(2, 4) = 2.
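
A small Python sketch of the computation above (function and variable names are my own); on the slide's example it returns f = 4:

```python
def max_min_fair_rate(capacity, rates):
    """Iteratively satisfy flows whose demand is below the current fair share."""
    c, flows = float(capacity), list(rates)
    share = 0.0
    while flows:
        share = c / len(flows)
        small = [r for r in flows if r <= share]
        if not small:
            break                       # remaining flows are bottlenecked: f = share
        c -= sum(small)                 # satisfied flows keep their full demand
        flows = [r for r in flows if r > share]
    return share

# Example from the slide: C = 10, rates 8, 6, 2  ->  f = 4
print(max_min_fair_rate(10, [8, 6, 2]))   # 4.0
# Each flow then receives min(f, r_i): 4, 4, 2
```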

Implementing Fair Queueing. What we just saw was bit-by-bit round robin. Can't do it: can't interrupt transfer of a packet (why not?). Idea: serve packets in the order in which they would have finished transmission in the fluid flow system. Strong guarantees: - each flow will receive exactly its max-min fair rate (+/- one packet size) - ...and exactly its max-min fair per-packet delay (+/- one packet size) or better. What does that mean? Suppose you have full access to a link whose bandwidth is equal to your max-min fair rate. Then your delay with FQ will be your delay on that virtual link (or better), plus or minus one packet size.

Example. [Figure: arrival traffic timelines for Flow 1 (packets 1-6) and Flow 2 (packets 1-5), the service order in the fluid flow system, and the resulting transmission order in the packet system.]

Guarantees. Translating the fluid to the discrete packet model doesn't actually involve a lot of combinatorics. Theorem: each packet P will finish transmission at or before its finish time in the fluid flow model - assuming (for now) all packets are in the queue at time 0. Proof: - Suppose P's finish time is T in the fluid model. Need to show P finishes by T in the packet model. - In the packet model, which packets are sent before P (or are P)? Exactly the packets that have finished by T in the fluid model. - Total amount to send: RT bits (possibly less: some packets may still be in progress), where R is the link rate. - The packet model remains busy sending packets the entire time (because all packets arrived at time 0), so these will be sent in time <= RT / R = T.

Guarantees. Translating the fluid to the discrete packet model doesn't actually involve a lot of combinatorics. Theorem: each packet P will finish transmission at or before its finish time in the fluid flow model - assuming (for now) all packets are in the queue at time 0. So, why is the real guarantee (without that assumption) only approximate (+/- one packet)?

Problem. Recall the algorithm: serve packets in the order in which they would have finished transmission in the fluid flow system. So, we need to compute the finish time of each packet in the fluid flow system... but a new packet arrival can change the finish times of packets already in the system (perhaps all of them!). Updating those times would be expensive.

Solution: Virtual Time. Key observation: while the finish times of packets may change when a new packet arrives, the order in which packets finish doesn't! - Only the order is important for scheduling. Solution: instead of the packet finish time, maintain the number of rounds needed to send the remaining bits of the packet (the virtual finishing time) - the virtual finishing time doesn't change upon packet arrival. System virtual time = index of the round in the bit-by-bit round-robin scheme.

System Virtual Time: V(t). Measure service instead of time. V(t)'s slope is the rate at which every active flow receives service: dV/dt = C / N(t), where C = link capacity and N(t) = number of active flows in the fluid flow system at time t. [Figure: V(t) vs. real time, with the slope changing as flows become active or idle, shown against the fluid flow service timeline.]

Fair Queueing Implementation. Define: F_i^k = virtual finishing time of packet k of flow i, a_i^k = arrival time of packet k of flow i, L_i^k = length of packet k of flow i. The virtual finishing time of packet k+1 of flow i is F_i^{k+1} = max(F_i^k, V(a_i^{k+1})) + L_i^{k+1}. Order packets by increasing virtual finishing time, and send them in that order.
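
A simplified Python sketch of this bookkeeping (illustrative names; it assumes the system virtual time V(t) is advanced externally at rate C/N(t), which a real implementation must track):

```python
import heapq

class FairQueueing:
    """Per-flow virtual finishing times; transmit packets in increasing F order.
    Simplification: self.vtime must be advanced externally at rate C / N(t),
    i.e., at the rate of bit-by-bit round-robin rounds (elided here)."""

    def __init__(self):
        self.vtime = 0.0          # system virtual time V(t), in rounds
        self.last_finish = {}     # flow_id -> virtual finish time of its last packet
        self.heap = []            # (virtual finish time, tie-breaker, packet)
        self.seq = 0

    def arrive(self, flow_id, length_bits, packet):
        # F_i^{k+1} = max(F_i^k, V(a_i^{k+1})) + L_i^{k+1}
        start = max(self.last_finish.get(flow_id, 0.0), self.vtime)
        finish = start + length_bits
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, packet))
        self.seq += 1

    def next_packet(self):
        """Send the queued packet with the smallest virtual finishing time."""
        return heapq.heappop(self.heap)[2] if self.heap else None
```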

Weighted Fair Queueing (WFQ). What if we don't want exact fairness? - Maybe web traffic is more important than file sharing. Assign a weight w_i to each flow i and change the virtual finishing time to F_i^{k+1} = max(F_i^k, V(a_i^{k+1})) + L_i^{k+1} / w_i.
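
Under the same assumptions as the sketch above, the only change for WFQ is dividing the packet length by the flow's weight; a hypothetical helper operating on that FairQueueing object:

```python
import heapq

def arrive_weighted(fq, flow_id, length_bits, packet, weight):
    """WFQ variant of FairQueueing.arrive: scale the packet's service by 1/w_i."""
    # F_i^{k+1} = max(F_i^k, V(a_i^{k+1})) + L_i^{k+1} / w_i
    start = max(fq.last_finish.get(flow_id, 0.0), fq.vtime)
    finish = start + length_bits / weight
    fq.last_finish[flow_id] = finish
    heapq.heappush(fq.heap, (finish, fq.seq, packet))
    fq.seq += 1
```

A flow with twice the weight accumulates virtual finish times half as fast, so it receives twice the bandwidth when backlogged.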

Simulation Example: 1 UDP (10 Mbps) and 31 TCPs sharing a 10 Mbps link. [Figure: per-flow throughput (Mbps) vs. flow number. Stateless solution, Random Early Detection (RED): the UDP flow takes most of the link. Stateful solution, Fair Queueing: all 32 flows receive roughly their fair share.]

Summary. FQ does not eliminate congestion; it just manages the congestion. You need both end-host congestion control and router support for congestion control - end-host congestion control to adapt the rate - router congestion control to protect/isolate flows. Don't forget buffer management: you still need to drop packets in case of congestion. Which packets would you drop in FQ? - One possibility: a packet from the longest queue.

Today Fair queueing XCP Announcements

TCP congestion control performs poorly as bandwidth or delay increases. Shown analytically in [Low01] and via simulations. [Figure: average utilization vs. bottleneck bandwidth (Mb/s) with RTT = 80 ms, and average utilization vs. round-trip delay (sec) with BW = 155 Mb/s; 50 flows in both directions, buffer = bandwidth-delay product. Utilization drops as bandwidth or delay grows.] Why?

TCP congestion control performs poorly as bandwidth or delay increases. Shown analytically in [Low01] and via simulations. Because TCP lacks fast response: when spare bandwidth is available, TCP increases by only 1 pkt/RTT even if the spare bandwidth is huge - flows ramp up by 1 pkt/RTT, taking forever to grab the large bandwidth. And when a TCP starts, it increases exponentially, causing too many drops. [Figure: the same utilization plots as above (50 flows in both directions, buffer = bandwidth-delay product), annotated with these failure modes.]

Approach: routers inform sources - sources don't have to guess the right rate to send - routers can give different feedback to different senders. Decouple congestion control (high utilization, small queues, few drops) from fairness (the bandwidth allocation policy).

Characteristics of the XCP Solution: 1. Improved congestion control (in high bandwidth-delay and conventional environments): small queues, almost no drops. 2. Improved fairness. 3. Scalable (no per-flow state). 4. Flexible bandwidth allocation: max-min fairness, proportional fairness, differential bandwidth allocation, ...

XCP: An eXplicit Control Protocol. 1. Congestion Controller 2. Fairness Controller

How does XCP work? [Figure: each packet carries a Congestion Header with the sender's Round Trip Time, Congestion Window, and a Feedback field; the sender initializes the feedback, e.g., Feedback = +0.1 packet.]

How does XCP work? [Figure: routers along the path update the Feedback field in the Congestion Header, e.g., from +0.1 to -0.3 packets.]

How does XCP work? On receiving the feedback: Congestion Window = Congestion Window + Feedback. XCP extends ECN and CSFQ (Core-Stateless Fair Queueing). Routers compute the feedback without any per-flow state.
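
A toy sender-side sketch of this update (names and the minimum-window floor are assumptions, not part of the lecture); the feedback applied is the router-modified value echoed back to the sender:

```python
MIN_CWND = 1.0  # packets; assumed lower bound, not from the slides

def on_ack(cwnd_packets, echoed_feedback_packets):
    """XCP sender: Congestion Window = Congestion Window + Feedback.
    The feedback was written into the congestion header by routers,
    echoed by the receiver, and may be negative."""
    return max(MIN_CWND, cwnd_packets + echoed_feedback_packets)

# e.g. a cwnd of 10 packets receiving feedback of -0.3 packets:
print(on_ack(10.0, -0.3))   # 9.7
```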

How does an XCP router compute the feedback? Congestion Controller - Goal: matches input traffic to link capacity and drains the queue. Looks at aggregate traffic and queue. Algorithm: aggregate traffic changes by Δ, where Δ ~ spare bandwidth and Δ ~ -queue size; so Δ = α · d_avg · Spare - β · Queue. Fairness Controller - Goal: divides Δ between flows to converge to fairness. Looks at a flow's state in the Congestion Header. Algorithm: if Δ > 0, divide Δ equally between flows; if Δ < 0, divide Δ between flows proportionally to their current rates.

How does an XCP router compute the feedback? (continued) Congestion Controller: matches input traffic to link capacity and drains the queue, adjusting the aggregate by Δ = α · d_avg · Spare - β · Queue (MIMD-like: the adjustment is multiplied by the available capacity). Fairness Controller: divides Δ between flows to converge to fairness, using the flow's state in the Congestion Header (AIMD-like: if Δ > 0, divide Δ equally between flows; if Δ < 0, divide Δ between flows proportionally to their current rates).

Getting the devil out of the details. Congestion Controller: Δ = α · d_avg · Spare - β · Queue. Theorem: the system converges to optimal utilization (i.e., is stable) for any link bandwidth, delay, and number of sources if 0 < α < π/(4·sqrt(2)) and β = α²·sqrt(2). No parameter tuning (proof based on the Nyquist criterion). Fairness Controller algorithm: if Δ > 0, divide Δ equally between flows; if Δ < 0, divide Δ between flows proportionally to their current rates. Per-packet positive feedback: p_i = [(h + max(φ, 0)) / (d · Σ_j rtt_j · s_j / cwnd_j)] · (rtt_i² · s_i / cwnd_i). No per-flow state.
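
A rough Python sketch of the router-side computation per control interval, following the formulas above; it omits much of the real XCP router (the negative-feedback case, bandwidth shuffling, per-packet pacing), and all names here are assumptions:

```python
def aggregate_feedback(alpha, beta, d_avg, capacity, input_rate, queue_bytes):
    """Congestion controller: Delta = alpha * d_avg * Spare - beta * Queue."""
    spare = capacity - input_rate            # spare bandwidth (bytes/sec)
    return alpha * d_avg * spare - beta * queue_bytes

def positive_feedback(delta, headers, d_avg):
    """Fairness controller, Delta > 0 case: divide Delta equally between flows.
    Each header is (rtt, cwnd, size) as carried in the Congestion Header;
    the scale factor xi_p is chosen so the per-packet shares
    p_i = xi_p * rtt_i^2 * s_i / cwnd_i add up to the aggregate increase."""
    h = 0.0                                   # bandwidth-shuffling term, ignored here
    denom = sum(rtt * size / cwnd for (rtt, cwnd, size) in headers)
    if denom == 0:
        return [0.0] * len(headers)
    xi_p = (h + max(delta, 0.0)) / (d_avg * denom)
    return [xi_p * rtt * rtt * size / cwnd for (rtt, cwnd, size) in headers]
```

The division by cwnd_i and multiplication by rtt_i² convert a per-flow throughput increment into a per-packet window increment, which is what lets the router stay stateless: everything it needs arrives in the packet's own header.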

Subset of Results. [Figure: simulation topology with senders S_1, S_2, ..., S_n sharing a bottleneck link to receivers R_1, R_2, ..., R_n.] Similar behavior over: [figures of additional topologies].

XCP remains efficient as bandwidth or delay increases. [Figure: average utilization as a function of bottleneck bandwidth (Mb/s), and average utilization as a function of round-trip delay (sec).]

XCP remains efficient as bandwidth or delay increases. XCP increases proportionally to the spare bandwidth, and α and β are chosen to make XCP robust to delay. [Figure: the same utilization plots vs. bottleneck bandwidth (Mb/s) and vs. round-trip delay (sec), annotated with these points.]

XCP shows faster response than TCP. [Figure: time series for XCP and TCP as 40 flows start and later stop; XCP converges quickly after each event.] XCP shows fast response!

XCP deals well with short web-like flows. [Figure: average utilization, average queue, and drops as a function of the arrival rate of short flows (flows/sec).]

XCP is fairer than TCP. [Figure: average throughput per flow ID for XCP and TCP, with all flows having the same RTT and with flows having different RTTs (40 ms to 330 ms).]

What have we fixed so far? TCP problems and their solutions:
- Fills queues: adds loss, latency → RED (partial), XCP
- Slow to converge → XCP (except short flows)
- Loss != congestion → XCP
- May not utilize full bandwidth → XCP
- Unfair to large-RTT flows → XCP
- Unfair to short flows → ?
- Is equal rates really fair? → ?
- Vulnerable to selfishness → XCP & Fair Queueing (partial)

Today Fair queueing XCP Announcements

Office hours Tuesdays after class: is it good for you?

Project proposals. Project proposals are due 11:59 p.m. Tuesday. Submit via email to Brighten; 1/2 page, plaintext. Describe: the problem you plan to address; what your first steps will be; what the most closely related work is, and why it has not addressed your problem; and, if there are multiple people on your project team, who they are and how you plan to partition the work among the team.

Project proposals Come talk to me before your proposal Available until 6 pm today after class Or we can make an appointment

Also on Tuesday... First opportunity to present a paper! One of: Dukkipati & McKeown, "Why Flow-Completion Time is the Right Metric for Congestion Control and why this means we need new algorithms," CCR 2006; or Alizadeh et al., "Data Center TCP (DCTCP)," SIGCOMM 2010. If you would like to present one of these two, email me by Friday morning. FCFS.