Network Performance: Queuing


EE 122: Intro to Communication Networks
Fall 2006 (MW 4-5:30 in Donner 155)
Vern Paxson
TAs: Dilip Antony Joseph and Sukun Kim
http://inst.eecs.berkeley.edu/~ee122/
Materials with thanks to Jennifer Rexford, Ion Stoica, and colleagues at Princeton and UC Berkeley

Announcements
- Additional reading for today's lecture: Peterson & Davie 3.4

Goals of Today's Lecture
- Finish discussion of window scaling
- TCP throughput equation: computes approximate long-running TCP performance for a given packet loss probability p
- Relationship between performance and queuing
- Router architecture
- Modeling of queuing systems & Little's Law
- FIFO queuing
- Active Queue Management: RED
- Explicit Congestion Notification (ECN)

Window Scaling
- Problem: 16 bits of advertised window allow only 65,535 bytes in flight, which can significantly restrict throughput.
- Solution: the TCP window scaling option, negotiated in the initial SYN exchange.
  shift.cnt specifies the scaling factor for the units used in the window advertisement.
  E.g., shift.cnt = 5 means the advertised window is in units of 2^5 = 32 bytes.

Window Scaling, con't
- Q: Now how large can the window be?
- A: Clearly, it must not exceed 2^32; beyond that we can't disambiguate data in flight. So the scaling factor must be <= 16.
- In fact, somewhat subtle requirements limit the window to 2^30, to allow the receiver to determine whether data fits in the offered window. So the scaling factor must be <= 14.

Window Scaling, con't
- Now we can go fast. Suppose that on a high-speed LAN with RTT = 1 msec we can transmit at 1 GB/s. What problem arises if packets are occasionally delayed in the network for 10 msec?
- Sequence number wrap: we can't tell earlier, delayed segments from later instances of the same sequence numbers.
- Fix: another TCP option associates (high-resolution) timestamps with TCP segments. Essentially, this adds more bits to the sequence space.
- (Side effect: no more need for the Karn/Partridge restriction against computing RTT from ACKs of retransmitted packets.)
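As a concrete illustration, here is a small, hypothetical Python sketch (not from the lecture) of how the 16-bit advertisement combines with the negotiated shift.cnt, and why a scale factor of 14 is the cap the option specification imposes; the function name and example values are made up.

```python
# Hypothetical illustration: effective window implied by a scaled advertisement.

MAX_SHIFT = 14  # largest scale factor allowed, keeping the window under 2^30

def effective_window(advertised_window: int, shift_cnt: int) -> int:
    """Return the window in bytes implied by a 16-bit advertisement and shift.cnt."""
    assert 0 <= advertised_window <= 0xFFFF, "advertisement is a 16-bit field"
    shift = min(shift_cnt, MAX_SHIFT)    # scale factors above 14 are clamped
    return advertised_window << shift    # units of 2^shift bytes

if __name__ == "__main__":
    # shift.cnt = 5: each advertised unit is 2^5 = 32 bytes
    print(effective_window(1000, 5))     # 32,000 bytes allowed in flight
    # largest possible window: 65535 * 2^14 bytes, just under 2^30
    print(effective_window(0xFFFF, 14))  # 1,073,725,440 bytes
```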

TCP Throughput Equation
- Consider a network path on which packets are lost with probability p (independently). Suppose a TCP connection achieves a maximum window size of W packets when using this path.
- On an AIMD cut due to packet loss, the window drops to W/2. Thus the average window is w = (3/4)W.
- Total number of packets sent in one "sawtooth": W/2 + (W/2 + 1) + (W/2 + 2) + ... + W
- The sawtooth spans about W/2 RTTs, so the total is approximately (3/4 W)(W/2) = 3W^2/8 packets.
- Since each sawtooth ends with one loss, and on average one packet in 1/p is lost: 3W^2/8 ~ 1/p

TCP Throughput Equation, con't
- Given 3W^2/8 ~ 1/p, rewrite in terms of the average window w (since w = (3/4)W, we have W = (4/3)w):
  3((4/3)w)^2 / 8 = 1/p
  (16/9) w^2 = 8/(3p)
  w^2 = 3/(2p)
  w = sqrt(3/2) / sqrt(p)
- For packets of B bytes, the throughput is:
  T = w B / RTT = sqrt(1.5) B / (RTT sqrt(p))
- Implications:
  Long-term throughput falls as 1/RTT
  Long-term throughput falls as 1/sqrt(p)
- A non-TCP transport can use this equation to provide TCP-friendly congestion control.
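The equation is easy to evaluate directly. Below is a minimal, hypothetical helper (not from the lecture) that computes the predicted long-run throughput from the loss probability, RTT, and packet size; the function name and the example values are made up.

```python
# Evaluate the simplified TCP throughput equation T = (B / RTT) * sqrt(1.5 / p).

import math

def tcp_throughput_bytes_per_sec(loss_prob: float, rtt_sec: float,
                                 packet_bytes: int = 1500) -> float:
    """Approximate long-running TCP throughput for loss probability p."""
    return (packet_bytes / rtt_sec) * math.sqrt(1.5 / loss_prob)

if __name__ == "__main__":
    # Example: 1500-byte packets, 100 ms RTT, 1% loss
    t = tcp_throughput_bytes_per_sec(loss_prob=0.01, rtt_sec=0.1)
    print(f"{t / 1e6:.2f} MB/s")   # ~0.18 MB/s, i.e. roughly 1.5 Mb/s
```

Note how halving the RTT doubles the predicted throughput, while the loss probability must fall by a factor of four to have the same effect.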

Where Does Loss (= p) Come From, Anyway?
Routers & Queuing

Generic Router Architecture
- Input and output interfaces are connected through an interconnect.
- The interconnect can be implemented by:
  Shared memory: low-capacity routers (e.g., PC-based routers)
  Shared bus: medium-capacity routers
  Point-to-point (switched) bus: high-capacity routers; packets are fragmented into cells
- Essentially a network inside the router!
- [Figure: input interfaces connected to output interfaces through the interconnect]

Shared Memory (1st Generation)
- [Figure: CPU, route table, and buffer memory on a shared backplane, with line interfaces (MACs) reading and writing packets through the shared memory]
- Typically < 0.5 Gbps aggregate capacity; limited by the rate of the shared memory
(Slide by Nick McKeown)

Shared Bus (2nd Generation)
- [Figure: CPU and route table on one card; line cards each with local buffer memory, forwarding cache, and MAC, all attached to a shared bus]
- Typically < 5 Gb/s aggregate capacity; limited by the shared bus
(Slide by Nick McKeown)

Point-to-Point Switch (3rd Generation)
- [Figure: switched backplane interconnecting line cards, each with local buffer memory, forwarding table, and MAC; CPU and routing table on a separate card]
- Typically ~100 Gbps aggregate capacity
(Slide by Nick McKeown)

What a Router Looks Like (circa year 2000)
- Cisco GSR 12416: 19" rack, 6 ft tall; capacity 160 Gb/s; power 4.2 kW; lines of code: 8M (!)
- Juniper M160: 19" rack, 3 ft tall; capacity 80 Gb/s; power 2.6 kW
- (The two chassis are roughly 2 ft and 2.5 ft deep)
(Slide courtesy Nick McKeown)

Input Interface
- Packet forwarding: decide to which output interface to forward each packet, based on the information in the packet header
  Examine the packet header
  Look up the destination in the forwarding table (see the sketch below)
  Update the packet header
- Question: do we send the packet to the output interface immediately?
- [Figure: input interfaces connected to output interfaces through the interconnect]

Output Functions
- Buffer management: decide when and which packet to drop
- Scheduler: decide when and which packet to transmit
- [Figure: a buffer feeding a scheduler that transmits packets onto the output link]
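The forwarding-table lookup is a longest-prefix match on the destination address. Here is a minimal, hypothetical sketch (not how any particular router implements it); the prefixes, table, and interface names are invented for illustration.

```python
# Hypothetical longest-prefix-match lookup over a tiny forwarding table.

import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"), "if0"),        # default route
    (ipaddress.ip_network("128.32.0.0/16"), "if1"),
    (ipaddress.ip_network("128.32.130.0/24"), "if2"),
]

def lookup(dst: str) -> str:
    """Return the output interface whose prefix gives the longest match."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in FORWARDING_TABLE if addr in net),
               key=lambda net: net.prefixlen)
    return next(iface for net, iface in FORWARDING_TABLE if net == best)

if __name__ == "__main__":
    print(lookup("128.32.130.7"))   # if2: the /24 beats the /16 and the default
    print(lookup("8.8.8.8"))        # if0: only the default route matches
```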

Output Queued Routers
- Only output interfaces store packets.
- Advantage: easy to design algorithms, since there is only one congestion point.
- Disadvantage: requires an output speedup (Ro/C) of N, where N is the number of interfaces; not feasible.
- [Figure: backplane delivering packets at rate Ro into output queues drained at line rate C]

Input Queued Routers
- Input interfaces store packets.
- Easier to build, since the interfaces only need to run at the line rate (R = C, no speedup), though back-pressure is needed to know when to send.
- But harder to build efficiently, due to contention and head-of-line blocking.
- [Figure: input queues at line rate R feeding the backplane toward outputs at rate C]

Head-of-Line Blocking
- A cell at the head of an input queue cannot be transferred, and thus blocks the cells behind it.
- [Figure: three inputs and three outputs; one head-of-line cell cannot be transferred because another input's cell is being switched to the same output, and another cannot be transferred because the output buffer is full]
- Modern high-speed routers use a combination of input & output queuing, with flow control & multiple virtual queues.

5 Minute Break
Questions before we proceed?

Simple Queuing - FIFO and Drop Tail
- Most of today's routers
- Transmission via FIFO scheduling: a first-in first-out queue; packets are transmitted in the order they arrive.
- Buffer management: drop-tail; if the queue is full, drop the incoming packet.

Queuing Example
- P = 1 Kbit; R = 1 Mbps; so the transmission time is P/R = 1 ms.
- [Figure: packet arrivals at t = 0, 0.5, 1, 7, 7.5 ms and the resulting queue occupancy Q(t), which peaks at 2 Kb]
- Delay for a packet that arrives at time t: d(t) = Q(t)/R + P/R
  Packet 1: d(0) = 1 ms
  Packet 2: d(0.5) = 1.5 ms
  Packet 3: d(1) = 2 ms
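The same arithmetic can be written as a few lines of code. This is a hypothetical sketch (not from the lecture) that reproduces the per-packet delays in the example above; the function name and defaults are made up.

```python
# Per-packet delay at a FIFO link, d(t) = Q(t)/R + P/R, for the example above.

def fifo_delays(arrivals_ms, packet_kbit=1.0, rate_mbps=1.0):
    """Return the queuing + transmission delay (ms) seen by each arrival."""
    service_ms = packet_kbit / rate_mbps   # P/R = 1 ms for a 1 Kbit packet at 1 Mbps
    busy_until = 0.0                       # time when the link next goes idle
    delays = []
    for t in sorted(arrivals_ms):
        start = max(t, busy_until)         # wait behind queued packets: Q(t)/R
        busy_until = start + service_ms    # then occupy the link for P/R
        delays.append(busy_until - t)      # total time this packet spends in the system
    return delays

if __name__ == "__main__":
    print(fifo_delays([0.0, 0.5, 1.0]))    # [1.0, 1.5, 2.0], matching the slide
```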

Queuing Theory
- An enormous literature exists on the mathematical modeling of queues.
- Abstractions:
  Arrivals ("customers") come to the system according to some probability distribution F of interarrival times; the arrival rate is designated λ (e.g., packets/sec).
  The system has a set of servers (>= 1).
  Each arrival has a service time drawn from another probability distribution, G.
- Question: given F, G, and λ, how much delay do customers see?
- Unfortunately, most solutions require an F with only weak correlations, which is far from the case for network traffic.

Generalized Delay Diagram
- [Figure: "latest bit seen by time t" plotted at the sender (point 1) and at the receiver (point 2); the gap between the two curves is the delay in the network/system/router]

Little's Theorem
- Assume a system where packets arrive at rate λ.
- Let d be the mean delay of a packet, i.e., the mean time a packet spends in the system.
- Q: What is N, the mean number of packets in the system? (E.g., for a router, N would give the size of the queue.)
- A: N = λ x d
- [Figure: arrivals at mean rate λ enter a system whose mean delay is d]

Little's Theorem: Proof Sketch
- Notation: d(i) = delay of packet i; x(t) = number of bits in transit (in the system) at time t; P = packet size in bits.
- [Figure: "latest bit seen by time t" curves at the sender (point 1) and at the receiver (point 2); the vertical gap at time t is x(t), and the horizontal gap for packet i is d(i)]
- Q: What is the system occupancy, i.e., the average number of bits in transit between points 1 and 2 over an interval of length T?
- Let S be the area between the two curves, i.e., the integral of x(t) over [0, T]. Then the average occupancy is S/T.
- The same area can be summed packet by packet: S = S(1) + S(2) + ... + S(N) = P*(d(1) + d(2) + ... + d(N))
- Therefore:
  Average occupancy = S/T = (P*(d(1) + d(2) + ... + d(N))) / T
                          = ((P*N)/T) * ((d(1) + d(2) + ... + d(N)) / N)
                          = (average arrival rate) x (average delay)
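Little's Law is easy to check empirically. The following hypothetical sketch (not from the lecture) simulates random arrivals to a single FIFO server, measures the arrival rate and mean delay, independently estimates the time-averaged occupancy by sampling, and confirms that N is close to λ x d; all parameter values are made up.

```python
# Empirical check of Little's Law, N = lambda * d, for a single FIFO server.

import bisect
import random

random.seed(122)
lam, service = 800.0, 0.001                  # arrivals/sec, 1 ms service time (utilization 0.8)
t, busy_until = 0.0, 0.0
arrivals, departures = [], []

for _ in range(50_000):
    t += random.expovariate(lam)             # exponential interarrival times
    start = max(t, busy_until)               # FIFO: wait for earlier packets to finish
    busy_until = start + service
    arrivals.append(t)
    departures.append(busy_until)            # non-decreasing, so the list stays sorted

T = departures[-1]                           # observation window
mean_delay = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)
arrival_rate = len(arrivals) / T

# Independently estimate N: sample how many packets are in the system over time.
num_samples, in_system = 2_000, 0
for k in range(1, num_samples + 1):
    s = T * k / (num_samples + 1)
    in_system += bisect.bisect_right(arrivals, s) - bisect.bisect_right(departures, s)
mean_occupancy = in_system / num_samples

print(f"N ~ {mean_occupancy:.3f}   lambda*d ~ {arrival_rate * mean_delay:.3f}")  # nearly equal
```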

Refinements to FIFO
- Random Early Detection (RED)
- Explicit Congestion Notification (ECN)

Bursty Loss From Drop-Tail Queuing
- TCP depends on packet loss: packet loss is the indication of congestion. In fact, TCP drives the network into packet loss by continuing to increase its sending rate.
- Drop-tail queuing leads to bursty loss: when a link becomes congested, many arriving packets encounter a full queue. As a result, many flows perceive congestion and in fact tend to become synchronized by near-simultaneous losses.

Slow Feedback from Drop Tail
- Feedback comes only when the buffer is completely full, even though the buffer has been filling for a while.
- Plus, the filling buffer is increasing the RTT, and the variance in the RTT.
- Idea: give early feedback
  Get one or two flows to slow down, not all of them
  Get these flows to slow down before it is too late
  Spread out congestion detection to break synchronization

Random Early Detection (RED)
- Basic idea of RED: the router notices that the queue is getting backlogged and randomly drops arriving packets to signal congestion.
- Packet drop probability (see the sketch below):
  The drop probability increases with the average queue length.
  If the buffer is below some level, don't drop anything; otherwise, set the drop probability as a function of the average queue length.
- [Figure: drop probability vs. average queue length]
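The drop rule above can be sketched in a few lines. This is a simplified, hypothetical illustration of RED's decision (real RED also counts packets since the last drop and handles idle periods); parameter names and values are invented, and the ecn_capable flag previews the ECN idea discussed shortly.

```python
# Simplified RED sketch: EWMA of the queue length, with a drop (or ECN-mark)
# probability that ramps up between a low and a high threshold.

import random

class RedQueue:
    def __init__(self, min_thresh=5, max_thresh=15, max_drop_prob=0.1, weight=0.002):
        self.min_thresh = min_thresh        # below this average length: never drop
        self.max_thresh = max_thresh        # above this average length: always drop
        self.max_drop_prob = max_drop_prob  # drop probability reached at max_thresh
        self.weight = weight                # EWMA weight for averaging the queue length
        self.avg_qlen = 0.0
        self.queue = []

    def enqueue(self, packet, ecn_capable=False):
        # Average the instantaneous queue length so short bursts are tolerated.
        self.avg_qlen = (1 - self.weight) * self.avg_qlen + self.weight * len(self.queue)

        if self.avg_qlen < self.min_thresh:
            drop_prob = 0.0
        elif self.avg_qlen >= self.max_thresh:
            drop_prob = 1.0
        else:                               # linear ramp between the two thresholds
            frac = (self.avg_qlen - self.min_thresh) / (self.max_thresh - self.min_thresh)
            drop_prob = frac * self.max_drop_prob

        if random.random() < drop_prob:
            if ecn_capable:                 # with ECN, mark instead of dropping
                packet["ecn_ce"] = True
            else:
                return False                # early drop signals congestion
        self.queue.append(packet)
        return True
```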

Properties of RED
- Drops packets before the queue is full, in the hope of reducing the rates of some flows.
- Drops packets in proportion to each flow's rate: high-rate flows have more packets and, hence, a higher chance of being selected.
- Drops are spaced out in time, which helps desynchronize TCP senders.
- Tolerant of burstiness in the traffic, by basing decisions on the average queue length.

RED In Practice
- Hard to get the tunable parameters just right:
  How early to start dropping packets?
  What slope for the increase in drop probability?
  What time scale for averaging the queue length?
- RED is implemented in practice: e.g., modern routers have a config option to turn it on, but it is not clear it is widely used (due to the tuning challenges).
- Many variations have been proposed ("Blue", "FRED", ...).
- More generally, think of RED as an example of how the network can give more refined feedback to end systems regarding the currently available capacity: an example of Active Queue Management (AQM).

Explicit Congestion Notification
- Early dropping of packets:
  Good: gives early/refined feedback
  Bad: costs a packet drop to give the feedback
- Explicit Congestion Notification (ECN): the router instead marks the packet with an ECN bit, which the end system interprets as a sign of congestion.
- Surmounting the challenges:
  Must be supported by both end hosts and routers.
  Requires two bits in the IP header (one for the ECN mark, one to indicate the sender understands ECN); solution: taken from the Type-of-Service bits in the IPv4 packet header.
  Also requires 2 TCP header bits (to echo the mark back to the sender).

Summary
- TCP throughput equation: if the connection is not window-limited, then performance scales as 1/RTT and as 1/sqrt(p) for loss probability p.
- Router architecture: a fabric interconnects input interfaces with output interfaces; implications of input-queued vs. output-queued designs.
- Queuing theory & practice: Little's Law relates the arrival rate, the time spent in the system (delay), and the number in the system; simple FIFO queuing predominates.
- RED as an exemplar of refined Active Queue Management, and ECN as a way to avoid using drops for feedback.
- Next lecture: Quality of Service (QoS)