Edge versus Host Pacing of TCP Traffic in Small Buffer Networks


Edge versus Host Pacing of TCP Traffic in Small Buffer Networks
Hassan Habibi Gharakheili (1), Arun Vishwanath (2), Vijay Sivaraman (1)
(1) University of New South Wales (UNSW), Australia; (2) University of Melbourne, Australia

Overview
- Continuing growth of data traffic and core network capacity; energy concerns (source: Cisco VNI Forecast, 2013)
- Remedy: optical packet switching, which sacrifices the buffering function

Problem
- Small-buffer network: buffer size reduced from GB to MB/KB
- Increased congestion and contention, hence performance loss
- TCP traffic is bursty

Existing solutions
- Alleviate contention: wavelength conversion in the core
- Loss recovery: packet-level forward error correction (FEC) at edge nodes
- Traffic pacing

Traffic Pacing
Host pacing (TCP pacing):
- Spreads packet transmissions in time
- Requires TCP stack modification: impractical (out of operator control) and costly (too many devices involved)
- Paced hosts can be penalised relative to unpaced hosts [A. Aggarwal et al., IEEE INFOCOM]
Edge pacing:
- Edge nodes smooth traffic before injecting it into the core
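TCP (host) pacing spreads the congestion window's packets evenly over one RTT instead of sending them back-to-back. A minimal sketch of the gap computation (illustrative only; the function name is mine, not from the paper):

```python
def pacing_interval(cwnd_pkts, rtt_s):
    """Inter-packet gap (seconds) that spreads a window of
    cwnd_pkts packets evenly across one RTT of rtt_s seconds."""
    return rtt_s / cwnd_pkts
```

For example, a 10-packet window over a 250 ms RTT yields a 25 ms gap between transmissions, rather than a 10-packet burst.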

Edge-Pacer
- Input: traffic with a given delay constraint
- Output: the smoothest traffic satisfying that constraint
- Adjusts the traffic release rate to maximise smoothness, subject to a given upper bound on packet delay
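The release-rate adjustment can be sketched as follows: each queued packet carries a deadline (arrival time plus the delay bound d), and the pacer drains at the minimum rate that still meets every cumulative deadline, which is the smoothest feasible schedule. This is a minimal Python illustration under my own naming, not the authors' implementation:

```python
from collections import deque

class EdgePacer:
    """Sketch of a delay-bounded pacer: smooths traffic subject to
    an upper bound d on the queueing delay of any packet."""

    def __init__(self, delay_bound_s):
        self.d = delay_bound_s
        self.queue = deque()   # entries: (deadline, size_bytes)

    def arrive(self, now, size_bytes):
        # Each packet must leave no later than arrival + d.
        self.queue.append((now + self.d, size_bytes))

    def release_rate(self, now):
        """Minimum service rate (bytes/s) meeting every deadline:
        the max over queued packets of cumulative bytes / slack."""
        rate, cumulative = 0.0, 0
        for deadline, size in self.queue:
            cumulative += size
            slack = max(deadline - now, 1e-9)  # avoid divide-by-zero
            rate = max(rate, cumulative / slack)
        return rate
```

The output rate rises only when deadlines tighten, so a burst is flattened into the slowest drain that still respects the delay bound.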

Edge-Pacing mechanism

Contributions
- Simulations of a small-buffer core network: bottleneck vs. non-bottleneck core, low-speed vs. high-speed access links, short-lived vs. long-lived flows, varying numbers of flows, different TCP variants
- A model for selecting the edge pacing delay
- Benefits of incremental deployment

ns-2 Simulation
- The core is the small-buffer link
- Each simulation runs for 400 s
- Data from the interval [100, 400] s is used, to capture steady state

Small buffer link as the bottleneck (setup)
- 10 edge links (ES1-ES10 and ED1-ED10) at 100 Mbps, propagation delay uniformly distributed in [5, 15] ms
- Each edge link is fed by 100 access links with propagation delay uniformly distributed in [5, 15] ms
- 1000 end hosts (= 10 × 100), each carrying one TCP Reno flow
- The 1000 TCP flows start at times uniformly distributed in [0, 10] s
- Core link: 100 Mbps (the bottleneck), 100 ms delay, FIFO queue with drop-tail management; queue size varied in KB
- RTT: [224, 280] ms

Small buffer link as the bottleneck (results)
- Metrics: aggregate TCP throughput and average per-flow goodput
- Edge pacing outperforms host pacing, especially in the 5-15 KB buffer region

Number of TCP flows
- Metrics: aggregate TCP throughput and average per-flow goodput
- Same setup as before, but the number of access links feeding each edge is varied (10, 50, 100), giving 100, 500 and 1000 flows respectively
- With a small number of flows, individual flow burstiness contributes more to loss than simultaneous arrivals, and host pacing effectively reduces this source burstiness

High-speed access links
- Metrics: aggregate TCP throughput and average per-flow goodput
- Setup with 1000 flows, but access links operate at 100 Mbps (enterprise, data centre, ...)
- Buffer required for an average goodput of 90 Kbps: 20 KB unpaced, 10 KB host-paced, 5 KB edge-paced

Short-lived (mice) TCP flows
- Tests the efficacy of edge pacing in combating short-time-scale burstiness
- On-off traffic flows; "on": the transferred file size follows a Pareto distribution with mean 100 KB and shape parameter 1.2; "off": the thinking period is exponentially distributed with mean 1 s
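The mice workload above can be reproduced as follows (a sketch of the stated distributions; the function names and fixed seeds are mine). Python's `random.paretovariate(shape)` has mean `shape/(shape-1)`, so draws are scaled by `mean*(shape-1)/shape` to hit the 100 KB target mean:

```python
import random

def mice_flow_sizes(n, mean_kb=100.0, shape=1.2, seed=1):
    """File sizes (KB) for the 'on' periods: Pareto distribution
    with the given mean and shape (heavy-tailed for shape = 1.2)."""
    rng = random.Random(seed)
    scale = mean_kb * (shape - 1.0) / shape   # ~16.67 KB minimum size
    return [scale * rng.paretovariate(shape) for _ in range(n)]

def think_times(n, mean_s=1.0, seed=2):
    """'Off' (thinking) periods: exponentially distributed, mean 1 s."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_s) for _ in range(n)]
```

With shape 1.2 the variance is infinite, so sample means converge slowly; that heavy tail is exactly what makes the mice workload bursty at short time scales.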

Different versions of TCP
- Metrics: aggregate TCP throughput and average per-flow goodput
- Setup: 1000 flows, low-speed access links, core as the bottleneck
- For all TCP variants examined, edge pacing offers better performance than host pacing

Small buffer link not the bottleneck
- Metrics: aggregate TCP throughput and packet loss rate at the core link
- Setup: core at 200 Mbps; 10 edges at 40 Mbps; 10 access links per edge at [1, 2] Mbps
- Even with buffers under 10 KB, packet loss is zero
- TCP throughput is not sensitive to pacing when the small-buffer link is not the bottleneck

Analysis
- Modelling TCP performance is difficult due to its control feedback loops
- The edge pacer increases the mean RTT (RTT0): a pacer with delay bound d adds on average d/2 delay in each direction, so RTT0 → RTT0 + d
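The throughput relation invoked here (the exact formula on the slide did not survive transcription) is presumably the standard square-root model; as a hedged reconstruction, with segment size MSS and loss probability p:

```latex
T \;\approx\; \frac{\mathit{MSS}}{\mathit{RTT}_0}\sqrt{\frac{3}{2p}}
\quad\xrightarrow{\;\text{pacing delay } d\;}\quad
T \;\approx\; \frac{\mathit{MSS}}{\mathit{RTT}_0 + d}\sqrt{\frac{3}{2p}}
```

So the pacer hurts throughput through the larger RTT and helps it through any reduction in the loss probability p; the net effect depends on which term dominates.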

Analysis (cont'd)
- The aggregate traffic of TCP flows sharing the small buffer is Poisson-like with a certain rate λ
- Traffic burstiness [5]:
- Loss rate [5]:
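The burstiness and loss-rate expressions from [5] are missing from the transcription. Under the Poisson-like assumption stated above, one standard sketch (my assumption, not necessarily the exact form used in [5]) treats the B-packet drop-tail buffer as an M/M/1/B queue with load ρ = λ/μ:

```latex
p_{\text{loss}} \;=\; \frac{(1-\rho)\,\rho^{B}}{1-\rho^{B+1}},
\qquad \rho = \frac{\lambda}{\mu}
```

which captures the qualitative behaviour in the following slides: loss falls sharply with buffer size B at low load, but remains high at high load.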

Analysis (cont'd)
- Therefore, two regimes emerge: low load / small buffer, and high load / large buffer

Analysis (cont'd)
- Panels: low load / small buffer and high load / large buffer, analysis vs. simulation
- The analysis assumes a fixed load
- As the pacing delay increases, loss first reduces; the reduction is then compensated by TCP reacting with an increased offered load
- In simulation, the TCP throughput curve saturates once the pacing delay reaches 10 ms

Variation of pacing delay
- A small pacing delay (d = 1 ms) is ineffective at small buffer sizes
- A large pacing delay (d = 100 or 200 ms) is detrimental at large buffer sizes
- Throughout our simulations, d = 10 ms performed well across the entire range of buffer sizes

Practical deployment of pacing: host pacing vs. edge pacing
- Throughput rises gradually as the fraction of hosts/edges that perform pacing increases, so the benefits of pacing can be realised incrementally with progressive deployment

Practical deployment of pacing (cont'd)
- 30% pacing deployed: 300 of 1000 flows perform TCP pacing in the host-pacing case; 3 of 10 edge nodes perform pacing in the edge-pacing case
- Early adopters of host pacing can see worse performance than their non-pacing peers, a substantial disincentive for users to deploy host pacing
- With edge pacing, however, paced flows experience better performance than unpaced ones

Conclusion
- Energy concerns in high-speed routers motivate optical switching, which sharply reduces buffering
- Two pacing techniques, host pacing and edge pacing, address the resulting TCP performance loss
- Edge pacing performs as well as host pacing or better
- There is a clear incentive for incremental deployment of edge pacing in operational networks