Long-Haul TCP vs. Cascaded TCP


W. Feng

1 Introduction

In this work, we investigate the bandwidth and transfer time of long-haul TCP versus cascaded TCP [5]. First, we discuss models for TCP throughput. For TCP flows in support of bulk data transfer (i.e., long-lived TCP flows), throughput models have been derived [2, 3]. These models rely on the congestion-avoidance algorithm of TCP. Although they cannot be applied to short-lived TCP connections, our interest relative to logistical networking is in longer-lived TCP flows anyway, specifically TCP flows that spend significantly more time in the steady-state congestion-avoidance phase than in the transient slow-start phase. In the case where short-lived TCP connections must be modeled, several TCP latency models have been proposed [1, 4], and from these latency models the throughput and transfer time of short-lived TCP connections are obtainable.

Using the above models, the transfer times for a data file of size S packets can be computed for both long-haul TCP and cascaded TCP. The performance of the two systems is compared via their transfer times: one system is preferred if its transfer time is lower than that of the other. Based on these performance comparisons, we develop a decision model that chooses between cascaded TCP and long-haul TCP.

2 TCP Throughput Models

In this section, we describe the TCP throughput and latency models that will be used to compute transfer times in this paper. The discussion is separated into long-lived and short-lived TCP flows.

2.1 Long-Lived TCP Flows

The throughput models for long-lived TCP flows have been extensively studied [2, 3]. The two accepted models were developed by Mathis et al. [2] and Padhye et al. [3] and will be used in the subsequent development of this paper.
Before we summarize these models, we first give some definitions. For a path in the network, let D and p be the round-trip time and packet-loss probability, respectively, of the TCP connection using the path. Also, let W_max and T_0 be the maximum window size and the average retransmission timeout (RTO) of a TCP connection, respectively. Let b denote the number of packets acknowledged by the receiver per ACK. For example, the original TCP implementation acknowledges every packet (b = 1), while newer implementations with delayed acknowledgements send one ACK for roughly every b = 2 packets. These parameters are used in the following models.

Mathis Model. This is a very simple approximation of TCP throughput which models only TCP congestion avoidance and fast retransmit. For some constant C, the TCP throughput is given by

    B(D, p) = C / (D √p)  packets/sec.    (1)

The typical value of C is √(3/(2b)).

Padhye Model. This model is significantly more complicated than the Mathis Model. It incorporates TCP fast retransmit and retransmission timeout into the congestion-avoidance model. The TCP throughput is given by

    B(D, p) = min{ W_max / D,  1 / [ D √(2bp/3) + T_0 min(1, 3√(3bp/8)) p (1 + 32p²) ] }  packets/sec.    (2)

Using the throughput formulas (1) and (2), the average transfer time of a long-lived TCP flow carrying data of size S can be approximated by

    T_L = S / B(D, p).    (3)

Note that we neglect the slow-start phase because long-lived TCP flows spend most of their lifetimes in the congestion-avoidance phase. In this project, we first use the Mathis Model for TCP throughput in order to compare the performance of long-haul TCP with that of cascaded TCP.

2.2 Short-Lived TCP Flows

Because the throughput models for long-lived TCP flows are based on the congestion-avoidance algorithm of TCP, they cannot be applied to short-lived TCP flows, which are so short that their entire lifetimes usually fall within the slow-start phase of TCP. Here, we rely on the TCP latency model proposed in [1]. According to [1], the expected transfer time of a TCP connection can be partitioned into three intervals, i.e.,

    T = E[T_ss] + E[T_loss] + E[T_ca]

where T_ss, T_loss and T_ca are the time durations in slow-start mode, loss recovery after timeout, and congestion-avoidance mode, respectively.¹ Formulas for these durations are available in [1].
¹ Note that here we drop the delay due to the delayed-acknowledgement scheme for the first segment. This delay is constant for each operating system.
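As a concrete illustration of models (1)-(3), the following sketch evaluates both throughput formulas and the long-lived transfer time. The function names are ours, not from the paper; the formulas follow equations (1)-(3) above.

```python
from math import sqrt

def mathis_throughput(D, p, b=2):
    """Mathis model, equation (1): B(D, p) = C / (D * sqrt(p)) packets/sec,
    with C = sqrt(3 / (2b)).  D is the RTT (s), p the loss probability,
    b the number of packets covered by one ACK (b = 2 for delayed ACKs)."""
    C = sqrt(3.0 / (2.0 * b))
    return C / (D * sqrt(p))

def padhye_throughput(D, p, W_max, T0, b=2):
    """Padhye model, equation (2), in packets/sec.  W_max is the maximum
    window size (packets) and T0 the average retransmission timeout (s)."""
    denom = D * sqrt(2.0 * b * p / 3.0) \
        + T0 * min(1.0, 3.0 * sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p ** 2)
    return min(W_max / D, 1.0 / denom)

def long_lived_transfer_time(S, D, p, b=2):
    """Equation (3): T_L = S / B(D, p) for a transfer of S packets
    (slow start neglected, as in the text)."""
    return S / mathis_throughput(D, p, b=b)
```

For example, with D = 100 ms, p = 1%, and b = 2, the Mathis model gives roughly 87 packets/sec.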

By computing the average transfer time T for long-haul TCP and cascaded TCP, their performance can be compared. Note that the formula for T is based on the derivation of [3] and is very complicated to compute. If the TCP transfer size is large, then it is wiser to approximate the transfer time using the steady-state (long-lived TCP) model. But how large is large?

3 No Pipelining

The derivations in the next two sections rely on the Mathis TCP model (1). Suppose that the long-haul TCP connection has round-trip time D and packet-loss probability p. From (1), the bandwidth of this connection is approximately B(D, p) = C / (D √p), and the transfer time for data of size S is simply

    T_lh = S / B(D, p) = S D √p / C.    (4)

Now suppose this data is transmitted over another path on which the TCP connection is broken into N cascaded TCP connections, each denoted TCP_i, i = 1, ..., N. For each i = 2, ..., N, TCP_i can start its transmission only when TCP_{i-1} has finished all of its data transfer. The transmission of the cascaded TCP is complete when the last TCP connection has finished its transfer. Under this assumption, the transfer time for data of size S is T_c = Σ_{i=1}^{N} T_i, where T_i is the transfer time of TCP_i. For each i = 1, ..., N, let D_i and p_i denote the round-trip time and packet-loss probability of TCP_i, respectively. Applying the Mathis formula (1) to each TCP_i gives

    T_c = (S / C) Σ_{i=1}^{N} D_i √p_i.    (5)

From the above computation, we prefer the cascaded TCP when the transfer time T_c is smaller than the transfer time of the long-haul TCP. Thus, from (4) and (5), we choose the cascaded TCP when

    (S / C) Σ_{i=1}^{N} D_i √p_i < (S / C) D √p,

or equivalently,

    Σ_{i=1}^{N} D_i √p_i < D √p.

Note that the decision does not depend on the transfer size but only on the statistics of the paths, namely the round-trip delays and loss probabilities.
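The decision rule above can be sketched as follows; the helper names are ours, and the formulas follow equations (4) and (5).

```python
from math import sqrt

def mathis_C(b=2):
    """Mathis constant C = sqrt(3 / (2b)); b = 2 assumes delayed ACKs."""
    return sqrt(3.0 / (2.0 * b))

def long_haul_time(S, D, p, b=2):
    """Equation (4): T_lh = S * D * sqrt(p) / C."""
    return S * D * sqrt(p) / mathis_C(b)

def cascade_time(S, hops, b=2):
    """Equation (5): T_c = (S / C) * sum_i D_i * sqrt(p_i), where `hops`
    is a list of (D_i, p_i) pairs, one per cascaded connection."""
    return (S / mathis_C(b)) * sum(Di * sqrt(pi) for Di, pi in hops)

def prefer_cascade(D, p, hops):
    """Section 3 decision rule: the cascade wins iff
    sum_i D_i * sqrt(p_i) < D * sqrt(p); the transfer size S cancels out."""
    return sum(Di * sqrt(pi) for Di, pi in hops) < D * sqrt(p)
```

For instance, a 200 ms path with 1% loss is beaten by a cascade of two 100 ms hops with 0.1% loss each, since 2 · 0.1 · √0.001 ≈ 0.0063 < 0.2 · √0.01 = 0.02.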

4 Pipelining

In this section, we relax the assumption on the transmission of the cascaded TCP by allowing pipelining. Pipelining means that an intermediate node can transmit a packet whenever it has the packet in its queue, so it does not have to wait for all packets to arrive before starting its transmission. In this process, the incoming data is buffered in the sending queue. To avoid packet drought in the buffer, each TCP connection may wait for R transmission windows before starting its transmission. In this case, the bottleneck link is simply the link with the lowest TCP bandwidth. Using the Mathis Model (1), the bottleneck link is the link

    k = arg min_{i = 1, ..., N} B(D_i, p_i) = arg max_{i = 1, ..., N} D_i √p_i.

Therefore, the total transfer time of the pipelined cascaded TCP can be approximated by

    T_pc = S / B(D_k, p_k) + R Σ_{i=1}^{N} D_i    (6)

where the first term is the bottleneck-link transfer time and the second term comes from the first R end-to-end round-trip times that each connection waits before starting its transmission. Comparing (4) and (6), the pipelined cascaded TCP is preferred if T_pc < T_lh, or equivalently,

    (S / C) D_k √p_k + R Σ_{i=1}^{N} D_i < (S / C) D √p.

Here, the transfer size is involved in the decision, but it has a small effect if (S / C) D_k √p_k ≫ R Σ_{i=1}^{N} D_i, in which case the decision is roughly D_k √p_k < D √p.

5 Note

From an operational standpoint, the cascaded TCP requires more overhead than the original TCP since it needs to establish N TCP connections instead of only one. In computing the decision, one might need to add the processing time T_proc for the cascaded TCP, which depends on the number of TCP connections N and (possibly) on the transfer size. Short-lived TCP still needs to be considered.

References

[1] N. Cardwell, S. Savage and T. Anderson, "Modeling TCP latency," in Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.

[2] M. Mathis, J. Semke, J. Mahdavi and T. Ott, "The macroscopic behavior of the TCP congestion avoidance algorithm," Computer Communication Review, vol. 27, 1997.

[3] J. Padhye, V. Firoiu, D. Towsley and J. Kurose, "Modeling TCP throughput: A simple model and its empirical validation," in Proceedings of ACM SIGCOMM 1998.

[4] B. Sikdar, S. Kalyanaraman and K. S. Vastola, "An integrated model for the latency and steady-state throughput of TCP connections," Performance Evaluation, vol. 46, pp. 139-154, September 2001.

[5] M. Swany and R. Wolski, "Improving throughput with cascaded TCP connections: the logistical session layer," Technical Report 2002-24, University of California, Santa Barbara.