CNT 6885 Network Review on Transport Layer

CNT 6885 Network Review on Transport Layer Jonathan Kavalan, Ph.D. Department of Computer and Information Science and Engineering (CISE), University of Florida

User Datagram Protocol [RFC 768]: a no-frills, bare-bones Internet transport protocol providing best-effort service; UDP segments may be lost or delivered out of order to the app. Connectionless: no handshaking between UDP sender and receiver, and each UDP segment is handled independently of the others. Why is there a UDP? No connection establishment (which can add delay); simple: no connection state at sender or receiver; small segment header; no congestion control: UDP can blast away as fast as desired.

UDP: more. Often used for streaming multimedia apps (loss tolerant, rate sensitive); other UDP uses: DNS, SNMP. Reliable transfer over UDP: add reliability at the application layer, with application-specific error recovery! UDP segment format (32 bits wide): source port #, dest port #, length (in bytes of the UDP segment, including header), checksum, then the application data (message).
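
As a quick illustration of the connectionless, no-handshake behavior described above, here is a minimal UDP sender sketch in Java (not from the slides; the host name "receiverhost" and port 9876 are placeholder values):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpSendSketch {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket();      // no handshake, no connection state
            byte[] data = "hello".getBytes();
            InetAddress dst = InetAddress.getByName("receiverhost");
            // Each segment is handled independently; nothing guarantees delivery or order.
            socket.send(new DatagramPacket(data, data.length, dst, 9876));
            socket.close();
        }
    }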

Pipelined protocols. Pipelining: the sender allows multiple in-flight, yet-to-be-acknowledged pkts. The range of sequence numbers must be increased, and buffering is needed at sender and/or receiver. Two generic forms of pipelined protocols: go-back-N, selective repeat.

Go-Back-N sender: k-bit seq # in pkt header; window of up to N consecutive unacked pkts allowed. ACK(n): ACKs all pkts up to, and including, seq # n (cumulative ACK); may receive duplicate ACKs (see receiver). Timer for each in-flight pkt; timeout(n): retransmit pkt n and all higher seq # pkts in the window.
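
A minimal sketch of the Go-Back-N sender bookkeeping just described, assuming invented names (base, nextSeqNum, udtSend) and a window of N = 4; this is an illustration, not a full protocol implementation:

    import java.util.HashMap;
    import java.util.Map;

    public class GbnSenderSketch {
        static final int N = 4;                          // window size
        int base = 0;                                    // oldest unacked seq #
        int nextSeqNum = 0;                              // next seq # to use
        Map<Integer, byte[]> sentPkts = new HashMap<>(); // copies kept for retransmission

        boolean send(byte[] data) {
            if (nextSeqNum >= base + N) return false;    // window full: caller must wait
            sentPkts.put(nextSeqNum, data);
            udtSend(nextSeqNum, data);                   // transmit pkt nextSeqNum
            nextSeqNum++;
            return true;
        }

        void onAck(int n) {                              // cumulative ACK: all pkts up to n acked
            for (int i = base; i <= n; i++) sentPkts.remove(i);
            base = n + 1;
        }

        void onTimeout() {                               // go back N: resend every in-flight pkt
            for (int i = base; i < nextSeqNum; i++) udtSend(i, sentPkts.get(i));
        }

        void udtSend(int seq, byte[] data) { /* hand pkt + seq # to the unreliable channel */ }
    }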

TCP: Overview (RFCs: 793, 1122, 1323, 2018, 2581). Point-to-point: one sender, one receiver. Reliable, in-order byte stream: no message boundaries. Pipelined: TCP congestion and flow control set the window size. Send & receive buffers: the application writes data through the socket door into the TCP send buffer, segments are transmitted, and the receiving application reads data from the TCP receive buffer. Full-duplex data: bi-directional data flow in the same connection; MSS: maximum segment size. Connection-oriented: handshaking (exchange of control msgs) inits sender and receiver state before data exchange. Flow controlled: sender will not overwhelm receiver.

TCP segment structure (32 bits wide): source port # and dest port #; sequence number; acknowledgement number; header length, unused bits, and the flag bits U A P R S F; receive window; checksum; urgent data pointer; options (variable length); application data (variable length). URG: urgent data (generally not used); ACK: ACK # valid; PSH: push data now (generally not used); RST, SYN, FIN: connection establishment (setup, teardown commands). The checksum is the Internet checksum (as in UDP). Sequence and acknowledgement numbers count bytes of data (not segments!), and the receive window gives the # of bytes the receiver is willing to accept.
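
To make the field layout concrete, here is a sketch that pulls the fixed header fields out of a raw 20-byte segment using java.nio.ByteBuffer (the all-zero segment bytes are a placeholder; a real segment would come off the wire):

    import java.nio.ByteBuffer;

    public class TcpHeaderSketch {
        public static void main(String[] args) {
            byte[] segment = new byte[20];                 // placeholder 20-byte header, all zero
            ByteBuffer buf = ByteBuffer.wrap(segment);     // default order is network byte order
            int srcPort = buf.getShort() & 0xFFFF;         // bytes 0-1: source port #
            int dstPort = buf.getShort() & 0xFFFF;         // bytes 2-3: dest port #
            long seq    = buf.getInt() & 0xFFFFFFFFL;      // bytes 4-7: sequence number (counts bytes)
            long ack    = buf.getInt() & 0xFFFFFFFFL;      // bytes 8-11: acknowledgement number
            int headLen = (buf.get() >> 4) & 0x0F;         // byte 12, upper 4 bits: header length in 32-bit words
            int flags   = buf.get() & 0x3F;                // byte 13, lower 6 bits: URG/ACK/PSH/RST/SYN/FIN
            int rcvWin  = buf.getShort() & 0xFFFF;         // bytes 14-15: receive window
            System.out.println(srcPort + " " + dstPort + " " + seq + " " + ack + " "
                    + headLen + " " + flags + " " + rcvWin);
        }
    }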

TCP seq. #s and ACKs. Seq. #: byte-stream number of the first byte in the segment's data. ACK: seq # of the next byte expected from the other side (cumulative ACK). Q: how does the receiver handle out-of-order segments? A: the TCP spec doesn't say; it's up to the implementation. Simple telnet scenario: the user types 'C'; Host A sends Seq=42, ACK=79, data = 'C'; Host B ACKs receipt of 'C' and echoes back 'C' with Seq=79, ACK=43, data = 'C'; Host A ACKs receipt of the echoed 'C' with Seq=43, ACK=80.
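
A tiny worked check of the telnet example's numbers (variable names are illustrative):

    public class SeqAckSketch {
        public static void main(String[] args) {
            int seq = 42;                 // byte-stream number of first byte in Host A's segment
            int dataLen = 1;              // one byte of data: the typed character 'C'
            int ackFromB = seq + dataLen; // Host B acknowledges 43 = next byte expected from A
            System.out.println("ACK = " + ackFromB);   // prints ACK = 43
        }
    }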

TCP reliable data transfer. TCP creates an rdt service on top of IP's unreliable service, using pipelined segments, cumulative ACKs, and a single retransmission timer. Retransmissions are triggered by timeout events and duplicate ACKs. Initially consider a simplified TCP sender: ignore duplicate ACKs, flow control, and congestion control.

TCP sender events. Data rcvd from app: create segment with seq # (the seq # is the byte-stream number of the first data byte in the segment); start timer if not already running (think of the timer as for the oldest unacked segment). Timeout: retransmit the segment that caused the timeout; restart timer. ACK rcvd: if it acknowledges previously unacked segments, update what is known to be ACKed, and start the timer if there are still unacked segments.

TCP ACK generation [RFC 1122, RFC 2581] (event at receiver → TCP receiver action):
(1) Arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed → delayed ACK: wait up to 500 ms for the next segment; if no next segment, send the ACK.
(2) Arrival of in-order segment with expected seq #; one other segment has an ACK pending → immediately send a single cumulative ACK, ACKing both in-order segments.
(3) Arrival of out-of-order segment with higher-than-expected seq #; gap detected → immediately send a duplicate ACK, indicating the seq # of the next expected byte.
(4) Arrival of segment that partially or completely fills a gap → immediately send an ACK, provided that the segment starts at the lower end of the gap.
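
A rough sketch of the first three rules in the table, assuming invented names (expectedSeq, ackPending, sendAck); the gap-filling case (4) would need an out-of-order reassembly buffer that is omitted here:

    public class AckPolicySketch {
        long expectedSeq = 0;        // seq # of the next in-order byte expected
        boolean ackPending = false;  // one in-order segment waiting on the 500 ms delayed ACK

        void onSegment(long seq, int len) {
            if (seq == expectedSeq) {                 // in-order arrival
                expectedSeq += len;
                if (ackPending) {                     // second in-order segment: ACK both at once
                    sendAck(expectedSeq);
                    ackPending = false;
                } else {
                    ackPending = true;                // wait up to 500 ms for the next segment
                }
            } else if (seq > expectedSeq) {           // out-of-order, gap detected
                sendAck(expectedSeq);                 // immediate duplicate ACK for next expected byte
            }
        }

        void sendAck(long ackNum) { /* transmit an ACK segment carrying ackNum */ }
    }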

TCP Window Dynamics

TCP Version Evolution. 1982: TCP/IP Version 3 officially split TCP and IP into two protocol layers. 1988: the slow-start scheme was added as the TCP-Tahoe version. 19??: the fast-recovery scheme was added as the TCP-Reno version. 1995: the TCP-Vegas version.

Fast Retransmit. The timeout period is often relatively long, causing a long delay before resending a lost packet; instead, lost segments can be detected via duplicate ACKs. A sender often sends many segments back-to-back, so if a segment is lost there will likely be many duplicate ACKs. If the sender receives 3 duplicate ACKs for the same data, it supposes that the segment after the ACKed data was lost, and performs a fast retransmit: resend the segment before the timer expires.

Figure 3.37: Resending a segment after a triple duplicate ACK (Host A resends the 2nd segment to Host B before its timeout expires).

Fast retransmit algorithm:

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
        else {  /* a duplicate ACK for an already-ACKed segment */
            increment count of dup ACKs received for y
            if (count of dup ACKs received for y == 3) {
                resend segment with sequence number y   /* fast retransmit */
            }
        }

TCP Flow Control. The receive side of a TCP connection has a receive buffer, and the app process may be slow at reading from it. Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast. It is a speed-matching service: matching the send rate to the receiving app's drain rate.

TCP Flow Control: how it works. (Suppose the TCP receiver discards out-of-order segments.) Spare room in the buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]. The receiver advertises the spare room by including the value of RcvWindow in segments; the sender limits unacked data to RcvWindow, which guarantees the receive buffer doesn't overflow.
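
A worked sketch of the advertised-window computation above, using the slide's variable names and illustrative byte counts:

    public class RcvWindowSketch {
        public static void main(String[] args) {
            int rcvBuffer = 65536;       // total receive buffer (bytes), illustrative value
            int lastByteRcvd = 20000;    // highest byte number placed into the buffer
            int lastByteRead = 12000;    // highest byte number consumed by the application
            int rcvWindow = rcvBuffer - (lastByteRcvd - lastByteRead);  // spare room = 57536
            System.out.println("Advertised RcvWindow = " + rcvWindow);
            // The sender keeps LastByteSent - LastByteAcked <= rcvWindow, so the buffer cannot overflow.
        }
    }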

TCP Connection Management. Recall: TCP sender and receiver establish a connection before exchanging data segments, initializing TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow). Client (connection initiator): Socket clientSocket = new Socket("hostname", "port number"); server (contacted by client): Socket connectionSocket = welcomeSocket.accept(); Three-way handshake: Step 1: the client host sends a TCP SYN segment to the server, specifying the initial seq # and carrying no data. Step 2: the server host receives the SYN and replies with a SYNACK segment; the server allocates buffers and specifies the server's initial seq. #. Step 3: the client receives the SYNACK and replies with an ACK segment, which may contain data.
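
In the style of the slide's own Java snippets, a minimal sketch showing where the three-way handshake actually happens (inside the Socket constructor and accept()); "localhost" and port 6789 are placeholder values:

    import java.net.ServerSocket;
    import java.net.Socket;

    public class HandshakeSketch {
        public static void main(String[] args) throws Exception {
            ServerSocket welcomeSocket = new ServerSocket(6789);   // server passively waits for SYNs
            Socket clientSocket = new Socket("localhost", 6789);   // client: SYN -> SYNACK -> ACK completes here
            Socket connectionSocket = welcomeSocket.accept();      // server side of the new connection
            // ... exchange data over the connection ...
            clientSocket.close();                                  // FIN/ACK teardown begins here
            connectionSocket.close();
            welcomeSocket.close();
        }
    }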

TCP Connection Management (cont.) Closing a connection: the client closes its socket: clientSocket.close(); Step 1: the client end system sends a TCP FIN control segment to the server. Step 2: the server receives the FIN and replies with an ACK; it closes the connection and sends its own FIN.

TCP Connection Management (cont.) Step 3: the client receives the FIN and replies with an ACK; it enters a timed wait, during which it will respond with an ACK to any received FINs. Step 4: the server receives the ACK; the connection is closed. Note: with a small modification, this can handle simultaneous closes.

Principles of Congestion Control. Congestion, informally: too many sources sending too much data too fast for the network to handle. This is different from flow control! Manifestations: lost packets (buffer overflow at routers) and long delays (queueing in router buffers). A top-10 problem!

Causes/costs of congestion: scenario 1. Two senders, two receivers; one router with infinite buffers; no retransmission. Host A and Host B each feed λ_in (original data) into an unlimited shared output-link buffer, and λ_out is the per-connection throughput. Costs: large delays when congested, and there is a maximum achievable throughput.

Causes/costs of congestion: scenario 2. One router, finite buffers; the sender retransmits lost packets. Host A offers λ_in (original data) plus retransmitted data, λ'_in, into the finite shared output-link buffers; λ_out is the throughput delivered to Host B.

Causes/costs of congestion: scenario 2 (cont.). Always: λ_in = λ_out (goodput). With perfect retransmission, retransmitting only when packets are lost: λ'_in > λ_out. Retransmission of delayed (not lost) packets makes λ'_in larger (than the perfect case) for the same λ_out. (Figure: λ_out vs. λ_in for the three cases, with the curves saturating at R/2, R/3, and R/4.) Costs of congestion: (a) more work (retransmissions) for a given goodput; (b) unneeded retransmissions: the link carries multiple copies of a packet.

Causes/costs of congestion: scenario 3. Four senders; multihop paths; timeout/retransmit. Q: what happens as λ_in and λ'_in increase? Each host offers λ_in (original data) plus retransmitted data, λ'_in, into finite shared output-link buffers, and λ_out is the delivered throughput.

Causes/costs of congestion: scenario 3 (cont.). Another cost of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

Approaches towards congestion control. Two broad approaches: End-end congestion control: no explicit feedback from the network; congestion is inferred from end-system observed loss and delay; this is the approach taken by TCP. Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or an explicit rate at which the sender should send.

TCP congestion control: additive increase, multiplicative decrease. Approach: increase the transmission rate (window size), probing for usable bandwidth, until loss occurs. Additive increase: increase CongWin by 1 MSS every RTT until loss is detected. Multiplicative decrease: cut CongWin in half after loss. The result is saw-tooth behavior, probing for bandwidth. (Figure: congestion window size vs. time, a saw tooth with axis ticks at 8, 16, and 24 Kbytes.)

TCP Congestion Control: details. The sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin. Roughly, rate = CongWin / RTT bytes/sec. CongWin is dynamic, a function of perceived network congestion. How does the sender perceive congestion? A loss event = timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event. Three mechanisms: AIMD, slow start, and being conservative after timeout events.

TCP Slow Start. When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec gives an initial rate of 20 kbps (see the worked numbers below). The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate. When the connection begins, increase the rate exponentially fast until the first loss event.
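
Checking the slide's numbers: with CongWin = 1 MSS = 500 bytes and RTT = 200 msec, the initial rate is CongWin/RTT:

    public class SlowStartRateSketch {
        public static void main(String[] args) {
            double mssBytes = 500;
            double rttSeconds = 0.2;
            double rateBytesPerSec = mssBytes / rttSeconds;      // 2500 bytes/sec
            double rateKbps = rateBytesPerSec * 8 / 1000;        // 20 kbps
            System.out.println(rateBytesPerSec + " bytes/sec = " + rateKbps + " kbps");
        }
    }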

TCP Slow Start (more). When the connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin by 1 MSS for every ACK received. (Figure: Host A sends one segment, then two, then four, in successive RTTs to Host B.) Summary: the initial rate is slow but ramps up exponentially fast.
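
A small sketch of why incrementing CongWin by 1 MSS per ACK doubles the window every RTT (the loop structure and MSS value are illustrative, not a real TCP implementation):

    public class SlowStartGrowthSketch {
        public static void main(String[] args) {
            final int MSS = 1460;
            int congWin = MSS;                               // slow start begins at 1 MSS
            for (int rtt = 1; rtt <= 4; rtt++) {
                int segmentsThisRtt = congWin / MSS;         // whole window sent this RTT
                for (int ack = 0; ack < segmentsThisRtt; ack++) {
                    congWin += MSS;                          // +1 MSS per ACK received
                }
                System.out.println("after RTT " + rtt + ": CongWin = " + congWin / MSS + " MSS");
            }
            // Prints 2, 4, 8, 16 MSS: exponential growth until the first loss event.
        }
    }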

Refinement: inferring loss After 3 dup ACKs: CongWin is cut in half window then grows linearly But after timeout event: CongWin instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly Philosophy: 3 dup ACKs indicates network capable of delivering some segments timeout indicates a more alarming congestion scenario

Refinement. Q: When should the exponential increase switch to linear? A: When CongWin gets to 1/2 of its value before timeout. Implementation: a variable Threshold; at a loss event, Threshold is set to 1/2 of CongWin just before the loss event.

Summary: TCP Congestion Control When CongWin is below Threshold, sender in slow-start phase, window grows exponentially. When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly. When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold. When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.

TCP sender congestion control (state / event → TCP sender action; commentary):
(1) Slow Start (SS), ACK receipt for previously unacked data → CongWin = CongWin + MSS; if (CongWin > Threshold) set state to Congestion Avoidance. Results in a doubling of CongWin every RTT.
(2) Congestion Avoidance (CA), ACK receipt for previously unacked data → CongWin = CongWin + MSS * (MSS/CongWin). Additive increase, resulting in an increase of CongWin by 1 MSS every RTT.
(3) SS or CA, loss event detected by triple duplicate ACK → Threshold = CongWin/2, CongWin = Threshold, set state to Congestion Avoidance. Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS.
(4) SS or CA, timeout → Threshold = CongWin/2, CongWin = 1 MSS, set state to Slow Start. Enter slow start.
(5) SS or CA, duplicate ACK → increment duplicate ACK count for the segment being ACKed. CongWin and Threshold not changed.
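
A compact sketch of the sender actions in the table above, with invented names (congWin, threshold, dupAckCount) and illustrative initial values; this is an outline of the state machine, not a real TCP stack:

    public class TcpCcSketch {
        static final int MSS = 1460;
        int congWin = MSS;            // start in slow start with 1 MSS
        int threshold = 64 * 1024;    // initial threshold (illustrative)
        int dupAckCount = 0;

        void onNewAck() {                                        // ACK for previously unacked data
            dupAckCount = 0;
            if (congWin < threshold) congWin += MSS;             // slow start: +1 MSS per ACK
            else congWin += Math.max(1, MSS * MSS / congWin);    // congestion avoidance: ~+1 MSS per RTT
        }

        void onDupAck() {                                        // duplicate ACK
            dupAckCount++;
            if (dupAckCount == 3) {                              // triple duplicate ACK: fast retransmit
                threshold = Math.max(congWin / 2, MSS);          // multiplicative decrease, not below 1 MSS
                congWin = threshold;
                // resend the missing segment here
            }
        }

        void onTimeout() {                                       // timeout: more alarming signal
            threshold = congWin / 2;
            congWin = MSS;                                       // back to slow start
            dupAckCount = 0;
        }
    }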

TCP Fairness. Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K. (Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.)

Why is TCP fair? Consider two competing sessions: additive increase gives a slope of 1 as throughput increases, while multiplicative decrease cuts throughput proportionally. (Figure: Connection 1 throughput vs. Connection 2 throughput, with the trajectory alternating between congestion avoidance (additive increase) and loss (window decreased by a factor of 2), converging toward the equal bandwidth share line bounded by R.)

Fairness (more) Fairness and UDP Multimedia apps often do not use TCP do not want rate throttled by congestion control Instead use UDP: pump audio/video at constant rate, tolerate packet loss Research area: TCP friendly Fairness and parallel TCP connections nothing prevents app from opening parallel connections between 2 hosts. Web browsers do this Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10 new app asks for 11 TCPs, gets R/2!
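
The arithmetic behind the parallel-connections example above, with the link rate R normalized to 1:

    public class FairShareSketch {
        public static void main(String[] args) {
            double r = 1.0;                          // normalize the link rate R to 1
            double singleConn = r / (9 + 1);         // 9 existing + 1 new connection: 0.10 R
            double elevenConns = 11 * r / (9 + 11);  // 9 existing + 11 new connections: 0.55 R, roughly R/2
            System.out.println("1 connection gets  " + singleConn + " R");
            System.out.println("11 connections get " + elevenConns + " R");
        }
    }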