Outline. TCP: Overview (RFCs 793, 1122, 1323, 2018, 2581): development of a reliable protocol; sliding window protocols


Outline
- Development of reliable protocol
- Sliding window protocols: Go-Back-N, Selective Repeat
- Protocol performance
- Sockets, UDP, TCP, and IP
- UDP operation
- TCP operation: connection management, flow control, congestion control

TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581
- point-to-point: one sender, one receiver
- reliable, in-order byte stream: no message boundaries
- sliding window: TCP congestion and flow control set the window size
- send & receive buffers: the application writes data into the TCP send buffer, segments carry it to the TCP receive buffer, and the peer application reads data out through the socket door on each side
- full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size (typical size 1460 bytes. why?)
- connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
- flow controlled: sender will not overwhelm receiver
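Since TCP delivers an unstructured byte stream, application writes do not map one-to-one onto reads. A minimal sketch with Python sockets (the peer address is a placeholder, not from the slides):

```python
# Two sendall() calls on one connection form a single byte stream; the peer
# may receive them in one recv() or split across several, since TCP keeps
# byte order but not message boundaries.
import socket

with socket.create_connection(("example.com", 80)) as s:   # placeholder peer
    s.sendall(b"GET / HTTP/1.0\r\n")
    s.sendall(b"Host: example.com\r\n\r\n")                 # two writes, one stream
    reply = s.recv(4096)                                     # whatever bytes have arrived
    print(reply[:80])
```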

TCP segment structure (32 bits wide)
- source port #, dest port #
- sequence number
- acknowledgement number
- head len, not used, flag bits U A P R S F, receive window
- checksum, urg data pointer
- options (variable length)
- application data (variable length)

Field notes:
- URG: urgent data (generally not used)
- ACK: ACK # valid
- PSH: push data now (generally not used)
- RST, SYN, FIN: connection estab (setup, teardown commands)
- Internet checksum (as in UDP)
- sequence and acknowledgement numbers count by bytes of data (not segments!)
- receive window: # bytes rcvr willing to accept (only 16 bits??)
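A sketch of how the fixed 20-byte header above could be decoded with Python's struct module (field layout as listed on the slide; this is illustrative, not a full TCP implementation):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    # !HHIIHHHH = src port, dst port, seq, ack, head-len/flags, window, checksum, urg ptr
    (src_port, dst_port, seq, ack,
     offset_flags, rwnd, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": (offset_flags >> 12) * 4,   # head len is counted in 32-bit words
        "flags": offset_flags & 0x3F,             # URG, ACK, PSH, RST, SYN, FIN bits
        "rwnd": rwnd, "checksum": checksum, "urg_ptr": urg_ptr,
    }
```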

TCP Connection Management (establishment, closing, and reliable data delivery using sequence numbers and timers)
Recall: TCP sender and receiver establish a connection before exchanging data segments
- initialize TCP variables: seq. #s; buffers, flow control info (e.g. RcvWindow)
- socket calls: client: connect(); server: accept()

Three-way handshake:
- Step 1: client host sends TCP SYN segment to server: specifies initial seq #; no data
- Step 2: server host receives SYN, replies with SYNACK segment: server allocates buffers; specifies server initial seq. #
- Step 3: client receives SYNACK, replies with ACK segment, which may contain data
(figure: TCP three-way handshake)
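In the sockets API the handshake is driven by connect() and accept(); the kernel exchanges the SYN, SYNACK, and ACK segments. A self-contained sketch on the loopback interface (port chosen by the OS, not taken from the slides):

```python
import socket

# Server side: bind()/listen() prepare a listening socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                 # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

# Client side: connect() triggers SYN -> SYNACK -> ACK.
cli = socket.create_connection(("127.0.0.1", port))

# accept() returns once a fully established connection is queued.
conn, addr = srv.accept()
conn.close(); cli.close(); srv.close()
```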

TCP Connection Management (cont.)
Closing a connection (note: two-army problem):
client closes socket: clientSocket.close()
- Step 1: client end system sends TCP FIN control segment to server
- Step 2: server receives FIN, replies with ACK; closes the connection, sends FIN
- Step 3: client receives FIN, replies with ACK; enters timed wait - will respond with ACK to received FINs
- Step 4: server receives ACK; connection closed
Note: with a small modification, can handle simultaneous FINs.
(figure: client/server FIN-ACK exchange; the client's timed wait ends in the closed state)

TCP Connection Management (cont.)
(figures: TCP server lifecycle; TCP client lifecycle)

TCP seq. #s and ACKs
- Seq. #s: byte-stream number of the first byte in the segment's data
- ACKs: seq # of the next byte expected from the other side; cumulative ACK

Simple telnet scenario (Host A, Host B):
- User types 'C': A sends Seq=42, ACK=79, data = 'C'
- Host B ACKs receipt of 'C', echoes back 'C': Seq=79, ACK=43, data = 'C'
- Host A ACKs receipt of the echoed 'C': Seq=43, ACK=80

TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
- longer than RTT, but RTT varies
- too short: premature timeout, unnecessary retransmissions
- too long: slow reaction to segment loss
Q: how to estimate RTT?
- SampleRTT: measured time from segment transmission until ACK receipt
- SampleRTT will vary; want the estimated RTT smoother: average several recent measurements, not just the current SampleRTT
- How to measure RTT when there are retransmissions?

EstimatedRTT = (1 - a) * EstimatedRTT + a * SampleRTT
- exponential weighted moving average
- influence of a past sample decreases exponentially fast
- typical value: a = 0.125

Example RTT estimation
(figure: SampleRTT and EstimatedRTT in milliseconds over time, gaia.cs.umass.edu to fantasia.eurecom.fr)

TCP Round Trip Time and Timeout: setting the timeout
- EstimatedRTT plus a safety margin: large variation in EstimatedRTT -> larger safety margin
- first estimate how much SampleRTT deviates from EstimatedRTT:
  DevRTT = (1 - b) * DevRTT + b * |SampleRTT - EstimatedRTT|   (typically, b = 0.25)
- then set the timeout interval:
  TimeoutInterval = EstimatedRTT + 4 * DevRTT
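A small sketch of the estimator above, using a = 0.125 and b = 0.25 as on the slides (seeding the estimate with the first sample is a common convention, not something the slides specify):

```python
def make_rtt_estimator(alpha=0.125, beta=0.25):
    est_rtt, dev_rtt = None, 0.0
    def update(sample_rtt):
        nonlocal est_rtt, dev_rtt
        if est_rtt is None:                       # first measurement seeds the estimate
            est_rtt, dev_rtt = sample_rtt, sample_rtt / 2
        else:
            dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - est_rtt)
            est_rtt = (1 - alpha) * est_rtt + alpha * sample_rtt
        return est_rtt + 4 * dev_rtt              # TimeoutInterval
    return update

timeout = make_rtt_estimator()
for s in [0.100, 0.120, 0.300, 0.110]:            # SampleRTT values in seconds
    print(round(timeout(s), 3))
```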

TCP reliable data transfer
- TCP creates a reliable service on top of IP's unreliable service
- sliding windows improve utilization
- cumulative ACKs provide redundancy
- TCP uses a single retransmission timer
- retransmissions are triggered by: timeout events; duplicate ACKs
- initially we consider a simplified TCP sender: ignore duplicate ACKs; ignore flow control, congestion control

TCP sender events:
- data rcvd from app: create segment with seq # (the seq # is the number of the first data byte in the segment); start timer if not already running (think of the timer as for the oldest unacked segment); expiration interval: TimeoutInterval
- timeout: retransmit the segment that caused the timeout; restart timer
- ACK rcvd: if it acknowledges previously unacked segments, update what is known to be ACKed; start timer if there are outstanding segments
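A sketch of those three event handlers for the simplified sender (duplicate ACKs, flow control, and congestion control ignored, as on the slide; send and retransmit are hypothetical callbacks into a transmission layer):

```python
class SimplifiedTcpSender:
    def __init__(self, timeout_interval, send, retransmit):
        self.next_seq = 0                 # seq # of the next byte to send
        self.send_base = 0                # oldest unacked byte
        self.timeout_interval = timeout_interval
        self.timer_running = False
        self.send, self.retransmit = send, retransmit

    def data_from_app(self, data: bytes):
        self.send(seq=self.next_seq, payload=data)   # create segment with seq #
        if not self.timer_running:                   # timer covers oldest unacked segment
            self.timer_running = True
        self.next_seq += len(data)

    def on_timeout(self):
        self.retransmit(seq=self.send_base)          # resend segment that caused timeout
        self.timer_running = True                    # restart timer

    def on_ack(self, ack: int):
        if ack > self.send_base:                     # previously unacked data acknowledged
            self.send_base = ack
            self.timer_running = self.send_base < self.next_seq
```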

TCP ACK generation [RFC 1122, RFC 2581]
- Event: arrival of in-order segment with expected seq #; all data up to the expected seq # already ACKed. Receiver action: delayed ACK - wait up to 500 ms for the next segment; if no next segment, send ACK.
- Event: arrival of in-order segment with expected seq #; one other segment has an ACK pending. Receiver action: immediately send a single cumulative ACK, ACKing both in-order segments.
- Event: arrival of out-of-order segment with higher-than-expected seq. #; gap detected. Receiver action: immediately send a duplicate ACK, indicating the seq. # of the next expected byte.
- Event: arrival of a segment that partially or completely fills the gap. Receiver action: immediately send an ACK, provided that the segment starts at the lower end of the gap.

Fast Retransmit
- the timeout period is often relatively long: long delay before resending a lost packet
- detect lost segments via duplicate ACKs: the sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs for that segment
- if the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost
- fast retransmit: resend the segment before the timer expires
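A sketch of the sender-side duplicate-ACK rule (the retransmit callback is hypothetical):

```python
def make_dupack_detector(retransmit):
    last_ack, dup_count = None, 0
    def on_ack(ack):
        nonlocal last_ack, dup_count
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:               # triple duplicate ACK
                retransmit(ack)              # fast retransmit: resend from 'ack'
        else:
            last_ack, dup_count = ack, 0     # new data ACKed: reset the counter
    return on_ack

on_ack = make_dupack_detector(lambda seq: print("fast retransmit from seq", seq))
for a in [100, 200, 200, 200, 200]:          # three duplicates of ACK 200
    on_ack(a)
```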

(figure: fast retransmit - Host A sends segments x1..x5, x2 is lost, and Host B's triple duplicate ACKs for x1 cause the sender to resend x2 before the timeout expires)

TCP Options
- recall the TCP header: options follow the fixed fields (ports, sequence and acknowledgement numbers, head len, flags, receive window, checksum, urgent data pointer) and precede the application data
- options enable TCP to evolve to accommodate new technologies, etc.
- most options are quite short
- most are used only during the SYN/SYNACK phase

Maximum Segment Size (MSS)
- tells the peer not to send segments larger than the specified value
- typically based on a lower-layer limit (e.g. Ethernet)

Window Scaling
- the 16-bit window field means a 64 KB limit
- window scaling shifts the window field left by the specified value (an 8-bit option byte could hold up to 255, but the shift is limited to 14. why?)
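Quick arithmetic behind those limits (the note on "why 14" is a gloss under the usual reasoning, not stated on the slide: the shift cap keeps the maximum window well below half of the 32-bit sequence-number space):

```python
unscaled_max = 2**16 - 1                  # 16-bit window field: 65,535 bytes (~64 KB)
scaled_max = (2**16 - 1) << 14            # maximum shift of 14: ~1 GB
print(unscaled_max, scaled_max)           # 65535 1073725440
assert scaled_max < 2**31                 # still under half of the 2**32 sequence space
```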

Selective Acknowledgments (SACK)
- allows the receiver to indicate that it has received some blocks of the stream while missing others
- whether SACK is permitted is negotiated at SYN time
(figure: actual SACK option format)

SYN Flood DoS Attack
- a classic denial of service (DoS) attack
- flood a node with TCP connection requests: the node allocates resources for each connection and sends a SYNACK segment; the last part of the handshake (from the attacker) never happens
- the node eventually releases resources for half-open connections, but requests come too fast
- all kernel resources for TCP connections are consumed; other, legitimate clients are shut out

Solution: SYN cookies
- instead of allocating a half-open connection, the server picks its seq # as a hash of the IP addresses, ports, and a secret number (known only to the server) - this is called a SYN cookie
- it sends back a SYNACK with the SYN cookie, allocates no resources, and forgets the seq #
- if an ACK segment arrives, the server regenerates the seq #; the ACK field of the arriving packet should equal that number plus 1
- if so, go ahead and allocate resources for this legitimate client
- if an ACK never arrives (as in an attack), we have lost only the time to handle the SYN packet
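A toy sketch of the stateless cookie check described above (real SYN cookies also encode a coarse timestamp and the MSS, which are omitted here):

```python
import hashlib, os

SECRET = os.urandom(16)                          # known only to the server

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    return int.from_bytes(digest[:4], "big")     # server's 32-bit initial seq #

def ack_completes_handshake(src_ip, src_port, dst_ip, dst_port, ack_field):
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32
    return ack_field == expected                 # only now allocate connection state
```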

Completed-handshake attack
- SYN floods were most effective when launched from multiple clients: a Distributed Denial of Service (DDoS) attack
- some attacks get around SYN cookies by completing the TCP connection: resources are allocated but not used
- much harder to defend against - hard to tell legitimate clients from attackers

TCP Flow Control and Congestion Control

TCP Flow Control
- the receive side of a TCP connection has a receive buffer (tied to the socket): IP datagrams arrive, TCP data sits in the buffer, and the rest is (currently) unused buffer space; the receiving application process may be slow at reading from the buffer (socket)
- flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast
- speed-matching service: matching the send rate to the receiving application's drain rate

TCP Flow Control: how it works
(note: for now, assume the TCP receiver discards out-of-order segments)
- unused buffer space: rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]
- receiver: advertises unused buffer space by including the rwnd value in the segment header (window advertisement)
- sender: limits the # of unacked bytes to rwnd; this guarantees the receiver's buffer doesn't overflow
- basically, an adaptive sliding window
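A sketch of the two sides of that rule (variable names follow the slide):

```python
def rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    # unused buffer space advertised back to the sender
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, advertised_rwnd, nbytes):
    # keep the number of unacked bytes at or below rwnd
    return (last_byte_sent - last_byte_acked) + nbytes <= advertised_rwnd

print(rwnd(rcv_buffer=65536, last_byte_rcvd=120000, last_byte_read=80000))   # 25536
```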

Principles of Congestion Control
Congestion:
- informally: too many sources sending too much data too fast for the network to handle
- not the same as flow control!
- manifestations: lost packets (buffer overflow at routers); long delays (queueing in router buffers)
- a major issue in Internet performance

Controlling congestion: two broad approaches
- end-end congestion control: no explicit feedback from the network; congestion inferred from end-system observed loss and delay; approach taken by TCP
- network-assisted congestion control: routers provide feedback to end systems: a single bit indicating congestion (used in early networks: SNA, DECbit, ATM), or the explicit rate at which the sender should transmit

TCP congestion control
- goal: the TCP sender should transmit as fast as possible, but without congesting the network (Q: how to find the rate just below the congestion level?)
- decentralized: each TCP sender sets its own rate, based on implicit feedback:
  - ACK received: the segment arrived (a good thing!); network not congested, so increase the sending rate
  - ACK not received: assume loss due to a congested network, so decrease the sending rate

Congestion Window
- in addition to the receive window (from advertisements), the sender maintains a congestion window, cwnd
- rwnd indicates the state of the receiver; cwnd indicates the state of the network
- *** the send window is min(rwnd, cwnd)

TCP Slow Start
- when the connection begins, cwnd = 1 MSS (max segment size)
  - example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps
- available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
- increase the rate exponentially until the first loss event or until the threshold is reached: double cwnd every RTT, done by incrementing cwnd by 1 MSS for every ACK received (see the sketch below)
(figure: Host A sends one segment, then two, then four, one RTT apart)

TCP Congestion Control: more details
Segment loss event: reducing cwnd
- timeout: no response from the receiver -> cut cwnd to 1 MSS
- 3 duplicate ACKs: at least some segments are getting through (recall fast retransmit) -> cut cwnd in half, less aggressively than on timeout
ACK received: increase cwnd
- slow-start phase: increase exponentially fast (despite the name) at connection start, or following a timeout
- congestion avoidance: increase linearly
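A minimal sketch of why "+1 MSS per ACK" doubles cwnd every RTT during slow start:

```python
MSS = 500                                        # bytes, as in the slide's example
cwnd = 1 * MSS
for rtt in range(4):
    segments_this_rtt = cwnd // MSS
    print(f"RTT {rtt}: send {segments_this_rtt} segment(s), cwnd = {cwnd} bytes")
    cwnd += segments_this_rtt * MSS              # one ACK (and +1 MSS) per segment sent
# prints 1, 2, 4, 8 segments per round: exponential growth
```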

TCP congestion control: bandwidth probing
- "probing for bandwidth": increase the transmission rate on receipt of ACKs until loss eventually occurs, then decrease the transmission rate
- continue to increase on ACK, decrease on loss (since the available bandwidth is changing, depending on other connections in the network)
(figure: the sending rate rises while ACKs arrive and drops at each loss - TCP's sawtooth behavior)

Transitioning into/out of slow start
- ssthresh: a cwnd threshold maintained by TCP
- on a loss event: set ssthresh to cwnd/2 (remember half of the TCP rate when congestion last occurred)
- when cwnd >= ssthresh: transition from slow start to the congestion-avoidance phase
(FSM excerpt: start in slow start with cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0; each new ACK sets cwnd = cwnd + MSS, resets dupACKcount, and transmits new segments as allowed; a duplicate ACK increments dupACKcount; on timeout, set ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0 and retransmit the missing segment; when cwnd > ssthresh, move to congestion avoidance)

TCP: congestion avoidance
- when cwnd > ssthresh, grow cwnd linearly: increase cwnd by 1 MSS per RTT; approach possible congestion more slowly than in slow start
- implementation: cwnd = cwnd + MSS*(MSS/cwnd) for each ACK received

AIMD
- ACKs: increase cwnd by 1 MSS per RTT: additive increase
- loss: cut cwnd in half (for non-timeout-detected loss): multiplicative decrease
- AIMD: Additive Increase Multiplicative Decrease
(figure: popular flavors of TCP - cwnd in segments vs. transmission round for TCP Tahoe and TCP Reno, with ssthresh marked)
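A Reno-style sketch tying these rules together (cwnd and ssthresh in bytes, integer arithmetic; fast-recovery details are omitted for brevity):

```python
MSS = 1460

class CongestionControl:
    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64 * 1024

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                        # slow start: exponential growth
        else:
            self.cwnd += MSS * MSS // self.cwnd     # congestion avoidance: ~1 MSS per RTT

    def on_triple_dup_ack(self):
        self.ssthresh = self.cwnd // 2              # multiplicative decrease
        self.cwnd = self.ssthresh                   # Reno: resume near half the old rate

    def on_timeout(self):
        self.ssthresh = self.cwnd // 2
        self.cwnd = 1 * MSS                         # back to slow start
```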

Summary: TCP Congestion Control
- when cwnd < ssthresh, the sender is in the slow-start phase and the window grows exponentially
- when cwnd >= ssthresh, the sender is in the congestion-avoidance phase and the window grows linearly
- when a triple duplicate ACK occurs, ssthresh is set to cwnd/2 and cwnd is set to ~ssthresh
- when a timeout occurs, ssthresh is set to cwnd/2 and cwnd is set to 1 MSS

Effects of Wireless LANs
- loss rates on wireless LANs are much higher than on wired networks
- some loss can be mitigated by retransmissions at the link layer (discussed later)
- IP handoffs for mobile devices also cause loss
- what is TCP's response to packet loss?
- many solutions have been proposed (and are still being proposed!) to deal with this issue

TCP Fairness
- fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
(figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R)

Why is TCP fair?
Two competing sessions:
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally
(figure: connection 1 vs. connection 2 throughput; repeated cycles of congestion-avoidance additive increase followed by a loss that halves both windows move the pair toward the equal-bandwidth-share line, bounded by R)
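A toy model of that argument: two AIMD flows sharing a link of capacity R converge toward equal rates regardless of where they start (the per-step increase and the starting rates are arbitrary choices for illustration):

```python
R = 100.0
x1, x2 = 80.0, 10.0                      # deliberately unequal starting rates
for _ in range(500):
    if x1 + x2 > R:                      # loss event at the shared bottleneck
        x1, x2 = x1 / 2, x2 / 2          # multiplicative decrease for both flows
    else:
        x1, x2 = x1 + 1, x2 + 1          # additive increase, one unit per "RTT"
print(round(x1, 2), round(x2, 2))        # the two rates end up nearly equal
```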

Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
- example: a link of rate R supporting 9 connections: a new app asking for 1 TCP connection gets rate R/10; a new app asking for 11 TCP connections gets about R/2!

Summary
- developed the basis of a reliable protocol
- learned about sliding window protocols: Go-Back-N, Selective Repeat
- analyzed protocol performance
- discussed demultiplexing
- UDP operation (ports, checksum)
- TCP operation: connection management (sequence numbers); flow control (receive window); congestion control (congestion window)