Lecture 11: Transport Layer (cont'd)

Agenda: the Transport Layer (continued), covering connection-oriented transport (TCP), flow control, connection management, and congestion control; then an introduction to the Application Layer, covering application architectures and the transport services required by an application.

Introduction: Transport Services and Protocols (Recall) Transport protocols provide logical communication between application processes running on different hosts and run in end systems. Send side: breaks application messages into segments and passes them to the network layer. Receive side: reassembles segments into messages and passes them to the application layer. More than one transport protocol is available to applications; in the Internet: TCP and UDP.

TCP Flow Control The receive side of a TCP connection has a receive buffer, and the application process may be slow at reading from it. Flow control ensures the sender won't overflow the receiver's buffer by transmitting too much, too fast. It is a speed-matching service: it matches the send rate to the receiving application's drain rate.

TCP Flow Control: how it works (suppose the TCP receiver discards out-of-order segments). Spare room in the buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]. The receiver advertises the spare room by including the value of RcvWindow in the segments it sends. The sender limits the amount of unacknowledged data to RcvWindow, which guarantees the receive buffer doesn't overflow.
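
As a rough sketch of the bookkeeping described above (not an actual TCP implementation; class and variable names simply mirror the slide's notation):

// Hypothetical sketch of the flow-control bookkeeping; names follow the slide.
class FlowControlSketch {
    // Receiver side: advertise the spare room in the receive buffer.
    static long rcvWindow(long rcvBuffer, long lastByteRcvd, long lastByteRead) {
        return rcvBuffer - (lastByteRcvd - lastByteRead);
    }

    // Sender side: only send if unacked data stays within the advertised window.
    static boolean maySend(long lastByteSent, long lastByteAcked,
                           long bytesToSend, long rcvWindow) {
        return (lastByteSent - lastByteAcked) + bytesToSend <= rcvWindow;
    }

    public static void main(String[] args) {
        long rwnd = rcvWindow(64 * 1024, 48_000, 40_000);   // 8,000 bytes still queued
        System.out.println("advertised RcvWindow = " + rwnd + " bytes");
        System.out.println("may send 1500 more bytes? " +
                maySend(100_000, 60_000, 1500, rwnd));
    }
}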

TCP Connection Management Recall: TCP sender and receiver establish a connection before exchanging data segments and initialize TCP variables: sequence numbers, buffers, and flow-control information (e.g., RcvWindow). Client (connection initiator): Socket clientSocket = new Socket("hostname", port); Server (contacted by the client): Socket connectionSocket = welcomeSocket.accept(); Three-way handshake: Step 1: the client host sends a TCP SYN segment to the server; it specifies the client's initial sequence number and carries no data. Step 2: the server host receives the SYN and replies with a SYNACK segment; the server allocates buffers and specifies its own initial sequence number. Step 3: the client receives the SYNACK and replies with an ACK segment, which may contain data. (See the minimal socket sketch below.)
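
For concreteness, a minimal Java sketch of the client and server sides of connection setup; the port number and class name are illustrative assumptions, and the three-way handshake itself is performed by the OS inside connect()/accept():

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch: the application only sees the sockets that result from the handshake.
public class HandshakeSketch {
    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            ServerSocket welcomeSocket = new ServerSocket(6789); // port is an assumption
            Socket connectionSocket = welcomeSocket.accept();    // returns once SYN/SYNACK/ACK complete
            System.out.println("connection from " + connectionSocket.getRemoteSocketAddress());
            connectionSocket.close();
            welcomeSocket.close();
        } else {
            Socket clientSocket = new Socket("localhost", 6789); // sends the SYN
            System.out.println("connected: " + clientSocket.isConnected());
            clientSocket.close();                                // starts the FIN exchange
        }
    }
}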

TCP Connection Management (cont.) Closing a connection: the client closes its socket: clientSocket.close(); Step 1: the client end system sends a TCP FIN control segment to the server. Step 2: the server receives the FIN and replies with an ACK; it then closes the connection and sends its own FIN.

TCP Connection Management (cont.) Step 3: the client receives the FIN and replies with an ACK; it then enters a timed wait, during which it will respond with an ACK to any received FINs. Step 4: the server receives the ACK; the connection is closed. Note: with a small modification, this can also handle simultaneous FINs.

Principles of Congestion Control Congestion, informally: too many sources sending too much data too fast for the network to handle. It is different from flow control! Manifestations: lost packets (buffer overflow at routers) and long delays (queueing in router buffers). A highly important problem!

Causes/costs of congestion: scenario 1 Setup: two senders, two receivers; one router with infinite buffers; an unlimited shared output-link buffer; a transmission link with capacity C; no retransmission. λin denotes each host's original data rate and λout the delivered rate. Cost: large delays when congested, even as throughput approaches its maximum achievable value.

Causes/costs of congestion: scenario 2 Setup: one router with finite buffers; the sender retransmits lost packets. λin: original data; λ'in: original data plus retransmitted data; λout: delivered data. The shared output-link buffers are finite.

Causes/costs of congestion: scenario 2 (cont.) Always: λin = λout (goodput). With "perfect" retransmission (only when a packet is actually lost): λ'in > λout. Retransmission of delayed (not lost) packets makes λ'in larger (than in the perfect case) for the same λout. (Figure: three λout vs. λin curves, panels a-c, reaching roughly R/2, R/3, and R/4 respectively as λin approaches R/2.) Costs of congestion: more work (retransmission) for a given goodput, and unneeded retransmissions mean the link carries multiple copies of packets.

Causes/costs of congestion: scenario 3 Setup: four senders; multihop shared paths; timeout/retransmit. Q: what happens as λin and λ'in increase? (λin: original data; λ'in: original data plus retransmitted data; routers have finite shared output-link buffers.)

Causes/costs of congestion: scenario 3 (cont.) Another cost of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

Approaches towards Congestion Control Two broad approaches: (1) End-to-end congestion control: no explicit feedback from the network; congestion is inferred from end-system-observed loss and delay; this is the approach taken by TCP. (2) Network-assisted congestion control: routers provide feedback to end systems, either a single bit indicating congestion (TCP/IP Explicit Congestion Notification, ECN; ATM) or an explicit rate at which the sender should send.

Goals of Congestion Control Throughput: maximize goodput, i.e., the total number of bits delivered end-to-end. Fairness: give different sessions an equal share. Max-min fairness: maximize the rate of the minimum-rate session. Single link of capacity R shared by m sessions: each session gets R/m.

TCP Congestion Control: details The sender limits data transmission using a congestion window (CongWin): LastByteSent - LastByteAcked <= CongWin. Roughly, rate = CongWin / RTT bytes/sec. CongWin is dynamic, a function of perceived network congestion. How does the sender perceive congestion? A loss event, signalled by a timeout or 3 duplicate ACKs; the TCP sender reduces its rate (CongWin) after a loss event. Three mechanisms: AIMD, slow start, and being conservative after timeout events.

TCP Congestion Control (1): Additive Increase, Multiplicative Decrease (AIMD) Approach: increase the transmission rate (window size), probing for usable bandwidth, until loss occurs. Additive increase: increase CongWin by 1 maximum segment size (MSS) every RTT until loss is detected. Multiplicative decrease: cut CongWin in half after a loss. The result is a sawtooth behavior, probing for bandwidth (figure: congestion window vs. time, with ticks at 8, 16, and 24 KBytes; losses occur and are detected at the peaks of the sawtooth). A minimal sketch of the AIMD rule appears below.
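
As an illustrative sketch only (not TCP's actual code, and ignoring slow start and flow control; the class and method names are assumptions), the AIMD rule can be written as:

// Hypothetical sketch of AIMD; window tracked in bytes.
class AimdSketch {
    static final double MSS = 1460;   // assumed maximum segment size
    double congWin = 10 * MSS;        // assumed starting window

    // Additive increase: +1 MSS per RTT (real TCP spreads this across ACKs,
    // adding roughly MSS*MSS/congWin per ACK during congestion avoidance).
    void onRttWithoutLoss() { congWin += MSS; }

    // Multiplicative decrease: halve the window after a loss.
    void onLossDetected() { congWin = Math.max(MSS, congWin / 2.0); }

    public static void main(String[] args) {
        AimdSketch s = new AimdSketch();
        for (int i = 0; i < 5; i++) s.onRttWithoutLoss();
        s.onLossDetected();
        System.out.println("congWin after 5 loss-free RTTs and one loss: " + s.congWin + " bytes");
    }
}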

TCP Congestion Control (2): TCP Slow Start What is the goal? Getting to equilibrium gradually but quickly; slow start implements a multiplicative-increase (MI) algorithm. When a connection begins, CongWin = 1 MSS, and the rate is increased exponentially fast until the network becomes congested (i.e., the first loss event). Example: MSS = 500 bytes and RTT = 200 msec give an initial rate of only 20 kbps. The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.

TCP Slow Start (more) When a connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT, done by incrementing CongWin by 1 MSS for every ACK received. Summary: the initial rate is slow, but it ramps up exponentially fast.

Refinement: Inferring Loss After 3 duplicate ACKs: CongWin is cut in half and the window then grows linearly. But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, then grows linearly. Philosophy: 3 duplicate ACKs indicate that the network is still capable of delivering some segments; a timeout indicates a more alarming congestion scenario (a congested network).

Refinement Q: When should the exponential increase switch to linear? A: When CongWin gets to 1/2 of its value before the timeout. Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of the value of CongWin just before the loss event.

Summary: TCP Congestion Control When CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially. When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly. When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold. When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS. The sketch below pulls these four rules together.
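
A compact, illustrative sketch of these rules (assumed variable names and constants; real TCP implementations are considerably more involved):

// Illustrative sketch of the summary rules above (not production TCP code).
class TcpRenoSketch {
    static final double MSS = 1460;        // assumed segment size in bytes
    double congWin = MSS;                  // start in slow start with 1 MSS
    double threshold = 64 * 1024;          // assumed initial threshold

    // Called once per ACK; grows the window.
    void onAck() {
        if (congWin < threshold) {
            congWin += MSS;                 // slow start: window doubles per RTT
        } else {
            congWin += MSS * MSS / congWin; // congestion avoidance: +1 MSS per RTT
        }
    }

    // Loss inferred from a triple duplicate ACK: halve and continue linearly.
    void onTripleDupAck() {
        threshold = congWin / 2;
        congWin = threshold;
    }

    // Loss inferred from a timeout: more alarming, restart from 1 MSS.
    void onTimeout() {
        threshold = congWin / 2;
        congWin = MSS;
    }

    public static void main(String[] args) {
        TcpRenoSketch t = new TcpRenoSketch();
        for (int i = 0; i < 100; i++) t.onAck();
        t.onTripleDupAck();
        System.out.printf("congWin = %.0f bytes, threshold = %.0f bytes%n", t.congWin, t.threshold);
    }
}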

TCP Throughput What's the average throughput of TCP as a function of the window size, the maximum segment size (MSS) in bytes, and the RTT? Ignore slow start. Let w be the window size (in segments) when loss occurs. When the window is w MSS bytes, the throughput is (w · MSS) / RTT bytes/sec. Just after a loss, the window drops to w/2 and the throughput to (w · MSS) / (2 RTT). Since the window then grows linearly from w/2 back to w, the average window over a cycle is 0.75 w, giving an average throughput of 0.75 (w · MSS) / RTT bytes/sec.

Example A client receives 1500-byte segments from a web server; the average RTT is 100 ms and the target data throughput is 10 Gbps. Calculate the required window size. How long does it take to receive a data object from a Web server after sending a request? Answer: required window size W = (RTT · throughput) / MSS ≈ 83,333 segments. The total time T to receive the data object includes the TCP connection establishment and the data transfer delay: T = RTT + MSS/throughput, with RTT = 100 ms. A quick check of the window-size arithmetic appears below.
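
A small numeric check of the window-size calculation, using the values from the example (the class name is an assumption):

// Verify W = RTT * throughput / MSS for the example above.
public class WindowSizeCheck {
    public static void main(String[] args) {
        double throughputBps = 10e9;       // 10 Gbps
        double rttSeconds = 0.100;         // 100 ms
        double mssBits = 1500 * 8;         // 1500-byte segments

        double windowSegments = rttSeconds * throughputBps / mssBits;
        System.out.printf("required window ~= %.0f segments%n", windowSegments);
        // prints: required window ~= 83333 segments
    }
}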

TCP Fairness Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K. (Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R.)

Why is TCP fair? Consider two competing sessions. Additive increase gives a slope of 1 as their combined throughput increases; multiplicative decrease cuts each throughput proportionally. (Figure: Connection 1 throughput vs. Connection 2 throughput, both axes up to R, with the equal-bandwidth-share line; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by a factor of 2), converging toward the equal-share line.) A toy simulation sketch of this convergence follows.
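
A toy simulation sketch of the convergence argument, under the simplifying assumption that both flows see a loss at the same time whenever the link of capacity R is overloaded (the parameters are arbitrary):

// Toy model: each RTT, both flows add 1 unit; when their sum exceeds the
// link capacity r, both halve (synchronized-loss assumption).
public class AimdFairnessSketch {
    public static void main(String[] args) {
        double r = 100;              // link capacity (arbitrary units)
        double x1 = 80, x2 = 10;     // deliberately unequal starting rates
        for (int rtt = 0; rtt < 200; rtt++) {
            x1 += 1; x2 += 1;        // additive increase for both flows
            if (x1 + x2 > r) {       // loss when the link is overloaded
                x1 /= 2; x2 /= 2;    // multiplicative decrease for both
            }
        }
        System.out.printf("flow 1 ~= %.1f, flow 2 ~= %.1f (equal share is %.1f)%n",
                x1, x2, r / 2);
    }
}

Each loss event halves the gap between the two rates while additive increase leaves it unchanged, so the rates drift toward the equal share.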

Fairness (more) Fairness and UDP: multimedia applications often do not use TCP because they do not want their rate throttled by congestion control. Instead they use UDP: they pump audio/video at a constant rate and tolerate packet loss. Fairness and parallel TCP connections: nothing prevents applications from opening parallel connections between two hosts, and web browsers do this.

Internet Transport Protocols: Services TCP service: connection-oriented (setup required between client and server processes); reliable transport between sending and receiving processes; flow control (the sender won't overwhelm the receiver); congestion control (throttle the sender when the network is overloaded). It does not provide timing, minimum-throughput guarantees, or security. UDP service: unreliable data transfer between sending and receiving processes; it does not provide connection setup, reliability, flow control, congestion control, timing, throughput guarantees, or security. (A minimal side-by-side code sketch follows.)
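
To make the service difference concrete, a minimal Java sketch contrasting the two sending paths; the host name and port are illustrative assumptions, and a peer must actually be listening for the calls to succeed:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;

// TCP: connection setup, then a reliable byte stream with flow/congestion control.
// UDP: no setup; each datagram is sent best-effort, with no delivery guarantee.
public class TcpVsUdpSketch {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();

        try (Socket tcp = new Socket("example.com", 5000)) {      // handshake happens here
            tcp.getOutputStream().write(data);                     // reliable, ordered delivery
        }

        try (DatagramSocket udp = new DatagramSocket()) {          // no connection setup
            udp.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("example.com"), 5000));  // best-effort datagram
        }
    }
}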

The Application Layer Network programs/services run on (different) end systems and communicate over the network; e.g., web server software communicates with browser software. There is no need to write software for network-core devices: network-core devices do not run user applications. Keeping applications on end systems allows for rapid application development and propagation.

Some Network Applications: e-mail, the web, instant messaging, remote login, P2P file sharing, multi-user network games, streaming of stored video clips, voice over IP, real-time video conferencing, etc.

Application Architectures: client-server, peer-to-peer (P2P), and hybrids of client-server and P2P.

Client-Server Architecture Server: an always-on host with a permanent IP address; server farms are used for scaling. Clients: communicate with the server, may be intermittently connected, may have dynamic IP addresses, and do not communicate directly with each other.

Pure P2P Architecture No always-on server; arbitrary end systems communicate directly; peers are intermittently connected and change IP addresses. Highly scalable, but difficult to manage.

Hybrid of Client-Server and P2P Skype: a voice-over-IP P2P application; a centralized server is used for finding the address of the remote party, while the client-to-client connection is direct (not through the server). Instant messaging: chatting between two users is P2P, but a centralized service handles client presence detection/location; a user registers its IP address with the central server when it comes online and contacts the central server to find the IP addresses of buddies.

Processes Communicating Process: a program running within a host. Within the same host, two processes communicate using inter-process communication (defined by the OS); processes on different hosts communicate by exchanging messages. Client process: the process that initiates communication. Server process: the process that waits to be contacted. Note: applications with P2P architectures have both client processes and server processes.

Sockets A process sends/receives messages to/from its socket. A socket is analogous to a door: the sending process shoves a message out the door and relies on the transport infrastructure on the other side of the door to bring the message to the socket at the receiving process. The socket is controlled by the application developer; the transport layer beneath it (TCP with its buffers and variables) is controlled by the OS.

Addressing Processes To receive messages, a process must have an identifier. A host device has a unique 32-bit IP address. Q: does the IP address of the host on which a process runs suffice to identify the process? A: No, many processes can be running on the same host. The identifier therefore includes both the IP address and the port number associated with the process on the host. Example port numbers: HTTP server: 80; mail server: 25. To send an HTTP message to the gaia.cs.umass.edu web server: IP address 128.119.245.12, port number 80. More on this shortly.
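
As a small illustration (a sketch, assuming the host is reachable and serving HTTP on port 80), addressing a specific process means supplying both parts of the identifier when creating the socket:

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// The (IP address, port) pair identifies the target process: here, the web
// server process listening on port 80 at gaia.cs.umass.edu.
public class AddressingSketch {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("gaia.cs.umass.edu", 80)) {
            OutputStream out = s.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: gaia.cs.umass.edu\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            System.out.println("first byte of the reply: " + s.getInputStream().read());
        }
    }
}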

Application-Layer Protocols An application-layer protocol defines: the types of messages exchanged (e.g., request, response); the message syntax (what fields the messages contain and how the fields are delineated); the message semantics (the meaning of the information in the fields); and the rules for when and how processes send and respond to messages. Public-domain protocols are defined in Requests for Comments (RFCs), which allows for interoperability; e.g., HTTP, SMTP. Proprietary protocols: e.g., Skype.

What Transport Services Does an Application Need? Data loss: some apps (e.g., audio) can tolerate some loss; other apps (e.g., file transfer, telnet) require 100% reliable data transfer. Timing: some apps (e.g., Internet telephony, interactive games) require low delay to be effective. Throughput: some apps (e.g., multimedia) require a minimum amount of throughput to be effective; other apps ("elastic" apps) make use of whatever throughput they get. Security: encryption, data integrity, etc.

Transport Service Requirements of Common Applications

Application              Data loss       Throughput                                    Time sensitive
file transfer            no loss         elastic                                       no
e-mail                   no loss         elastic                                       no
Web documents            no loss         elastic                                       no
real-time audio/video    loss-tolerant   audio: 5 kbps-1 Mbps; video: 10 kbps-5 Mbps   yes, 100s of msec
stored audio/video       loss-tolerant   same as above                                 yes, a few secs
interactive games        loss-tolerant   few kbps and up                               yes, 100s of msec
instant messaging        no loss         elastic                                       yes and no

Quiz Consider the following statements: 1. TCP connections are full duplex. 2. TCP has no option for selective acknowledgment. 3. TCP connections are message streams. Determine the correct statements.

Lecture Summary Covered material: the Transport Layer (continued), covering connection-oriented transport (TCP), flow control, connection management, and congestion control; and an introduction to the Application Layer, covering application architectures and the transport services required by an application. Material to be covered next lecture: continuing the Application Layer, and a simple introduction to network security.