Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II


Design Issues in Traffic Management for the ATM UBR+ Service for TCP over Satellite Networks: Report II
Columbus, OH 43210
Jain@CIS.Ohio-State.Edu
http://www.cis.ohio-state.edu/~jain/

Overview
- Statement of Work: TCP over UBR Issues to Study
- Task 2: Drop Policies
- Task 6: TCP Implementation Issues
- Task 7: SACK Optimization
- Task 4a: GFR

Why UBR?
- Cheapest service category for the user
- Basic UBR is very cheap to implement
- Simple enhancements can vastly improve performance
- Expected to carry the bulk of the best-effort TCP/IP traffic

Goals: Issues
1. Analyze Standard Switch and End-system Policies
2. Design Switch Drop Policies
3. Quantify Buffer Requirements in Switches
4. UBR with VBR Background
5. Performance of Bursty Sources
6. Changes to TCP Congestion Control
7. Optimizing the Performance of SACK TCP

Non-Goals
- Does not cover non-UBR issues.
- Does not cover ABR issues.
- Does not include non-TM issues.

Status
1. Analyze Standard Switch and End-system Policies [1]
2. Design Switch Drop Policies [2]
3. Quantify Buffer Requirements in Switches [1]
4. UBR with VBR Background
   4a. Guaranteed Frame Rate [2]
   4b. Guaranteed Rate [1]
5. Performance of Bursty Sources
6. Changes to TCP Congestion Control [2]
7. Optimizing the Performance of SACK TCP [2]
Status: 1 = presented at the last meeting, 2 = presenting now

1. Policies

Switch Policies:
- No EPD
- EPD: Plain EPD, Selective Drop, Fair Buffer Allocation

End-System Policies:
- No FRR
- FRR: New Reno, SACK + New Reno

Policies

TCP End-System Policies:
- Vanilla TCP: Slow Start and Congestion Avoidance
- TCP Reno: Fast Retransmit and Recovery
- Selective Acknowledgments (SACK)

ATM Switch Drop Policies (TCP over UBR):
- Tail Drop
- Early Packet Discard (EPD)
- Per-VC Accounting: Selective Drop / FBA
- Minimum Rate Guarantees: per-VC queuing

1. Policies: Results
- In LANs, switch improvements (PPD, EPD, SD, FBA) have more impact than end-system improvements (slow start, FRR, New Reno, SACK). Different variations of increase/decrease have little impact due to small window sizes.
- In satellite networks, end-system improvements have more impact than switch-based improvements.
- FRR hurts in satellite networks.
- Fairness depends on the switch drop policies, not on the end-system policies.

Policies (Continued)
- In satellite networks:
  - SACK helps significantly.
  - Switch-based improvements have relatively less impact than end-system improvements.
  - Fairness is not affected by SACK.
- In LANs:
  - Previously retransmitted holes may have to be retransmitted again on a timeout, so SACK can hurt under extreme congestion.

4b. Guaranteed Rate: Results
- Guaranteed rate is helpful in WANs.
- For WANs, the effect of reserving 10% bandwidth for UBR is greater than that obtained by EPD, SD, or FBA.
- For LANs, guaranteed rate is not so helpful; drop policies are more important.
- For satellites, end-system policies seem more important.

Past Results: Summary
- For satellite networks, end-system policies (SACK) have more impact than switch policies (EPD).
- Fast retransmit and recovery (FRR) improves performance over LANs but degrades performance over WANs and satellites.
- 0.5*RTT buffers provide sufficiently high efficiency (98% or higher) for SACK TCP over UBR, even for a large number of TCP sources.
- Reserving a small fraction of capacity for UBR helps it a lot in satellite networks.

TCP over UBR: Past Results
- For zero TCP loss, buffers needed = sum of the TCP windows.
- Poor performance with limited buffers.
- EPD improves efficiency but not fairness.
- In high delay-bandwidth paths, too many packets are lost, so EPD has little effect in satellite networks.

2. Switch Drop Policies
- Selective Drop
- Fair Buffer Allocation

UBR: Selective Drop

[Buffer diagram: occupancy X in a buffer of size K with drop threshold R; packets may be dropped when X > R, no packets are dropped below R.]

- K = buffer size (cells)
- R = drop threshold
- X = buffer occupancy
- EPD: when X > R, new incoming packets are dropped. Partially received packets are accepted if possible.

Selective Drop (Cont)
- Na = number of active VCs in the buffer
- Fair allocation = X / Na
- Per-VC accounting: Xi = number of cells of VC i in the buffer
- Buffer load ratio of VC i = Xi / (X / Na)
- Selective Drop: drop the complete packet of VC i if (X > R) AND (Xi / (X / Na) > Z)
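The Selective Drop test above fits in a few lines. This is a minimal sketch (function and variable names are ours, not from the contribution):

```python
def selective_drop(X, R, Z, x_i, n_active):
    """Return True if the complete incoming packet of VC i should be dropped.

    X        - total buffer occupancy (cells)
    R        - drop threshold (e.g. 0.9 * K)
    Z        - load-ratio cutoff (e.g. 0.8)
    x_i      - cells of VC i currently in the buffer (per-VC accounting)
    n_active - number of active VCs in the buffer (Na)
    """
    fair_share = X / n_active          # fair allocation X / Na
    # Drop only when the buffer is past the threshold AND this VC is
    # holding more than Z times its fair share of the buffer.
    return X > R and (x_i / fair_share) > Z
```

For example, with R = 900 and Z = 0.8, a VC holding 600 of 950 occupied cells (2 active VCs) is over threshold and over its share, so its packets are dropped; at X = 500 nothing is dropped.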

The Simulation Experiment

[Configuration: N sources to N destinations through two switches; all links 155 Mbps; y = satellite hop.]

- Buffer size (cells): LAN: 1k, 3k. WAN: 12k, 36k. Satellite: 200k, 600k.
- RTT: LAN: 30 µs, WAN: 30 ms, satellite: 570 ms.
- Efficiency = (sum of throughputs) / (max possible throughput)
- Fairness = (Σ xi)^2 / (n Σ xi^2), where xi = throughput of the ith TCP
- MSS (bytes): 512 (LAN, WAN), 9180 (satellite)

TCP Parameters
- TCP maximum window size: LAN: 64 KB. WAN: 600,000 bytes. Satellite: 8.7 million bytes.
- MSS = 512 bytes (LANs and WANs), 9180 bytes (satellites)
- No TCP delayed-ack timer
- All processing delays and delay variations = 0
- TCP sources are unidirectional
- TCP timer granularity = 100 ms

Efficiency

Configuration  TCP      UBR   EPD   Selective Drop
LAN            Vanilla  0.34  0.67  0.84
LAN            Reno     0.69  0.97  0.97
LAN            SACK     0.79  0.89  0.95
WAN            Vanilla  0.91  0.90  0.91
WAN            Reno     0.78  0.86  0.81
WAN            SACK     0.94  0.91  0.95
Satellite      Vanilla  0.79  0.77  0.78
Satellite      Reno     0.57  0.16  0.17
Satellite      SACK     0.93  0.80  0.86

Fairness

Configuration  TCP      UBR   EPD   Selective Drop
LAN            Vanilla  0.69  0.69  0.92
LAN            Reno     0.71  0.98  0.99
LAN            SACK     0.54  0.84  0.97
WAN            Vanilla  0.76  0.95  0.94
WAN            Reno     0.90  0.97  0.99
WAN            SACK     0.98  0.97  0.97
Satellite      Vanilla  1.00  0.94  0.95
Satellite      Reno     0.98  0.99  0.99
Satellite      SACK     1.00  0.92  0.97

SD: Effect of Parameters

[Plot: fairness vs. efficiency for different parameter settings.]

- Tradeoff between efficiency and fairness
- The scheme is sensitive to parameters
- Best values: Z = 0.8, R = 0.9*K

Fair Buffer Allocation (FBA)

[Buffer diagram: occupancy X in a buffer of size K with drop threshold R; packets may be dropped when X > R, no packets are dropped below R.]

- Drop the complete packet of VC i if (X > R) AND (Xi * Na / X > W(X)), where
  W(X) = Z * (K - R) / (X - R)
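FBA's dynamic cutoff W(X) falls from a large value toward Z as occupancy rises from R toward K, so the test gets stricter as the buffer fills. A minimal sketch of the test (names are ours):

```python
def fba_drop(X, K, R, Z, x_i, n_active):
    """Return True if the complete packet of VC i should be dropped under FBA.

    X        - total buffer occupancy (cells), K - buffer size (cells)
    R        - drop threshold, Z - scaling parameter
    x_i      - cells of VC i in the buffer, n_active - active VCs (Na)
    """
    if X <= R:
        return False                   # below threshold: accept everything
    # Dynamic cutoff: large just above R, approaching Z as X approaches K.
    w = Z * (K - R) / (X - R)
    # Drop when VC i's normalized share Xi*Na/X exceeds the cutoff.
    return (x_i * n_active / X) > w
```

With K = 1000, R = 500, Z = 0.8 and a VC holding 500 cells (2 active VCs): at X = 750 the cutoff is 1.6 and the share 1.33, so the packet is kept; at X = 900 the cutoff tightens to 1.0 while the share is 1.11, so it is dropped.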

FBA: Effect of Parameters

[Plot: fairness vs. efficiency for different parameter settings.]

- Tradeoff between efficiency and fairness
- The scheme is sensitive to parameters
- Best values: Z = 0.8, R = 0.5*K

UBR + EPD + FBA

                        UBR          EPD          SD           FBA
Conf. Srcs  Buffers     Eff.  Fairn. Eff.  Fairn. Eff.  Fairn. Eff.  Fairn.
LAN    5    1000        0.21  0.68   0.49  0.57   0.75  0.99   0.88  0.98
LAN    5    2000        0.32  0.90   0.68  0.98   0.85  0.96   0.84  0.98
LAN    5    3000        0.47  0.97   0.72  0.84   0.90  0.99   0.92  0.97
LAN    15   1000        0.22  0.31   0.55  0.56   0.76  0.76   0.91  0.97
LAN    15   2000        0.49  0.59   0.81  0.87   0.82  0.98   0.85  0.96
LAN    15   3000        0.47  0.80   0.91  0.78   0.94  0.94   0.95  0.93
WAN    5    12000       0.86  0.75   0.90  0.94   0.90  0.95   0.95  0.94
WAN    5    24000       0.90  0.83   0.91  0.99   0.92  0.99   0.92  1.00
WAN    5    36000       0.91  0.86   0.81  1.00   0.81  1.00   0.81  1.00
WAN    15   12000       0.96  0.67   0.92  0.93   0.94  0.91   0.95  0.97
WAN    15   24000       0.94  0.82   0.91  0.92   0.94  0.97   0.96  0.98
WAN    15   36000       0.92  0.77   0.96  0.91   0.96  0.89   0.95  0.97

- FBA improves both efficiency and fairness.
- The effect of FBA is similar to that of SD; SD is simpler.

Drop Policies: Results
- Low efficiency and fairness for TCP over UBR.
- Need switch buffers = Σ(TCP maximum window sizes) for zero TCP loss.
- EPD improves efficiency but not fairness.
- Selective Drop improves fairness.
- Fair Buffer Allocation improves both efficiency and fairness, but is sensitive to parameters.
- TCP synchronization affects performance.

6. Problem in TCP Implementations
- Linear increase in segments: CWND/MSS = CWND/MSS + MSS/CWND
- In bytes: CWND = CWND + MSS*MSS/CWND
- All computations are done in integer arithmetic.
- If CWND is large, MSS*MSS/CWND is zero and CWND does not change. CWND stays at 512*512 bytes, or 256 KB.

Solutions
- Solution 1: Increment CWND after N acks (N > 1): CWND = CWND + N*MSS*MSS/CWND
- Solution 2: Use a larger MSS on satellite links such that MSS*MSS > CWND (but this may require MSS > path MTU).
- Solution 3: Use floating point.
- Recommendation: Use Solution 1. It works for all MSSs.
- To do: Does this change TCP dynamics and adversely affect performance?
- Result: Solution 1 works. TCP dynamics are not affected.
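The integer-truncation problem and Solution 1 can be seen directly in a few lines of integer arithmetic (a sketch; the constant 300000 is just an illustrative large window, not from the slides):

```python
MSS = 512  # bytes, the LAN/WAN segment size used in these experiments

def cwnd_linear_increase(cwnd, n=1):
    """One linear-increase step in integer arithmetic, byte units:
    CWND = CWND + N*MSS*MSS/CWND (Solution 1 uses N > 1)."""
    return cwnd + (n * MSS * MSS) // cwnd

# Once CWND exceeds MSS*MSS = 262144 bytes (256 KB), the per-ack term
# MSS*MSS/CWND truncates to zero and the window freezes:
stuck = cwnd_linear_increase(300000, n=1)   # 300000 + 262144//300000 = +0
# Counting N = 8 acks before incrementing keeps the window growing:
grows = cwnd_linear_increase(300000, n=8)   # 300000 + 2097152//300000 = +6
```

This is why Solution 1 works for any MSS: choosing N large enough makes N*MSS*MSS exceed CWND, so the integer quotient is nonzero again.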

7. Optimize SACK TCP
- SACK helps only if retransmitted packets are not lost.
- Currently TCP retransmits immediately after 3 duplicate acks (fast retransmit), and then waits RTT/2 for congestion to subside.
- The network may still be congested, so retransmitted packets are lost.
- Proposed solution: delay the retransmit by RTT/2, i.e., wait RTT/2 first and then retransmit.
- New result: delayed retransmit does not help.

4a. Guaranteed Frame Rate (GFR)
- UBR with minimum cell rate (MCR): UBR+
- Frame-based service:
  - Complete frames are accepted or discarded in the switch.
  - Traffic shaping is frame based: all cells of a frame have CLP = 0 or CLP = 1.
  - All frames below MCR are given CLP = 0 service. All frames above MCR are given best-effort (CLP = 1) service.
- Allocation of excess bandwidth (over MCR) is arbitrary.

4a. GFR Options
- Queuing: per-VC or FIFO
- Buffer management: per-VC thresholds or global threshold
- Tag-sensitive buffer management: 2 thresholds or 1 threshold

Options (Cont)
- FIFO queuing versus per-VC queuing:
  - Per-VC queuing is too expensive.
  - FIFO queuing should work by setting thresholds based on bandwidth allocations.
- Buffer management policies:
  - Per-VC accounting policies need to be studied.
- Network tagging and end-system tagging:
  - End-system tagging can prioritize certain cells or cell streams.
  - Network tagging is used for policing; it must be requested by the end system.

GFR: Results
- Per-VC queuing and scheduling is sufficient for per-VC MCR.
- FBA with proper scheduling is sufficient for fair allocation of excess bandwidth.
- One global threshold is sufficient for CLP0+1 guarantees; two thresholds are necessary for CLP0 guarantees.

Issues
- All FIFO queuing cases were studied with a high target network load, i.e., most of the network bandwidth was allocated as GFR.
- Need to study cases with a lower percentage of network capacity allocated to GFR VCs.

Further Study: Goals
- Provide minimum rate guarantees with FIFO buffers for TCP/IP traffic.
- Guarantees in the form of TCP throughput, not cell rate (MCR).
- How much network capacity can be allocated before guarantees can no longer be met?
- Study rate allocations for VCs carrying aggregate TCP flows.

TCP Window Control

[Plot: window oscillating between max wnd and (max wnd)/2 over n RTTs.]

- For TCP window-based flow control (in the linear phase):
  Throughput = (avg wnd) / (round-trip time)
- With Selective Ack (SACK), the window decreases by half on a packet loss, and then increases linearly:
  avg wnd = [Σ_{i=1..n} (max wnd / 2 + i * mss)] / n
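The average-window formula above can be evaluated directly. A sketch (names ours) under the assumption that one cycle spans the n = (max wnd / 2) / mss round trips it takes the window to grow back from half to full:

```python
def sack_avg_window(max_wnd, mss):
    """Average window over one SACK sawtooth cycle, per the formula above:
    after a loss the window halves to max_wnd/2, then grows by one MSS
    per round trip for n = (max_wnd/2)/mss round trips."""
    n = int((max_wnd / 2) / mss)
    return sum(max_wnd / 2 + i * mss for i in range(1, n + 1)) / n

def tcp_throughput(max_wnd, mss, rtt):
    """Linear-phase throughput = average window / round-trip time."""
    return sack_avg_window(max_wnd, mss) / rtt
```

Numerically the average works out to roughly three-quarters of the maximum window, e.g. sack_avg_window(64000, 1000) gives 48500.0 bytes.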

FIFO Buffer Management

[Diagram: shared FIFO buffer; Xi/X determines µi/µ.]

- The fraction of buffer occupancy (Xi / X) determines the fraction of the output rate (µi / µ) for VC i.
- Maintaining average per-VC buffer occupancy enables control of per-VC output rates.
- Set a threshold (Ri) for each VC.
- When Xi exceeds Ri, control that VC's buffer occupancy.

Buffer Management for TCP
- TCP responds to packet loss by reducing CWND by one-half.
  - When the ith flow's buffer occupancy exceeds Ri, drop a single packet.
  - Allow the buffer occupancy to decrease below Ri, and then repeat the above step if necessary.
- K = total buffer capacity
- Target utilization = Σ Ri / K
- Guaranteed TCP throughput = capacity * Ri / K
- Expected throughput: µi = µ * Ri / Σ Ri  (µ = Σ µi)

Simulation Configuration

[N sources to N destinations through two switches; each hop 1000 km.]

- SACK TCP
- 15 TCP sources (N = 15)
- Buffer size K = 48000 cells
- 5 thresholds (R1, ..., R5)

Configuration (contd.)

Sources       Expt 1  Expt 2  Expt 3  Expt 4  Expected Throughput
1-3   (R1)    305     458     611     764     2.8 Mbps
4-6   (R2)    611     917     1223    1528    5.6 Mbps
7-9   (R3)    917     1375    1834    2293    8.4 Mbps
10-12 (R4)    1223    1834    2446    3057    11.2 Mbps
13-15 (R5)    1528    2293    3057    3822    14.0 Mbps
Σ Ri / K      29%     43%     57%     71%

- Threshold Ri ≈ K * MCRi / PCR
- Total throughput µ = 126 Mbps. MSS = 1024 bytes.
- Expected throughput = µ * Ri / Σ Ri
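The "Expected Throughput" column follows from the formula µ * Ri / Σ Ri. A quick check against Experiment 1 (three TCP sources share each threshold Ri, so the sum runs over all 15 sources):

```python
mu = 126.0                                # total TCP throughput, Mbps
thresholds = [305, 611, 917, 1223, 1528]  # R1..R5 in cells (Experiment 1)
total = 3 * sum(thresholds)               # sum of Ri over all 15 sources

# Expected per-source throughput for a source in group i: mu * Ri / sum(Ri)
expected = [round(mu * r / total, 1) for r in thresholds]
# expected -> [2.8, 5.6, 8.4, 11.2, 14.0] Mbps, matching the table
```

The same ratios hold for Experiments 2 through 4, since the thresholds are scaled uniformly.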

Simulation Results

Throughput ratio (observed / expected):

TCP Number   Expt 1  Expt 2  Expt 3  Expt 4
1-3          1.00    1.03    1.02    1.08
4-6          0.98    1.01    1.03    1.04
7-9          0.98    1.00    1.00    1.02
10-12        0.98    0.99    0.98    0.88
13-15        1.02    0.98    0.97    1.01

- All ratios are close to 1. Variation increases with utilization.
- All sources experience similar queuing delays.

TCP Window Control
- TCP throughput can be controlled by controlling the window.
- FIFO buffer: relative throughput per connection is proportional to its fraction of the buffer occupancy.
- Controlling TCP buffer occupancy can therefore control throughput.
- High buffer utilization makes throughput harder to control.
- The formula does not hold for very low buffer utilization: TCP windows become very small, and SACK TCP times out if half the window is lost.

Differential Fair Buffer Allocation (DFBA)

Thresholds on buffer occupancy X (buffer size K):
- X ≤ L: no loss
- X > L: drop all CLP1
- L < X < H and Xi > X * Wi / W: probabilistic loss of CLP0
- X > H: EPD

- Wi = weight of VC i = MCRi / (GFR capacity)
- W = Σ Wi
- L = low threshold, H = high threshold
- Xi = per-VC buffer occupancy (X = Σ Xi)
- Zi = parameter (0 ≤ Zi ≤ 1)

DFBA Operating Region

[Plot: throughput and delay vs. load; the desired operating region of buffer occupancy X lies between the knee (L) and the cliff (H).]

DFBA (contd.)

Regions of buffer occupancy:
1. X < L
2. (L < X < H) AND Xi * (W / Wi) > X
3. (L < X < H) AND Xi * (W / Wi) < X
4. X > H

DFBA (contd.)

Region  Condition                               Action
1       Underload                               Improve efficiency
2       Mild congestion, more than fair share   Drop low-priority packets, bring down to fair share
3       Mild congestion, less than fair share   Drop low-priority packets, bring up to fair share
4       Severe congestion                       Reduce load

DFBA Algorithm

When the first cell of a frame arrives:
- IF (X < L) THEN accept the frame
- ELSE IF (X > H) THEN drop the frame
- ELSE IF ((L < X < H) AND (Xi > X * Wi / W)) THEN
  - Drop CLP1 frames
  - Drop CLP0 frames with probability
    P{Drop} = Zi * [ α * (Xi − X*Wi/W) / (X * (1 − Wi/W)) + (1 − α) * (X − L) / (H − L) ]

Drop Probability
- Fairness component (VC i's fair share = X * Wi / W):
  (Xi − X*Wi/W) / (X * (1 − Wi/W))
  increases linearly as Xi increases from X*Wi/W to X.
- Efficiency component:
  (X − L) / (H − L)
  increases linearly as X increases from L to H.
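The two components combine into a single drop probability for CLP0 frames in the mild-congestion region. A minimal sketch (names ours):

```python
def dfba_drop_prob(X, x_i, L, H, w_i, w, z_i, alpha=0.5):
    """DFBA drop probability for a CLP0 frame of VC i when L < X < H
    and VC i is above its fair share X*Wi/W.

    X, x_i - total and per-VC buffer occupancy (cells)
    L, H   - low and high thresholds; w_i, w - VC weight and total weight
    z_i    - per-VC scaling parameter (0..1); alpha - mixing factor
    """
    fair_share = X * w_i / w
    # Fairness term: 0 when Xi is at its fair share, 1 when Xi = X.
    fairness = (x_i - fair_share) / (X * (1 - w_i / w))
    # Efficiency term: 0 at the low threshold L, 1 at the high threshold H.
    efficiency = (X - L) / (H - L)
    return z_i * (alpha * fairness + (1 - alpha) * efficiency)
```

For example, with L = 500, H = 1000, X = 800, Xi = 500 and Wi/W = 0.2, the fairness term is 0.53, the efficiency term 0.6, giving P{Drop} ≈ 0.57 for Zi = 1.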

Drop Probability (contd.)
- Zi allows scaling of the total probability function:
  - A higher drop probability results in lower TCP windows.
  - TCP window size W ∝ 1/√P{Drop} for random packet loss [Mathis]:
    TCP data rate ≈ MSS / (RTT * √P{drop})
  - To maintain a high TCP data rate for a large RTT: small P{drop}, large MSS.
- Choose small Zi for satellite VCs.
- Choose small Zi for VCs with larger MCRs.
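The Mathis approximation quoted above is easy to state in code; this sketch (names and the constant C are ours) shows why long-RTT VCs need a small drop probability or a large MSS:

```python
import math

def mathis_rate(mss, rtt, p, c=1.0):
    """Mathis et al. steady-state TCP rate for random loss probability p:
    rate ~ C * MSS / (RTT * sqrt(p)), with C a constant near 1."""
    return c * mss / (rtt * math.sqrt(p))
```

At a GEO-like RTT, raising the MSS from 512 to 9180 bytes raises the achievable rate by the same factor (about 18x), which is the motivation for choosing small Zi (hence small P{drop}) on satellite VCs.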

DFBA Simulation Configuration

[Configuration: 100 TCP sources in groups of 20, merged by local switches onto 5 VCs (VC1..VC5) that cross a backbone link; access hop = x km, backbone hop = y km.]

DFBA Simulation Configuration
- SACK TCP, 50 and 100 TCP sources
- 5 VCs through the backbone link
- Local switches merge the TCP sources
- x = access hop = 50 µs (campus) or 250 ms (GEO)
- y = backbone hop = 5 ms (WAN or LEO) or 250 ms (GEO)
- GFR capacity = 353.207 kcells/sec (≈ 155.52 Mbps link)
- α = 0.5

Simulation Configuration (contd.)
- 50 TCPs with 5 VCs (50% MCR allocation):
  - MCRi = 12, 24, 36, 48, 60 kcells/sec, for i = 1, 2, 3, 4, 5
  - Wi = 0.034, 0.068, 0.102, 0.136, 0.170
  - Σ (MCRi / GFR capacity) = Σ Wi = W ≈ 0.5

Simulation Configuration (contd.)
- 50 and 100 TCPs with 5 VCs (85% MCR allocation):
  - MCRi = 20, 40, 60, 80, 100 kcells/sec, for i = 1, 2, 3, 4, 5
  - Wi = 0.0566, 0.1132, 0.1698, 0.2264, 0.283
  - Σ (MCRi / GFR capacity) = Σ Wi = W ≈ 0.85

Simulation Results

MCR (Mbps)  Achieved Throughput (Mbps)  Excess  Excess/MCR
4.61        11.86                       7.25    1.57
9.22        18.63                       9.42    1.02
13.82       24.80                       10.98   0.79
18.43       32.99                       14.56   0.79
23.04       38.60                       15.56   0.68
69.12       126.88                      57.77

- 50 TCPs with 5 VCs (50% MCR allocation)
- Switch buffer size = 25 kcells
- Zi = 1 for all i
- MCR guaranteed. Lower MCRs get a higher excess.

Effect of MCR Allocation

MCR (Mbps)  Achieved Throughput (Mbps)  Excess  Excess/MCR
7.68        12.52                       4.84    0.63
15.36       18.29                       2.93    0.19
23.04       25.57                       2.53    0.11
30.72       31.78                       1.06    0.03
38.40       38.72                       0.32    0.01
115.2       126.88                      11.68

- 50 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 25 kcells
- Zi = 1 for all i
- MCR guaranteed. Lower MCRs get a higher excess.

Effect of Number of TCPs

MCR (Mbps)  Achieved Throughput (Mbps)  Excess  Excess/MCR
7.68        11.29                       3.61    0.47
15.36       18.19                       2.83    0.18
23.04       26.00                       2.96    0.13
30.72       32.35                       1.63    0.05
38.40       39.09                       0.69    0.02
115.2       126.92                      11.72

- 100 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 25 kcells
- Zi = 1 for all i
- Results are independent of the number of sources.

Buffer Occupancy

[Plot: per-VC buffer occupancy (cells) vs. time.]

- 100 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 25 kcells
- Queues are approximately proportional to the MCRs.

Effect of Buffer Size

MCR (Mbps)  Achieved Throughput (Mbps)  Excess  Excess/MCR
7.68        11.79                       4.11    0.54
15.36       18.55                       3.19    0.21
23.04       25.13                       2.09    0.09
30.72       32.23                       1.51    0.05
38.40       38.97                       0.57    0.01
115.2       126.67                      11.47

- 100 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 6 kcells (small)
- Zi = 1 for all i
- MCR guaranteed. Lower MCRs get a higher excess.

Buffer Size (Cont)

MCR (Mbps)  Achieved Throughput (Mbps)  Excess  Excess/MCR
7.68        10.02                       2.34    0.30
15.36       19.31                       3.95    0.26
23.04       25.78                       2.74    0.12
30.72       32.96                       2.24    0.07
38.40       38.56                       0.16    0.00
115.2       126.63                      11.43

- 100 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 3 kcells (small)
- Zi = 1 for all i
- MCR guaranteed. Lower MCRs get a higher excess.

Effect of Zi

Zi = 1 − Wi/W         Zi = (1 − Wi/W)^2
Excess  Excess/MCR    Excess  Excess/MCR
3.84    0.50          0.53    0.07
2.90    0.19          2.97    0.19
2.27    0.10          2.77    0.12
2.56    0.08          2.39    0.08
0.02    0.02          3.14    0.08

- 100 TCPs with 5 VCs (85% MCR allocation)
- Switch buffer size = 6 kcells
- A small Zi for large-MCR VCs enables MCR-proportional sharing of the excess capacity.

Summary
- Task 2: Design switch drop policies:
  - Selective Drop and Fair Buffer Allocation improve fairness and efficiency.
  - FBA is more sensitive to parameters than SD.
- Task 6: Changes to TCP congestion control:
  - Incrementing CWND after N acks works OK.

Summary (Cont)
- Task 7: Optimizing SACK TCP:
  - Delayed retransmit has no effect.
- Task 4a: Guaranteed Frame Rate:
  - SACK TCP throughput may be controlled with FIFO queuing under certain circumstances:
    - TCP, SACK (?)
    - Σ MCRs < GFR capacity
    - Same RTT (?), same frame size (?)
    - No other non-TCP or higher-priority traffic (?)
  - New buffer management policy: DFBA

References
- All our contributions and papers are available online at http://www.cis.ohio-state.edu/~jain/
- See "Recent Hot Papers" for tutorials.
- Tasks 1 and 2: Analyze and design switch and end-system policies; UBR drop policies.
  Rohit Goyal, et al., "Improving the Performance of TCP over the ATM-UBR Service," to appear in Computer Communications, http://www.cis.ohio-state.edu/~jain/papers/cc.htm

References (Cont)
- Task 3: Buffer requirements for various delay-bandwidth products.
  Rohit Goyal, et al., "Analysis and Simulation of Delay and Buffer Requirements of Satellite-ATM Networks for TCP/IP Traffic," submitted to IEEE Journal on Selected Areas in Communications, March 1998, http://www.cis.ohio-state.edu/~jain/papers/jsac98.htm

References (Cont)
- Task 4: UBR with GR and GFR.
  Rohit Goyal, et al., "Design Issues for Providing Minimum Rate Guarantees to the ATM Unspecified Bit Rate Service," Proceedings of ATM'98, May 1998, http://www.cis.ohio-state.edu/~jain/papers/atm98.htm
  Rohit Goyal, et al., "Providing Rate Guarantees to TCP over the ATM GFR Service," submitted to LCN '98, http://www.cis.ohio-state.edu/~jain/papers/lcn98.htm

Thank You

This research was sponsored by NASA Lewis Research Center.