Proc. Joint Int'l Conf. IEEE MICC 2001, LiSLO 2001, ISCE 2001, Kuala Lumpur, Malaysia, Oct 1-4, 2001

Random Early Drop with In & Out (RIO) for Asymmetrical Geostationary Satellite Links

Tat Chee Wan, Member, IEEE ComSoc, and Swee Keong Joo

Abstract -- A model of the TCP/IP stack over asymmetrical geostationary satellite links has been developed to evaluate support for prioritized and non-prioritized TCP traffic over the satellite link. Traffic prioritization is needed to provide end-to-end support for Differentiated Services (DiffServ) in a hybrid terrestrial and satellite-based network. Various parameters, such as router buffer size, TCP window size, and the performance of Random Early Drop with In & Out (RIO) over the satellite link, are examined.

Index terms -- RIO, Satellite Communications, TCP/IP Throughput Enhancement, TCP Traffic Prioritization, Network Simulation.

I. INTRODUCTION

Geostationary satellites have been used extensively for wide-area communication links carrying voice, television and high-bandwidth data [1]. Nonetheless, the large propagation delays of such links (on the order of 500 ms) adversely affect the performance of windowed flow-control protocols such as TCP/IP [2],[3],[7]. In addition, the use of asymmetrical bandwidths in the forward and reverse links (Figure 1) places an additional constraint on the achievable throughput, especially if Acknowledgement (ACK) packets have to compete with other traffic for the limited reverse link bandwidth.

Figure 1: Asymmetrical Forward and Reverse Satellite Links (a Sender transmits to a Receiver over the forward satellite link; ACKs return over the lower-bandwidth reverse link)

Satellite links are also prone to high Bit Error Rates (BER). Link-level retransmission, while able to address the BER issue, introduces additional delays that may interfere with the flow-control function of TCP [5],[6]. More recently, the use of sophisticated Forward Error Control (FEC) techniques such as Reed-Solomon and Turbo Coding [8] has been recommended to enable satellite links to approach the BER performance of copper-based terrestrial links, at the expense of higher processing delays [5].

Several approaches have been proposed to address the issues inherent in satellite links with large Bandwidth-Delay Products (BDP) [2]-[7]. ACK-spoofing [5] is less desirable as it affects the TCP end-to-end semantics. This paper focuses on approaches that do not affect the end-to-end semantics of TCP. These backward-compatible schemes generally involve modifications to the TCP Congestion Window Slow Start Threshold (ssthresh) size, maximum packet sizes (Path MTU Discovery), and large router buffer provisioning (to cater for the large BDP) [5].

II. RIO FOR SATELLITE LINKS

Quality of Service provisioning using DiffServ is being promoted as the approach for traffic prioritization over the Internet [9]. DiffServ partitions service classes into Premium, Assured and Best Effort services. While the rigid QoS requirements of the Premium service may preclude its adoption over satellite links, the Assured and Best Effort service classes are supported using statistical flow-shaping algorithms such as Random Early Drop (RED) and RED with In and Out (RIO) [9]. The RIO mechanism is enforced at the routers connecting both ends of the satellite link. In this paper, we investigate the effect of RIO on the satellite link, given different values of forward link bandwidth, packet (MTU) size, and ssthresh [10].

Tat Chee Wan (tcwan@cs.usm.my) is with the Network Research Group, School of Computer Sciences, University of Science Malaysia, 11800 Minden, Penang, Malaysia. Swee Keong Joo (skjoo@visto.com) was a Research Officer with the Network Research Group, and is currently with Motorola Multimedia (M) Sdn. Bhd. in Kuala Lumpur, Malaysia.
Figure 2: RIO Dropping Mechanism for two traffic classes (dropping probability P(drop) vs. average queue size, showing the graduated dropping regions for Non-Prioritized (Out) and Prioritized (In) traffic bounded by min_th, max_th and total_th, the Max. Dropping Probability, and the tail-dropping region)

RIO performs graduated random packet dropping based on the router queue size (Figure 2). The dropping probability for Non-Prioritized (Out) traffic is bounded by the queue size thresholds min_th and max_th, where the probability increases from 0 to the Max. Dropping Probability. Beyond max_th, all Out packets are dropped. Prioritized (In) traffic experiences graduated packet drops from max_th to total_th (the maximum queue size). Beyond total_th, tail dropping occurs, where all packets (In and Out) are dropped due to buffer overflow.

III. TCP THROUGHPUT OVER SATELLITE LINKS

A simulation model was developed in PARSEC [11] to simulate TCP performance over a single-hop satellite link. The model implements link-level FEC support to provide reliable transmission even for a link BER of up to 10^-3. The model implements Congestion Window growth until the Slow Start Threshold (ssthresh) is reached. Incremental window growth beyond ssthresh is not modeled, since it does not contribute significantly to throughput for larger window sizes. The validation test parameters are summarized in Table 2.

Round Trip Time (RTT)                        570 ms
Forward Link Bandwidth (μ)                   512 Kbps
Reverse Link Bandwidth                       1.5 Mbps
Packet Size                                  1380 bytes (MTU = 1500)
Router Queue Size (R_s) (α = 1, 2, 3.5, 4)   18, 36, 62, 71 Kbytes
Slow Start Threshold (ssthresh)              16, 32, 64, 96, 128 Kbytes
IPERF Link Raw BER                           10^-3
IPERF Satellite Link FEC                     Reed-Solomon 2/3 Coding
IPERF Effective BER                          < 10^-7
IPERF Router Queue Size (R_s)                16 Kbytes
IPERF Slow Start Threshold (ssthresh)        16, 32, 64, 96, 128 Kbytes

Table 2: Parameters for Satellite Link Validation Test

The analytical network model in Figure 3 was adopted to compare the impact of different ssthresh values on TCP throughput over the satellite link [5].

Figure 3: TCP Analytical Network Model (a Source feeds a router Buffer (B = R_s) connected to the Destination via the forward link of bandwidth μ, with round trip time T)

A suitable value for ssthresh to achieve fast convergence of the TCP windowing function is given as [5]:

    ssthresh <= (B + μT) / 2                    (1)

where B = Buffer (Router Queue) Size = R_s, μ = Forward Link Bandwidth, and T = Round Trip Time (RTT). R_s is calculated based on the maximum number of packets in flight (saturated link capacity μT/2):

    R_s = α μT / 2                              (2)

where α = scaling factor. The parameters used for simulation are given in Table 1.
Round Trip Time (T)                  570 ms
Forward Link Bandwidth (μ)           512, 1024, 2048 Kbps
Reverse Link Bandwidth               512 Kbps
Packet Size                          Variable
Router Queue Size (R_s)              Variable
Slow Start Threshold (ssthresh)      Variable

Table 1: General Simulation Parameters for Satellite Link Model

A. Model Validation

The output of the simulation model was compared against the measured throughput over a single-hop asymmetric satellite link using IPERF (http://www.iperf.org). Since the routers used were configured with default TCP parameters (ssthresh, buffer size), reliable measured results could only be obtained for a forward link bandwidth of 512 Kbps.

Figure 4: Validation of TCP Throughput over 512 Kbps Satellite Link using MTU=1500 bytes

As shown in Figure 4, the simulation model represents the idealized upper bound for TCP throughput over the satellite link. It is comparable to the IPERF results for ssthresh of 64 Kbytes and above. The analytical values for R_s and ssthresh are:

    R_s = (512 Kbps / 8) x 570 ms / 2 ~ 17.8 Kbytes                      (3)

for α = 1 (taking 1 Kbyte = 1024 bytes), and

    ssthresh <= (17.8 Kbytes + (512 Kbps / 8) x 570 ms) / 2 ~ 26.7 Kbytes    (4)

Figure 5: Simulated TCP Throughput over 512 Kbps Satellite Link using MTU=1500 bytes (expanded)
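The threshold values in (3) and (4) can be checked with a short script (a sketch; binary Kbytes, 1 Kbyte = 1024 bytes, are assumed, matching the paper's figures):

```python
# Verify R_s (eq. 2, alpha = 1) and ssthresh (eq. 1) for the 512 Kbps link.
MU_KBPS = 512                            # forward link bandwidth (Kbps)
T = 0.570                                # round trip time (s)

rate_kB = MU_KBPS * 1000 / 8 / 1024      # link rate in Kbytes/s (62.5)

mu_T = rate_kB * T                       # bandwidth-delay product (Kbytes)
r_s = mu_T / 2                           # eq. (2) with alpha = 1
ssthresh = (r_s + mu_T) / 2              # eq. (1) with B = R_s

print(round(r_s, 1), round(ssthresh, 1))   # -> 17.8 26.7
```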
Figure 5 (an expanded view of Figure 4) shows the impact of α on TCP throughput: while throughput increases as α is varied from 1 to 4, it shows only a marginal increase from α = 3.5 to α = 4. Consequently, α = 3.5 was used for subsequent simulations to simplify the calculation of R_s:

    R_s = 3.5 μT / 2                            (5)

for α = 3.5 and T = 570 ms.

Figure 4 also shows that the IPERF throughput does not increase significantly when a window size greater than 64 Kbytes is used for the 512 Kbps forward link. It should be noted that the model does not handle the case of small window sizes (16 and 32 Kbytes) well, as it does not account for other factors such as coding delays (introduced by the Reed-Solomon FEC) and processing delays due to the number of context switches incurred with small window sizes. However, since we are interested primarily in large window sizes to effectively utilize the large link BDP, this limitation is not critical to the simulation results of interest.

B. Impact of Router Queue Size (R_s) on TCP Throughput

Single TCP stream simulations were performed using different values of router queue size, from a minimum of R_s (α = 3.5) to a maximum of 286 Kbytes. It was determined that router queue sizes beyond R_s (α = 3.5) had no significant impact on TCP throughput. Consequently, router queue sizes for subsequent simulations were specified based on R_s (α = 3.5).

C. Flow Control Window and Packet Sizes

Round Trip Time (T)                     570 ms
Forward Link Bandwidth (μ)              512, 1024, 2048 Kbps
Reverse Link Bandwidth                  512 Kbps
Packet Size                             512, 1024, 2048 bytes
Router Queue Size (R_s) (α = 3.5)       R_s Kbytes
Slow Start Threshold (ssthresh)         16, 32, 64, 128, (B + μT)/2 Kbytes

Table 3: Parameters for the two competing Normal (Non-Prioritized) streams scenario

R_s (α = 3.5) was used for all simulations. Different values of ssthresh and packet size were used to determine their effect on the throughput experienced by two competing normal (non-prioritized) TCP streams. We also considered the case of ssthresh = (B + μT)/2. This is summarized in Table 4.

Forward Link Bandwidth    R_s (α = 3.5)    ssthresh = (B + μT)/2
512 Kbps                  62 Kbytes        50 Kbytes
1024 Kbps                 125 Kbytes       100 Kbytes
2048 Kbps                 249 Kbytes       200 Kbytes

Table 4: R_s and ssthresh for given Forward Link Bandwidths

As can be seen from the following graphs, the throughput of each competing non-prioritized stream is more or less equal. As larger packet sizes are used, the percentage of bandwidth wasted on header overhead decreases, resulting in improved efficiency. For the 2048-byte packet size with a 120-byte header, the maximum combined link efficiency (combined throughput of streams a and b) of approximately 94% can be attributed directly to the header overhead. Consequently, header compression could be utilized to improve the link efficiency for a given Forward Link Bandwidth. The peak link efficiency occurs when ssthresh = (B + μT)/2. From Figure 6, the maximum combined link throughput (streams a & b) using the 2048-byte packet size over the 512 Kbps link is approximately 480 Kbps; link efficiency is therefore approximately 93.8%.

Figure 6: TCP Throughput for two Normal (Non-Prioritized) streams (a) & (b), for various values of Packet (MTU) size and Congestion Window (ssthresh) size over the 512 Kbps link

From Figure 7, the maximum combined link throughput (streams a & b) using the 2048-byte packet size over the 1024 Kbps link is approximately 965 Kbps; link efficiency is therefore approximately 94.2%.
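The roughly 94% efficiency ceiling can be reproduced from the framing overhead alone. A minimal check (assuming, per the text above, a 120-byte per-packet header):

```python
# Upper bound on link efficiency from per-packet header overhead:
# efficiency = payload / (payload + header).
HEADER = 120  # bytes of per-packet overhead, as stated in the text

efficiency = {p: p / (p + HEADER) for p in (512, 1024, 2048)}
for p, eff in efficiency.items():
    print(f"{p:4d}-byte packets: {eff:.1%}")
# 2048-byte packets give 2048/2168 ~ 94.5%
```

The measured combined efficiencies (93.8% at 512 Kbps, 94.2% at 1024 Kbps) sit just below this 94.5% bound, consistent with attributing the loss to header overhead.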
Figure 7: TCP Throughput for two Normal (Non-Prioritized) streams (a) & (b), for various values of Packet (MTU) size and Congestion Window (ssthresh) size over the 1024 Kbps link

From Figure 8, the maximum combined link throughput (streams a & b) using the 2048-byte packet size over the 2048 Kbps link is approximately 1940 Kbps; link efficiency is therefore approximately 94.7%.
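The R_s and ssthresh values of Table 4 follow directly from (5) and (1). A quick check (a sketch; binary Kbytes assumed — note the computed ssthresh values of roughly 49, 98 and 196 Kbytes, which Table 4 appears to round to 50, 100 and 200):

```python
# Reproduce Table 4: R_s (alpha = 3.5) and ssthresh = (B + mu*T)/2
# for each forward link bandwidth.
T = 0.570       # round trip time (s)
ALPHA = 3.5

table4 = {}
for mu in (512, 1024, 2048):                 # forward link bandwidth, Kbps
    mu_T = mu * 1000 / 8 / 1024 * T          # bandwidth-delay product, Kbytes
    r_s = ALPHA * mu_T / 2                   # eq. (5)
    ssthresh = (r_s + mu_T) / 2              # eq. (1) with B = R_s
    table4[mu] = (round(r_s), round(ssthresh))
    print(f"{mu:4d} Kbps: R_s ~ {round(r_s)} Kbytes, ssthresh ~ {round(ssthresh)} Kbytes")
```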
Figure 8: TCP Throughput for two Normal (Non-Prioritized) streams (a) & (b), for various values of Packet (MTU) size and Congestion Window (ssthresh) size over the 2048 Kbps link

IV. IMPACT OF RIO ON TCP THROUGHPUT

Round Trip Time (T)                     570 ms
Forward Link Bandwidth (μ)              512, 1024, 2048 Kbps
Reverse Link Bandwidth                  512 Kbps
Packet Size                             1024 bytes (MTU = 1144 bytes)
Router Queue Size (R_s) (α = 3.5)       R_s Kbytes
Slow Start Threshold (ssthresh)         16, 32, 64, 96, 128, (B + μT)/2 Kbytes
Max Drop Probability                    0.7
{min_th, max_th} (fraction of R_s)      {0.3, 0.7}, {0.2, 0.8}, {0.1, 0.9}

Table 5: Parameters for the competing Prioritized (P) and Non-Prioritized (NP) streams scenario

The simulations were performed using R_s (α = 3.5) and a packet size of 1024 bytes. Various ssthresh values were used, in conjunction with different dropping thresholds {min_th, max_th} specified as fractions of R_s.

R_s (α = 3.5)    {0.3, 0.7}       {0.2, 0.8}       {0.1, 0.9}
62 KB            19 KB, 43 KB     12 KB, 50 KB     6 KB, 56 KB
125 KB           38 KB, 88 KB     25 KB, 100 KB    13 KB, 113 KB
249 KB           72 KB, 168 KB    48 KB, 192 KB    24 KB, 216 KB

Table 6: Dropping Thresholds for different Router Queue Sizes (R_s)

As seen from the following figures (Figure 9 - Figure 11), when RIO is utilized to perform stream prioritization, peak link efficiency no longer occurs at ssthresh = (B + μT)/2. The prioritized stream (P) achieves maximum stream throughput (P_throughput) for the most aggressive RIO setting {0.1, 0.9} when ssthresh is close to (B + μT)/2. However, max.
P_throughput is not directly tied to this value of ssthresh; the optimum ssthresh for achieving maximum P_throughput depends on the RIO setting. For the less aggressive RIO settings, peak P_throughput is achieved at a window size beyond (B + μT)/2. Furthermore, the Non-Prioritized (NP) stream throughput (NP_throughput) decreases for large ssthresh values.

Figure 9: TCP Throughput for Prioritized (P) and Non-Prioritized (NP) streams, for various RIO Dropping Thresholds and Congestion Window (ssthresh) sizes over the 512 Kbps link

Figure 10: TCP Throughput for Prioritized (P) and Non-Prioritized (NP) streams, for various RIO Dropping Thresholds and Congestion Window (ssthresh) sizes over the 1024 Kbps link
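The RIO dropping discipline of Section II, with thresholds taken as fractions of R_s as in Table 6, can be sketched as follows (a minimal sketch for the Out class only; function names are illustrative, not from the paper's simulator):

```python
import random

def out_drop_prob(avg_q, r_s, frac_min, frac_max, p_max=0.7):
    """Drop probability for a Non-Prioritized (Out) packet under RIO
    (Figure 2): thresholds are fractions of router queue size R_s."""
    min_th, max_th = frac_min * r_s, frac_max * r_s
    if avg_q < min_th:
        return 0.0          # below min_th: never drop
    if avg_q >= max_th:
        return 1.0          # beyond max_th: all Out packets dropped
    # graduated region: probability rises linearly from 0 to p_max
    return p_max * (avg_q - min_th) / (max_th - min_th)

def drop_out_packet(avg_q, r_s, frac_min, frac_max):
    return random.random() < out_drop_prob(avg_q, r_s, frac_min, frac_max)

# Thresholds for the 512 Kbps case (R_s = 62 Kbytes), cf. Table 6:
for lo, hi in ((0.3, 0.7), (0.2, 0.8), (0.1, 0.9)):
    print(f"{{{lo}, {hi}}}: min_th = {lo*62:.0f} KB, max_th = {hi*62:.0f} KB")
```

The printed thresholds reproduce the 62 KB row of Table 6 (19/43, 12/50 and 6/56 KB).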
Figure 11: TCP Throughput for Prioritized (P) and Non-Prioritized (NP) streams, for various RIO Dropping Thresholds and Congestion Window (ssthresh) sizes over the 2048 Kbps link

In Figure 10, there is an increase in NP_throughput for RIO {0.1, 0.9} when ssthresh = (B + μT)/2. This suggests that there are additional non-linear interactions between RIO and ssthresh as ssthresh approaches (B + μT)/2.

For the 2048 Kbps link simulation (Figure 11), packet drops take effect only after the 16 Kbyte window size, resulting in identical throughputs for the RIO and normal streams at ssthresh = 16 Kbytes. There is an inflexion point (Q-point) for P_throughput where the curves for the various RIO settings intersect. Beyond the Q-point, the P_throughput of the less aggressive RIO schemes ({0.3, 0.7}, {0.2, 0.8}) exceeds that of the more aggressive schemes.

Link Efficiency @ ssthresh = (B + μT)/2, MTU = 1144 bytes
% Utilization             512 Kbps    1024 Kbps    2048 Kbps
(P+NP)/μ  {0.3, 0.7}      85.9%       83.9%        84.3%
(P+NP)/μ  {0.2, 0.8}      82.8%       80.6%        79.2%
(P+NP)/μ  {0.1, 0.9}      79.2%       77.1%        72.5%
(a+b)/μ   {Normal}        89.0%       89.4%        88.2%

Table 7: Percentage Utilization (link efficiency) due to TCP Throughput of Prioritized (P) and Non-Prioritized (NP) Streams

From Table 7, it can be seen that the link efficiency (P+NP_throughput, the combined throughput of P and NP) for RIO-prioritized streams drops by at least 3% for RIO {0.3, 0.7}, and by over 10% for RIO {0.1, 0.9}, compared with the normal (non-prioritized) case. In addition, link efficiency drops dramatically when ssthresh exceeds the Q-point. It is therefore not advisable to use ssthresh values beyond the Q-point, since that impacts both P_throughput and link efficiency.

Throughput @ ssthresh = (B + μT)/2, MTU = 1144 bytes
% Difference              512 Kbps    1024 Kbps    2048 Kbps
(P-NP)/NP {0.3, 0.7}      49%         58%          57%
(P-NP)/NP {0.2, 0.8}      73%         88%          88%
(P-NP)/NP {0.1, 0.9}      108%        133%         147%
(P-a)/a   {0.3, 0.7}      15%         14%          16%
(P-a)/a   {0.2, 0.8}      18%         16%          17%
(P-a)/a   {0.1, 0.9}      20%         20%          17%

Table 8: Percentage Difference in TCP Throughput between Prioritized (P) and Non-Prioritized (NP) / Normal (a) Streams

The more aggressive RIO schemes achieve higher P_throughput at the expense of link efficiency (P+NP_throughput). From Table 8, we observe an approximately 50% difference between P_throughput and NP_throughput for RIO {0.3, 0.7} at ssthresh = (B + μT)/2, across the various forward link bandwidths. This increases to between 100% and 150% for RIO {0.1, 0.9}. P_throughput achieves a premium of at least 15% over Normal (non-prioritized) TCP streams.

V. CONCLUSION

From the simulations, it can be shown that with the use of sophisticated FEC algorithms such as Reed-Solomon or Turbo Coding, link efficiency is greatly improved when the header overhead is reduced. This means that larger packet sizes, or equivalently header compression, result in better throughput. This is consistent with other reported results [5]. However, this has to be balanced against the ability of existing routers to support large MTU sizes.

When normal TCP streams are used, peak link efficiency is achieved when ssthresh = (B + μT)/2. Again, this is consistent with other reported results. However, this is no longer true when RIO is utilized to provide prioritized TCP streams.
Link efficiency is sacrificed in order to achieve prioritization; increasing ssthresh values result in lower link efficiencies. The chosen ssthresh should not exceed the Q-point, as exceeding it lowers both link efficiency and prioritized stream throughput. If the routers are operated with small ssthresh sizes (ssthresh < (B + μT)/2), more aggressive RIO settings (e.g., {0.1, 0.9}) should be adopted to achieve higher prioritized stream throughput (P_throughput). However, as ssthresh approaches the Q-point ((B + μT)/2 < ssthresh < Q-point), the use of less aggressive RIO settings (e.g., {0.3, 0.7}) results in higher link efficiency without adversely affecting P_throughput. The choice of RIO settings is therefore a compromise between maximizing link efficiency and providing higher P_throughput.

VI. ACKNOWLEDGEMENTS

This work is partially funded through IRPA Grants from the Malaysian government. It was done in collaboration with the Asian Internet Interconnection Initiative (AI3) project.
VII. REFERENCES

[1] G. Maral, M. Bousquet, Satellite Communications Systems: Systems, Techniques and Technology, 3rd Ed., J. Wiley & Sons, 1998.
[2] M. Allman, D. Glover, L. Sanchez, "Enhancing TCP Over Satellite Channels using Standard Mechanisms," RFC 2488, IETF, January 1999, http://www.ietf.org/rfc/rfc2488.txt
[3] J. Touch, S. Ostermann, D. Glover, et al., "Ongoing TCP Research Related to Satellites," RFC 2760, IETF, February 2000, http://www.ietf.org/rfc/rfc2760.txt
[4] N. Ghani, S. Dixit, "TCP/IP Enhancements for Satellite Networks," IEEE Communications Magazine, Vol. 37, No. 7, July 1999.
[5] C. Barakat, E. Altman, W. Dabbous, "On TCP Performance in a Heterogeneous Network: A Survey," IEEE Communications Magazine, Vol. 38, No. 1, January 2000.
[6] R. Durst, G. Miller, E. Travis, "TCP Extensions for Space Communications," Proc. ACM MobiCom, November 1996, http://www.isr.umd.edu/cshcn/links/ipos/tcpforspace.ps.gz
[7] W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," RFC 2001, IETF, January 1997, http://www.ietf.org/rfc/rfc2001.txt
[8] B. Sklar, "A Primer on Turbo Code Concepts," IEEE Communications Magazine, Vol. 35, No. 12, Dec. 1997, pp. 94-102.
[9] X. Xiao, L. M. Ni, "Internet QoS: the Big Picture," IEEE Network Magazine, Vol. 13, March 1999, http://www.cse.msu.edu/~xiaoxipe/papers/msu.inet.qos.bigpicture.pdf
[10] S. K. Joo, T. C. Wan, "Incorporation of QoS and Mitigated TCP/IP over Satellite Links," Proc. 1st Asian International Mobile Computing Conference (AMOC 2000), Nov. 2000, Penang, Malaysia.
[11] R. Bagrodia, R. Meyer, et al., "PARSEC: A Parallel Simulation Environment for Complex Systems," IEEE Computer, Vol. 31, No. 10, Oct. 1998.