
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT Int. J. Network Mgmt 2005; 15: 411-419 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nem.582

A measurement study of network efficiency for TWAREN IPv6 backbone

By Tin-Yu Wu, Han-Chieh Chao*, Tak-Goa Tsuei and Yu-Feng Li

The Advanced Research and Education Network (TWAREN) was established by the National Centre for High-Performance Computing. It provides advanced research and education in Taiwan. TWAREN supports the IPv4 and IPv6 protocols. This paper analyses network efficiency under the IPv6 structures. The TWAREN test focuses on the traffic issue for all fixed-size packets. The purpose of this test is to determine packet loss under the maximum packet transfer situation. We measure the network throughput under the maximum speed and the latency associated with each packet. Using that information, we in turn derive the needed data on IPv6 TCP/UDP bandwidth and packet loss under the TWAREN structure. Copyright 2005 John Wiley & Sons, Ltd.

Introduction

A shortage of IPv4 address space has become apparent with the increasing number of Internet users, particularly in Europe and Asia. The new IPv6 technology was therefore developed and evolved to deal with this issue. TWAREN, as shown in Figure 1, was fully deployed in December 2003. It has established a backbone bandwidth of more than 20 Gbps within Taiwan, especially for research and education purposes. A TWAREN project was proposed and implemented between two college campuses using Cisco network apparatus. The main purpose of this project is to analyse network efficiency under the IPv6 structures.1 Among the network test instruments available, the free software Iperf2 was selected for its usability and budget considerations.
Iperf is capable of measuring the maximum TCP network bandwidth and finding the network efficiency by setting and tuning the variables involved. Network information, including bandwidth, delay jitter and datagram loss, can be revealed.

*Correspondence to: Han-Chieh Chao, Department of Electrical Engineering, National Dong Hwa University, Hualien, Taiwan, Republic of China. E-mail: hcc@mail.ndhu.edu.tw
Contract/grant sponsor: National Centre for High-Performance Computing
Copyright 2005 John Wiley & Sons, Ltd.

412 TIN-YU WU ET AL.

Figure 1. TWAREN topology

We make the following contributions: We share our test data on a web site with the National Centre for High-Performance Computing. Readers can reference the URL at http://www.ndhu.edu.tw/~yin/ipv6/. We measured bandwidth, delay jitter and datagram loss. We conducted experiments to evaluate, measure and optimize IPv4/IPv6 UDP, TCP and RTT transmission performance using the TWAREN backbone. The TWAREN test focuses on traffic issues for all fixed-size packets. The test determines packet loss under maximum packet transfer conditions. Network throughput is measured under the maximum speed and latency associated with each packet. Using that information, we in turn derive the needed data for IPv6 TCP/UDP bandwidth and packet loss under the TWAREN structure. The rest of this paper is organized as follows. The second section introduces the test instrument and methodology. The third section examines the proposed test data and simulation results. Conclusions are drawn in the fourth section.

Testing Instrument and Methodology

This section provides a detailed description of our test instrument and methodology. Our test topology consists of a server-testing platform, core router, packet generator and sniffer.

Test Instrument

Most of the testing software used in our proposal was provided free except for the server. The hardware used was a Pentium 4 IBM PC with an Intel Gigabit Ethernet card for the server-testing platform. The operating system was Fedora Core 2 Linux, with Iperf and Ping6 as the software testing instruments. Iperf was used to collect the traffic traces for offline traffic analysis. The free Iperf software was used to generate both IPv6 and IPv4 network packets to produce the transmission data list for TCP and UDP. It can also be used to measure the maximum TCP throughput, UDP bandwidth, UDP delay jitter and UDP packet loss.
Ping6, another measuring instrument, can be used to obtain IPv6 round-trip time information over ICMPv6 packets.3

NETWORK EFFICIENCY FOR TWAREN IPv6 BACKBONE 413

Figure 2. Network testing topology (NCHU: National Chung Hsing University; NDHU: National Dong Hwa University)

Test Methodology

This test focused on measuring IPv6 network efficiency over the TWAREN backbone by adopting the RFC 2544 definitions.4 Under the fixed-size traffic packet situation, it first finds the maximum transmission rate under the presumption of no packet loss. It then derives the greatest throughput for IPv4/IPv6 under the TWAREN backbone. Figure 1 illustrates the test network structure. Two servers shown in Figure 2 were located at two separate campuses. Both servers embodied IPv4/IPv6 dual-stack architectures and contained IPv4 and IPv6 addresses. The test procedure involved sending different-sized packets at different rates from one campus to another. The purpose was to measure the maximum IPv6 network efficiency by observing packet loss fluctuations between the two sides.5 An indirect mechanism was adopted for this test to avoid interference with the existing network at each campus, as shown in Figure 3. The test network was connected to a Cisco 3750 Layer 3 Switch under Giga-POP (point of presence with gigabit capacity). In addition to the test on network efficiency over the two ends of TWAREN, the TWAREN test also included a server efficiency analysis. The goal was to determine the maximum number of packets transmitted between the two servers. To test the network efficiency, the two servers were directly connected through Gigabit Ethernet (1000BASE-T), and point-to-point analysis was conducted on the data packet transmission efficiency.

Testing Data

In this section we examine the IPv4/IPv6 efficiency performance testing by measuring the data on the servers and TWAREN backbone.
Experiments were performed to evaluate the effect of packet size and bandwidth throughput on active transport (UDP, TCP and RTT) traffic flows.

Analytical Approach

In our proposed measurement, the TWAREN backbone TCP, UDP and RTT efficiency performance was estimated using Iperf. The bandwidth was calculated as follows:

\[ \text{Bandwidth} = \frac{S(N-1)}{t_N - t_1} \qquad (1) \]
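A minimal Python sketch of Equation (1), using the symbols defined just below it (S = packet size, N = packets received, t_i = per-packet arrival times); the timestamps in the example are hypothetical, not data from this study:

```python
def bandwidth_bps(packet_size_bytes, recv_times):
    """Equation (1): S*(N-1) bytes delivered over the interval between
    the first and last packet arrivals, converted to bits per second."""
    n = len(recv_times)
    if n < 2:
        raise ValueError("need at least two packets")
    elapsed = recv_times[-1] - recv_times[0]
    return packet_size_bytes * (n - 1) * 8 / elapsed

# Hypothetical trace: eleven 1000-byte packets, one per second.
# 10 * 1000 bytes over 10 s -> 8000.0 bits per second.
times = [float(i) for i in range(11)]
print(bandwidth_bps(1000, times))  # 8000.0
```

Note that only N-1 packets' worth of data counts, because the first arrival only starts the measurement clock.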

Figure 3. Cisco 3750 Layer 3 Switch partitioned part of the existing network

where S = packet size, N = total number of packets received and t_i = time required to receive the ith packet.

We adopt the Weibull distribution to represent the theoretical value, where a is the shape parameter (shape parameters allow a distribution to take on a variety of shapes, depending on the parameter value) and b is the location parameter:

\[ x = \frac{c \cdot \mathit{MSS}}{\mathit{RTT}\,\sqrt{\mathit{Loss}}} \qquad (2) \]

\[ \mathit{TCP}_{\mathit{throughput}} = \mathit{BW}\left(1 - e^{-(x/b)^a}\right) \qquad (3) \]

where c is a random variable, BW = measuring bandwidth, MSS = maximum TCP segment length, RTT = round-trip time and throughput = 800 Mbps. TCP calculates RTT upon every received ACK and estimates the retransmission timeout (RTO):6

\[ \mathit{SRTT}(k) = (1-g)\,\mathit{SRTT}(k-1) + g\,\mathit{RTT}(k) \qquad (4) \]
\[ \mathit{SERR}(k) = \mathit{RTT}(k) - \mathit{SRTT}(k-1) \]
\[ \mathit{SDEV}(k) = (1-h)\,\mathit{SDEV}(k-1) + h\,|\mathit{SERR}(k)| \]
\[ \mathit{RTO}(k) = \mathit{SRTT}(k) + f\,\mathit{SDEV}(k) \]

SRTT, SERR and SDEV are the exponentially smoothed RTT, error term and deviation estimates, respectively; g, h and f are the tunable parameters. From Equation (3) we can calculate the throughput ratio with packet size as shown in Table 1.

Order   Packet size (bytes)   Throughput ratio
1       64                    0.7619
2       128                   0.8421
3       256                   0.9142
4       384                   0.9411
5       512                   0.9552
6       640                   0.9638
7       768                   0.9696
8       896                   0.9739
9       1024                  0.9770
10      1152                  0.9795
11      1280                  0.9815
12      1408                  0.9832

Table 1. Throughput ratio with packet size

Figure 4 shows that the packet throughput reaches a maximum (800 Mbps) when the packet size is 768 bytes.

Measuring Data on the Servers

According to the connection scheme shown in Figure 3, the UDP packet efficiency performance can be found by tuning the packet generation rate. When the rate was set at 100-700 Mbps, the UDP network efficiency for both IPv4 and IPv6 was about the same with respect to different-sized packets and reached the wire speed.
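A small Python sketch of the RTO estimator of Equation (4). The gains g = 1/8 and h = 1/4 and the multiplier f = 4 are the conventional TCP values (RFC 6298), assumed here because the paper leaves g, h and f as tunable parameters; the RTT trace is invented for illustration:

```python
def rto_estimator(rtt_samples, g=1/8, h=1/4, f=4):
    """Jacobson-style RTO estimation per Equation (4):
        SRTT(k) = (1-g)*SRTT(k-1) + g*RTT(k)
        SERR(k) = RTT(k) - SRTT(k-1)
        SDEV(k) = (1-h)*SDEV(k-1) + h*|SERR(k)|
        RTO(k)  = SRTT(k) + f*SDEV(k)
    Returns the list of RTO values, one per RTT sample."""
    srtt, sdev = None, 0.0
    rtos = []
    for rtt in rtt_samples:
        if srtt is None:
            # Initialise from the first sample (RFC 6298 style).
            srtt, sdev = rtt, rtt / 2
        else:
            serr = rtt - srtt
            sdev = (1 - h) * sdev + h * abs(serr)
            srtt = (1 - g) * srtt + g * rtt
        rtos.append(srtt + f * sdev)
    return rtos

# Hypothetical RTT trace (seconds): a steady 10 ms path with one 50 ms spike.
# The spike inflates SDEV, so the final RTO stays well above the base RTT.
print(rto_estimator([0.010, 0.010, 0.050, 0.010])[-1] > 0.010)  # True
```

The deviation term is what makes the timer robust: a single delayed ACK widens RTO instead of triggering a spurious retransmission.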
When the rate was brought up to 800-1000 Mbps (as shown in Figure 5), the IPv6 UDP efficiency reached a ceiling, while on the IPv4 side efficiency continued to rise and reached steady state at 800 Mbps.

Figure 4. Packet generation rate (800 Mbps); throughput (Mbps) versus packet size (bytes)

Figure 5. The efficiency performance of IPv4/IPv6 UDP from measuring the server data (800 Mbps)

From the server perspective, the UDP packet throughput for IPv4 reached a maximum at 800 Mbps, compared with 700 Mbps for IPv6. Thus, IPv6 UDP efficiency is roughly 87.5% of the IPv4 efficiency. Figure 6 demonstrates the efficiency analysis targets for IPv4/IPv6 TCP packets. The preset window-size limit for Fedora Core 2 Linux was 256 kbytes. When the packet size was set to over 640 bytes, IPv4 TCP packet efficiency was higher than its IPv6 counterpart by about 10%.

Measuring Data between the Two TWAREN Campuses

Figure 7 illustrates the throughput versus packet size relationship for TWAREN between the National Dong-Hwa University and National Chung-Hsing University campuses for IPv4/IPv6 UDP efficiency. The test between the two campuses focused on throughput, UDP packet loss rate, TCP throughput and round-trip time. When the packet generation rate was set to 100-300 Mbps, UDP

IPv4 and IPv6 efficiency was about the same for different packet sizes; they all reached wire speed.

Figure 6. IPv4/IPv6 TCP efficiency performance, measuring the server data

Figure 7. The efficiency performance of IPv4/IPv6 UDP with the TWAREN backbone (300, 500, 700 and 1000 Mbps)

When the generation rate was set to 500 Mbps, it can clearly be observed that the packet loss rate for IPv6 started to exceed the IPv4 case when the packet size fell into the 1024-1280 byte region. The IPv4 UDP throughput was about twice that of the IPv6 case. When the generation rate was set to 700 Mbps, the packet loss rate for IPv4 was higher than that for IPv6 and reached over 95%; the UDP efficiency for IPv6 was better than that for IPv4. When the generation rate was set to 1000 Mbps, again the packet loss rate for IPv4 was higher than that for IPv6 and reached over 97%, and the UDP efficiency for IPv6 was better than that for IPv4. The best UDP efficiency versus packet size performance under the no-packet-loss condition (RFC 2544) is shown in Figure 8. The IPv4 UDP packets reached 360 Mbps at most, while IPv6 UDP packets reached 340 Mbps.
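The RFC 2544 throughput figure used here (the highest offered rate at which no packets are lost) is typically found by an iterative rate search. A minimal Python sketch of such a binary search, with a made-up probe function standing in for real Iperf UDP runs over TWAREN:

```python
def rfc2544_throughput(probe, lo=0.0, hi=1000.0, resolution=1.0):
    """Binary-search the highest offered rate (Mbps) at which probe(rate)
    reports zero packet loss. `probe` returns a loss fraction in [0, 1]."""
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if probe(mid) == 0.0:
            lo = mid      # no loss: the path sustains at least this rate
        else:
            hi = mid      # loss observed: back off
    return lo

# Hypothetical path that starts dropping packets above 360 Mbps,
# loosely mimicking the IPv4 UDP result between the two campuses.
def fake_probe(rate_mbps, capacity=360.0):
    return 0.0 if rate_mbps <= capacity else (rate_mbps - capacity) / rate_mbps

r = rfc2544_throughput(fake_probe)
print(359.0 < r <= 360.0)  # True
```

Each probe in a real test is a full fixed-duration trial at one rate and one packet size, which is why the per-packet-size curves in Figure 8 take many runs to produce.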

Figure 8. The best efficiency performance of IPv4/IPv6 UDP with the TWAREN backbone

Figure 9. The best efficiency performance of IPv4/IPv6 TCP with the TWAREN backbone

The best scenario for the IPv4/IPv6 TCP efficiency analysis (window size set to 256 kbytes) is shown in Figure 9. When the packet size equalled 640 bytes, the IPv4 TCP packets reached the maximum efficiency of 144 Mbps, while the IPv6 TCP packets reached 141 Mbps. Figure 10 demonstrates the ICMPv4/ICMPv6 round-trip time (RTT) data analysis for the TWAREN backbone. It is clear that for all packet sizes the IPv4 data transmission was faster than that for IPv6 by about 0.4 ms. The RTT for the ICMPv4 1408-byte packet is slower than that of ICMPv4 64 bytes by 0.2 ms. The RTT for the ICMPv6 1408-byte packet is slower than that of ICMPv6 64 bytes by 0.3 ms. This paper reveals the efficiency test of the IPv4/IPv6 experiments over the TWAREN backbone. From the collected data it is quite clear that the efficiency of transmitting IPv4 packets surpasses that for IPv6 packets. The reasons can be summarized as follows: Owing to cost and market considerations, commercial routers use hardware to process IPv4 packets while some IPv6 operations might be handled in software, so efficiency suffers. IPv6 also introduces three times more address bits than IPv4, resulting in more time to process the header. A header compression

mechanism7,8 has been proposed to cope with this problem but has not yet been widely accepted.

Figure 10. The efficiency performance of IPv4/IPv6 RTT with the TWAREN backbone

Conclusion

The measurement results from the first test (the efficiency test on the servers) showed that the highest throughput for IPv4 UDP packets reached 811 Mbps under the no-packet-loss condition. The throughput for the IPv6 side was about 715 Mbps, which is roughly 88% of the IPv4 case. The TCP packet test reached the highest throughput of 859 Mbps for IPv4 TCP when the window size was set to 256 kbytes. The throughput on the IPv6 side was about 770 Mbps, roughly 89% of the IPv4 case. The packet generation rate (packets/s) indicated no apparent relationship for different packet sizes or various throughput values. Regardless of packet size, the IPv6 generation rate was on average 90% of the IPv4 rate. The server test result, without any network apparatus connected, verified the 90% relationship between the IPv6 and IPv4 cases. This was independent of UDP packets, TCP packets or packet generation rate variables. The second test focused on a point-to-point connection between the two campuses, National Dong-Hwa and National Chung-Hsing universities, over the TWAREN backbone. With packet loss excluded, the highest IPv4 UDP packet throughput was about 303 Mbps. This was little different from the 304 Mbps in the IPv6 case. Since the server test showed the packet generation rate for IPv4 was about 10% faster than the IPv6 case, the IPv4 packet loss rate was relatively higher than that for the IPv6 case. The IPv6 UDP packet efficiency was quite comparable with that for the IPv4 case without the packet loss consideration. We used the RFC 2544 measurement to derive the UDP packet efficiency over TWAREN under the no-packet-loss presumption. We found the highest throughput for IPv4 UDP packets was 359 Mbps and 339 Mbps for IPv6 UDP packets.
Comparatively, a figure of 94% reflects how the two packet versions behaved in the TWAREN backbone. The server efficiency measurement for IPv4 packets was adjusted to 44% after passage through TWAREN; the IPv6 packet value was 47% using the same adjustment. If the same scenario is applied to TCP packets, the highest throughput for IPv4 TCP packets would be 144 Mbps, with the window size set to 256 kbytes. The throughput for IPv6 packets would be 141 Mbps. The percentage is 98% for TCP packets in conjunction with the TWAREN backbone. The efficiency measurement adjustment after TWAREN passage reduced to 17% for IPv4 TCP packets and 18% for IPv6 TCP packets. The following are the TWAREN network efficiencies between National Dong-Hwa University and National Chung-Hsing University. The highest throughput for IPv4 UDP packets was 359 Mbps, 44% of the value when tested on the

server. The highest throughput for IPv6 UDP packets was 339 Mbps, 47% of the value when tested on the server. The highest throughput for IPv4 TCP packets was 144 Mbps, 17% of the value when tested on the server. The highest throughput for IPv6 TCP packets was 141 Mbps, 18% of the value when tested on the server. The round-trip time analysis on ICMP/ICMPv6 reflects that the IPv4 value was faster than the IPv6 value by about 0.4 ms with respect to all packet sizes. The RTT for the ICMPv4 1408-byte packet was roughly 0.2 ms slower than that for the ICMPv4 64-byte packets. The RTT for ICMPv6 1408-byte packets was roughly 0.3 ms slower than that for ICMPv6 64-byte packets.

Acknowledgement

This work was sponsored by the National Centre for High-Performance Computing.

References

1. Deering S, Hinden R. Internet Protocol, Version 6 (IPv6) Specification. RFC 2460.
2. Iperf. http://dast.nlanr.net/projects/iperf/
3. Conta A, Deering S. Internet Control Message Protocol (ICMPv6) for the Internet Protocol Version 6 (IPv6) Specification. RFC 2463, December 1998.
4. McQuaid J, Bradner S. Benchmarking Methodology for Network Interconnect Devices. RFC 2544, March 1999.
5. Hei Y, Yamazaki K. Traffic analysis and worldwide operation of open 6to4 relays for IPv6 deployment. In Proceedings of Applications and the Internet, 2004; 265-268.
6. Jacobson V. Congestion avoidance and control. In Proceedings of SIGCOMM 1988.
7. Ertekin E, Christou CA. Internet protocol header compression, robust header compression, and their applicability in the global information grid. IEEE Communications Magazine 2004; 42(11): 106-116.
8. Minaburo A, Toutain L, Nuaymi L. Deployment of IPv6 robust header compression profiles 1 and 2. In Proceedings of the Fourth Mexican International Conference on Computer Science (ENC 2003), 8-12 September 2003; 278-283.