Dynamic Traffic Engineering for Future IP Networks
Ivan Gojmerac, Thomas Ziegler, Fabio Ricciato, Peter Reichl
Telecommunications Research Center Vienna (ftw.) [Forschungszentrum Telekommunikation Wien]
4. Würzburger Workshop "IP Netzmanagement, IP Netzplanung und Optimierung", Würzburg, 27 July 2004
Outline
- Introduction to traffic engineering
- Adaptive Multi-Path (AMP) algorithm
- Performance evaluation and results
- Summary and outlook
What is Traffic Engineering (TE)?
Traffic engineering is defined as performance optimization of operational networks (IETF)
- Consider the traffic at the macroscopic level
- Consider the network as a set of limited resources (transmission bandwidth, switching throughput)
Traffic engineering tries to optimally match traffic demands with the available network resources by acting on routing
[Figure: traffic demands are mapped onto the network by routing]
Traffic Engineering in IP Networks
Traffic engineering methods for IP networks:
- Link weight optimization in native IP networks
- Optimization of Multi-Protocol Label Switching (MPLS) networks
- Algorithmic approaches (dynamic routing in the ARPAnet, OMP)
Example of Connection-Less TE: Link Weight Optimization
[Figure: optimization derives a set of link weights that matches the traffic demands to the network]
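Link weight optimization steers traffic because routers forward along shortest paths under the chosen weights. As a hedged illustration (the topology and weights below are hypothetical, not the ones from the figure), a minimal shortest-path computation in Python:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src under the given link weights.
    adj: {node: [(neighbor, weight), ...]}"""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical triangle topology: raising the weight of link A-C to 3
# makes the path A -> B -> C (cost 2) preferable to the direct link.
adj = {
    "A": [("B", 1), ("C", 3)],
    "B": [("A", 1), ("C", 1)],
    "C": [("A", 3), ("B", 1)],
}
```

Changing a single weight thus re-routes all demands whose shortest paths change, which is why the weights are optimized jointly against the demand matrix.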
Example of Connection-Oriented TE: Explicit-Routing Optimization
[Figure: optimization derives a set of explicit routes for virtual pipes that matches the traffic demands to the network]
Traffic Engineering in IP Networks
Existing traffic engineering methods have important disadvantages:
- MPLS and link weight optimization require additional network management
- Unpredictable signaling overhead with Optimized Multi-Path (OMP)
Our objective:
- Autonomous and continuous load distribution in the network
- Low overhead in terms of memory and bandwidth consumption
Proposal: Adaptive Multi-Path (AMP) algorithm
Current IP Routing
[Figure: example topology (Nodes A-G) with link weights; Equal-Cost Multi-Path (ECMP) forwarding concentrates traffic on the shortest paths and causes congestion]
AMP Basic Operation
[Figure: same topology; congested links trigger backpressure messages toward upstream nodes]
AMP Signaling
[Figure: node X with downstream neighbor Y0 and upstream neighbors Y1, Y2, Y3; backpressure messages (BMs) travel upstream, against the traffic: X receives BM_Yi->X together with the load on each link (X, Yi), and sends BM_X->Y0 to Y0]
AMP Signaling
BM_X->Y0 = f(Load_XY1, ..., Load_XYn, BM_Y1->X, ..., BM_Yn->X)
Quasi-recursive structure of backpressure messages
GLOBAL PROPAGATION OF LOAD INFORMATION THROUGH LOCAL EXCHANGE OF SIGNALING MESSAGES
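The quasi-recursive structure means each node talks only to its direct neighbors, yet congestion information eventually reaches the whole network. A hedged sketch of this propagation (the topology is hypothetical, and a plain max is used as the combination function f, a simplification of AMP's actual rule):

```python
def propagate(links, load, rounds):
    """Iterate local backpressure exchange for a number of rounds.
    links: {node: set of neighbors}; load[(x, y)]: load on directed link x->y.
    The combination function here is a plain max -- a simplification."""
    bm = {(x, y): 0.0 for x in links for y in links[x]}  # BM_x->y, initially zero
    for _ in range(rounds):
        new = {}
        for x in links:
            for y0 in links[x]:
                # BM_X->Y0 summarizes all links of X except the one toward Y0
                new[(x, y0)] = max(
                    (max(load[(x, yi)], bm[(yi, x)]) for yi in links[x] if yi != y0),
                    default=0.0,
                )
        bm = new
    return bm

# Line topology A - B - C - D with one congested link C->D:
links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
load = {(x, y): 0.1 for x in links for y in links[x]}
load[("C", "D")] = 0.9
```

After one round only C's neighbors know about the congested link; after two rounds BM_B->A already carries the value 0.9, so load information has propagated two hops upstream through purely local exchange.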
AMP Signaling
BM_X->Y0 = f(Load_XY1, ..., Load_XYn, BM_Y1->X, ..., BM_Yn->X)
Reduction of the number of parameters
AMP Signaling
One parameter per link: g_i = max(Load_XYi, BM_Yi->X)
[Figure: node X condenses the load and backpressure information of each upstream link into a single parameter g_i]
AMP Signaling
BM_X->Y0 = f(Load_XY1, ..., Load_XYn, BM_Y1->X, ..., BM_Yn->X)
Reduction of the number of parameters:
g_i = max(Load_XYi, BM_Yi->X)
BM_X->Y0 = f(g_1, g_2, ..., g_n)
AMP Signaling
[Figure: node X with neighbors Y0, Y1, Y2, Y3 and per-link parameters g_1, g_2, g_3]
In/Out matrix in X (incoming traffic per neighbor, split by outgoing link):

In X      ->Y0   ->Y1   ->Y2
Y0 -> X    -      160    120
Y1 -> X    490    -      230
Y2 -> X    830    120    -
AMP Signaling
BM_X->Y0 = f(Load_XY1, ..., Load_XYn, BM_Y1->X, ..., BM_Yn->X)
Reduction of the number of parameters:
g_i = max(Load_XYi, BM_Yi->X)
BM_X->Y0 = f(g_1, g_2, ..., g_n) = Σ_{Y_i ∈ Ω_X \ {Y_0}} β_Yi · g_i
β_Yi: weights for congestion contributions (Ω_X: set of neighbors of X)
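A minimal sketch of the reduced-parameter computation, assuming uniform weights β (the slides do not specify how the β values are chosen, so the values below are hypothetical):

```python
def congestion_indicator(load_xyi, bm_yi_x):
    # g_i = max(Load_XYi, BM_Yi->X): worst congestion on or beyond link (X, Y_i)
    return max(load_xyi, bm_yi_x)

def backpressure_message(g, beta, y0):
    # BM_X->Y0 = sum over Y_i in Omega_X \ {Y0} of beta[Y_i] * g[Y_i]
    return sum(beta[y] * g[y] for y in g if y != y0)

# Node X with neighbors Y0, Y1, Y2; hypothetical loads and incoming BMs:
g = {
    "Y1": congestion_indicator(0.4, 0.7),  # downstream BM dominates
    "Y2": congestion_indicator(0.3, 0.1),  # local link load dominates
}
beta = {"Y1": 0.5, "Y2": 0.5}  # assumed uniform weights
bm_x_y0 = backpressure_message(g, beta, "Y0")  # 0.5*0.7 + 0.5*0.3 = 0.5
```

Note how each neighbor contributes only one scalar g_i, so the signaling state per link stays constant regardless of network size.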
AMP Performance Evaluation
Implementation of AMP in the Network Simulator (ns-2)
Simulated topology:
- AT&T US network with 27 nodes and 47 links
- Link capacities of 2.4 and 9.6 Gbit/s (scaled down to 15 and 60 Mbit/s in our simulations)
Simulated traffic:
- Web traffic according to the SURGE model
- Traffic distribution according to the gravity model
- Linear scaling of the number of Web users
[Figure: AT&T US topology map (SF, LA, CHI, NYC)]
AMP Performance Evaluation
[Figure: average Web page response time [s] vs. number of Web users (2000-16000) for SPR, ECMP, and AMP]
- Web page response time is the most important metric from the user's perspective
- Significant reductions in Web page response times throughout the investigated scenarios (up to 43%)
SPR: Shortest Path Routing; ECMP: Equal-Cost Multi-Path Routing
AMP Performance Evaluation
[Figure: total TCP goodput [bytes] vs. number of Web users (2000-16000) for SPR, ECMP, and AMP]
- Improved efficiency of resource utilization
- Total TCP goodput consistently higher with AMP than with SPR and ECMP in our simulations (improvements of up to 28%)
AMP Performance Evaluation
[Figure: average coefficient of variation (CoV) of link load vs. number of Web users (2000-16000) for SPR, ECMP, and AMP]
Similar average CoV of all link loads for the three routing strategies => stability of AMP load balancing
AMP Performance Evaluation
[Figure: German network topology map (Hamburg, Berlin, Köln, Frankfurt, München)]
AMP Performance Evaluation
[Figure: average Web page response time [s] vs. number of Web users (1000-7000) for SPR, ECMP, and AMP on the German topology]
AMP Performance Evaluation
[Figure: total TCP goodput vs. number of Web users (1000-7000) for SPR, ECMP, and AMP on the German topology]
Summary & Outlook
AMP summary:
- Load balancing within the framework of routing
- No management overhead, minimal signaling overhead
- Implementation in the Network Simulator (ns-2)
- Significant performance improvements
Future research:
- AMP and network resilience
- AMP fluid simulation
Thank you for your attention! gojmerac@ftw.at
AMP Load Balancing
[Figure: Node B with output links toward Nodes A, D, and E, carrying parameters g_BA, g_BD, g_BE]
The goal of the load balancing mechanism in every node is to equalize the values of g on all output links
AMP Load Balancing
In order to avoid packet reordering:
=> the unit for load balancing is a microflow aggregate
=> packets are assigned to an aggregate by applying a CRC-16 hash function to their source and destination IP addresses
The CRC-16 solution space [0, 65535] is divided among the viable next hops
Example: (161.53.101.8, 173.42.78.55) --CRC-16--> 13217
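A hedged sketch of the hashing step (the slides do not specify which CRC-16 variant is used; CRC-16/ARC with polynomial 0xA001 is assumed here, and the boundary values are hypothetical):

```python
def crc16(data: bytes) -> int:
    """CRC-16/ARC (poly 0xA001, init 0x0000) -- one common CRC-16 variant;
    the deck does not say which variant AMP actually uses."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def next_hop(src_ip, dst_ip, boundaries):
    """Map a microflow aggregate to a next hop.
    boundaries: ascending list of (upper_bound, next_hop) partitioning [0, 65535]."""
    h = crc16((src_ip + dst_ip).encode())
    for upper, hop in boundaries:
        if h <= upper:
            return hop

# Hypothetical split of the hash space between two next hops:
boundaries = [(23723, "Node A"), (65535, "Node D")]
hop = next_hop("161.53.101.8", "173.42.78.55", boundaries)
```

Because the hash depends only on the address pair, all packets of a microflow aggregate keep taking the same path, which is what prevents reordering.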
AMP Load Balancing
Example routing table in Node B: the hash-space boundaries are defined for every reachable destination
Destinations (in Node B), hash space split among the viable next hops (Nodes A, D, E):
- Node A: [0, 65535] (all packets)
- Node C: [0, 23723] / [23724, 65535] (split between two next hops)
- Node D: [0, 65535] (all packets)
- Node E: [0, 65535] (all packets)
- Node F: [0, 34447] / [34448, 65535] (split between two next hops)
- Node G: [0, 52142] / [52143, 65535] (split between two next hops)
AMP Load Balancing
Conservative load balancing mechanism: the size of the load adjustment steps is changed dynamically
[Figure: size of load adjustment steps over time (0-200 s); steps in the initial direction versus the opposite direction]
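One plausible reading of the figure, sketched below under stated assumptions (the exact adaptation rule is not given on the slide): the adjustment step grows while traffic keeps shifting in the same direction, and is reset to a small base step when the direction reverses.

```python
def adjust_step(step, prev_direction, direction, grow=2.0, base=1.0, cap=100.0):
    """Hypothetical conservative step-size rule: multiplicative growth while
    the shift direction is stable, reset after a reversal. All constants
    (grow, base, cap) are illustrative assumptions, not AMP's actual values."""
    if direction == prev_direction:
        return min(step * grow, cap)
    return base

# Steps grow 1 -> 2 -> 4 while shifting in the initial direction, ...
s = 1.0
for _ in range(2):
    s = adjust_step(s, +1, +1)
# ... then collapse back to the base step once the direction flips.
s_after_flip = adjust_step(s, +1, -1)
```

A rule of this shape keeps the mechanism conservative: small probing steps after every reversal, larger steps only once a shift direction has proven stable.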
Publications
- I. Gojmerac, T. Ziegler, P. Reichl: "Adaptive Multipath Routing Based on Local Distribution of Link Load Information". Proc. QoFIS'03, Stockholm, October 2003.
- I. Gojmerac, T. Ziegler, F. Ricciato, P. Reichl: "Adaptive Multipath Routing for Dynamic Traffic Engineering". Proc. GLOBECOM'03, San Francisco, November 2003.