Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Ali Al-Dhaher, Tricha Anjali
Department of Electrical and Computer Engineering
Illinois Institute of Technology, Chicago, Illinois 60616
Email: {aaldhahe, anjali}@iit.edu

Alexandre Fortin
Midtronics Inc., Willowbrook, Illinois 60527
Email: alexandrefortin@free.fr

Abstract: Traditional network routing algorithms send data from the source to the destination along a single path. The selected path may experience congestion due to other traffic, thereby reducing throughput. Multi-path routing divides the data along multiple paths, with the aim of increasing overall throughput. Because the paths have different latencies, packets are received out of order at the destination and must be reordered. This problem can be overcome by placing a buffer at the destination node to synchronize the incoming packets across the multiple paths. However, this solution is not scalable, since a single large buffer is impractical and can be expensive. An alternative is to distribute the buffering requirement across bounded buffers located at the intermediate nodes on the path, thereby reducing the need for a large buffer at the receiver. This work proposes several fair allocation algorithms aimed at improving the existing methods for distributing the buffer requirement across the network in an efficient manner.

Index Terms: multi-path routing, distributed buffering, computer networks

I. INTRODUCTION

In traditional computer network connectivity, single-path routing algorithms select the shortest path between the source and destination nodes. In an ideal situation, a single user has access to all the resources of the network and does not have to share them with anyone else. However, this idealistic environment rarely exists. In reality, a network is used by multiple users, each with their own flow requirements.
Therefore, using a single path to achieve connectivity between two nodes can lead to decreased performance and reduced throughput due to congestion caused by other users' traffic. An alternative that avoids the performance bottlenecks of a single path is to divide the user's flow requirement across multiple paths between the source and destination. By transmitting the same amount of traffic simultaneously over multiple paths, the congestion along any one path is decreased, and the overall throughput of the network increases. The work in [1] establishes the principle that utilizing more than one path to transmit data between two nodes leads to larger performance gains. Multipath routing algorithms take advantage of network redundancy and link availability in order to reduce network congestion and increase overall throughput [2] [3].

However, there are significant drawbacks to using such an algorithm to fulfill network connectivity. By dividing traffic over multiple paths, out-of-order packet delivery becomes a major concern. Since each path has its own length, or delay, sending packets across multiple paths causes them to arrive at the destination at different times, and hence out of their original sequence. One approach to the problem of differing latencies is to place a buffer at the destination to hold the incoming packets while they are placed back in the correct order. The work in [4] illustrates that utilizing one buffer at the destination node is not a scalable solution when multiple users in the network are considered. An alternative, therefore, is to evenly distribute the required buffering across the path itself: each intermediate node along the path shares responsibility for buffering the flow so that it reaches the destination properly. To do this, smaller bounded buffers are placed at every node in the network.
The amount of buffer space that is required at the destination node is distributed, with portions of it placed at every intermediate node along the path between the source and destination, if such nodes exist. This method effectively distributes the buffering load across the entire network, and the amount of buffer space required at the destination is greatly reduced. Additionally, the buffers placed at the intermediate nodes can hold multiple flows, making this solution even more scalable.

The fair allocation algorithm put forth by [5] achieves distributed buffering by dedicating an equal share of the available resources at each node to every flow. Thus, each flow in the network is treated equally, as all flows receive the same amount of buffering capacity, hence the term fair allocation. With each allocation performed, the algorithm transfers the buffered amount from the destination node to the intermediate node and updates the available buffer space at each node. This process constitutes the distribution of the requirement. Although this algorithm achieves a certain amount of distribution, the work in [6] shows that it is hampered by limitations that restrict its effectiveness and performance. In this paper, we propose several modifications and improvements to the existing fair allocation algorithm, which is used as the benchmark against which all proposed algorithms are compared and tested.

The paper is organized as follows. In Section II, the existing algorithm is detailed. In Section III, each proposed algorithm is introduced and an in-depth account of its implementation is provided. In Section IV, the algorithms are tested against the existing model and each other to determine their performance characteristics. Future work is previewed, along with concluding remarks, in Section V.

II. THE FAIR ALLOCATION ALGORITHM

In order to distribute the amount of buffering required across a path, we must first determine how much buffering a flow requires. For a given flow f_i, the amount of buffer space required at the destination is calculated as

BR[f_i] = |d_p - d_max| * f_c    (1)

First, the absolute difference between the delay of the path, d_p, and the maximum path delay experienced at the destination node, d_max, is determined. The maximum path delay represents the path with the worst delay that terminates at the destination node; this path, however, must be one of the multiple paths chosen by the heuristic model to fulfill the requirement. The d_max parameter is included because we need to account for the worst-performing path, as it represents the performance bottleneck in the system. All other paths must buffer their flows so that their path delays effectively equal that of the worst-performing path. The difference is then multiplied by the transmission requirement of the flow itself, f_c, to determine the amount of buffering required to ensure in-order packet delivery.

As previously described, the motivation for using a fair allocation scheme in distributed buffering is to ensure fairness among all the network flows. The current algorithm achieves fair allocation by equally dividing an intermediate node's buffer space among all traversing flows. The flowchart in Figure 1 illustrates the existing algorithm.
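As a concrete illustration, the buffer requirement of Eq. (1) can be computed in a few lines of Java, the language the authors' models are implemented in. The class and parameter names here are illustrative assumptions, not taken from the paper's code:

```java
public class BufferRequirement {

    // Eq. (1): BR[f_i] = |d_p - d_max| * f_c
    // pathDelay    = d_p,   delay of this path
    // maxPathDelay = d_max, worst delay among the chosen paths to the sink
    // flowRate     = f_c,   transmission requirement of the flow
    public static double bufferRequirement(double pathDelay,
                                           double maxPathDelay,
                                           double flowRate) {
        return Math.abs(pathDelay - maxPathDelay) * flowRate;
    }

    public static void main(String[] args) {
        // A path 3 time units faster than the slowest path, carrying
        // 2 units of traffic per time unit, must buffer 3 * 2 = 6 units.
        System.out.println(bufferRequirement(5.0, 8.0, 2.0)); // 6.0
    }
}
```

Note that the worst-performing path itself has d_p = d_max and therefore requires no buffering.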
Fig. 1. Existing Fair Allocation Algorithm

First, the algorithm examines every network flow that requires buffering and determines the intermediate nodes along the flow. It then determines the amount of available buffer space at the node, the number of flows traversing the node, and the number of nodes along the selected flow. The amount of dedicated buffer space for a selected flow at a specific intermediate node is computed as

Buffer[f_i] = Buffer[n] / (t_f * n_f)    (2)

where Buffer[n] represents the available buffer space at node n, t_f is the number of flows traversing the node, and n_f is the number of nodes along the selected flow. The calculated amount represents how much of the buffer requirement can be handled by the intermediate node rather than by the destination node. Therefore, to complete the allocation, the algorithm subtracts the dedicated amount from the intermediate node's buffer and adds it to that of the destination node. This step is critical, as it represents the release of occupied buffer space at the receiver that will now be handled by the intermediate node.

Although the existing fair allocation model is simple in nature, its performance is poor. Based on results from [5], the amount of released buffer space at the receiver is almost non-existent, regardless of the initial buffer size. The problem lies in the allocation formula itself: it uses the product of the number of flows traversing a node and the number of nodes along the flow as the denominator, so the denominator is directly proportional to the number of flows in the network. In typical large-scale networks with multiple users, the number of flows can be very large. Therefore, in order to obtain a reasonable result from this formula, the numerator, the available buffer space, would need to be several times larger than the denominator, which is neither practical nor sustainable. The results indicate that the allocation formula yields an insignificant value, which translates into the poor performance observed.

The goal of the three proposed algorithms is to rectify the existing allocation formula in order to significantly increase the amount of released buffer space. They also attempt to address the issue of wasted resources that results from using a fair allocation method.
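A minimal Java sketch of the existing allocation step (Eq. 2) makes the weakness concrete: the product in the denominator crushes the per-flow share as the network grows. The class and method names are assumptions for illustration, not the authors' implementation:

```java
public class ExistingFairAllocation {

    // Eq. (2): Buffer[f_i] = Buffer[n] / (t_f * n_f)
    public static double share(double nodeBuffer, int flowsAtNode, int nodesOnFlow) {
        return nodeBuffer / (flowsAtNode * nodesOnFlow);
    }

    // One allocation step: the amount actually moved from the destination's
    // requirement onto the intermediate node, bounded by the node's free
    // space and by what the flow still needs.
    public static double transfer(double share, double nodeFree, double requirement) {
        return Math.min(share, Math.min(nodeFree, requirement));
    }

    public static void main(String[] args) {
        // Small network: 4 flows through the node, 3 nodes on the flow.
        System.out.println(share(120.0, 4, 3));    // 10.0 units per flow
        // Large network: the denominator grows with the flow count, so the
        // dedicated share becomes insignificant, as discussed above.
        System.out.println(share(120.0, 100, 10)); // 0.12 units per flow
    }
}
```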

III. IMPLEMENTATION DETAILS

In this section, we detail the implementation and operation of the three proposed fair allocation algorithms. The proposed models have been implemented on a Java platform and integrated into the multiple-user, multiple-path routing algorithm developed in [5].

A. Per-Flow Algorithm

For every flow in the network that requires buffering, the per-flow algorithm begins by examining the path of the flow to determine how many intermediate nodes exist between the source and destination, as shown in Figure 2. If no such nodes exist, distributed buffering cannot proceed, and the entire buffer requirement will continue to be satisfied by the destination node alone.

Fig. 2. Proposed Per-Flow Algorithm

Once the algorithm determines how many intermediate nodes there are along the flow, it calculates the fair amount to allocate to each node using the following formula:

Allocation = BR[f_i] / n_fi    (3)

where BR[f_i] represents the buffer requirement of flow f_i and n_fi is the number of intermediate nodes along the flow. This calculated amount represents how much of the buffer requirement each intermediate node must handle in order to distribute it evenly across the flow. Next, the algorithm performs the allocation itself. It begins at the first intermediate node along the flow and determines whether the node has enough available resources. If not, the node is bypassed, and the next intermediate node, if any, is examined. If enough resources do exist at the node, a portion of its buffer equal to the calculated fair amount is allocated to the flow under consideration. An equal amount of buffer space is released back to the destination node, signaling the transfer of buffering responsibility from the destination to the intermediate node. The allocation process is repeated at all intermediate nodes along the flow until the requirement has been satisfied or the node buffers have been exhausted.
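The two passes of the per-flow algorithm (fair share from Eq. 3, bypassing nodes that cannot hold it, then a second pass over leftover capacity) can be sketched as follows. This is a simplified model under assumed names, not the paper's actual code:

```java
public class PerFlowAllocation {

    // Returns the portion of the buffer requirement that remains at the
    // destination after distribution. nodeFree[i] is the free buffer space
    // at intermediate node i and is updated in place.
    public static double allocate(double requirement, double[] nodeFree) {
        double fairShare = requirement / nodeFree.length; // Eq. (3)
        double remaining = requirement;

        // Pass 1: every node with room for the full fair share takes it;
        // nodes without enough space are bypassed.
        for (int i = 0; i < nodeFree.length && remaining > 0; i++) {
            if (nodeFree[i] >= fairShare) {
                double amount = Math.min(fairShare, remaining);
                nodeFree[i] -= amount;
                remaining -= amount;
            }
        }
        // Pass 2: place any unallocated remainder at nodes with leftover
        // capacity, including those bypassed in pass 1.
        for (int i = 0; i < nodeFree.length && remaining > 0; i++) {
            double amount = Math.min(nodeFree[i], remaining);
            nodeFree[i] -= amount;
            remaining -= amount;
        }
        return remaining;
    }

    public static void main(String[] args) {
        // Fair share is 30 / 3 = 10; the middle node (2 units free) is
        // bypassed in pass 1, then contributes its 2 units in pass 2.
        double[] free = {10.0, 2.0, 10.0};
        System.out.println(allocate(30.0, free)); // 8.0 left at the destination
    }
}
```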
If at the end of the allocation process there remains an unallocated portion of the original requirement, the algorithm will attempt to allocate this remaining amount to intermediate nodes that were not used previously because they did not have enough buffer capacity to accommodate the calculated fair amount. The per-flow algorithm is simple in its implementation, efficient in its execution, and significantly improves the amount of released buffer space as compared to the existing fair allocation algorithm. However, its primary drawback is that a flow with a large buffer requirement can monopolize the entire buffer space of an intermediate node, rendering it unavailable to other flows.

B. Per-Node Algorithm

The per-node algorithm performs buffer allocation in a manner similar to that of the existing model, with several contrasting features. As the flowchart in Figure 3 illustrates, the algorithm has two distinct phases: distribution and allocation. In the distribution phase, the algorithm determines the amount of buffer space each flow is allotted at every node in the network. This allotment is then used to perform the actual buffering in the allocation phase.

Fig. 3. Proposed Per-Node Algorithm

The distribution phase begins by examining every node in the network to determine how many traversing flows there are. Flows that do not have a buffer requirement are filtered out. Next, the algorithm calculates the fair allocation amount to be used for a particular node as

Allocation = Buffer[n] / t_f    (4)

This calculated value, along with the identification number of the corresponding node, is stored and used in the allocation phase. This formula differs from the one used in the existing algorithm in that it no longer includes the number of nodes along the traversing flow as part of the denominator. Once the buffer spaces have been reserved for the flows, the allocation phase can begin. Much like the per-flow model, the algorithm goes flow by flow, and node by node, to perform the allocation. If the flow's buffer requirement has not been fully satisfied after the last intermediate node has been reached, the algorithm restarts the process and attempts to allocate any remaining requirement at each node using the leftover buffer space from the allocation formula, if available.

Although the per-node algorithm is more complex than the existing model, its performance is markedly better, for two distinct reasons. First, the allocation formula has been simplified, and the result provides more dedicated buffer space for every flow at each intermediate node. Second, unlike the existing algorithm, this formula does not include traversing flows that do not require buffering; by removing these flows, more resources are made available to flows that do require them. The per-node algorithm allows a greater amount of buffer space to be released back to the destination node. However, it does not eliminate the problem of resource wastage: if a flow does not utilize all of the dedicated buffer space it has been allocated at an intermediate node, that space goes unused. Making these wasted resources available to other flows holds the potential for further performance gains.

C. Weighted Path Algorithm

This third and final proposed algorithm implements processes aimed at solving the shortcomings of the existing, per-flow, and per-node models under one comprehensive solution. From Figure 4, we see that the weighted path algorithm utilizes a two-phase system similar to that of the per-node model to distribute the available resources and then allocate them to the flows. The distribution phase, however, is more complex, as this algorithm allocates the available buffer space at an intermediate node to flows based on the path delay they experience. This is in contrast to the per-node algorithm, which allocates an equal amount of a node's buffer space to all traversing flows. From Eq. 1, we see that flows that experience a longer path delay require more buffering. Therefore, these flows are likely to require more buffer space than flows with smaller path delays.

Fig. 4. Proposed Weighted Path Algorithm

The algorithm begins by examining every node in the network to determine how many traversing flows require buffering. Next, the algorithm calculates the differential delay, Δd, of every traversing flow, as

Δd = |d_p - d_max|    (5)

The differential delay is defined as the absolute difference between the path delay experienced by the flow, d_p, and the maximum path delay as seen by the sink node of the flow, d_max. The algorithm computes the summation of all these differential delays, and uses these two results to calculate the dedicated buffer space for each traversing flow at an intermediate node, as

Allocation = (Δd / Σ Δd) * Buffer[n]    (6)

This formula provides a weighted amount of buffer space to the traversing flow that is proportional to the path delay it experiences. Any buffer space remaining after this allotment is complete is placed in a free buffer pool, to be used later to satisfy any unallocated requirement. Following the distribution of all available resources, the algorithm can perform the actual allocation. This process also differs from the previous models, as it is performed using a round-robin scheme. In Figure 1, we see that in the existing algorithm, allocation is performed at each node sequentially: the algorithm attempts to allocate as much of the buffer requirement as possible to one node before moving on to the next. Although this is efficient, it can produce an uneven distribution of the requirement across the nodes. In this algorithm, load balancing is achieved by performing allocation in a round-robin manner. For every flow with a buffer requirement, the algorithm examines the intermediate nodes to determine how much buffer space has been reserved for the flow. It then allocates 1 unit of the requirement to every node, one at a time, in round-robin fashion, releasing an equal amount back to the destination node. This process continues until the requirement has been satisfied or the reserved space at all the nodes has been exhausted.

Once complete, the algorithm revisits every intermediate node along the flow to determine whether there are any unused resources still reserved for that flow; these resources are released into the free buffer pool. Finally, if the flow requirement has not been fully satisfied, the algorithm attempts to allocate the remainder using the free buffer pool at every intermediate node, if available. Not only does this process eliminate buffer waste, it can also improve the amount of buffer space released back to the destination node by making unused resources available to other flows in need.

The weighted path algorithm provides more resources to flows that need them, eliminates resource waste, and promotes fairness in the way allocation is performed. It successfully combines the favorable elements of the other fair allocation algorithms that have been proposed. However, the primary weakness of this model is the significantly larger processing time required to execute it: the round-robin allocation process adds significant overhead.

IV. SIMULATION RESULTS

In this section, the performance of the proposed fair allocation algorithms is compared against the existing model using a realistic network topology implemented on a Java platform. The performance is evaluated in terms of the amount of released buffer space, the average processing time to execute the allocation algorithm, and the distribution of the allocated buffer requirement across the network flow.

In order to effectively test the performance of the proposed algorithms, they must be tested under a real network scenario. The topology used is GEANT, a pan-European multi-gigabit computer network primarily used for research and academic purposes. The topology, shown in Figure 5, consists of 33 nodes and 94 links. A total of 15 users have been randomly assigned for the purpose of this simulation. Each user has a source and destination node, and a flow requirement of data they wish to transmit between these two nodes.

Fig. 5. GEANT Network Topology

The first performance metric measured is the amount of buffer space that is released back to the destination nodes as a result of distributing all buffering requirements across the intermediate nodes. This metric is the most indicative of performance, as it measures the effectiveness of the resource allocation and distribution algorithms.
The released buffer space is calculated by comparing the size of the destination buffer prior to distribution with its size afterwards. The program generates an output showing the change in buffer space following distribution for all the nodes in the network. The graph in Figure 6 illustrates the amount of released buffer space in relation to the total available buffer space at the destination nodes. Every node in the network has the same bounded buffer space; since there are 6 unique destination nodes in the simulation testbench, the total amount is the sum of these 6 buffer spaces. In general, the results show a significant increase in the amount of released buffer space for all three proposed algorithms as compared to the existing algorithm. This is a direct result of the modified and improved allocation formulas implemented in all three algorithms. Among the proposed algorithms, the per-flow and weighted models generally perform better than the per-node model. This can be attributed to the buffer-waste avoidance techniques both models employ, which effectively make more resources available to other flows and hence increase the amount of buffer space that is released back to the destination node.
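To make the weighted path mechanism of Section III-C concrete, the delay-weighted reservation (Eqs. 5 and 6) and the 1-unit round-robin allocation can be sketched as follows. This is a simplified model with assumed names, not the simulator's code:

```java
public class WeightedPathAllocation {

    // Eqs. (5)-(6): reserve node n's buffer among its traversing flows in
    // proportion to each flow's differential delay dd = |d_p - d_max|:
    //   Allocation = (dd / sum of dd) * Buffer[n]
    public static double[] reserve(double nodeBuffer, double[] pathDelay, double maxDelay) {
        double[] dd = new double[pathDelay.length];
        double sum = 0.0;
        for (int i = 0; i < pathDelay.length; i++) {
            dd[i] = Math.abs(pathDelay[i] - maxDelay);
            sum += dd[i];
        }
        double[] share = new double[dd.length];
        for (int i = 0; i < dd.length; i++) {
            share[i] = (sum == 0.0) ? 0.0 : dd[i] / sum * nodeBuffer;
        }
        return share;
    }

    // Round-robin phase: allocate 1 unit of the requirement per node per
    // turn until the requirement is met or all reserved space is exhausted.
    // Returns the unsatisfied remainder.
    public static double roundRobin(double requirement, double[] reserved) {
        double remaining = requirement;
        boolean progress = true;
        while (remaining > 0 && progress) {
            progress = false;
            for (int i = 0; i < reserved.length && remaining > 0; i++) {
                if (reserved[i] >= 1.0) {
                    reserved[i] -= 1.0;
                    remaining -= 1.0;
                    progress = true;
                }
            }
        }
        return remaining;
    }

    public static void main(String[] args) {
        // Two flows at a node with 80 units of buffer: the faster path
        // (delay 4 vs. max 10) has the larger differential delay, so it
        // needs more buffering and gets 6/8 of the node's buffer.
        double[] share = reserve(80.0, new double[]{4.0, 8.0}, 10.0);
        System.out.println(share[0] + " " + share[1]); // 60.0 20.0
    }
}
```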

Fig. 6. Released Buffer Space using Fair Allocation Algorithms

The second performance metric examined is the processing time required to execute each of the fair allocation algorithms. The graph in Figure 7 illustrates the processing time of each algorithm in relation to the total available buffer space at the destination nodes. Although most of the fair allocation algorithms experience roughly the same processing time, one does not perform well: the weighted path algorithm experiences a processing time up to 8 times larger than the other models. The significant increase in processing time is due to the added overhead of the round-robin buffer allocation process.

Fig. 7. Processing Time of Fair Allocation Algorithms

The final metric used to assess the performance of the fair allocation algorithms is buffer distribution. Although the results of this test do not improve the amount of released buffer space, it is an interesting metric to examine nonetheless. We are interested in how each algorithm distributes the buffer requirement of a flow across the intermediate nodes. The graph in Figure 8 illustrates the distribution of the flow requirement among the intermediate nodes following allocation. The results show that the weighted path model performs best in terms of distributing the flow requirement among the intermediate nodes as evenly as possible. This is made possible by the round-robin allocation process, in which the flow requirement is allocated to one node at a time in 1 Mbit intervals. The per-flow and per-node models yield disproportionate buffer distributions. This can be attributed to the fact that both models perform allocation sequentially rather than in a round-robin manner.

Fig. 8. Buffer Distribution using Fair Allocation Algorithms

V. CONCLUSION

Multipath routing algorithms achieve higher throughput by dividing a flow across multiple paths. However, due to the different latencies of the multiple paths, packets are received out of order at the destination. By using a buffer at the destination node, packets incoming across the multiple paths can be synchronized. The use of a single buffer is neither scalable nor efficient, particularly when multiple users exist on the network. The alternative is to distribute the buffering requirement across bounded buffers located at the intermediate nodes on the path. In this work, several fair allocation algorithms aimed at improving existing methods are detailed, implemented, and integrated into a multiple-user, multiple-path routing algorithm. A simulation study is presented, and the results indicate that the proposed algorithms do indeed outperform the existing method.

Future work will focus on improving the alternative distribution scheme known as chunk allocation, in particular the development of an algorithm to dynamically select the optimal chunk size. At a higher level, future work will also focus on developing optimal and heuristic multipath routing solutions aimed at networks where delay and link costs are dynamic rather than static.

ACKNOWLEDGEMENTS

This work was supported by NSF grant NeTS-0916743 (ARRA - NSF CNS-0916743).

REFERENCES

[1] N. F. Maxemchuk, "Dispersity Routing on ATM Networks," in Proceedings of IEEE INFOCOM, San Francisco, California, 1993, pp. 347-357.
[2] H. Suzuki and F. A. Tobagi, "Fast bandwidth reservation scheme with multi-link and multi-path routing in ATM networks," in Proceedings of IEEE INFOCOM, Florence, Italy, 1992, pp. 2233-2240.
[3] I. Cidon, R. Rom, and Y. Shavitt, "Analysis of Multi-Path Routing," IEEE/ACM Transactions on Networking, vol. 7, pp. 885-896, December 1999.
[4] T. Anjali, A. Fortin, G. Calinescu, S. Kapoor, N. Kirubanandan, and S. Tongngam, "Multipath Network Flows: Bounded Buffers and Jitter," in Proceedings of IEEE INFOCOM, San Diego, California, 2010, pp. 1-7.
[5] T. Anjali, A. Fortin, and S. Kapoor, "Multi-Commodity Network Flows over Multipaths with Bounded Buffers," in Proceedings of IEEE International Conference on Communications, Cape Town, South Africa, 2010, pp. 1-5.
[6] A. Al-Dhaher, "Distributed Buffering in Multipath Routing," Dept. of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois, 2010.