High Performance Fair Bandwidth Allocation for Resilient Packet Rings


In Proceedings of the 5th ITC Specialist Seminar on Traffic Engineering and Traffic Management

High Performance Fair Bandwidth Allocation for Resilient Packet Rings

V. Gambiroza, Y. Liu, P. Yuan, and E. Knightly
ECE Department, Rice University, Houston, TX 77005, USA

Abstract

The Resilient Packet Ring (RPR) IEEE 802.17 standard is under development as a new high-speed backbone technology for metropolitan area networks. A key performance objective of RPR is to simultaneously achieve high utilization, spatial reuse, and fairness, an objective not achieved by current technologies such as SONET and Gigabit Ethernet nor by legacy ring technologies such as FDDI. The core technical challenge for RPR is the design of a bandwidth allocation algorithm that dynamically achieves these properties. The difficulty lies in the distributed nature of the problem: upstream ring nodes must inject traffic at a rate that reflects congestion and fairness criteria downstream. Unfortunately, the proposed algorithms in the current draft standards have a number of critical limitations. For example, we show that in a two-flow, two-link scenario with unbalanced and constant-rate traffic demand, a draft RPR algorithm suffers from dramatic bandwidth oscillations spanning nearly the entire range of the link capacity. Moreover, such oscillations hinder spatial reuse and decrease throughput significantly. In this paper, we introduce a new dynamic bandwidth allocation algorithm called Distributed Virtual-time Scheduling in Rings (DVSR). The key idea is for nodes to compute a simple lower bound of temporally and spatially aggregated virtual time using per-ingress counters of packet (byte) arrivals. We show that with this information propagated along the ring, each node can remotely approximate the ideal fair rate for its own traffic at each downstream link. Hence, DVSR flows rapidly converge to their ring-wide fair rates while maximizing spatial reuse. To evaluate DVSR, we bound the deviation in service between DVSR and an idealized reference model, thereby bounding the unfairness. With simulations, we find that compared to current techniques, DVSR's convergence times are an order of magnitude faster (e.g., 2 vs. 50 msec), oscillations are largely eliminated, and nearly complete spatial reuse is achieved.

1 Introduction

Current technology choices for high-speed metropolitan ring networks provide a number of unsatisfactory alternatives. A SONET ring can ensure minimum bandwidths (and hence fairness) between any pair of nodes. However, the use of circuits prohibits unused bandwidth from being reclaimed by other flows and results in low utilization. On the other hand, a Gigabit Ethernet (GigE) ring can provide full statistical multiplexing and high utilization, but suffers from unfairness. For example, in the topology of Figure 1, nodes will obtain different throughputs to the core or hub node depending on their spatial location on the ring. Finally, legacy ring technologies such as FDDI do not employ spatial reuse: by using a rotating token such that a node must hold the token to transmit, only one node can transmit at a time.

Footnote: This research is supported by NSF grants and by a Sloan Fellowship.

Figure 1: Illustration of Resilient Packet Ring

The IEEE 802.17 Resilient Packet Ring (RPR) working group was formed to develop a standard for bi-directional packet metropolitan rings.
Unlike FDDI, the protocol supports destination packet removal so that a packet does not traverse all ring nodes and spatial reuse can be achieved. However, allowing spatial reuse introduces a challenge: ensuring fairness among different nodes competing for ring bandwidth. Consequently, the key performance objective of RPR is to simultaneously achieve high utilization, spatial reuse, and fairness.

Footnote: For the reader asking "why a ring?", we note that rings are the most prevalent metro technology, primarily for their protection and fault tolerance properties. Additional RPR goals beyond the scope of this paper include fast fault recovery similar to that of SONET.

Figure 2: Parallel Parking Lot (flows (1,5), (2,5), (3,5), (4,5), and (1,2) on a ten-node ring; the four multi-hop flows share link 4)

To illustrate spatial reuse and fairness, consider the scenario depicted in Figure 2, in which four infinite-demand flows share link 4 en route to destination node 5. In this parallel parking lot example, each of these flows should receive 1/4 of the link bandwidth. Moreover, to fully exploit spatial reuse, flow (1,2) should receive all excess capacity on link 1, which is 3/4 due to the downstream congestion.
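To make these target rates concrete, the following sketch computes them by progressive-filling (water-filling). This is not the paper's algorithm, and the function name and data layout are illustrative assumptions; for this particular topology the flow-level max-min rates happen to coincide with the ingress-aggregated rates described above, so the output reproduces the 1/4 and 3/4 values (capacity normalized to 1).

```python
# A minimal sketch (not the paper's algorithm): progressive-filling max-min
# computation of the rates quoted above for the Parallel Parking Lot of Figure 2.

def max_min_rates(flows, links, capacity=1.0):
    """flows: dict name -> set of links traversed; returns dict name -> rate."""
    rates = {}
    remaining = dict.fromkeys(links, capacity)
    active = {f: set(path) for f, path in flows.items()}
    while active:
        # Find the most constrained link and freeze the flows crossing it.
        share, bottleneck = min(
            (remaining[l] / sum(1 for p in active.values() if l in p), l)
            for l in links
            if any(l in p for p in active.values())
        )
        for f in [f for f, p in active.items() if bottleneck in p]:
            rates[f] = share
            for l in active.pop(f):
                remaining[l] -= share
    return rates

flows = {
    "(1,5)": {1, 2, 3, 4}, "(2,5)": {2, 3, 4},
    "(3,5)": {3, 4}, "(4,5)": {4}, "(1,2)": {1},
}
print(max_min_rates(flows, links=[1, 2, 3, 4]))
# -> the four flows into node 5 each get 0.25; flow (1,2) reclaims 0.75 on link 1
```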

The key technical challenge of RPR is the design of a bandwidth allocation algorithm that can dynamically achieve such rates. Note that to realize this goal, some coordination among nodes is required. For example, if each node performs weighted fair queueing [8], a local operation without coordination among nodes, flows (1,2) and (1,5) would obtain equal bandwidth shares at node 1, so that flow (1,2) would receive a net bandwidth of 1/2 vs. the desired 3/4. Thus, RPR algorithms must throttle traffic at ingress points based on downstream traffic conditions to achieve these rate allocations.

Two draft proposals, Gandalf [6] and Aladdin [5], have been under development and debate since the inception of RPR. Additionally, a third consensus proposal termed Darwin [7] has recently been adopted as a compromise, incorporating most of its algorithmic properties from Gandalf as well as others from Aladdin. Unfortunately, we have found that the proposed algorithms have a number of important performance limitations. First, they are prone to severe and permanent oscillations spanning nearly the entire link bandwidth in simple scenarios with unbalanced input traffic rates or unbalanced congestion on bottleneck links. Second, they are not able to fully achieve spatial reuse and fairness. Third, they require numerous control signal updates to converge to the fair bandwidths, thereby hindering fast responsiveness.

In this paper, we develop a new dynamic bandwidth allocation algorithm termed Distributed Virtual-time Scheduling in Rings (DVSR). Like current implementations, DVSR has a simple FIFO (or Strict Priority) transit path. However, with DVSR, each node uses its per-destination byte counters to construct a simple lower bound on the evolution of the spatially and temporally aggregated virtual time. That is, using measurements available at an RPR node, we compute the minimum cumulative change in virtual time since the receipt of the last control message, as if the node were performing weighted fair queueing at the granularity of ingress-aggregated traffic. By circulating such control information throughout the ring, we show how nodes can perform simple operations on the collected information and throttle their ingress flows to their ideal fair rates as defined in the RIAS reference model of [].

Next, we study the performance of DVSR and RPR algorithms in general using a combination of theoretical analysis and simulation. In particular, we analytically bound DVSR's unfairness due to the use of delayed and time-averaged information in the control signal. We perform ns-2 and OPNET simulations to compare RPR algorithms and obtain insights into problematic scenarios and sources of poor algorithm performance. For example, we show that while DVSR can fully reclaim unused bandwidth in scenarios with unbalanced traffic, Gandalf suffers from permanent oscillation and significant utilization losses. We also show how DVSR's fairness mechanism can provide performance isolation among nodes' throughputs.
For example, in a Parking Lot scenario (Figure 9) with even moderately aggregated TCP flows from one node competing for bandwidth with non-responsive UDP flows from other nodes, all ingress nodes obtain nearly equal throughput shares with DVSR, quite different from the unfair node throughputs obtained with a GigE ring. Finally, we have developed a 1 Gb/sec network processor implementation of DVSR. Details of the testbed and results of our measurement study on an eight-node ring can be found in reference [].

2 Background on IEEE 802.17 RPR

In this section, we describe the objectives of the Resilient Packet Ring (RPR) standard and briefly review a proposal under discussion in the IEEE 802.17 working group, Gandalf [6] (a successor to SRP []). The goal of all fairness algorithms is to achieve RIAS fairness (Ring Ingress Aggregated with Spatial Reuse), a property that we defined in [, ]. RIAS fairness has two key components. The first component defines the level of traffic granularity for fairness determination at a link as an ingress-aggregated (IA) flow, i.e., the aggregate of all flows originating from a given ingress node. The second component of RIAS fairness ensures maximal spatial reuse subject to this first constraint. That is, bandwidth can be reclaimed by IA flows when it is unused, either due to lack of demand or in cases of sufficient demand in which flows are bottlenecked elsewhere. Thus, RIAS allocations are quite different from max-min fairness and proportional fairness [, , 5].

2.1 RPR Node Architecture

Figure 3: Generic RPR Node Architecture (transit in/out path with transit buffer(s), station transmit buffer(s) with per-destination rate controllers, traffic monitor, fair bandwidth allocator, and scheduler)

The architecture of a generic RPR node is illustrated in Figure 3. First, observe that all station traffic entering the ring is first throttled by rate controllers.

Footnote: The Darwin proposal [7], introduced in January 2002, uses most mechanisms from the Gandalf algorithm with additional features from the Aladdin algorithm [5]. We compare DVSR with Aladdin in [].
Footnote: In RPR terminology, station traffic is referred to as transmit traffic; we use the former term to more easily differentiate it from transit traffic.

In the example of the Parallel Parking Lot, it is clear that to fully achieve spatial reuse, flow (1,5) must be throttled to rate 1/4 at its ring ingress point. Second, these rate controllers operate at a per-destination granularity. This allows a type of virtual output queueing analogous to that performed in switches to avoid head-of-line blocking, i.e., if a single link is congested, an ingress node should only throttle its traffic forwarded over that link. Next, RPR nodes have measurement modules (byte counters) to measure demanded and/or serviced station and transit traffic. These measurements are used by the fairness algorithm to compute a feedback control signal that throttles upstream nodes to the desired rates. Nodes that receive the control message use the information in the message, perhaps together with local information, to set the bandwidths for the rate controllers. The final component is the scheduling algorithm that arbitrates service among station and transit traffic. This scheduler is typically FIFO with buffer thresholds or Strict Priority (SP) of transit traffic over station traffic. Both ensure hardware simplicity, and SP ensures that the transit path is lossless, i.e., once a packet is injected into the ring, it will not be dropped at a downstream node. The dynamic bandwidth control algorithm that determines the station rate controller values, and hence the basic fairness and spatial reuse properties of the system, is the key component of RPR protocols and is the focus of the discussion below as well as throughout the paper.

2.2 Gandalf

There are two key measurements for Gandalf's bandwidth control, forward_rate and my_rate. The former represents the service rate of all transit traffic and the latter represents the rate of all serviced station traffic [6]. In both cases, it is important that the rates are measured at the output of the scheduler so that they represent serviced rates rather than offered rates. A node is said to be congested whenever the sum forward_rate[n] + my_rate[n] exceeds a threshold, a fixed value less than the link capacity. When node n is congested, it transmits a control message to upstream nodes containing the normalized service rate of its own station traffic, my_rate[n]. When an upstream node i receives a congestion message advertising my_rate[n], it reduces its rate limiter values, termed allowed_rate[i][j], for all destinations j on the path from i to n. The objective is to have upstream nodes throttle their own station rate controller values to the minimum my_rate value collected along the path to the destination. Thus, station traffic rates will not exceed the advertised my_rate value of any node in the downstream path of a flow.

Footnote: We focus attention on the committed rate class, in which each node obtains a minimum bandwidth share and reclaims unused bandwidth in a weighted fair manner. RPR also defines a guaranteed class in which bandwidth cannot be reclaimed (and hence no such control algorithm is required), and a best effort class that is a special case of committed with a minimum rate of 0.
Footnote: All rates actually represent byte counts over a fixed interval length.

If the upstream node n-1 receiving the congestion message from node n is also congested, it will propagate the message upstream using the minimum of its newly computed allowed_rate and its own measured my_rate[n-1]. If node n-1 is not congested, it sets my_rate[n-1] to a null value to indicate a lack of congestion.
Node n-1 will also set a null rate if it is congested but my_rate[n-1] >= forward_rate[n-1], as this situation indicates that the congestion is due to node n-1's own station traffic rather than transit traffic from further upstream. When nodes receive null messages, they increment their values of allowed_rate by a fixed value. This allows an upstream node to reclaim additional bandwidth if one of the downstream flows reduces its rate. Moreover, such rate increases are essential for convergence to fair rates even in cases of static demand. Considering the above parking lot example, if a downstream node advertises my_rate below the true fair rate (which does indeed occur before convergence), all upstream nodes will throttle to this lower rate; the downstream nodes will later become uncongested, so that flows increase their allowed_rate. This process then oscillates more and more closely around the targeted fair rates for this example. To dampen the system's oscillations during convergence, measurement and control variables such as my_rate and allowed_rate are necessarily low-pass filtered. The logic of the algorithm is that if all nodes sharing the bottleneck send at the rate of the minimum received my_rate, then flow rates are equalized and fairness is achieved. Without congestion, flows should periodically increase their allowed_rate to ensure they are receiving their maximal bandwidth share.

Finally, in Gandalf and Darwin, there is an option for each node to have only a single rate controller instead of the per-destination rate controllers depicted in Figure 3. Note that use of a single rate controller precludes achieving maximal spatial reuse. Returning to the example of Figure 2, flows (1,5) and (1,2) require allowed_rate[1][5] = 1/4 and allowed_rate[1][2] = 3/4 to achieve maximum spatial reuse. However, if only a single value allowed_rate[1] is used, this value must be 1/4. Hence, flows (1,5) and (1,2) would each obtain throughput 1/8, such that the total bandwidth for traffic originating from node 1 is 1/4.

2.3 Discussion

There are a number of important limitations to the current IEEE 802.17 draft algorithms, which we discuss below and explore using simulations in Section 5. First, the algorithms suffer from severe and permanent oscillations for scenarios with unbalanced traffic. For example, consider the Gandalf algorithm in a two-link scenario with two flows, such that flow (1,3) originating upstream has demand for the full link capacity C, and flow (2,3) originating downstream has a low rate which we denote by ε. Hence the demands are constant-rate and unbalanced. Since the traffic arrival rate at the downstream link is C + ε, the downstream link will become congested. Thus, a congestion message will arrive upstream containing the transmission rate of the downstream flow, in this case ε. Consequently, the upstream node must throttle its flow from rate C to rate ε. At this point, the rate on the downstream link is 2ε so that congestion clears, and upon receiving null congestion messages, the upstream flow increases its rate back to C. Repeating the cycle, the upstream flow's rate will permanently oscillate between the full link capacity and the low rate of the downstream flow, as the sketch below illustrates.
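The following sketch reproduces this cycle for the two-link scenario. It is not the draft-standard code; the congestion threshold, additive increase step, and update period are illustrative assumptions chosen only to show the saw-tooth behavior.

```python
# A minimal sketch of the oscillation described above. It is not the Gandalf
# draft itself; the threshold, increase step, and update period are assumptions
# chosen only to illustrate the cycle.

C = 1.0        # link capacity (normalized)
EPS = 0.05     # rate of the downstream flow (2,3)
STEP = 0.05    # additive increase applied on a "null" (no congestion) message

upstream_rate = C          # allowed rate of flow (1,3) at its ingress
for t in range(40):        # one iteration per control-update interval
    downstream_load = upstream_rate + EPS
    congested = downstream_load > 0.95 * C          # assumed congestion threshold
    if congested:
        # Congestion message advertises the downstream station rate (EPS):
        upstream_rate = min(upstream_rate, EPS)
    else:
        # Null message: upstream reclaims bandwidth by a fixed increment.
        upstream_rate = min(C, upstream_rate + STEP)
    print(f"t={t:2d}  upstream rate = {upstream_rate:.2f}")
# The printed rate saw-tooths between EPS and roughly C instead of settling
# near the spatial-reuse value C - EPS.
```

In the actual draft, the quantities involved are also low-pass filtered, which slows the cycle but, as discussed next, does not remove it for this demand pattern.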

There are multiple effects of such oscillations, including throughput loss for these open-loop flows, adverse effects and throughput loss for TCP traffic, increased delay jitter, etc. The key issue is that the congestion signal my_rate does not accurately reflect the congestion status or fair rate at downstream nodes, and hence the system oscillates in search of the correct fair rate. Second, Gandalf is not able to fully exploit spatial reuse. In the above example, the upstream flow's rate should be C - ε to achieve full spatial reuse. Instead, the rate oscillates between ε and C, resulting in lower throughput and only partial spatial reuse. Third, Gandalf suffers from slow convergence times. In particular, to mitigate oscillations even for constant-rate traffic inputs as in the example above, all measurements are low-pass filtered. However, such filtering, when combined with the coarse feedback information, has the effect of delaying convergence (for scenarios where convergence does occur).

3 Distributed Virtual-time Scheduling in Rings (DVSR)

In this section, we devise a distributed algorithm to dynamically realize the bandwidth allocations of the RIAS reference model. Our key technique is to have nodes construct a proxy of virtual time at the ingress-aggregated flow granularity. This proxy is a lower bound on virtual time, temporally aggregated over time and spatially aggregated over traffic flows sharing the same ingress point (IA flows). By distributing this information to other nodes on the ring, all nodes can remotely compute their fair rates at downstream nodes and rate-control their per-destination station traffic to the RIAS fair rates. We first describe the algorithm in an idealized setting, initially considering virtual time as computed in a GPS fluid system [8] with an IA flow granularity. We then progressively remove the impractical assumptions of the idealized setting, leading to the network-processor implementation described in []. As previously, we consider the offered input rate (demanded rate) at time t from ring ingress node i to ring egress node j, and the rate of the per-destination ingress shaper for this same flow.

3.1 Distributed Fair Bandwidth Allocation

The distributed nature of the ring bandwidth allocation problem yields three fundamental issues that must be addressed in algorithm design. First, resources must be remotely controlled, in that an upstream node must throttle its traffic according to congestion at a downstream node. Second, the algorithm must contend with temporally aggregated and delayed control information, in that nodes are only periodically informed about remote conditions, and the received information must be a temporally aggregated summary of conditions since the previous control message. Finally, there are multiple resources to control, with complex interactions among multi-hop flows. We next consider each issue independently.

3.1.1 Remote Fair Queueing

The first concept of DVSR is control of upstream rate controllers via use of ingress-aggregated virtual time as a congestion message received from downstream nodes.
For a single node, this can be conceptually viewed as remotely transmitting packets at the rate at which they would be serviced in a GPS system, where GPS determines packet service order at the granularity of ingress nodes only (as opposed to ingress and egress nodes, micro-flows, etc.).

Figure 4: Illustration of Remote Fair Queueing ((a) GPS server; (b) approximation via rate controllers, a multiplexer (MUX), and feedback with delay D)

Figure 4 illustrates remote bandwidth control for a single resource. In this case, RIAS fairness is identical to flow max-min fairness, so that GPS can serve as the ideal reference scheduler. Conceptually, consider that the depicted multiplexer (labelled MUX in Figure 4(b)) computes virtual time as if it were performing idealized GPS, i.e., the rate of change of virtual time is inversely proportional to the (weighted) number of backlogged flows. The system on the right approximates the service of the (left) GPS system via adaptive rate control using virtual time information. In particular, consider for the moment that the rate controllers receive continuous feedback of the multiplexer's virtual time calculation and that the delay in receipt of this information is zero. The objective is then to set the rate controller values to the flows' service rates in the reference system. In the idealized setting, this can be achieved by the simple observation that the evolution of virtual time reveals the fair rates. In this case, considering a link capacity C and denoting virtual time by v(t), the rate for flow i, and hence the correct rate controller value, is simply C dv(t)/dt when the multiplexer is backlogged, and C otherwise.

Footnote: Note that GPS has fluid service such that all flows are served at identical (or weighted) rates whenever they are backlogged.

For example, consider the four-flow parking lot example of Figure 9. Suppose that the system is initially idle so that v(0) = 0, and that immediately after time 0 flows begin transmitting at infinite rate (i.e., they become infinitely backlogged flows). As soon as the multiplexer depicted in Figure 4(b) becomes backlogged, v(t) has slope 1/4. With this value instantly fed back, all rate controllers are immediately set to C/4 and flows are serviced at their fair rates.
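As a concrete illustration of the relationship between the virtual-time slope and the rate-controller setting, the sketch below performs the fluid-GPS bookkeeping for equal-weight flows; the flow set and normalized capacity are illustrative assumptions rather than values from the paper.

```python
# Minimal fluid-GPS bookkeeping sketch: with equal weights, virtual time v(t)
# advances at rate 1/(number of backlogged flows), and the remote rate
# controller for each flow is set to C * dv/dt. Values are illustrative.

C = 1.0                       # link capacity (normalized)

def virtual_time_slope(backlogged_flows):
    """Rate of change of GPS virtual time for equal-weight flows."""
    n = len(backlogged_flows)
    return 1.0 / n if n > 0 else 1.0   # idle multiplexer: no constraint

# Parking lot of Figure 9: four ingress-aggregated flows share the link.
backlogged = {"(1,5)", "(2,5)", "(3,5)", "(4,5)"}
slope = virtual_time_slope(backlogged)
print("rate controller setting:", C * slope)   # -> 0.25, i.e. C/4
```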

Suppose at some later time the fourth flow shuts off, so that the fair rates are now C/3. As the fourth flow no longer has packets (fluid) in the multiplexer, v(t) will now have slope 1/3 and the rate limiters are set to C/3. Thus, by monitoring virtual time, flows can increase their rates to reclaim unused bandwidth and decrease them as other flows increase their demand. Note that with N flows, the rate controllers will never be set to rates below C/N, the minimum fair rate. Finally, notice that in this ideal fluid system with zero feedback delay, the multiplexer is never more than infinitesimally backlogged, as the moment fluid arrives at the multiplexer, flows are throttled to a rate equal to their GPS service rates. Hence, all buffering and delay is incurred before service by the rate controllers.

3.1.2 Delayed and Temporally Aggregated Control Information

The second key component of distributed bandwidth allocation in rings is that congestion and fairness information shared among nodes is necessarily delayed and temporally aggregated. That is, in the above discussion we assumed that virtual time is continually fed back to the rate controllers without delay. However, in practice, feedback information must be periodically summarized and transmitted in a message to other nodes on the ring. Thus, delayed receipt of summary information is also fundamental to a distributed algorithm.

For the same single-resource example of Figure 4, and for the moment with zero feedback delay, consider that every T seconds the multiplexer transmits a message summarizing the evolution of virtual time over the previous T seconds. If the multiplexer is continuously backlogged in the interval, then the information can be aggregated via a simple time average. If the multiplexer is idle for part of the interval, then additional capacity is available and rate controller values may be further increased accordingly. Moreover, v(t) should not be reset to 0 when the multiplexer goes idle, as we wish to track its increase over the entire window. Thus, denoting by b the fraction of time during the previous interval of length T that the multiplexer is busy serving packets, the rate controller value should be

    r = C (v(t) - v(t-T))/T + (1 - b) C.    (1)

The example depicted in Figure 5 illustrates this time-averaged feedback signal and the need to incorporate b that arises in this case (but not in the above case without time-averaged information). Suppose that the link capacity is 1 packet per second and that T is 10 packet transmission times. If the traffic demand is such that six packets arrive from flow 1 and two packets from flow 2, then both flows are backlogged in the interval [0,4], flow 1 alone in the interval [4,8], and no flows in [8,10]. Thus, since b = 0.8, the rate limiter value according to Equation (1) is 0.8. Note that if both flows increase their demand from their respective rates of 0.6 and 0.2 to this maximum rate controller value of 0.8, congestion will occur and the next cycle will have b = 1 and fair rates of 0.5.

Figure 5: Temporally Aggregated Virtual Time Feedback ((a) traffic arrival for flow 1; (b) traffic arrival for flow 2; (c) virtual time v(t))

Finally, consider that the delay to receive information is given by D. In this case, rate controllers will be set at time t to their average fair rate over the interval [t-T-D, t-D]. Consequently, due to both delayed and time-averaged information, rate controllers necessarily deviate from their ideal values, even in the single-resource example. We consider such effects of T and D analytically in Section 4 and via simulations in Section 5.
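A hedged transcription of this computation, using the busy intervals just described (slope 1/2 while both flows are backlogged, slope 1 while only flow 1 is), confirms the 0.8 value; the function name and structure are ours, not the paper's.

```python
# Sketch of the time-averaged feedback of Equation (1) for the Figure 5 example.
# The busy intervals and slopes are taken from the text; the function is only
# an illustrative restatement, not the paper's implementation.

def rate_controller(delta_v, busy_fraction, T, C=1.0):
    """Equation (1): r = C * delta_v / T + (1 - busy_fraction) * C."""
    return C * delta_v / T + (1.0 - busy_fraction) * C

T = 10.0                       # window, in packet transmission times
# v(t) gains slope 1/2 over [0,4] (two backlogged flows) and slope 1 over [4,8]:
delta_v = 0.5 * 4 + 1.0 * 4    # = 6
busy = 8.0 / T                 # multiplexer busy for 8 of the 10 time units
print(rate_controller(delta_v, busy, T))   # -> 0.8, matching the text
```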
3.1.3 Multi-node RIAS Fairness

There are three components to achieving RIAS fairness in multiple-node scenarios. First, an ingress node must compute its minimum fair rate over the links along its flows' paths. Thus, in the parking lot example, node 1 initially receives fair rates 1, 1/2, 1/3, and 1/4 from the respective nodes on its path and hence sets its ingress rate to 1/4. Second, if an ingress node has multiple flows with different egress nodes sharing a link, it must sub-allocate its per-link IA fair rate to these flows. The first and second steps can be combined by setting the rate limiter value of flow (i,j) to

    rate_limit(i,j) = min over links l on the flow's path of ( F_l / n_il ),    (2)

where F_l is the single-link fair rate at link l as given by Equation (1) and n_il denotes the number of flows at link l with ingress node i.

Footnote: This sub-allocation could also be scaled to the demand. For simplicity, we consider equal sub-allocation here.

Finally, we observe that in certain cases the process takes multiple iterations to converge, even in this still-idealized setting, and hence multiple intervals to realize the RIAS fair rates. The key reason is that nodes cannot express their true demand to all other nodes initially, as they may be bottlenecked elsewhere. For example, consider the scenario illustrated in Figure 6 (Upstream Parallel Parking Lot), in which all flows have infinite demand. After an initial window of duration T, flow (1,6) will be throttled to its RIAS fair rate of 1/4 on link 5. However, flow (1,2) will initially have its rate throttled to 1/2 rather than 3/4, as there is no way yet for node 1 to know that flow (1,6) is bottlenecked elsewhere. Hence it will take a second interval, in which the unused capacity at link 1 can be signalled to node 1, after which flow (1,2) will transmit at its RIAS fair rate of 3/4.
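The following sketch restates Equation (2) in code for the plain parking lot; the data layout (per-link advertised fair rates and per-link counts of same-ingress flows) is an assumption for illustration, not the protocol's message format.

```python
# Sketch of the ingress rate-limit computation of Equation (2). Per-link
# advertised fair rates F_l and the per-ingress flow counts n_il are assumed
# inputs; the values below correspond to the first window of the parking lot
# example (node 1's flow (1,5) crossing links 1-4, capacity normalized to 1).

def rate_limit(path_links, F, n_ingress_flows):
    """min over links l in the flow's path of F[l] / n_ingress_flows[l]."""
    return min(F[l] / n_ingress_flows[l] for l in path_links)

F = {1: 1.0, 2: 1/2, 3: 1/3, 4: 1/4}   # advertised single-link fair rates
n = {1: 1, 2: 1, 3: 1, 4: 1}           # node 1 has one flow on each link
print(rate_limit([1, 2, 3, 4], F, n))  # -> 0.25, the ingress rate for flow (1,5)
```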

Figure 6: Upstream Parallel Parking Lot

In practical scenarios, there are many additional factors affecting convergence time. We experimentally study DVSR's convergence time through simulations in Section 5.

3.2 DVSR Protocol

In the discussion above, we presented DVSR's conceptual operation in an idealized setting. Here, we describe the DVSR protocol as implemented in the simulator and testbed. We divide the discussion into four parts: scheduling of station vs. transit packets, computation of the feedback signal (control message), transmission of the feedback signal, and rate limit computation.

For scheduling, we allow two policies to arbitrate service among station and transit traffic. The first policy is first-in first-out scheduling of all offered traffic (station or transit), and the second is strict priority of transit traffic over station traffic (as performed by the Aladdin algorithm). As with both the Gandalf and Aladdin scheduling policies, the motivation for the choices of FIFO and SP is to have simple scheduling on the transit path amenable to high-speed and cost-effective implementation. In the hardware and simulations, we perform FIFO scheduling, and we discuss tradeoffs between the choice of FIFO vs. SP in Section 3.3.

As above, for hardware simplicity, the node is not required to perform weighted fair queueing nor any of its variants. Hence, virtual time (or an approximation of virtual time) is not available from the scheduler (e.g., if the node were scheduling according to IA-granularity deficit round robin (DRR) [9] or start-time fair queueing [9], the scheduler itself would yield a mechanism to approximate virtual time). Here, we describe a simple mechanism to lower-bound the evolution of virtual time over an interval using per-ingress byte counters. As inputs to the algorithm, a node measures the number of arriving bytes from each ingress node, including the station, over the window of duration T. We denote the measurement at this node from ingress node i as B_i (omitting the node superscript for simplicity).

Footnote: Thus the measurements available to the algorithm are identical to those of the Aladdin draft.

First, we observe that the exact value of v(t) cannot be derived from byte counters alone, as v(t) exposes shared congestion whereas byte counts do not. For example, consider that two packets from two ingress nodes arrive in a window of duration T. If the packets arrive back-to-back, then v increases by 1 over an interval of two packet transmission times. On the other hand, if the packets arrive separately so that their service does not overlap, then v increases by 1 on two separate occasions. Thus, the total increase in the former case is 1 and in the latter case is 2, with both cases having a total backlogging interval of two packet transmission times. However, a simple lower bound on the increase of v can be computed by observing that the minimum increase occurs if all packets arrive at the beginning of the interval. This minimum increase then provides a lower bound F on the true fair rate at this node. Moreover, consider that the byte counts from each ingress node are ordered such that B_1 <= B_2 <= ... <= B_k for the k flows transmitting any traffic during the interval. Then F is computed every T seconds as given by the pseudo-code of Table 1.

Table 1: IA Fair Rate Computation at Intervals of T

    if (b < 1)
        F = B_k / T + (1 - b) * C
    else
        i = 1
        Count = k
        Rcapacity = C
        while ((i < k) && (B_i / T < Rcapacity / Count))
            Rcapacity -= B_i / T
            Count -= 1
            i += 1
        F = Rcapacity / Count
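For readers who prefer running code, here is a hedged Python transcription of Table 1. It follows the reconstruction above (byte counts sorted in increasing order, capacity in bytes/sec) and is only a sketch, not the network-processor implementation.

```python
# Python transcription of the Table 1 pseudo-code (a sketch following the
# reconstruction above, not the network-processor implementation).

def ia_fair_rate(byte_counts, busy_fraction, T, C):
    """Lower-bound IA fair rate F from per-ingress byte counts over a window T.

    byte_counts   -- bytes received from each ingress node during the window
    busy_fraction -- fraction of the window the link was busy (b in the text)
    T, C          -- window length (sec) and link capacity (bytes/sec)
    """
    B = sorted(byte_counts)          # B_1 <= B_2 <= ... <= B_k
    k = len(B)
    if busy_fraction < 1.0:
        # Link not always busy: largest IA rate plus the unused capacity.
        return B[-1] / T + (1.0 - busy_fraction) * C
    # Link fully busy: max-min fair share of the largest IA flow.
    count, remaining = k, C
    for rate in (b / T for b in B[:-1]):
        if rate >= remaining / count:
            break
        remaining -= rate
        count -= 1
    return remaining / count

# Toy check (capacity 100 bytes/sec, T = 1 sec, fully busy window):
print(ia_fair_rate([10, 30, 60], busy_fraction=1.0, T=1.0, C=100.0))
# -> 60.0, the max-min share of the largest ingress aggregate
```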
Note that when b < 1 (the link is not always busy over the previous interval), the fair rate is simply the largest ingress-aggregated flow transmission rate plus the unused capacity. When b = 1, the pseudo-code computes the max-min fair allocation for the largest ingress-aggregated flow, so that F is the max-min fair share of the rates B_i/T.

Implementation of the algorithm has several aspects not yet described. First, b is easily computed by dividing the number of bytes transmitted by C*T, the maximum number of bytes that could be serviced in T. Second, ordering the byte counters such that B_1 <= ... <= B_k requires a sort with complexity O(k log k). For a 64-node ring with shortest-path routing, the maximum value of k is 32, so that k log2 k is 160. Finally, the main while loop in Table 1 has at most k iterations. As DVSR's computational complexity does not increase with the link capacity, and typical values of T are 0.1 to 5 msec, the algorithm is easily performed in real time on our implementation's network processor.

We next address transmission of the feedback signal. In our implementation, we construct a single N-byte control message containing each node's most recently computed value of F, such that the message contains one entry per node for the N-node ring. Upon receiving a control message, node n replaces its entry with its most recently computed value of F as determined according to Table 1.

Footnote: For simplicity of explanation, we consider the link capacity C to be in units of bytes/sec and consider all nodes to have equal weight.

An alternate messaging approach, more similar to Gandalf, is to have each node periodically transmit messages with a single value vs. having all values in a circulating message. Our adopted approach results in fewer control message packet transmissions. The final step is for nodes to determine their rate controller values given their local measurements and the current values of F. This is achieved as described in Section 3.1.3, in which each (ingress) node sub-allocates its per-link fair rates to the flows with different egress nodes.

3.3 Discussion

We make several observations about the DVSR algorithm. First, note that if there are k nodes forwarding traffic through a particular transit node, rate controllers will never be set to rates below C/k, the minimum fair rate. Thus, even if all bandwidth is temporarily reclaimed by other nodes, each node can immediately transmit at this minimum rate; after receiving the next control message, upstream nodes will throttle their rates to achieve fairness at timescales greater than T; until then, packets are serviced in FIFO order. Next, observe that by weighting ingress nodes, any set of minimum rates can be achieved, provided that the sum of such minimum rates is less than the link capacity. Third, while we have chosen FIFO scheduling to arbitrate service order among station and transit traffic, we note that alternate scheduling algorithms can also be used. For example, if transit traffic is given strict priority over station traffic, DVSR will have a lossless transit path. However, the drawback of SP scheduling is that a station could be delayed in initially accessing the ring if upstream nodes have previously reclaimed unused capacity, i.e., the station traffic may not be transmitted until upstream nodes receive the updated control message expressing the station's new demand. A third choice for the scheduler is a variant of weighted fair queueing such as deficit round robin. Compared to FIFO and SP, DRR would have improved short-term fairness at timescales less than T, and outputs of the scheduler could also aid in computing the IA virtual time approximation. The tradeoff is the additional transit-path complexity as compared to FIFO and SP. As described in Section 2, current IEEE 802.17 proposals use strict priority and FIFO with buffer thresholds primarily for transit-path hardware simplicity, and we also adopt FIFO and SP for the DVSR implementation. Finally, we observe that DVSR does not low-pass filter control signal values at transit nodes nor rate limiter values at stations. The key reason is that the system has a natural averaging interval built in via the periodic transmission of control signals. By selecting an expressive control signal (namely, a bound on the average increase in IA virtual time as opposed to the station transmit rate), no further damping is required. We experimentally study the algorithms' relative convergence times in Section 5.

4 Analysis of DVSR Fairness

There are many factors of a realistic system that will result in deviations between DVSR service rates and ideal RIAS fair rates. Here, we isolate the issue of temporal information aggregation and develop a simple theoretical model to study how T impacts system fairness. The technique can easily be extended to study the impact of propagation delay, an issue we omit for brevity.
4.1 Scenario

Figure 7: Single Node Model for DVSR (node buffer and rate controller feeding the scheduler buffer of a GPS server, with feedback to the rate controller)

We consider a simplified but illustrative scenario with remote fair queueing and temporally aggregated feedback as in Figure 4. We further assume that the multiplexer is an ideal fluid GPS server and that the propagation delay is zero. We consider two flows i and j that have infinite demand and are continuously backlogged. For all other flows we consider the worst-case traffic pattern that maximizes the service discrepancy between flows i and j. Thus, Figure 7 depicts the analysis scenario and highlights the relative roles of the node buffer queueing station traffic at the rate controllers vs. the scheduler buffer queueing traffic at transit nodes. We say that a flow is node-backlogged if the buffer at its ingress node's rate controller is non-empty, and that a flow is scheduler-backlogged if the (transit/station) scheduler buffer is non-empty. Moreover, whenever the available service rate at the GPS multiplexer is larger than the rate limiter value in DVSR, the flow is referred to as over-throttled. Likewise, if the available GPS service rate is smaller than the rate limiter value in DVSR, the flow is under-throttled. Note that as we consider flows with infinite demand, flows are always node-backlogged, such that traffic enters the scheduler buffer at the rate controllers' rates. Observe that the scheduler buffer occupancy increases in an under-throttled situation. However, while an over-throttled situation may result in a flow being under-served, it may also be over-served if the flow has traffic queued previously.

Footnote: The true DVSR scheduler, packet FIFO, would be intractable for the analysis below.

4.2 Fairness Bound

To characterize the deviation of DVSR from the reference model for the above scenario, we first derive an upper bound on the total amounts of over- and under-throttled traffic as a function of the averaging interval T. For notational simplicity, we consider fixed-size packets such that time is slotted.

We denote by v(t) the virtual time at time t, let b_k T denote the total non-idle time in the k-th interval of length T, and denote the number of flows (representing ingress nodes) by n. The bound for under-throttled traffic is derived as follows.

Lemma 1. A node-backlogged flow in DVSR can be under-throttled by at most TC/2.

Proof: For a node-backlogged flow, an under-throttled situation occurs when the fair rate decreases, since the flow will temporarily be throttled using the previous, higher rate. In such a case, the average slope of v decreases from one interval to the next. For a system with n flows, the worst case of under-throttling occurs when the slope repeatedly decreases for consecutive periods of duration T. Otherwise, if the fair rate increases, the flow will be over-throttled, and the occupancy of the scheduler buffer decreases during that period. Thus, assuming the flow enters the system at time 0, the total amount of under-throttled traffic by any time is the sum over slots of T times the difference between the previous slot's average fair rate and the current one. This sum telescopes, since each slot's average fair rate is both the service obtained in that slot and the throttle rate applied in the next; and because, for a flow with infinite demand competing with another continuously backlogged flow, the average fair rate lies between C/n and C/2 during an under-throttled period, the telescoping sum is at most (C/2 - C/n)T <= TC/2.

Similarly, the following lemma establishes the bound for the over-throttled case.

Lemma 2. A node-backlogged flow in DVSR can be over-throttled by at most TC/2.

Proof: For a node-backlogged flow, over-throttling occurs when the available fair rate increases. In other words, a flow will be over-throttled when the average slope of v increases from one interval to the next. The worst case is when this occurs for consecutive periods of duration T. In over-throttled situations, the server can potentially be idle; according to DVSR, the total throttled amount for a time slot is then the slot's average virtual-time increase times C plus the unused capacity (1 - b)CT, which is no less than the previous slot's fair share. Thus, assuming the flow enters the system at time 0, the total over-throttling again telescopes to T times the difference between the final and initial average fair rates, which is at most (C/2 - C/n)T <= TC/2.

Lemmas 1 and 2 are illustrated in Figure 8. Let the curve labelled "fair share" denote the cumulative (averaged) fair share for the flow in each time slot given the requirements in this time slot, and let the curve labelled "rate controller" denote the cumulative throttled traffic for the flow. Lemmas 1 and 2 specify that the rate controller curve will be within the range of +/- TC/2 of the fair share curve. Furthermore, let the curve labelled "service obtained" denote the cumulative service for the flow. Then DVSR guarantees that if the flow has infinite demand, its service obtained will not be less than its fair share minus TC/2. This can be justified as follows. As long as the service obtained is less than the throttled traffic (i.e., the flow is scheduler-backlogged), the flow is guaranteed to obtain a fair share of service; hence the slope of the service curve is no less than that of the fair share. Otherwise, the flow is in an over-throttled situation, its service equals its throttled traffic, and from Lemma 2 the throttled traffic is no less than the fair share minus TC/2. Also notice that the service obtained can be no larger than the throttled traffic, so that the service for the flow is within the range of +/- TC/2 of its fair share as well.

Figure 8: Illustration of Fairness Bound (cumulative fair share, rate controller, and service obtained curves vs. time)

From the above analysis, we can easily derive a fairness bound for two flows with infinite demand as follows.

Lemma 3. The service difference during any interval for two flows i and j with infinite demand is bounded by TC under DVSR.

Proof: Observe that scheduler-backlogged flows will get no less than their fair shares due to the GPS scheduler. Therefore, in an under-throttled situation, each flow will receive no less than its fair share. Hence unfairness can only occur during over-throttling. In such a scenario, a flow can only obtain additional service up to its under-throttled amount.
On the other hand, a flow can at most be under-served by its over-throttled amount. From Lemmas 1 and 2, each of these amounts is at most TC/2. Finally, note that for the special case T -> 0, the bound goes to zero, so that DVSR achieves perfect fairness without any over- or under-throttling.

4.3 Discussion

The above methodology can be extended to multiple DVSR nodes, in which each flow has one node buffer (at the ingress point) but multiple scheduler buffers. In this case, under-throttled traffic may be distributed among multiple scheduler buffers. On the other hand, for multiple nodes, to maximize spatial reuse, DVSR rate-controls a flow at the ingress node using the minimum throttling rate over all the links. By substituting the single-node throttling rate with the minimum rate among all links, Lemmas 1 and 2 can be shown to hold for the multiple-node case as well.

Despite the simplified scenario of the above analysis, it does provide a simple if idealized fairness bound of TC. For a 1 Gb/sec ring with 64 nodes and T = 0.5 msec, this corresponds to a moderate maximum unfairness of 500 kb, i.e., 500 kb bounds the service difference between two infinitely backlogged flows under the above assumptions.

5 Simulation Experiments

In this section, we use simulations to study the performance of DVSR and provide comparisons with Gandalf. Moreover, as a baseline we compare with a Gigabit Ethernet (GigE) ring that has no distributed bandwidth control algorithm and simply services arriving packets in first-in first-out order. We divide our study into two parts. First, we study DVSR in the context of the basic RPR goals of achieving spatial reuse and fairness. We also explore interactions between TCP congestion control and DVSR's RIAS fairness objectives. Second, we compare RPR algorithms focusing on convergence time, oscillations, and throughput. We highlight the case of unbalanced traffic that causes throughput degradation for Gandalf and point out its root causes. While our study is necessarily not exhaustive, we choose a number of scenarios that illustrate the challenges and key issues in RPR algorithm design.

All DVSR simulation results are obtained with our ns-2 implementation. All Gandalf results are obtained using the OPNET simulation modules from the IEEE 802.17 development group []. Unless otherwise specified, we consider 622 Mbps links (OC-12), 1 KByte packets, and 0.1 msec link propagation delay between each pair of nodes. For a ring of 10 nodes, we set T to be 1 msec, such that one DVSR control packet continually circulates around the ring.

5.1 Fairness and Spatial Reuse

5.1.1 Fairness in the Parking Lot

Figure 9: Parking Lot (flows (1,5), (2,5), (3,5), and (4,5) on a ten-node ring)

We first consider the parking lot scenario with a ten-node ring as depicted in Figure 9 and widely studied in []. Four constant-rate UDP flows (1,5), (2,5), (3,5), and (4,5) each transmit at an offered traffic rate of 622 Mbps, and we measure each flow's throughput at node 5. We perform the experiment with DVSR, Gandalf, and GigE (for comparison, we set the GigE link rate to 622 Mbps) and present the results in Figure 10. The figure depicts the average normalized throughput for each flow over the 5 second simulation, i.e., the total received traffic at node 5 divided by the simulation time.

Footnote: GigE nodes can employ a simple form of coordination (backpressure) by sending collision signals when buffers are full.

Figure 10: Parking Lot (normalized throughput per flow)
The key issue is whether DVSR s reclaiming of unused capacity to achieve spatial reuse will hinder the throughput of the TCP traffic. Normalized Throughput flows flows flows TCP flow (,5) Flow UDP flows (-,5) Figure : DVSR s TCP and UDP Flow Bandwidth Shares To answer this question, we consider the same parking For DVSR, we have repeated these and other experiments with Pareto on-off flows with various parameters and found identical average throughputs. The issue of variable rate traffic is more precisely explored with the TCP and convergence-time experiments below. Current Gandalf OPNET modules are not compatible with TCP modules, hence we limit our discussion to DVSR.

To answer this question, we consider the same parking lot topology of Figure 9 and replace flow (1,5) with multiple TCP micro-flows, where each micro-flow is a long-lived TCP Reno flow (e.g., each representing a large file transfer). The remaining three flows are constant-rate UDP flows, each with rate 0.3 (186.6 Mbps). Ideally the TCP traffic would obtain throughput 0.25, which is the RIAS fair rate between nodes 1 and 5. However, Figure 11 indicates that whether this rate is achieved depends on the number of TCP micro-flows composing flow (1,5). For example, with only 5 TCP micro-flows, the total TCP throughput for flow (1,5) is 0.17, considerably above the pure excess capacity of 0.1, but below the target of 0.25. The key reason is that upon detecting loss, the TCP flows reduce their rate, providing further excess capacity for the aggressive UDP flows to reclaim. The TCP flows can eventually reclaim that capacity via linear increase of their rate in the congestion avoidance phase, but their throughput suffers on average. However, this effect is mitigated with additional aggregated TCP micro-flows, such that with sufficiently many micro-flows the TCP traffic is able to obtain the same share of ring bandwidth as the UDP flows. The reason is that with highly aggregated traffic, loss events do not present the UDP traffic with a significant opportunity to reclaim excess bandwidth, and DVSR can fully achieve RIAS fairness. In contrast, with a GigE ring the same TCP traffic obtains a throughput share significantly below its fair share of 25%. Thus, GigE rings cannot provide the node-level performance isolation provided by DVSR rings.

5.1.3 RIAS vs. Proportional Fairness for TCP Traffic

Figure 12: DVSR Throughputs for TCP Micro-Flows (one TCP flow per ingress node, DVSR vs. GigE)

Next, we consider the case in which each of the four flows in the parking lot is a single TCP micro-flow, and present the corresponding throughputs for DVSR and GigE in Figure 12. As expected, with a GigE ring the flows with the fewest hops and lowest round-trip time receive the largest bandwidth shares. However, DVSR seeks to eliminate such spatial bias and provide all ingress nodes with an equal share. For DVSR and a single flow per ingress this is achieved approximately, and the margin narrows further with multiple TCP micro-flows per ingress node (not shown). Thus, with sufficiently aggregated TCP traffic, a DVSR ring appears as a single node to TCP flows, such that there is no bias toward different RTTs.

5.1.4 Spatial Reuse in the Parallel Parking Lot

We now consider the spatial reuse scenario of the Parallel Parking Lot (Figure 2), again with each flow offering traffic at the full link capacity (and hence, a balanced traffic load). The rates that achieve IA fairness while maximizing spatial reuse are 0.25 for all flows except flow (1,2), which should receive all excess capacity on link 1 and hence rate 0.75.

Figure 13: Spatial Reuse in the Parallel Parking Lot (normalized throughput per flow for DVSR and GigE)

Figure 13 shows that the average throughput of each flow under DVSR is within a few percent of the RIAS fair rates. Gandalf can also achieve these ideal rates within the same range if the per-destination queue option is invoked. In contrast, as with the Parking Lot example, GigE favors downstream flows at the bottleneck link and diverges significantly from the RIAS fair rates.

5.2 Comparison of RPR Algorithms

5.2.1 Convergence

In this experiment, we study the convergence times of the algorithms using the parking lot topology and UDP flows with normalized rate 0.4
(248.8 Mbps). The flows' starting times are staggered, with flows (1,5), (2,5), (3,5), and (4,5) beginning transmission in that order at staggered times. Figures 14(a) and (b) depict the throughput over windows of duration T. Observe that DVSR converges in two ring times, i.e., 2 msec, whereas Gandalf takes approximately 50 msec to converge. Moreover, the range of oscillation during convergence is significantly reduced for DVSR as compared to Gandalf. However, note that the algorithms have a significantly different number of control messages: Gandalf's control update interval is fixed at 0.1 msec, so that it has received 500 control messages in the 50 msec before converging, whereas DVSR has received 2 control messages in 2 msec. For each of the algorithms, we also explore the sensitivity of the convergence time to the link propagation delay and the feedback update time. We find that in both cases the relationships are largely linear across the range of delays of interest for metropolitan networks. For example, with link propagation delays increased by a factor of ten so that the ring time is 10 msec, DVSR takes slightly more than two ring times to converge.


More information

Local Area Networks (LANs) SMU CSE 5344 /

Local Area Networks (LANs) SMU CSE 5344 / Local Area Networks (LANs) SMU CSE 5344 / 7344 1 LAN/MAN Technology Factors Topology Transmission Medium Medium Access Control Techniques SMU CSE 5344 / 7344 2 Topologies Topology: the shape of a communication

More information

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks

Hybrid Control and Switched Systems. Lecture #17 Hybrid Systems Modeling of Communication Networks Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks João P. Hespanha University of California at Santa Barbara Motivation Why model network traffic? to validate

More information

Optical Packet Switching

Optical Packet Switching Optical Packet Switching DEISNet Gruppo Reti di Telecomunicazioni http://deisnet.deis.unibo.it WDM Optical Network Legacy Networks Edge Systems WDM Links λ 1 λ 2 λ 3 λ 4 Core Nodes 2 1 Wavelength Routing

More information

There are 10 questions in total. Please write your SID on each page.

There are 10 questions in total. Please write your SID on each page. Name: SID: Department of EECS - University of California at Berkeley EECS122 - Introduction to Communication Networks - Spring 2005 to the Final: 5/20/2005 There are 10 questions in total. Please write

More information

Chapter 12. Routing and Routing Protocols 12-1

Chapter 12. Routing and Routing Protocols 12-1 Chapter 12 Routing and Routing Protocols 12-1 Routing in Circuit Switched Network Many connections will need paths through more than one switch Need to find a route Efficiency Resilience Public telephone

More information

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007

CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 CS 344/444 Computer Network Fundamentals Final Exam Solutions Spring 2007 Question 344 Points 444 Points Score 1 10 10 2 10 10 3 20 20 4 20 10 5 20 20 6 20 10 7-20 Total: 100 100 Instructions: 1. Question

More information

Utility-Based Rate Control in the Internet for Elastic Traffic

Utility-Based Rate Control in the Internet for Elastic Traffic 272 IEEE TRANSACTIONS ON NETWORKING, VOL. 10, NO. 2, APRIL 2002 Utility-Based Rate Control in the Internet for Elastic Traffic Richard J. La and Venkat Anantharam, Fellow, IEEE Abstract In a communication

More information

Improvement of Resilient Packet Ring Fairness February 2005 Revised August 2005

Improvement of Resilient Packet Ring Fairness February 2005 Revised August 2005 Improvement of Resilient Packet Ring Fairness February 5 Revised August 5 Fredrik Davik Simula Research Laboratory University of Oslo Ericsson Research Norway Email: bjornfd@simula.no Stein Gjessing Simula

More information

ECE 610: Homework 4 Problems are taken from Kurose and Ross.

ECE 610: Homework 4 Problems are taken from Kurose and Ross. ECE 610: Homework 4 Problems are taken from Kurose and Ross. Problem 1: Host A and B are communicating over a TCP connection, and Host B has already received from A all bytes up through byte 248. Suppose

More information

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks

Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Analysis of Binary Adjustment Algorithms in Fair Heterogeneous Networks Sergey Gorinsky Harrick Vin Technical Report TR2000-32 Department of Computer Sciences, University of Texas at Austin Taylor Hall

More information

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem

Congestion Control. Andreas Pitsillides University of Cyprus. Congestion control problem Congestion Control Andreas Pitsillides 1 Congestion control problem growing demand of computer usage requires: efficient ways of managing network traffic to avoid or limit congestion in cases where increases

More information

Chapter 24 Congestion Control and Quality of Service 24.1

Chapter 24 Congestion Control and Quality of Service 24.1 Chapter 24 Congestion Control and Quality of Service 24.1 Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. 24-1 DATA TRAFFIC The main focus of congestion control

More information

Adaptive Routing. Claudio Brunelli Adaptive Routing Institute of Digital and Computer Systems / TKT-9636

Adaptive Routing. Claudio Brunelli Adaptive Routing Institute of Digital and Computer Systems / TKT-9636 1 Adaptive Routing Adaptive Routing Basics Minimal Adaptive Routing Fully Adaptive Routing Load-Balanced Adaptive Routing Search-Based Routing Case Study: Adapted Routing in the Thinking Machines CM-5

More information

Lecture 5: Performance Analysis I

Lecture 5: Performance Analysis I CS 6323 : Modeling and Inference Lecture 5: Performance Analysis I Prof. Gregory Provan Department of Computer Science University College Cork Slides: Based on M. Yin (Performability Analysis) Overview

More information

048866: Packet Switch Architectures

048866: Packet Switch Architectures 048866: Packet Switch Architectures Output-Queued Switches Deterministic Queueing Analysis Fairness and Delay Guarantees Dr. Isaac Keslassy Electrical Engineering, Technion isaac@ee.technion.ac.il http://comnet.technion.ac.il/~isaac/

More information

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet

Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks. Congestion Control in Today s Internet Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks Ion Stoica CMU Scott Shenker Xerox PARC Hui Zhang CMU Congestion Control in Today s Internet Rely

More information

Improving the Data Scheduling Efficiency of the IEEE (d) Mesh Network

Improving the Data Scheduling Efficiency of the IEEE (d) Mesh Network Improving the Data Scheduling Efficiency of the IEEE 802.16(d) Mesh Network Shie-Yuan Wang Email: shieyuan@csie.nctu.edu.tw Chih-Che Lin Email: jclin@csie.nctu.edu.tw Ku-Han Fang Email: khfang@csie.nctu.edu.tw

More information

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks

Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks Performance of Multihop Communications Using Logical Topologies on Optical Torus Networks X. Yuan, R. Melhem and R. Gupta Department of Computer Science University of Pittsburgh Pittsburgh, PA 156 fxyuan,

More information

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols

Chapter II. Protocols for High Speed Networks. 2.1 Need for alternative Protocols Chapter II Protocols for High Speed Networks 2.1 Need for alternative Protocols As the conventional TCP suffers from poor performance on high bandwidth delay product links [47] meant for supporting transmission

More information

COMP/ELEC 429/556 Introduction to Computer Networks

COMP/ELEC 429/556 Introduction to Computer Networks COMP/ELEC 429/556 Introduction to Computer Networks Weighted Fair Queuing Some slides used with permissions from Edward W. Knightly, T. S. Eugene Ng, Ion Stoica, Hui Zhang T. S. Eugene Ng eugeneng at cs.rice.edu

More information

Lecture 4 Wide Area Networks - Routing

Lecture 4 Wide Area Networks - Routing DATA AND COMPUTER COMMUNICATIONS Lecture 4 Wide Area Networks - Routing Mei Yang Based on Lecture slides by William Stallings 1 ROUTING IN PACKET SWITCHED NETWORK key design issue for (packet) switched

More information

William Stallings Data and Computer Communications. Chapter 10 Packet Switching

William Stallings Data and Computer Communications. Chapter 10 Packet Switching William Stallings Data and Computer Communications Chapter 10 Packet Switching Principles Circuit switching designed for voice Resources dedicated to a particular call Much of the time a data connection

More information

THERE are a growing number of Internet-based applications

THERE are a growing number of Internet-based applications 1362 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 14, NO. 6, DECEMBER 2006 The Stratified Round Robin Scheduler: Design, Analysis and Implementation Sriram Ramabhadran and Joseph Pasquale Abstract Stratified

More information

Merits of Open Loop. Siamack Ayandeh Onex Communications Corp. a subsidiary of. TranSwitch Corp.

Merits of Open Loop. Siamack Ayandeh Onex Communications Corp. a subsidiary of. TranSwitch Corp. One TM Merits of Open Loop Siamack Ayandeh sayandeh@onexco.com Onex Communications Corp a subsidiary of TranSwitch Corp. Outline Allows for dynamic partitioning between the High and Low priority traffic

More information

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks

Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Modelling a Video-on-Demand Service over an Interconnected LAN and ATM Networks Kok Soon Thia and Chen Khong Tham Dept of Electrical Engineering National University of Singapore Tel: (65) 874-5095 Fax:

More information

Network Support for Multimedia

Network Support for Multimedia Network Support for Multimedia Daniel Zappala CS 460 Computer Networking Brigham Young University Network Support for Multimedia 2/33 make the best of best effort use application-level techniques use CDNs

More information

IEEE RPR Performance (Corner Cases)

IEEE RPR Performance (Corner Cases) IEEE 802.17 (Corner Cases), AmerNet Adisak Mekkittikul, Lantern Interim Meeting Orlando, FL Motivation Need to simulate corner case scenarios and ensure that the proposed standard works under these conditions

More information

Network Model for Delay-Sensitive Traffic

Network Model for Delay-Sensitive Traffic Traffic Scheduling Network Model for Delay-Sensitive Traffic Source Switch Switch Destination Flow Shaper Policer (optional) Scheduler + optional shaper Policer (optional) Scheduler + optional shaper cfla.

More information

CS244a: An Introduction to Computer Networks

CS244a: An Introduction to Computer Networks Do not write in this box MCQ 9: /10 10: /10 11: /20 12: /20 13: /20 14: /20 Total: Name: Student ID #: CS244a Winter 2003 Professor McKeown Campus/SITN-Local/SITN-Remote? CS244a: An Introduction to Computer

More information

CS244 Advanced Topics in Computer Networks Midterm Exam Monday, May 2, 2016 OPEN BOOK, OPEN NOTES, INTERNET OFF

CS244 Advanced Topics in Computer Networks Midterm Exam Monday, May 2, 2016 OPEN BOOK, OPEN NOTES, INTERNET OFF CS244 Advanced Topics in Computer Networks Midterm Exam Monday, May 2, 2016 OPEN BOOK, OPEN NOTES, INTERNET OFF Your Name: Answers SUNet ID: root @stanford.edu In accordance with both the letter and the

More information

DQDB Networks with and without Bandwidth Balancing

DQDB Networks with and without Bandwidth Balancing DQDB Networks with and without Bandwidth Balancing Ellen L. Hahne Abhijit K. Choudhury Nicholas F. Maxemchuk AT&T Bell Laboratories Murray Hill, NJ 07974 ABSTRACT This paper explains why long Distributed

More information

Resource Allocation and Queuing Theory

Resource Allocation and Queuing Theory and Modeling Modeling Networks Outline 1 Introduction Why are we waiting?... 2 Packet-Switched Network Connectionless Flows Service Model Router-Centric versus Host-Centric Reservation Based versus Feedback-Based

More information

Variable Step Fluid Simulation for Communication Network

Variable Step Fluid Simulation for Communication Network Variable Step Fluid Simulation for Communication Network Hongjoong Kim 1 and Junsoo Lee 2 1 Korea University, Seoul, Korea, hongjoong@korea.ac.kr 2 Sookmyung Women s University, Seoul, Korea, jslee@sookmyung.ac.kr

More information

Congestion control in TCP

Congestion control in TCP Congestion control in TCP If the transport entities on many machines send too many packets into the network too quickly, the network will become congested, with performance degraded as packets are delayed

More information

INTERNATIONAL TELECOMMUNICATION UNION

INTERNATIONAL TELECOMMUNICATION UNION INTERNATIONAL TELECOMMUNICATION UNION TELECOMMUNICATION STANDARDIZATION SECTOR STUDY PERIOD 21-24 English only Questions: 12 and 16/12 Geneva, 27-31 January 23 STUDY GROUP 12 DELAYED CONTRIBUTION 98 Source:

More information

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering

McGill University - Faculty of Engineering Department of Electrical and Computer Engineering McGill University - Faculty of Engineering Department of Electrical and Computer Engineering ECSE 494 Telecommunication Networks Lab Prof. M. Coates Winter 2003 Experiment 5: LAN Operation, Multiple Access

More information

Improving Internet Performance through Traffic Managers

Improving Internet Performance through Traffic Managers Improving Internet Performance through Traffic Managers Ibrahim Matta Computer Science Department Boston University Computer Science A Glimpse of Current Internet b b b b Alice c TCP b (Transmission Control

More information

INGRESS RATE CONTROL IN RESILIENT PACKET RINGS

INGRESS RATE CONTROL IN RESILIENT PACKET RINGS ABSTRACT THOMBARE, ASAVARI ANIL. Ingress Rate Control in Resilient Packet Rings. (Under the direction of Dr. Ioannis Viniotis.) Resilient Packet Ring (RPR) Protocol is becoming popular in Metro Networks

More information

H3C S9500 QoS Technology White Paper

H3C S9500 QoS Technology White Paper H3C Key words: QoS, quality of service Abstract: The Ethernet technology is widely applied currently. At present, Ethernet is the leading technology in various independent local area networks (LANs), and

More information

Computer Network Fundamentals Spring Week 3 MAC Layer Andreas Terzis

Computer Network Fundamentals Spring Week 3 MAC Layer Andreas Terzis Computer Network Fundamentals Spring 2008 Week 3 MAC Layer Andreas Terzis Outline MAC Protocols MAC Protocol Examples Channel Partitioning TDMA/FDMA Token Ring Random Access Protocols Aloha and Slotted

More information

CS244a: An Introduction to Computer Networks

CS244a: An Introduction to Computer Networks Name: Student ID #: Campus/SITN-Local/SITN-Remote? MC MC Long 18 19 TOTAL /20 /20 CS244a: An Introduction to Computer Networks Final Exam: Thursday February 16th, 2000 You are allowed 2 hours to complete

More information

Fairness Example: high priority for nearby stations Optimality Efficiency overhead

Fairness Example: high priority for nearby stations Optimality Efficiency overhead Routing Requirements: Correctness Simplicity Robustness Under localized failures and overloads Stability React too slow or too fast Fairness Example: high priority for nearby stations Optimality Efficiency

More information

Congestion Control. COSC 6590 Week 2 Presentation By Arjun Chopra, Kashif Ali and Mark Obsniuk

Congestion Control. COSC 6590 Week 2 Presentation By Arjun Chopra, Kashif Ali and Mark Obsniuk Congestion Control COSC 6590 Week 2 Presentation By Arjun Chopra, Kashif Ali and Mark Obsniuk Topics Congestion control TCP and the internet AIMD congestion control Equation Based congestion control Comparison

More information

Transmission Control Protocol. ITS 413 Internet Technologies and Applications

Transmission Control Protocol. ITS 413 Internet Technologies and Applications Transmission Control Protocol ITS 413 Internet Technologies and Applications Contents Overview of TCP (Review) TCP and Congestion Control The Causes of Congestion Approaches to Congestion Control TCP Congestion

More information

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Investigating the Use of Synchronized Clocks in TCP Congestion Control Investigating the Use of Synchronized Clocks in TCP Congestion Control Michele Weigle (UNC-CH) November 16-17, 2001 Univ. of Maryland Symposium The Problem TCP Reno congestion control reacts only to packet

More information

Computer Networking. Queue Management and Quality of Service (QOS)

Computer Networking. Queue Management and Quality of Service (QOS) Computer Networking Queue Management and Quality of Service (QOS) Outline Previously:TCP flow control Congestion sources and collapse Congestion control basics - Routers 2 Internet Pipes? How should you

More information

EE 122 Fall st Midterm. Professor: Lai Stoica

EE 122 Fall st Midterm. Professor: Lai Stoica EE 122 Fall 2001 1 st Midterm Professor: Lai Stoica Question 1 (15 pt) Layering is a key design principle in computer networks. Name two advantages, and one disadvantage to layering. Explain. Use no more

More information

A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services

A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 8, NO. 2, APRIL 2000 185 A Hierarchical Fair Service Curve Algorithm for Link-Sharing, Real-Time, and Priority Services Ion Stoica, Hui Zhang, Member, IEEE, and

More information

Worst-case Ethernet Network Latency for Shaped Sources

Worst-case Ethernet Network Latency for Shaped Sources Worst-case Ethernet Network Latency for Shaped Sources Max Azarov, SMSC 7th October 2005 Contents For 802.3 ResE study group 1 Worst-case latency theorem 1 1.1 Assumptions.............................

More information

RESILIENT PACKET RING TECHNOLOGY

RESILIENT PACKET RING TECHNOLOGY 1 RESILIENT PACKET RING TECHNOLOGY INTRODUCTION 1. An important trend in networking is the migration of packet-based technologies from Local Area Networks to Metropolitan Area Networks (MANs). The rapidly

More information

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation

Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Achieving Distributed Buffering in Multi-path Routing using Fair Allocation Ali Al-Dhaher, Tricha Anjali Department of Electrical and Computer Engineering Illinois Institute of Technology Chicago, Illinois

More information

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA

CS 556 Advanced Computer Networks Spring Solutions to Midterm Test March 10, YOUR NAME: Abraham MATTA CS 556 Advanced Computer Networks Spring 2011 Solutions to Midterm Test March 10, 2011 YOUR NAME: Abraham MATTA This test is closed books. You are only allowed to have one sheet of notes (8.5 11 ). Please

More information

Call Admission Control in IP networks with QoS support

Call Admission Control in IP networks with QoS support Call Admission Control in IP networks with QoS support Susana Sargento, Rui Valadas and Edward Knightly Instituto de Telecomunicações, Universidade de Aveiro, P-3810 Aveiro, Portugal ECE Department, Rice

More information

Multiple Access Protocols

Multiple Access Protocols Multiple Access Protocols Computer Networks Lecture 2 http://goo.gl/pze5o8 Multiple Access to a Shared Channel The medium (or its sub-channel) may be shared by multiple stations (dynamic allocation) just

More information

Answers to Sample Questions on Transport Layer

Answers to Sample Questions on Transport Layer Answers to Sample Questions on Transport Layer 1) Which protocol Go-Back-N or Selective-Repeat - makes more efficient use of network bandwidth? Why? Answer: Selective repeat makes more efficient use of

More information

Congestion Control in Communication Networks

Congestion Control in Communication Networks Congestion Control in Communication Networks Introduction Congestion occurs when number of packets transmitted approaches network capacity Objective of congestion control: keep number of packets below

More information

This Lecture. BUS Computer Facilities Network Management. Switching Network. Simple Switching Network

This Lecture. BUS Computer Facilities Network Management. Switching Network. Simple Switching Network This Lecture BUS0 - Computer Facilities Network Management Switching networks Circuit switching Packet switching gram approach Virtual circuit approach Routing in switching networks Faculty of Information

More information

Master s Thesis. Title. Supervisor Professor Masayuki Murata. Author Yuki Koizumi. February 15th, 2006

Master s Thesis. Title. Supervisor Professor Masayuki Murata. Author Yuki Koizumi. February 15th, 2006 Master s Thesis Title Cross-Layer Traffic Engineering in IP over WDM Networks Supervisor Professor Masayuki Murata Author Yuki Koizumi February 15th, 2006 Graduate School of Information Science and Technology

More information

On TCP friendliness of VOIP traffic

On TCP friendliness of VOIP traffic On TCP friendliness of VOIP traffic By Rashmi Parthasarathy WSU ID # 10975537 A report submitted in partial fulfillment of the requirements of CptS 555 Electrical Engineering and Computer Science Department

More information

THE fundamental philosophy behind the Internet is

THE fundamental philosophy behind the Internet is IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 12, NO. 1, FEBRUARY 2004 173 Network Border Patrol: Preventing Congestion Collapse and Promoting Fairness in the Internet Célio Albuquerque, Member, IEEE, Brett

More information

ECEN Final Exam Fall Instructor: Srinivas Shakkottai

ECEN Final Exam Fall Instructor: Srinivas Shakkottai ECEN 424 - Final Exam Fall 2013 Instructor: Srinivas Shakkottai NAME: Problem maximum points your points Problem 1 10 Problem 2 10 Problem 3 20 Problem 4 20 Problem 5 20 Problem 6 20 total 100 1 2 Midterm

More information

Queuing Mechanisms. Overview. Objectives

Queuing Mechanisms. Overview. Objectives Queuing Mechanisms Overview Objectives This module describes the queuing mechanisms that can be used on output interfaces. It includes the following topics: Queuing Overview FIFO Queuing Priority Queuing

More information

Improving TCP Performance over Wireless Networks using Loss Predictors

Improving TCP Performance over Wireless Networks using Loss Predictors Improving TCP Performance over Wireless Networks using Loss Predictors Fabio Martignon Dipartimento Elettronica e Informazione Politecnico di Milano P.zza L. Da Vinci 32, 20133 Milano Email: martignon@elet.polimi.it

More information

Kommunikationssysteme [KS]

Kommunikationssysteme [KS] Kommunikationssysteme [KS] Dr.-Ing. Falko Dressler Computer Networks and Communication Systems Department of Computer Sciences University of Erlangen-Nürnberg http://www7.informatik.uni-erlangen.de/~dressler/

More information

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation

A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks. Motivation A Hybrid Systems Modeling Framework for Fast and Accurate Simulation of Data Communication Networks Stephan Bohacek João P. Hespanha Junsoo Lee Katia Obraczka University of Delaware University of Calif.

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

Communication using Multiple Wireless Interfaces

Communication using Multiple Wireless Interfaces Communication using Multiple Interfaces Kameswari Chebrolu and Ramesh Rao Department of ECE University of California, San Diego Abstract With the emergence of different wireless technologies, a mobile

More information

Adaptive RTP Rate Control Method

Adaptive RTP Rate Control Method 2011 35th IEEE Annual Computer Software and Applications Conference Workshops Adaptive RTP Rate Control Method Uras Tos Department of Computer Engineering Izmir Institute of Technology Izmir, Turkey urastos@iyte.edu.tr

More information

William Stallings Data and Computer Communications 7 th Edition. Chapter 12 Routing

William Stallings Data and Computer Communications 7 th Edition. Chapter 12 Routing William Stallings Data and Computer Communications 7 th Edition Chapter 12 Routing Routing in Circuit Switched Network Many connections will need paths through more than one switch Need to find a route

More information

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca

Homework 1. Question 1 - Layering. CSCI 1680 Computer Networks Fonseca CSCI 1680 Computer Networks Fonseca Homework 1 Due: 27 September 2012, 4pm Question 1 - Layering a. Why are networked systems layered? What are the advantages of layering? Are there any disadvantages?

More information

Experimental Framework and Simulator for the MAC of Power-Line Communications

Experimental Framework and Simulator for the MAC of Power-Line Communications Experimental Framework and Simulator for the MAC of Power-Line Communications Technical Report Christina Vlachou EPFL, Switzerland christinavlachou@epflch Julien Herzen EPFL, Switzerland julienherzen@epflch

More information

A Probabilistic Approach for Achieving Fair Bandwidth Allocations in CSFQ

A Probabilistic Approach for Achieving Fair Bandwidth Allocations in CSFQ A Probabilistic Approach for Achieving Fair Bandwidth Allocations in Peng Wang David L. Mills Department of Electrical & Computer Engineering University of Delaware Newark, DE 976 pwangee@udel.edu; mills@eecis.udel.edu

More information