Aggregate Flow Control: Improving Assurances for Differentiated Services Network


Biswajit Nandy, Jeremy Ethridge, Abderrahmane Lakas, Alan Chapman {bnandy, jethridg, alakas, Nortel Networks, Ottawa, Canada

Abstract: The Differentiated Services architecture is a simple, but novel, approach for providing service differentiation in an IP network. However, various issues must be addressed before any sophisticated end-to-end services can be offered. This work proposes an Aggregate Flow Control (AFC) technique, combined with a Diffserv traffic conditioner, to improve the bandwidth and delay assurance of differentiated services. A prototype has been developed to study the end-to-end behavior of customer aggregates. In particular, this new approach improves performance in the following manner: (1) fairness among aggregated customer traffic, covering aggregates with different numbers of micro-flows, the interaction of non-responsive traffic (UDP) and responsive traffic (TCP), and the effect of different packet sizes in aggregates; (2) improved transactions per second for short TCP flows; and (3) reduced inter-packet delay variation for streaming UDP traffic. Experiments are also performed in a topology with multiple congestion points to show an improved treatment of conformant aggregates, and the ability of AFC to handle multiple aggregates and differing target rates.

Keywords: Aggregate Flow Control, Congestion Management, Diffserv, TCP-friendly.

I. INTRODUCTION

The Differentiated Services (Diffserv) architecture [2] has recently become the preferred method for addressing QoS issues in IP networks. An end-to-end differentiated service is obtained through the concatenation of per-domain services and Service Level Agreements (SLAs) between adjoining domains along the traffic path, from source to destination. Per-domain services are realized by traffic conditioning at the edge and simple differentiated forwarding mechanisms at the core of the network.
One of the forwarding mechanisms recently standardized by the IETF is the Assured Forwarding (AF) [4] Per-Hop Behavior (PHB). The basis of the AF PHB is differentiated dropping of packets during congestion at a router. To build an end-to-end service with AF, subscribed traffic profiles for customers are maintained at the traffic conditioning nodes at the edge of the network. The aggregated traffic is monitored, and packets are marked at the traffic conditioner. When an aggregate's measured traffic is within its committed information rate (CIR), its packets are marked with the lowest drop precedence, dp0. When traffic exceeds its CIR, but falls below its peak information rate (PIR), packets are marked with a higher drop precedence, dp1. If the measured traffic exceeds its PIR, packets are marked with the highest drop precedence, dp2. At the core of the network, during congestion, packets with a dp1 marking have a higher probability of being dropped than packets with a dp0 marking. Similarly, packets with a dp2 marking have a higher probability of being dropped than packets with a dp0 or dp1 marking. The different drop probabilities are achieved with a RED-like [1] Active Queue Management (AQM) technique, with three different sets of RED parameters, one for each drop precedence marking. Although the IETF Diffserv Working Group has finalized the basic building blocks for Diffserv, the possible end-to-end services that can be created for an end user of the AF PHB are still under development. Various issues with bandwidth and delay assurance in an AF PHB Diffserv network have been reported in recent research papers [5][6]. These issues need to be resolved before quantitative assurances of some form can be specified in SLA contracts. Recent papers have addressed these issues using intelligent traffic conditioning at the network edge [7][8][9] to improve bandwidth assurances.
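The three-level marking rule above can be sketched as follows. This is a simplified illustration with hypothetical names: a real conditioner derives the measured rate from a meter (e.g., a time sliding window) rather than taking it as an argument.

```python
def drop_precedence(measured_rate, cir, pir):
    """Pick an AF drop precedence from an aggregate's measured rate.

    Simplified sketch (all rates in bps); a real Diffserv conditioner
    obtains measured_rate from a meter such as a time sliding window."""
    if measured_rate <= cir:
        return "dp0"   # within committed rate: lowest drop precedence
    if measured_rate <= pir:
        return "dp1"   # between CIR and PIR
    return "dp2"       # above peak rate: highest drop precedence
```

For the profile used later in the experiments (0.5 Mbps CIR, 1.0 Mbps PIR), traffic measured at 0.4 Mbps would be marked dp0 and traffic measured at 1.2 Mbps would be marked dp2.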
The intelligent traffic conditioning approach mitigates the impact of various factors that affect the distribution of available bandwidth, such as UDP/TCP interaction, RTT disparities, and target rate. A second approach is to address the very cause of the congestion at the network core by enforcing flow control on the customer aggregates at the edges of a domain. The architecture discussed in this paper, Aggregate Flow Control (AFC), follows this congestion management approach. AFC manages congestion in a manner that is fair to competing customer aggregates and that reduces the queues at the core of the network. AFC is an edge-to-edge control mechanism that has been combined with Diffserv traffic conditioning to address assurance issues for AF-based services. This new Diffserv overlay promotes treatment of aggregates that is independent of user data: AFC performs congestion management on an aggregate regardless of what user data it carries. Such data transparency (a) improves fairness in TCP/UDP interactions, and it renders the achieved bandwidth of an aggregate insensitive to (b) the number of microflows it contains and (c) its packet sizes. Additionally, AFC pushes congestion from the shared core of a Diffserv network onto the edges whose ingress traffic is causing congestion. This transition (a) improves the treatment of conformant aggregates, (b) protects short-lived TCP flows, and (c) reduces the jitter of streaming traffic. Section II discusses related studies of end-to-end bandwidth assurance issues in Diffserv networks and proposed improvements. Section III describes the architecture of the integrated edge router, which combines a Diffserv traffic conditioner with AFC. Section IV details the

implementation and Section V reports the experimental results. Finally, the discussion and conclusions are captured in Sections VI and VII, respectively.

II. RELATED WORK

Ibanez and Nichols [5], via simulation studies, show that RTT, target rate, and TCP/UDP interactions are key factors in the throughput of flows that obtain an Assured Service using a RIO-like (RED with In/Out) [3] scheme. Their main conclusion is that such an Assured Service cannot offer a quantifiable service to TCP traffic. Seddigh, Nandy, and Pieda [6] confirm with a detailed experimental study that those factors are critical in biasing the distribution of excess bandwidth in an over-provisioned network. Lin, Zheng, and Hou [7] propose an enhanced TSW profiler and two enhanced RIO queue management algorithms. Simulation results show that the combination of enhanced algorithms improves the throughput and fairness among aggregated customer flows with different target rates, RTTs, and co-existing TCP and UDP flows. However, the proposed solutions may not be scalable, due to their dependence on state information at the core of the network. Yeom and Reddy [8] suggest an algorithm that improves fairness for the case of individual flows in an aggregate that have different RTTs. The proposed algorithm maintains per-flow information at the edge of the network. Nandy et al. [9] use intelligent traffic conditioning at the edge of the network to address throughput issues in AF-based Diffserv networks. An improved policer algorithm is suggested to mitigate the impact of RTT disparities on the distribution of excess bandwidth. TCP/UDP interaction is addressed by mapping TCP and UDP flows to different drop precedences. Harrison and Kalyanaraman [12] propose an overlay architecture and algorithms for edge-to-edge traffic control. A core router indicates congestion by marking a bit at the IP layer.
On receipt of congestion notification, the edge node enforces rate control on incoming traffic aggregates. This edge-to-edge rate control approach could be integrated with a Diffserv traffic conditioner to address the bandwidth and delay related issues in a Diffserv network; however, the paper does not discuss this extension.

III. ARCHITECTURE

Many of the above-mentioned bandwidth and delay assurance issues in a Diffserv network can be addressed if the customer traffic aggregates are managed in a controlled, TCP-friendly [13] manner. The Aggregate Flow Control overlay regulates the flow of the aggregated customer traffic into the core of the network. The control mechanism is based on feedback from the core, i.e., packet drops due to congestion at the core.

Figure 1. Diffserv Network with AFC

Traffic aggregates that exceed their committed rate and cause congestion at the core are throttled at the edge of the network. Thus, the queues at the core of the network remain small, due to congestion control at the network edges. The queues caused by misbehaving aggregate flows are pushed back to the ingress points of the corresponding edge routers. As shown in Figure 1, the building blocks for the TCP-friendly Aggregate Flow Control mechanism reside with the Diffserv traffic conditioners at the edge routers. This approach does not assume any special congestion notification at the core routers. The edge-to-edge AFC architecture is an extension to a scheme called TCP Trunking, proposed by Chapman and Kung [10]. A study reporting the design, implementation, and performance of TCP Trunking in a best-effort network is reported by Kung and Wang [11]. AFC extends and improves TCP Trunking and integrates it with a standard Diffserv traffic conditioner. AFC works in the following manner: (a) Control TCP connections are associated with a customer traffic aggregate between two network edges.
(b) Control TCP packets are injected into the network to detect congestion along the path of the aggregated data. (c) Based on the packet drops of the control flow at the core routers, congestion control is enforced on the aggregated customer traffic at the ingress edge router. Control TCP connections govern the behavior of the aggregates between the edges. Although the aggregated customer traffic and its associated TCP control packets are in separate flows, the traffic aggregate can be virtually viewed as the payload of the control TCP flows, while the control packets are the headers. It is assumed that control packets and data packets follow the same path between the edges. Thus, it can be said that at the core, there are a few special TCP flows carrying customer aggregates between two edges.

Figure 2. Block Diagram of a Traffic Conditioner (Classifier, Flow Control Unit, and Credit Meter, followed by the Diffserv Meter, Policer, and Marker)

Figure 2 depicts the functional block diagram for an edge node. A classifier segregates the traffic belonging to various customer aggregates. Each customer aggregate between two edges is controlled by a virtual TCP connection. Customer data is queued at the Flow Control Unit. The data is forwarded to the Diffserv traffic conditioner only if credit is available for that customer aggregate. Credit is decremented whenever customer data is forwarded to the Diffserv traffic conditioner. AFC uses the notion of a virtual maximum segment size (vmss) for the control TCP flow. As explained earlier, the customer data is conceptually the payload of the control TCP. The virtual maximum segment size for the control flow is the amount of customer data allowed for each credit (e.g., 1514 bytes of customer data). The Flow Control Unit generates a control packet (i.e., injects a header packet) after every vmss bytes of data that it transmits. Credit generation is determined by the state (or the congestion window) of the virtual control TCP connection, as detailed in Section IV. The Credit Meter accumulates credit upon the generation of a control packet by the Flow Control Unit. Thus, customer data arrives at the input of the Diffserv traffic conditioner only if the Flow Control Unit allows the data to proceed. The user data and its associated control packets are metered, policed, and marked in the identical manner of any standard Diffserv traffic conditioner. An underlying assumption of Aggregate Flow Control is that control packets follow the same path as their associated data packets. Each aggregate data stream is associated with a control flow. The control flow allows the customer traffic to grow during light to no congestion, while throttling the traffic if congestion is detected.
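A minimal sketch of the credit mechanism described above, under stated assumptions: the class and method names are hypothetical, and the real Flow Control Unit also interacts with a TCP stack and an Output Handler rather than crediting itself directly.

```python
class FlowControlUnit:
    """Sketch of AFC's credit-based Flow Control Unit.

    Forwards queued customer data only while credit remains, drains
    credit per byte sent, and emits one control ("header") packet for
    every vmss bytes of customer data transmitted."""

    def __init__(self, vmss=1514, initial_credit=None):
        self.vmss = vmss
        self.credit = vmss if initial_credit is None else initial_credit
        self.bytes_since_control = 0
        self.control_packets = 0   # control packets handed to the TCP stack

    def try_send(self, pkt_len):
        """Forward one customer packet if credit allows; else keep it queued."""
        if self.credit < pkt_len:
            return False
        self.credit -= pkt_len                  # drain bucket by packet size
        self.bytes_since_control += pkt_len
        while self.bytes_since_control >= self.vmss:
            self.bytes_since_control -= self.vmss
            self.control_packets += 1           # inject a control packet
        return True

    def on_control_packet_output(self):
        """Output Handler saw a control packet leave: replenish credit."""
        self.credit += self.vmss
```

With this sketch, two full-sized (1514-byte) packets and an initial credit of two vmss produce two control packets and exhaust the credit; sending resumes only once a control packet clears the output, mirroring the Credit Meter behavior described above.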
This technique pushes the queues at the core routers to the edge of the network, because it limits the sending rate of a congestion-causing aggregate at its ingress edge node. In the AFC implementation used for this paper's experimentation, each edge node employs RED to manage its queues. However, it is worth noting that once congestion is pushed onto an edge node, any intelligent traffic management scheme may be applied. Aggregate Flow Control is applicable to services with double-ended SLAs, i.e., cases for which both the source and destination of traffic aggregates are known. One example of this scenario is VPN service. The double-ended restriction arises because a control TCP connection must be established between two edges. The Aggregate Flow Control mechanism in a Diffserv network can be further explained with an example, using the experimental topology illustrated in Figure 4 (see Section V). Consider the case of two traffic aggregates, labeled A and B. Aggregate A is between edge nodes E1 and E3, while Aggregate B is between E2 and E3. Thus, each aggregate passes through the same core router, C1. Assume that E1 receives Aggregate A traffic below its CIR and that E2 receives Aggregate B traffic above its PIR. The bottleneck link is between the core and edge node E3; and suppose that it can only support the committed information rates for the two customers. Aggregate A's traffic is marked with drop precedence dp0, while Aggregate B's is marked with dp0, dp1, and dp2. Each aggregate's control TCP packets are marked in the same manner as its data packets. The congestion at the core causes dp2 traffic to be dropped with a higher probability. Due to the drop of dp2 control traffic, the control TCP throttles Aggregate B at E2. This congestion control reduces the queue length at the shared core router and pushes the queue onto only E2. Aggregate A, which is conforming to its profile, therefore receives better service.
At the core of the network, when there is no congestion, no data or control packets are dropped. The Flow Control Unit at the edge node allows increased data traffic by following TCP's Additive Increase, Multiplicative Decrease (AIMD) congestion control mechanism. As soon as a control packet is dropped, the virtual control TCP throttles the flow of customer data (following the AIMD mechanism) by limiting the availability of credit at the credit meter.

IV. IMPLEMENTATION DETAIL

Figure 1 treats an AFC edge node as two functional blocks; data passes through an AFC Traffic Conditioner into a Diffserv Traffic Conditioner. Figure 2 expands on that view by denoting the six process blocks that comprise those two conditioners. Figure 3 provides greater detail of the AFC

blocks, while representing all of the Diffserv functionality in a single block.

Figure 3. Aggregate Flow Control Data Path (Classifier, customer data queues, Flow Control Unit, TCP stack, credit bucket, Diffserv block, and Output Handler)

The AFC Traffic Conditioner employs a token bucket scheme to regulate data output from the customer network into the core network, as illustrated in Figure 3. Incoming packets are intercepted by the Classifier (Cl.), classified as belonging to a particular aggregate, and queued according to that classification. The Flow Control Unit (FCU) schedules from the customer data queues, based on the TCP credit bucket. The FCU can only forward user packets when TCP credit is available in the bucket. When the FCU sends a packet for output, the bucket is drained by the size of that packet. For every vmss worth of user data that is sent, a control packet is generated and sent to the TCP stack. All data and control packets are handed to the Diffserv conditioner for metering, policing, and marking. The AFC architecture does not require any changes to the Diffserv functionality; it is simply an overlay that controls input to the Diffserv block. An Output Handler (OH) monitors all outgoing packets. When the OH detects a control packet, it increments the TCP bucket for the associated aggregate by vmss bytes. Control packets, like any TCP packets, may be held in the stack if the congestion window does not allow a transmission. The loss of a control packet in the network will slow the transmission rate of control packets; and, because control packets regulate data packets, the drop will slow the transmission rate of user data. Figure 3 and its explanation so far include two important simplifications. Firstly, the figure depicts only one TCP credit bucket; in fact, each aggregate has its own bucket. The second simplification concerns the TCP control flows.
The loss of a TCP control flow's packet causes that flow to halve its congestion window. If each aggregate is regulated by a single control flow, a packet loss causes the entire aggregate to halve its sending rate. When dealing with large traffic aggregates, such congestion control is too harsh. Instead, multiple control TCP flows are used for each aggregate. Consider an aggregate that is managed by four control flows. If one of those flows loses a packet, that flow halves its congestion window, but the other flows are unaffected. Since one of the four congestion windows that control the aggregate is halved, the aggregate's effective control window is cut by one-eighth. The experiments detailed in this paper use four control flows per aggregate, which results in smoother behaviour than a single flow gives. There is a practical limit to how many control flows should be used. Given a fixed queue size at the core routers, increasing to an arbitrarily high number of control flows implies that many of those flows will be in the slow-start state. In the experimental testbed, four control flows produce smooth behaviour; and fixing the number of control flows allows core RED parameters to be chosen. To produce the fair treatment that is one of the goals of AFC, all aggregates are governed by the same number of control flows. At initialization time, the TCP credit is set to n+1 vmss, where n is the number of control flows, to avoid a potential deadlock. Assume that vmss is 1514 bytes and that the user sends 1024-byte packets. If the credit is initialized to 1 vmss, the first packet will drain 1024 bytes from the bucket. The second user packet cannot be sent, since the remaining credit is 490 bytes. No control packet is generated, because vmss bytes of user data have not been sent. This lack of credit update causes a deadlock. If the initial credit is n+1 vmss, the second user packet and a control packet will be transmitted; and vmss worth of TCP credit will be added.
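The deadlock scenario above can be replayed numerically. The helper below is a hypothetical sketch, not the shipped implementation: it counts how many 1024-byte packets go out before the credit bucket stalls, crediting vmss back whenever a full vmss of user data has been sent.

```python
def packets_sendable(initial_credit, pkt_len, vmss=1514, cap=1000):
    """Count packets sent before the credit bucket stalls.

    Sketch of the AFC credit loop: each packet drains its length, and
    every vmss bytes of sent data generates a control packet, which is
    assumed to return vmss credit. 'cap' keeps the sketch finite."""
    credit, since_control, sent = initial_credit, 0, 0
    while credit >= pkt_len and sent < cap:
        credit -= pkt_len
        since_control += pkt_len
        sent += 1
        while since_control >= vmss:
            since_control -= vmss
            credit += vmss        # control packet replenishes credit
    return sent
```

With an initial credit of 1 vmss, exactly one 1024-byte packet is sent before the bucket stalls at 490 bytes, reproducing the deadlock; with (n+1) vmss for n = 4 control flows, sending never stalls.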
Since there is no loss of credit in the system, there is no potential for deadlock. The only requirement is that the minimum allowable vmss is one MTU. The AFC mechanism schedules from the multiple control flows in a round-robin manner. However, the scheduler also detects if a control flow has written a packet to the stack that has not yet reached the Output Handler (see Figure 3). The scheduler skips any flow with a packet pending in the stack. This step prevents the shared TCP credit from being consumed on a control packet that is guaranteed to wait in the stack behind another pending control packet. Thus, a packet drop in one control flow does not block the other control flows. In the unusual case that all control flows have packets pending output, the scheduler uses the flow whose turn it is in the round robin. If a customer aggregate is throttled, the customer data queue will grow. In the experiments, a single-level RED algorithm is applied on these queues for packet dropping.
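The skip-pending round-robin rule can be sketched as follows (a hypothetical helper, not the shipped scheduler): given a per-flow pending flag and whose turn it is, pick the next control flow with no packet waiting in the stack, falling back to the turn-holder when every flow is pending.

```python
def pick_control_flow(pending, turn):
    """Round-robin over control flows, skipping any flow whose packet is
    still pending in the TCP stack; if every flow is pending, fall back
    to the flow whose turn it is in the round robin."""
    n = len(pending)
    for offset in range(n):
        candidate = (turn + offset) % n
        if not pending[candidate]:
            return candidate      # first non-pending flow from 'turn' onward
    return turn % n               # unusual case: all flows pending
```

For four control flows with flow 1 pending and the turn at flow 1, the scheduler skips to flow 2, so shared TCP credit is not spent on a packet that would only wait in the stack.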

V. EXPERIMENTAL RESULTS

Studies were performed using an experimental testbed that includes Diffserv and AFC building blocks. The devices run on a Pentium platform with VxWorks as the RTOS. Figure 4 shows the basic experimental topology. The setup consists of four router elements: E1, E2, E3, and C1. Each edge device is connected to an end host or traffic source. The netperf [14] tool is used as the TCP traffic generator and udpblast is used to create UDP streams. Each link in the topology has a 5 Mbps capacity. The experiments detailed in this section refer to Aggregate 1 and Aggregate 2. Aggregate 1 runs between Clients 1 and 3; and Aggregate 2 runs between Clients 2 and 3. Each therefore passes through a single congested core device. The bottleneck link is between nodes C1 and E3. The edge devices in the testbed classify packets based on source and destination IP addresses. The policer utilizes the Time Sliding Window (TSW) tagger [3]. The experiments compare conventional Diffserv, referred to as the Standard Traffic Conditioner (TC), to the AFC scheme. The core device implements the AF PHB using the three-level version of RED. Three sets of RED thresholds are maintained in the core device, one for each drop precedence. A coupled queue accounting scheme is used, meaning that: (a) the probability of dropping a dp0 packet depends on the number of dp0 packets in the queue, (b) the probability of dropping a dp1 packet depends on the number of dp0 and dp1 packets in the queue, and (c) the probability of dropping a dp2 packet depends on the number of dp0, dp1, and dp2 packets in the queue. In all cases, decisions are made using a weighted running average of the queue size. The following RED parameters are used in all experiments:

TABLE 1
CORE RED PARAMETERS
Prec. | min_th (pkts) | max_th (pkts) | max_p
dp0
dp1
dp2

Figure 4. Single Congestion Point Topology
The physical queue size is capped at 50 packets. It is important to note that the thresholds are not as high as they might appear. A vmss of 1514 bytes is used throughout the tests, meaning that the AFC mechanism will insert a 40-byte control packet for every vmss (1514 bytes) of user data. Thus, a 40-packet threshold allows 20 (full-sized) user packets and 20 control packets. These thresholds are used because the core RED implementation is packet-based, rather than byte-based. Each edge node in the testbed also uses a single-level RED queue, with thresholds of 30 and 60 packets and a max_p of 0.02. The six experiments study the impact of various factors on throughput and delay. The fairness issues examined are the effect of a differing number of microflows in an aggregate, the impact of non-responsive flows, the effect of differing packet sizes, and the treatment of short-lived TCP flows. Additional experiments measure the interpacket delay of streaming data and the impact of multiple congested nodes. Finally, the applicability of AFC is demonstrated through experiments involving different numbers of aggregates and aggregates with differing profiles.

1st. Experiment: Impact of Number of Microflows

Service agreements in a Diffserv network are made on aggregated traffic, and various business houses will contract a target rate with a service provider. It is likely that different customers will have different numbers of microflows in a target aggregate. In an over-provisioned network, the aggregate with a larger number of TCP flows obtains a greater share of the excess bandwidth. This experiment shows that the sharing of excess bandwidth can be made insensitive to the number of microflows. In this scenario, there are two sets of aggregated TCP flows. Each has the same Diffserv target: a 0.5 Mbps CIR and a 1.0 Mbps PIR. Aggregate 1 contains 10 TCP flows, while the number of TCP flows in Aggregate 2 varies from 5 to 25.
With the Standard Diffserv Traffic Conditioner, the bandwidth obtained by each aggregate is directly proportional to its number of microflows. The experiment was repeated with Aggregate Flow Control enabled at the edge routers. The bandwidth achieved under AFC is independent of the number of microflows in the aggregate; each aggregate obtains an equal amount of bandwidth. The improved bandwidth distribution results because the control TCP flows transform the scenario from one in which 15 to 35 TCP flows vie for bandwidth into a case in which two aggregates compete for bandwidth. Sharing occurs at the aggregate level, which is independent of the data within an aggregate.
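The coupled queue accounting used at the core, described earlier in this section, can be sketched as follows. This is a hypothetical illustration: it uses instantaneous per-precedence counts where the implementation uses a weighted running average of the queue size.

```python
def coupled_count(queued, prec):
    """Coupled accounting: a dp_k packet's RED decision sees the total
    of all queued packets with precedence dp0 through dp_k."""
    levels = ["dp0", "dp1", "dp2"]
    return sum(queued[p] for p in levels[: levels.index(prec) + 1])

def red_drop_prob(avg_q, min_th, max_th, max_p):
    """Classic RED drop probability as a function of (averaged) queue size."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Because a dp2 packet is judged against the dp0+dp1+dp2 total while a dp0 packet is judged against the dp0 count alone, dp2 traffic reaches its drop region first during congestion, which is exactly the differentiated dropping the AF PHB requires.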

Figure 5. Number of Microflows
Figure 6. TCP/UDP Interaction

2nd. Experiment: TCP/UDP Interactions

A paying Diffserv customer will inject both TCP and UDP traffic into the network. The interaction between TCP and UDP may cause the unresponsive UDP traffic to impact the TCP traffic in an adverse manner. Clearly, there is a need to protect responsive TCP flows from non-responsive UDP flows, while also protecting certain UDP flows that require the same fair treatment as TCP, due to multimedia demands. The Diffserv customer should decide the importance of the payload, assuming that the network is capable of handling both TCP and UDP traffic in a fair manner. In this scenario, there are two sets of aggregated flows, each with a 1 Mbps CIR and a 2 Mbps PIR. Aggregate 1 comprises 10 TCP flows, and Aggregate 2 has a UDP flow with a sending rate increasing from 1 Mbps to 5 Mbps. Each aggregate has an identical target rate, so after the UDP rate reaches 3 Mbps and causes congestion, the two aggregates should share the bandwidth equally; this is the data plotted in Figure 6 as the Expected Aggregate values. With the Standard Diffserv Traffic Conditioner, as the UDP streaming rate increases, the amount of bandwidth obtained by the TCP aggregate decreases. However, with Aggregate Flow Control, the UDP throughput is restricted by the control TCP flows. The aggregates share the bottleneck link bandwidth in a TCP-friendly manner. The throughputs obtained by the aggregates are closer to the expected throughput. Additionally, it is worth noting that AFC does not restrict the TCP aggregate from consuming all of the excess bandwidth when the UDP streaming rate is 1 Mbps and 2 Mbps.
This shows that the AFC scheme allows elastic growth of traffic aggregates when there is no congestion.

3rd. Experiment: Impact of Packet Size

When two aggregates send different-sized TCP packets, the effect is similar to the case of two aggregates with differing numbers of microflows. With all other factors being equal, the aggregate that is sending larger packets will consume more of the available bandwidth, because its TCP flows' individual congestion windows grow more quickly. However, Aggregate Flow Control effectively applies a congestion window to each aggregate that is independent of packet size. A control packet is generated for every vmss bytes of user data, regardless of how many user packets it takes to reach vmss. Thus, AFC removes packet size as a factor in aggregate congestion control. Figure 7 illustrates the difference between standard Diffserv and AFC. Aggregate 1 transmits 256-byte packets, while Aggregate 2's packet size increases from 256 bytes to 1500 bytes. Each has a 1 Mbps CIR and a 2 Mbps PIR. With Diffserv, as the disparity in packet sizes grows, so does the disparity in achieved bandwidth. With AFC, though, differing packet sizes are not a factor in aggregate bandwidth distribution.

4th. Experiment: Protection of Short TCP Flows

The objective of this experiment is to study whether short-lived TCP flows can be protected under congestion by enabling Aggregate Flow Control. A traffic mix of TCP and UDP is chosen to create congestion. Each aggregate has a 1 Mbps CIR and a 2 Mbps PIR. Aggregate 2 includes a base traffic of long-lived flows plus a series of short TCP flows. Each short flow consists of a single packet request from Client 3 to Client 2, followed by a 16 Kbyte response (a transaction). Transactions per second is the metric used to indicate the disparity between standard Diffserv and AFC.

Figure 7. Impact of Packet Size

TABLE 2
SHORT-LIVED FLOWS WITH DIFFSERV
Agg. 1 Total Traffic (E1-E3) | Agg. 2 Base Traffic (E2-E3) | Trans/sec | Avg. Q Core | Avg. RTT (ms) | Avg. Q E1 | Avg. Q E2

TABLE 3
SHORT-LIVED FLOWS WITH AFC
Agg. 1 Total Traffic (E1-E3) | Agg. 2 Base Traffic (E2-E3) | Trans/sec | Avg. Q Core | Avg. RTT (ms) | Avg. Q E1 | Avg. Q E2

Table 2 lists the results of 5 tests conducted over a Diffserv network, while Table 3 covers the results of the same tests performed with Aggregate Flow Control. Note that the core queue in Table 3 includes data and control packets; the number inside the brackets shows the number of data packets in the core queue. The majority of data packets are of maximum segment size. Test 1 studies the short TCP flows in the presence of competing UDP flows. With Aggregate Flow Control, there is a 4-times improvement in transactions per second. This increase is due to two factors. One is that AFC promotes the fair sharing of bandwidth between TCP and UDP, as explained earlier. The second factor is that AFC pushes congestion from the core of a network onto the edge that is exceeding its profile. With standard Diffserv, Edge 1 forwards all of Aggregate 1's incoming traffic into the core of the network, causing a large queue at the core. The AFC mechanism throttles the sending rate from the edge of the network into its core. In the Diffserv case, no queue develops at the edge, but the queue at the core averages 160 packets. With AFC, the average core queue is reduced to 94 packets. The average queue at Edge 1, whose aggregate is causing most of the congestion, swells to 51 packets, whereas the average queue at Edge 2 grows only to 3 packets. The AFC improvements are also emphasized in the RTT differences, as measured along the transactional traffic path.
The same factors that explain the results of Test 1 apply to the other test cases, and one can examine the same metrics to make comparisons. Test 2 combines the short TCP flow with 10 long-lived TCP flows. Again, there is an improvement in transactions per second. The disparity is not as great as in Test 1, because the 10 TCP flows will ramp up to consume a higher share of the link than the fixed-rate UDP traffic of Test 1. Thus, the 10 long-lived TCP flows of Aggregate 2 leave less room for the transactional traffic. However, even in this scenario, AFC outperforms standard Diffserv, in both transactions per second and RTT. In this scenario, the queue at the core caused by Aggregate 2's 10 TCP flows is pushed back to edge E2. If a separate queue were allocated at the edge for transactional traffic, the transactions per second could be greatly increased. It is up to a network administrator to decide whether to protect short-lived flows further, by either allocating a separate aggregate for them or by favouring them when managing edge queues. Test 3 emphasizes the difference between Tests 1 and 2. Again, the additional traffic on Aggregate 2 is a UDP stream whose sending rate is less than that aggregate's fair share of the link. The AFC scheme enables the transactional traffic to consume that remaining fair share, while keeping the RTT low. Test 4 shows an improvement in transactions per second when only TCP traffic is sent. Even when the competing traffic is equal, the AFC architecture improves the treatment of short-lived flows. The transactions per second in Table 3 for Test 2 and Test 4 are almost equal. This similarity arises because, in each case, the short TCP flow encounters equal total queues (edge + core queues) in the data path.
Test 5 shows that even if a customer aggregate sends only one short TCP flow (and is therefore well within its specified profile), its transactions per second can suffer in a Diffserv network: it is difficult for the aggregate to ramp up to a significant share of the link. Aggregate Flow Control does a much better job of securing bandwidth for the short-flow aggregate, resulting in a transaction rate 4.5 times that of the standard Diffserv case.
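The edge throttling that produces these results can be pictured as a shaper that admits traffic into the core at an allowed rate and holds the excess in the edge queue. The sketch below uses a token bucket with a fixed allowed rate for illustration; in AFC the allowed rate would be driven by the aggregate control loop, and all names and constants here are assumptions.

```python
# Illustrative token-bucket shaper for edge-side throttling. In AFC the
# allowed rate would come from the AIMD control loop (not modelled here).
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes        # maximum token accumulation
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now: float, pkt_bytes: int) -> bool:
        # Refill tokens for the elapsed time, then admit the packet into the
        # core only if tokens remain; otherwise it waits in the edge queue
        # instead of building up a queue at the core.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

tb = TokenBucket(rate_bps=1_000_000, burst_bytes=4000)
print(tb.allow(0.0, 1514))  # True  -- within the burst allowance
print(tb.allow(0.0, 1514))  # True
print(tb.allow(0.0, 1514))  # False -- excess is held at the edge
```

The key effect, as in the experiments above, is that congestion accumulates at the edge responsible for it rather than in the shared core queue.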

5th. Experiment: Improving Interpacket Delay Characteristics for Streaming Traffic

This experiment demonstrates that Aggregate Flow Control can improve the interpacket delay of streaming UDP traffic in the presence of competing traffic from other customers. 10 TCP flows are sent along Aggregate 1, while 1 Mbps of UDP is streamed along Aggregate 2. Each aggregate has a 1 Mbps CIR and a 2 Mbps PIR. The expected interpacket arrival time for the UDP is 12 milliseconds. Figure 8 shows that the spread in interpacket arrival time is greater for the standard Diffserv scenario. The improved delay characteristic of AFC can be explained by the results of Test 3 of Experiment 3: Aggregate Flow Control significantly reduces the queue at the core in that scenario. The congestion caused by Aggregate 1's 10 TCP flows is pushed back to its corresponding edge node, and AFC produces a smaller, less bursty core queue.

Figure 8. Interpacket Arrival Times (distribution of interpacket arrival gaps, in ms)

6th. Experiment: Effect of Multiple Congested Nodes

The single congestion point topology serves to illustrate the advantages of Aggregate Flow Control. For the final experiments, though, a more complex topology is used, which adds two more congestion points, as illustrated in Figure 9. For Experiment 6, the throughput of an aggregate between Clients 1 and 5 is measured. That aggregate contains 10 TCP flows. Aggregates are enabled and disabled along the remaining clients in such a way as to produce congestion at one, two, or all three core routers. Specifically, the competing aggregates run from Host 2 to 3, from 3 to 4, and from 4 to 5; thus, each faces only one congested node. Each of these competing customer aggregates is a UDP flow streaming at 3 Mbps. Each aggregate uses a Diffserv profile of 1 Mbps CIR and 2 Mbps PIR. Figure 10 shows the effect of multiple congested nodes.
In this scenario, the throughput drops very quickly under Diffserv. With AFC, the impact of multiple congestion points is mitigated: with 3 congested nodes, for example, the Diffserv throughput has dropped to 0.8 Mbps, whereas the AFC throughput is 1.4 Mbps. The reason that a long flow suffers when it passes through multiple congested nodes is twofold. First, each packet has a chance of being dropped at each core node, so the total probability of a packet drop increases with the number of nodes. Second, it is well known that among aggregates with differing RTTs, the aggregate with the shorter RTT ramps up more quickly and consumes a greater share of excess bandwidth. This second point does not apply to the UDP traffic used in this experiment, but it is a factor with TCP traffic. The reason that AFC lessens the impact of multiple nodes is again that it pushes queues at the congestion points onto the edges. Thus, compared with Diffserv, the drop probability and queueing delay increase less quickly as the number of congested nodes increases.

Figure 9. Multiple Congestion Points Topology (E1-E5: edge devices; C1-C3: core devices)

Figure 10. Effect of Multiple Congested Nodes (throughput of Aggregate 1-5 vs. number of congested nodes; Agg 1: 10 TCP flows, other aggregates: 3 Mbps UDP)
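The first factor above is simple compounding of per-node loss. Assuming each congested node drops a packet independently with the same probability (the 5% figure below is purely illustrative), the end-to-end drop probability grows as follows:

```python
# End-to-end drop probability when each of n congested nodes independently
# drops a packet with probability p. The value of p is illustrative only.

def e2e_drop_probability(p_per_node: float, n_nodes: int) -> float:
    # A packet survives only if it survives every node: (1 - p)^n.
    return 1.0 - (1.0 - p_per_node) ** n_nodes

for n in (1, 2, 3):
    print(n, round(e2e_drop_probability(0.05, n), 4))
```

With p = 5%, three congested nodes already push the end-to-end loss above 14%, which (together with TCP's RTT bias) is why the conformant aggregate's throughput decays so quickly under plain Diffserv.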

7th. Experiment: Varying Numbers of Aggregates

The first six experiments illustrate the advantages of AFC over a standard Diffserv network. The final two experiments demonstrate the applicability of AFC to more complex scenarios. These tests make no comparison to Diffserv; they simply illustrate that AFC extends beyond what has been shown so far. The Multiple Congested Nodes experiment uses a network with up to four competing aggregates. However, due to the symmetry of that experiment, no congested link is shared by more than two aggregates. Experiment 7 sends aggregates from Hosts 1, 2, 3, and 4 to Host 5; thus, the various links are shared by between two and four aggregates. Each aggregate comprises 10 TCP flows, with an equal profile of a 1 Mbps CIR and a 2 Mbps PIR. Two, three, and four aggregates are sent at a time, and the bandwidth is measured between Edge 5 and Host 5. Table 4 details the results, which clearly show that AFC is able to divide the bandwidth equally among the total number of aggregates.

TABLE 4. BANDWIDTH DISTRIBUTION BETWEEN A VARIABLE NUMBER OF AGGREGATES (Mbps; columns Agg. 1-5 through Agg. 4-5)

8th. Experiment: Varying Target Rates

Although the other experiments in this paper illustrate equal bandwidth sharing for aggregates with equal profiles, AFC in fact distributes bandwidth according to profiles. This final experiment captures the results of two competing aggregates with different profiles. Each aggregate is composed of 10 TCP flows. Aggregate 1 has a fixed profile of a 1 Mbps CIR and a 2 Mbps PIR, while the profile for Aggregate 2 varies, as noted in Table 5. The results illustrate that AFC respects different traffic profiles. Furthermore, excess bandwidth appears to be divided equally between aggregates. In every case, each aggregate has a PIR 1 Mbps higher than its CIR; thus, in this comparison, all bandwidth beyond the CIRs can be considered excess. For example, consider the case in which Aggregate 2 has a profile of (2 Mbps, 3 Mbps).
In this scenario, the total committed bandwidth is 3 Mbps (1 Mbps for Aggregate 1 and 2 Mbps for Aggregate 2). If the remaining 2 Mbps of bandwidth is divided equally, Aggregate 1 should expect a throughput of 2 Mbps, while Aggregate 2 should expect 3 Mbps. These figures match the measured results (1.8 Mbps and 3.0 Mbps) to within experimental error. The other trials also closely match the expected results, although as the total committed rate approaches the link speed, the bandwidth division falls further from the theoretical values.

TABLE 5. BANDWIDTH DISTRIBUTION WITH VARYING PROFILES (Aggregate 2 profile (CIR, PIR) ranging from (0.0, 1.0) to (4.0, 5.0) Mbps, with the measured throughput of each aggregate)

VI. DISCUSSION

This section discusses various issues related to Aggregate Flow Control in a Diffserv network. As demonstrated by the experiments of Section V, the AFC scheme improves the bandwidth and delay assurances in an AF PHB based Differentiated Services network. AFC promotes elastic sharing of available bandwidth among customer aggregates, introduces fairness among customer aggregates, and reduces queues at the core of the network. At times of no congestion, the customer aggregates are allowed to use the available bandwidth; the aggregates are throttled as soon as congestion at the core starts dropping packets.

Why Aggregates? In a Diffserv network, performance commitments for customer traffic will be specified at aggregate levels, rather than on the level of individual flows. For example, it is likely that VPN applications in a Diffserv network will specify throughput, latency, and loss rate in terms of aggregates. Thus, for scalability reasons, the congestion control scheme for aggregates needs to operate without monitoring individual flows.

How good is the scheme? The AFC mechanism uses the well-understood AIMD congestion management approach of TCP. In fact, the simplicity of its implementation is one of the strengths of AFC.
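A minimal sketch of the AIMD behaviour referred to here: on each control round-trip the allowed aggregate rate grows by a fixed increment, and on a loss signal it is cut multiplicatively, producing TCP's characteristic sawtooth. The increment, decrease factor, and floor rate below are illustrative assumptions, not AFC's actual parameters.

```python
# Illustrative AIMD update for one aggregate's allowed sending rate.
# Constants are assumptions for demonstration, not the paper's values.

def aimd_step(rate_mbps: float, loss: bool,
              increase: float = 0.1, decrease: float = 0.5) -> float:
    if loss:
        # Multiplicative decrease on congestion, with a small floor rate.
        return max(rate_mbps * decrease, 0.1)
    # Additive increase while the network is uncongested.
    return rate_mbps + increase

rate = 1.0
trace = []
for loss in [False, False, False, True, False, False]:
    rate = aimd_step(rate, loss)
    trace.append(round(rate, 2))
print(trace)  # -> [1.1, 1.2, 1.3, 0.65, 0.75, 0.85]
```

In AFC the loss signal comes from dropped control packets rather than per-flow data losses, so one such loop governs the whole customer aggregate.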
A standard TCP NewReno stack is used for the development of the prototype. The self-clocking nature of TCP adds to the robustness of the AFC scheme: the control loop is well defined in the sense that new control packets can be injected into the network only when old ones leave it.

Deployment Issues: The AFC approach requires an incremental modification at the Diffserv edge routers. As explained earlier, the AFC building blocks reside within the traffic conditioner block of a Diffserv edge router. AFC does not assume or require any special networking support at the core of the network. However, AFC traffic cannot mix with other, uncontrolled Diffserv traffic. In the case of a traffic mix, AFC traffic will

start competing with individual Diffserv flows. This competition would lead to unfair treatment of the (well-behaved) AFC traffic. The unfairness can be avoided by allocating a special queue with a specified bandwidth at the core nodes; all Aggregate Flow Controlled Diffserv traffic would use this queue for an improved end-to-end service. This solution would also expedite a partial deployment of an AFC Diffserv network.

Improved treatment of customer aggregates: The AFC scheme pushes the queues for the respective customers to the edges. This transfer allows an individual customer aggregate or a particular traffic type to be treated in a special fashion. For example, a specialized algorithm at the edge becomes feasible for controlled loss and latency for certain customer aggregates. Another example is assigning a separate queue at the edge for transactional traffic (as discussed in Experiment 4) for improved transactions per second.

Further Investigation: There are a number of issues requiring further investigation. (1) The number of control flows needed per customer aggregate is to be determined. (2) The impact of a higher vmss is to be studied; this information is needed to reduce the overhead due to control packets. The experiments of this paper inject a 40-byte control packet for every 1514 bytes (one vmss) of user data; thus, the bandwidth overhead is 2.6%. If the vmss is increased to 5*1514 bytes, the bandwidth overhead is reduced to 0.5%. The overhead for the reported experiments is the best-case figure, in that all packets are assumed full-sized at the 1514-byte vmss; for the typical distribution of packet sizes [15], with its much smaller mean packet size, a more realistic overhead is 5.56%. (3) The current AFC approach is applicable to one-to-one and one-to-many network topologies. Further investigation is required to extend the AFC scheme to a one-to-any topology. VII.
CONCLUSIONS

The major contribution of this paper is a novel scheme for Aggregate Flow Control in a Diffserv network. A prototype is developed to demonstrate the following capabilities of the proposed scheme: (1) aggregate congestion management is performed in a TCP-friendly manner; (2) the scheme allows elastic growth of customer traffic aggregates at times of no congestion; and (3) the shared queue at the core is distributed to the respective edges whose ingress traffic is causing congestion. These capabilities of the AFC mechanism lead to improved end-to-end bandwidth and delay assurances in a Diffserv network. The various experiments illustrate the specific benefits of AFC. AFC offers improved fairness in the interaction of TCP and UDP traffic in a Diffserv network. The scheme makes the bandwidth sharing among aggregates insensitive to the number of micro-flows in an aggregate and to their packet sizes. The AFC approach improves transactions per second for short-lived TCP flows under congestion, and it makes the interpacket delay characteristics for streaming traffic more predictable. The AFC scheme also improves bandwidth assurance in a network with multiple congested nodes by protecting all conformant traffic. Aggregate Flow Control is an easily deployable overlay mechanism for Diffserv networks that significantly improves aggregate fairness and network behaviour.

VIII. ACKNOWLEDGMENT

The authors would like to thank Nabil Seddigh, Steve Jaworski, and Peter Pieda for discussion and support at various stages of this work.

IX. REFERENCES

[1] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, V.1 N.4, August 1993.
[2] S. Blake et al., "An Architecture for Differentiated Services," RFC 2475, December 1998.
[3] D. Clark and W. Fang, "Explicit Allocation of Best Effort Packet Delivery Service," IEEE/ACM Transactions on Networking, V.6 N.4, August 1998.
[4] J. Heinanen, F. Baker, W. Weiss, and J.
Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.
[5] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, August 1998.
[6] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proceedings of Globecom '99, Rio de Janeiro, December 1999.
[7] W. Lin, R. Zheng, and J. Hou, "How to Make Assured Services More Assured," in Proceedings of ICNP, Toronto, Canada, October 1999.
[8] I. Yeom and N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proceedings of ICMCS, Florence, Italy, June 1999.
[9] B. Nandy, N. Seddigh, P. Pieda, and J. Ethridge, "Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks," in Proceedings of Networking 2000, Paris, France, May 2000.
[10] A. Chapman and H.T. Kung, "Traffic Management for Aggregate IP Streams," in Proceedings of CCBR, Ottawa, November 1999.
[11] H.T. Kung and S.Y. Wang, "TCP Trunking: Design, Implementation and Performance," in Proceedings of ICNP, Toronto, Canada, October 1999.
[12] D. Harrison and S. Kalyanaraman, "Edge-to-Edge Traffic Control for the Internet," RPI ECSE Networks Laboratory Technical Report ECSE-NET-2000-1, January 2000.
[13] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet," IEEE/ACM Transactions on Networking, August 1999.
[14] Netperf:
[15] S. McCreary and K. Claffy, "Trends in Wide Area IP Traffic Patterns," ITC Specialist Seminar on IP Traffic Modeling, Measurement and Management, Monterey, September 2000.


PERFORMANCE COMPARISON OF TRADITIONAL SCHEDULERS IN DIFFSERV ARCHITECTURE USING NS

PERFORMANCE COMPARISON OF TRADITIONAL SCHEDULERS IN DIFFSERV ARCHITECTURE USING NS PERFORMANCE COMPARISON OF TRADITIONAL SCHEDULERS IN DIFFSERV ARCHITECTURE USING NS Miklós Lengyel János Sztrik Department of Informatics Systems and Networks University of Debrecen H-4010 Debrecen, P.O.

More information

Cross-Layer Architecture for H.264 Video Streaming in Heterogeneous DiffServ Networks

Cross-Layer Architecture for H.264 Video Streaming in Heterogeneous DiffServ Networks Cross-Layer Architecture for H.264 Video Streaming in Heterogeneous DiffServ Networks Gabriel Lazar, Virgil Dobrota, Member, IEEE, Tudor Blaga, Member, IEEE 1 Agenda I. Introduction II. Reliable Multimedia

More information

Integrated and Differentiated Services. Christos Papadopoulos. CSU CS557, Fall 2017

Integrated and Differentiated Services. Christos Papadopoulos. CSU CS557, Fall 2017 Integrated and Differentiated Services Christos Papadopoulos (Remixed by Lorenzo De Carli) CSU CS557, Fall 2017 1 Preliminary concepts: token buffer 2 Characterizing Traffic: Token Bucket Filter Parsimonious

More information

Last time! Overview! 14/04/15. Part1: Lecture 4! QoS! Router architectures! How to improve TCP? SYN attacks SCTP. SIP and H.

Last time! Overview! 14/04/15. Part1: Lecture 4! QoS! Router architectures! How to improve TCP? SYN attacks SCTP. SIP and H. Last time Part1: Lecture 4 QoS How to improve TCP? SYN attacks SCTP SIP and H.323 RTP and RTCP Router architectures Overview two key router functions: run routing algorithms/protocol (RIP, OSPF, BGP) forwarding

More information

DiffServ Architecture: Impact of scheduling on QoS

DiffServ Architecture: Impact of scheduling on QoS DiffServ Architecture: Impact of scheduling on QoS Introduction: With the rapid growth of the Internet, customers are demanding multimedia applications such as telephony and video on demand, to be available

More information

Unresponsive Flow Detection and Control Using the Differentiated Services Framework

Unresponsive Flow Detection and Control Using the Differentiated Services Framework Unresponsive Flow Detection and Control Using the Differentiated Services Framework AHSAN HABIB, BHARAT BHARGAVA Center for Education and Research in Information Assurance and Security (CERIAS) and Department

More information

Lecture 24: Scheduling and QoS

Lecture 24: Scheduling and QoS Lecture 24: Scheduling and QoS CSE 123: Computer Networks Alex C. Snoeren HW 4 due Wednesday Lecture 24 Overview Scheduling (Weighted) Fair Queuing Quality of Service basics Integrated Services Differentiated

More information

Quality of Service (QoS)

Quality of Service (QoS) Quality of Service (QoS) A note on the use of these ppt slides: We re making these slides freely available to all (faculty, students, readers). They re in PowerPoint form so you can add, modify, and delete

More information

EE 122: Differentiated Services

EE 122: Differentiated Services What is the Problem? EE 122: Differentiated Services Ion Stoica Nov 18, 2002 Goal: provide support for wide variety of applications: - Interactive TV, IP telephony, on-line gamming (distributed simulations),

More information

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015

Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 Assignment 7: TCP and Congestion Control Due the week of October 29/30, 2015 I d like to complete our exploration of TCP by taking a close look at the topic of congestion control in TCP. To prepare for

More information

Resource allocation in networks. Resource Allocation in Networks. Resource allocation

Resource allocation in networks. Resource Allocation in Networks. Resource allocation Resource allocation in networks Resource Allocation in Networks Very much like a resource allocation problem in operating systems How is it different? Resources and jobs are different Resources are buffers

More information

Fairness Measurements about TCP Flows in DS Networks: Comparison of Per-flow and Aggregate Marking Schemes

Fairness Measurements about TCP Flows in DS Networks: Comparison of Per-flow and Aggregate Marking Schemes Fairness Measurements about TCP Flows in DS Networks: Comparison of Per-flow and Aggregate Marking Schemes VIRPI LAATU, JARMO HARJU AND PEKKA LOULA Department of the information technology Tampere University

More information

Sections Describing Standard Software Features

Sections Describing Standard Software Features 27 CHAPTER This chapter describes how to configure quality of service (QoS) by using automatic-qos (auto-qos) commands or by using standard QoS commands. With QoS, you can give preferential treatment to

More information

Stateless Proportional Bandwidth Allocation

Stateless Proportional Bandwidth Allocation Stateless Proportional Bandwidth Allocation Prasanna K. Jagannathan *a, Arjan Durresi *a, Raj Jain **b a Computer and Information Science Department, The Ohio State University b Nayna Networks, Inc. ABSTRACT

More information

Performance of Multicast Traffic Coordinator Framework for Bandwidth Management of Real-Time Multimedia over Intranets

Performance of Multicast Traffic Coordinator Framework for Bandwidth Management of Real-Time Multimedia over Intranets Performance of Coordinator Framework for Bandwidth Management of Real-Time Multimedia over Intranets Chin Hooi Tang, and Tat Chee Wan, Member, IEEE ComSoc. Abstract Quality of Service (QoS) schemes such

More information

INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARISON

INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARISON INTEGRATED SERVICES AND DIFFERENTIATED SERVICES: A FUNCTIONAL COMPARON Franco Tommasi, Simone Molendini Faculty of Engineering, University of Lecce, Italy e-mail: franco.tommasi@unile.it, simone.molendini@unile.it

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 37 This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750-E or 3560-E switch. With QoS,

More information

Fair per-flow multi-step scheduler in a new Internet DiffServ node architecture

Fair per-flow multi-step scheduler in a new Internet DiffServ node architecture Fair per- multi-step scheduler in a new Internet DiffServ node architecture Paolo Dini 1, Guido Fraietta 2, Dario Pompili 3 1 paodini@infocom.uniroma1.it, 2 guifra@inwind.it, 3 pompili@dis.uniroma1.it

More information

H3C S9500 QoS Technology White Paper

H3C S9500 QoS Technology White Paper H3C Key words: QoS, quality of service Abstract: The Ethernet technology is widely applied currently. At present, Ethernet is the leading technology in various independent local area networks (LANs), and

More information

CSCD 433/533 Advanced Networks Spring Lecture 22 Quality of Service

CSCD 433/533 Advanced Networks Spring Lecture 22 Quality of Service CSCD 433/533 Advanced Networks Spring 2016 Lecture 22 Quality of Service 1 Topics Quality of Service (QOS) Defined Properties Integrated Service Differentiated Service 2 Introduction Problem Overview Have

More information

Congestion Control and Resource Allocation

Congestion Control and Resource Allocation Problem: allocating resources Congestion control Quality of service Congestion Control and Resource Allocation Hongwei Zhang http://www.cs.wayne.edu/~hzhang The hand that hath made you fair hath made you

More information

QoS for Real Time Applications over Next Generation Data Networks

QoS for Real Time Applications over Next Generation Data Networks QoS for Real Time Applications over Next Generation Data Networks Final Project Presentation December 8, 2000 http://www.engr.udayton.edu/faculty/matiquzz/pres/qos-final.pdf University of Dayton Mohammed

More information

of-service Support on the Internet

of-service Support on the Internet Quality-of of-service Support on the Internet Dept. of Computer Science, University of Rochester 2008-11-24 CSC 257/457 - Fall 2008 1 Quality of Service Support Some Internet applications (i.e. multimedia)

More information

Differentiated Services

Differentiated Services 1 Differentiated Services QoS Problem Diffserv Architecture Per hop behaviors 2 Problem: QoS Need a mechanism for QoS in the Internet Issues to be resolved: Indication of desired service Definition of

More information

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach

Topic 4b: QoS Principles. Chapter 9 Multimedia Networking. Computer Networking: A Top Down Approach Topic 4b: QoS Principles Chapter 9 Computer Networking: A Top Down Approach 7 th edition Jim Kurose, Keith Ross Pearson/Addison Wesley April 2016 9-1 Providing multiple classes of service thus far: making

More information

RED behavior with different packet sizes

RED behavior with different packet sizes RED behavior with different packet sizes Stefaan De Cnodder, Omar Elloumi *, Kenny Pauwels Traffic and Routing Technologies project Alcatel Corporate Research Center, Francis Wellesplein, 1-18 Antwerp,

More information

Congestion in Data Networks. Congestion in Data Networks

Congestion in Data Networks. Congestion in Data Networks Congestion in Data Networks CS420/520 Axel Krings 1 Congestion in Data Networks What is Congestion? Congestion occurs when the number of packets being transmitted through the network approaches the packet

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master

More information

Configuring QoS CHAPTER

Configuring QoS CHAPTER CHAPTER 36 This chapter describes how to configure quality of service (QoS) by using automatic QoS (auto-qos) commands or by using standard QoS commands on the Catalyst 3750 switch. With QoS, you can provide

More information

Master Course Computer Networks IN2097

Master Course Computer Networks IN2097 Chair for Network Architectures and Services Prof. Carle Department for Computer Science TU München Master Course Computer Networks IN2097 Prof. Dr.-Ing. Georg Carle Christian Grothoff, Ph.D. Chair for

More information

Before configuring standard QoS, you must have a thorough understanding of these items:

Before configuring standard QoS, you must have a thorough understanding of these items: Finding Feature Information, page 1 Prerequisites for QoS, page 1 QoS Components, page 2 QoS Terminology, page 3 Information About QoS, page 3 Restrictions for QoS on Wired Targets, page 41 Restrictions

More information

THE Differentiated Services (DiffServ) architecture [1] has been

THE Differentiated Services (DiffServ) architecture [1] has been Efficient Resource Management for End-to-End QoS Guarantees in DiffServ Networks Spiridon Bakiras and Victor O.K. Li Department of Electrical & Electronic Engineering The University of Hong Kong Pokfulam

More information

Basics (cont.) Characteristics of data communication technologies OSI-Model

Basics (cont.) Characteristics of data communication technologies OSI-Model 48 Basics (cont.) Characteristics of data communication technologies OSI-Model Topologies Packet switching / Circuit switching Medium Access Control (MAC) mechanisms Coding Quality of Service (QoS) 49

More information

CS268: Beyond TCP Congestion Control

CS268: Beyond TCP Congestion Control TCP Problems CS68: Beyond TCP Congestion Control Ion Stoica February 9, 004 When TCP congestion control was originally designed in 1988: - Key applications: FTP, E-mail - Maximum link bandwidth: 10Mb/s

More information

Multi-class Applications for Parallel Usage of a Guaranteed Rate and a Scavenger Service

Multi-class Applications for Parallel Usage of a Guaranteed Rate and a Scavenger Service Department of Computer Science 1/18 Multi-class Applications for Parallel Usage of a Guaranteed Rate and a Scavenger Service Markus Fidler fidler@informatik.rwth-aachen.de Volker Sander sander@fz.juelich.de

More information

Improving QOS in IP Networks. Principles for QOS Guarantees

Improving QOS in IP Networks. Principles for QOS Guarantees Improving QOS in IP Networks Thus far: making the best of best effort Future: next generation Internet with QoS guarantees RSVP: signaling for resource reservations Differentiated Services: differential

More information

Adaptive-Weighted Packet Scheduling for Premium Service

Adaptive-Weighted Packet Scheduling for Premium Service -Weighted Packet Scheduling for Premium Service Haining Wang Chia Shen Kang G. Shin The University of Michigan Mitsubishi Electric Research Laboratory Ann Arbor, MI 489 Cambridge, MA 239 hxw,kgshin @eecs.umich.edu

More information

Fair Assured Services Without Any Special Support at the Core

Fair Assured Services Without Any Special Support at the Core Fair Assured Services Without Any Special Support at the Core Sergio Herrería-Alonso, Manuel Fernández-Veiga, Andrés Suárez-González, Miguel Rodríguez-Pérez, and Cándido López-García Departamento de Enxeñería

More information

ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS

ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS ECSE-6600: Internet Protocols Spring 2007, Exam 1 SOLUTIONS Time: 75 min (strictly enforced) Points: 50 YOUR NAME (1 pt): Be brief, but DO NOT omit necessary detail {Note: Simply copying text directly

More information

Request for Comments: K. Poduri Bay Networks June 1999

Request for Comments: K. Poduri Bay Networks June 1999 Network Working Group Request for Comments: 2598 Category: Standards Track V. Jacobson K. Nichols Cisco Systems K. Poduri Bay Networks June 1999 An Expedited Forwarding PHB Status of this Memo This document

More information

Mapping an Internet Assured Service on the GFR ATM Service

Mapping an Internet Assured Service on the GFR ATM Service Mapping an Internet Assured Service on the GFR ATM Service Fernando Cerdán and Olga Casals Polytechnic University of Catalonia, Computer Department Architecture, C/ Jordi Girona 1-3, E-08071 Barcelona

More information

Transport of TCP/IP Traffic over Assured Forwarding IP Differentiated Services 1

Transport of TCP/IP Traffic over Assured Forwarding IP Differentiated Services 1 Transport of TCP/IP Traffic over Assured Forwarding IP Differentiated Services Paolo Giacomazzi, Luigi Musumeci, Giacomo Verticale Politecnico di Milano, Italy Email: {giacomaz, musumeci, vertical}@elet.polimi.it

More information

Converged Networks. Objectives. References

Converged Networks. Objectives. References Converged Networks Professor Richard Harris Objectives You will be able to: Discuss what is meant by convergence in the context of current telecommunications terminology Provide a network architecture

More information

4A0-107 Q&As. Alcatel-Lucent Quality of Service. Pass Alcatel-Lucent 4A0-107 Exam with 100% Guarantee

4A0-107 Q&As. Alcatel-Lucent Quality of Service. Pass Alcatel-Lucent 4A0-107 Exam with 100% Guarantee 4A0-107 Q&As Alcatel-Lucent Quality of Service Pass Alcatel-Lucent 4A0-107 Exam with 100% Guarantee Free Download Real Questions & Answers PDF and VCE file from: 100% Passing Guarantee 100% Money Back

More information