TELE9751 - Switching Systems and Architecture Assignment Week 10 Lecture Summary - Traffic Management (including scheduling) Student Name and zid: Akshada Umesh Lalaye - z5140576 Lecturer: Dr. Tim Moors
Traffic Management: Traffic management is the regulation of the flow of traffic into network entry points. It improves the service the network delivers and ensures that traffic is blocked before it causes congestion. Traffic management aims to fairly allocate resources to adaptive applications (e.g. file transfer), to guarantee performance to non-adaptive applications (e.g. video calls), and to keep the system implementation low-cost.
Need for Traffic Management: Multiplexing causes two main issues within the network: burstiness and delay. Due to load from other streams, a stream can face delays or may even be distorted, as shown in Figure 1. If the source limits its traffic, the network can guarantee service, as shown in Figure 2.
Service Models: There are two types of service models: connectionless and connection-oriented.
1. Connectionless: This model is simple to implement but provides poor service guarantees. The source marks the priority of its traffic and sends it over the network, and the network gives service to high-priority traffic. Since a source can mark all of its traffic as high priority, the network can become overloaded, so this model cannot guarantee service.
2. Connection-oriented: The source/client specifies two things: its traffic characteristics (e.g. burstiness) and its desired performance (e.g. delay or loss requirements). The network checks whether it can support the new client and then either accepts or rejects it. The network regulates incoming traffic if the client tries to violate the agreed terms, which can be renegotiated during transfers.
Traffic Regulation: Traffic regulation is controlling the traffic entering the network by defining parameters for it (e.g. average rate, short-term rate, maximum burst size) and then ensuring that the traffic does not exceed the declared parameters. This is mainly achieved via Leaky Buckets.
Leaky Bucket Algorithm: The host computer sends random bursts of unregulated data, which a leaky bucket mechanism limits so that the outflow is constant irrespective of the inflow of traffic (refer Figure 3). This regulation is done by a Token Regulator. As shown in Figure 4, packets require a token to pass through: when a packet is passed, a token is removed from the bucket. Tokens arrive in the bucket at a constant rate; if the bucket is full, arriving tokens are discarded, and if no token is present, the packet may be buffered or discarded. The regulator comes in three types:
1. Policer: Packets which cannot be sent through the bucket are discarded.
2. Shaper: Packets which cannot be sent immediately are buffered.
3. Marker/Tagger: Packets which cannot be sent through the bucket are marked as low priority and passed once higher-priority traffic has completed transmission.
Thus, the leaky bucket limits loss due to buffer overflow, and it can also be used to guarantee delay and throughput by means of scheduling, as explained below.
Scheduling: Scheduling is applied to resolve contention, especially when packets are queued for an output port. Scheduling decides the service order, the service time, and whether to serve or discard a packet. The aims of scheduling are fairness, protection, efficiency of implementation, and the ability to guarantee performance.
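The token-regulated leaky bucket described above can be sketched as follows. This is a minimal policer variant (non-conforming packets are dropped); the class and parameter names are illustrative, not from the lecture notes.

```python
class TokenBucketPolicer:
    """Minimal token-bucket policer sketch: tokens accrue at a fixed
    rate up to a cap, and a packet passes only if a token is available."""

    def __init__(self, rate, bucket_size):
        self.rate = rate                # tokens added per second
        self.bucket_size = bucket_size  # maximum tokens the bucket holds
        self.tokens = bucket_size       # start with a full bucket
        self.last_time = 0.0

    def allow(self, now, packet_tokens=1):
        """Return True if the packet may pass, False if it is policed (dropped)."""
        # Accrue tokens for the elapsed time; overflow tokens are discarded.
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= packet_tokens:
            self.tokens -= packet_tokens
            return True
        return False
```

A shaper would instead queue the packet until a token arrives, and a marker/tagger would forward it with a low-priority mark rather than returning False.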
Fairness: Fairness is ensuring a controllable allocation of resources to the different sources in a network. It can be measured by the Fairness Index or achieved by the Max-Min Fairness Algorithm.
Fairness Index: When the index is 1, all users get the same allocation of resources. Otherwise, some users get the same allocation while the remaining users get no resources, as depicted in Figure 5.
Max-Min Fairness Algorithm: No source receives more than its demand. The main aim is to maximise network utility in such a way that every source gets at least its minimum resource, satisfying the smallest demands first.
Disadvantage: Fairness may result in reduced throughput.
Protection: Flows can be protected from misbehaving flows by being fair and by regulating traffic using the Leaky Bucket Algorithm.
Scheduling Algorithms: Scheduling algorithms are used to provide service guarantees (refer Figure 6).
First Come First Serve: Packets are served in order of arrival. Advantage: simple to implement. Disadvantage: cannot protect against sources which transmit often, so one source can consume more bandwidth than others.
Round Robin: Each source gets access to the resource in a pre-determined circular order for a fixed duration of time. Advantage: weighting can be used to increase a source's throughput. Disadvantage: the mean packet length is required to calculate the weights, and low-weight sources can be deprived of access for long periods.
Generalised Processor Sharing (GPS): The operation is similar to weighted round robin, except that traffic is assumed to be fluid, i.e. served in arbitrarily small units (bits). The service rate is decided by the weight, and excess arrivals are stored in buffers. GPS is considered the ideal scheduling algorithm.
Fair Queuing: No weights are considered, and the buffer is assumed to be of infinite size. The finish order each packet would have under GPS is computed, and packets are then served in that order.
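The max-min (progressive filling) allocation described above can be sketched as follows, assuming a single shared link of capacity C and a list of per-source demands; the function name is illustrative. Sources with the smallest demands are satisfied first, and the leftover capacity is split evenly among the unsatisfied sources.

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation: no source gets more than its demand,
    and no satisfied source could get more without reducing a smaller
    allocation. Returns one allocation per source."""
    allocation = [0.0] * len(demands)
    # Process sources in order of increasing demand.
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    while remaining and capacity > 0:
        share = capacity / len(remaining)  # equal split of what is left
        i = remaining[0]
        if demands[i] <= share:
            # Smallest demand fits within an equal share: satisfy it fully.
            allocation[i] = demands[i]
            capacity -= demands[i]
            remaining.pop(0)
        else:
            # No remaining demand fits; give everyone an equal share.
            for j in remaining:
                allocation[j] = share
            capacity = 0
            remaining = []
    return allocation
```

For example, with capacity 10 and demands [2, 8, 8], the small source is fully satisfied with 2 and the other two split the remaining 8, getting 4 each, which illustrates both properties: nobody exceeds its demand, and the unsatisfied sources share equally.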
Fair queuing can handle variable-length packets and can also provide service guarantees. If the buffer size is limited, the packets with the largest finish numbers are discarded. Weights can also be introduced, giving Weighted Fair Queuing (WFQ). The finish number of a packet is computed from its length:
Inactive flow: F_i = R + L
Active flow: F_i = F_{i-1} + L (the flow remains active until R = F_i)
dR/dt = 1 / (number of active flows)
where i is the packet index, F is the finish number, R is the round number, and L is the packet length.
Thus, fair queuing can guarantee throughput and delay, but its admission control is inefficient.
Non-Work-Conserving Scheduling: These schemes may remain idle even when packets are queued. Advantages: efficient admission control, tighter delay bounds, reduced delay variability, the ability to send low-priority traffic while idle, and reduced buffer sizes. Disadvantages: increased average delay, reduced throughput, and more complicated switches.
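The finish-number rules above can be sketched as follows. The two cases (inactive: F_i = R + L; active: F_i = F_{i-1} + L) combine into F_i = max(F_{i-1}, R) + L. For brevity this sketch assumes the round number R at each arrival is already known (tracking R via dR/dt = 1/#active is omitted); flow names and the packet tuple layout are illustrative.

```python
def fq_service_order(packets):
    """packets: list of (flow, round_at_arrival, length) in arrival order.
    Tags each packet with its finish number F_i = max(F_{i-1}, R) + L
    and returns the flows in the order fair queuing would serve them."""
    last_finish = {}  # finish number of each flow's previous packet
    tagged = []
    for k, (flow, r, length) in enumerate(packets):
        f = max(last_finish.get(flow, 0.0), r) + length
        last_finish[flow] = f
        tagged.append((f, k, flow))  # arrival index k breaks ties
    return [flow for f, k, flow in sorted(tagged)]
```

For example, if flow A sends a 5-unit packet and a 1-unit packet while flow B sends a 2-unit packet (all arriving at round 0), B's small packet finishes first under GPS, so fair queuing serves B before A despite A arriving first.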
References:
[1] Dr. Tim Moors, Lecture Notes, 9.pdf (Week 10 - Traffic Management (including scheduling))
[2] Varghese, G. (2005). Network Algorithmics: An Interdisciplinary Approach to Designing Fast Networked Devices. Amsterdam: Elsevier/Morgan Kaufmann, Chapter 14.
Appendix:
Figure 1 - Delay due to multiplexing traffic
Figure 2 - Source limits traffic: service guarantee by network
Figure 3 - Leaky Bucket Algorithm
Figure 4 - Token Regulator
Figure 5 - Fairness Index
Figure 6 - Scheduling Algorithms: First Come First Serve (FCFS), Round Robin, Generalised Processor Sharing (GPS), Fair Queuing