Lab 3 Packet Scheduling

Due Date: Lab report is due on Mar 6 (PRA 01) or Mar 7 (PRA 02)

Teams: This lab may be completed in teams of 2 students (Teams of three or more are not permitted. All members receive the same grade).

Purpose of this lab: Packet scheduling algorithms determine the order of packet transmission at the output link of a packet switch. The simplest and most widely used scheduling algorithm is FIFO (First-In-First-Out), also called FCFS. Scheduling algorithms can become quite complex if the packet switch needs to differentiate traffic types.

Software Tools: The programming for this lab is done in Java and requires the use of Java datagrams. This lab uses traffic traces from Lab 1 and the leaky-bucket component from Lab 2.

What to turn in: Turn in a hard copy of all your answers to the questions in this lab, including the plots, hard copies of all your Java code, and the anonymous feedback form.

Version 1 (February 12, 2007)

Jörg Liebeherr, 2007. All rights reserved. Permission to use all or portions of this material for educational purposes is granted, as long as use of this material is acknowledged in all derivative works.

Table of Contents

Table of Contents 2
Preparing for Lab 3 2
Comments 2
Part 1. FIFO Scheduling 3
Part 2. Priority Scheduling 6
Part 3. Weighted Round Robin (WRR) Scheduler 11
Feedback Form for Lab 3 16

Preparing for Lab 3

This lab requires the programs from Labs 1 and 2.

Comments

Quality of plots in lab report: This lab asks you to produce plots for a lab report. It is important that the graphs are of high quality. All plots must be properly labeled. This includes that the units on the axes of all graphs are included, and that each plot has a header line that describes the content of the graph.

Extra credit: All three parts of this lab have an extra credit component (5% each).

Feedback: To be able to improve the labs for future years, we collect data on the current lab experience. You must submit an anonymous feedback form for each lab. Please use the feedback form at the end of the lab, and return the form with your lab report.

Java: In the Unix lab, the default version of the Java installation is relatively old. To access a more recent version, use the following commands:

Compiling: /local/java/jdk1.5.0_09/bin/javac
Running: /local/java/jdk1.5.0_09/bin/java

ECE 466 LAB 3 - PAGE 2 J. Liebeherr

Part 1. FIFO Scheduling

The first objective of this lab is to explore the backlog and delay at a link with a FIFO (First-In-First-Out) buffer, when the traffic load arriving to the link is increased. The FIFO buffer operates at link capacity C = 1 Mbps. The situation is illustrated in the figure, where packets are represented as rectangular boxes. With FIFO, also referred to as FCFS (First-Come-First-Served), packets are transmitted in the order of their arrival.

[Figure: Arrivals enter a Buffer and depart over a link with capacity C]

The goal of this exercise is to observe and measure the backlog and delay at a FIFO buffer when the load is varied. For traffic arrivals you will use the compound Poisson process.

Exercise 1.1 Traffic generator for compound Poisson traffic

Work with the traffic generator from Exercise 2.1 in Lab 2, which is based on the compound Poisson arrival process from Lab 1, where packet arrival events follow a Poisson process with rate λ = 1250 packets/sec, and the packet size has an exponential distribution with average size 1/μ = 100 Bytes. The average rate of this flow is 1 Mbps. Download a trace with the Poisson data from:

http://www.comm.utoronto.ca/~jorg/teaching/ece466/labs/lab1/poisson3.data

Consider the traffic generator that was built in Exercise 2.1 of Lab 2. Recall that the information in the trace file is re-scaled as follows:

- The time values in the file are multiplied by a factor of 10;
- The packet size values in the file are multiplied by a factor of 10.

This results in a compound Poisson process with packet arrival rate λ = 125 packets/sec, and the packet size has an exponential distribution with average size 1/μ = 1000 Bytes. The average traffic rate is unchanged with this change, and remains 1 Mbps.

At a link with 1 Mbps, the above Poisson source will generate an average load of 100% of the link capacity. The load of a link, also referred to as utilization and denoted by ρ, indicates the percentage of time that a work-conserving link will be busy (busy = transmitting a packet). The utilization is computed as follows:

ρ = (Average packet arrival rate) x (Average transmission time of a packet) = λ x 1/(μC)
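The utilization formula can be checked with a few lines of Java. This is only a sketch to make the arithmetic concrete; the class and method names are illustrative and not part of the required lab code:

```java
/** Illustrative check of the utilization formula rho = lambda * 1/(mu * C). */
public class UtilizationDemo {

    /**
     * Computes the load rho of a link.
     *
     * @param lambda   average packet arrival rate (packets/sec)
     * @param avgBytes average packet size 1/mu (bytes)
     * @param linkBps  link capacity C (bits/sec)
     */
    public static double utilization(double lambda, double avgBytes, double linkBps) {
        // rho = (arrival rate) * (bits per packet) / (link capacity)
        return lambda * avgBytes * 8 / linkBps;
    }

    public static void main(String[] args) {
        // Re-scaled trace: lambda = 125 packets/sec, 1/mu = 1000 bytes, C = 1 Mbps
        System.out.println(utilization(125, 1000, 1e6)); // 100% load
        // At N = 5 (average rate 0.5 Mbps), the load is 50%
        System.out.println(utilization(125, 500, 1e6));
    }
}
```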

Add a feature to the code of your traffic generator that can re-scale the packet size to generate an average traffic rate of N x 0.1 Mbps, where N = 1, 2, ..., 9. Test the correctness of the traffic generator, using the traffic sink from Part 2 in Lab 2.

Exercise 1.2 Implement a FIFO Scheduler

Build a FIFO scheduler that accepts arrivals from the traffic generator of the last exercise, and that transmits to a sink. The FIFO scheduler must satisfy the following requirements:

a. The FIFO scheduler must be able to receive a packet while a packet is being transmitted. This can be done by using a separate thread for receiving packets.
b. The FIFO scheduler transmits an arriving packet immediately if no packet is in transmission. Otherwise, the packet is added to the buffer.
c. After completing the transmission of a packet, the transmitter selects the packet from the buffer with the earliest arrival time.
d. Set the maximum size of the buffer in the FIFO scheduler to 100 kb. If the available buffer size is too small for an arriving packet, the packet is discarded (and a message is displayed).

Determine the maximum rate at which your FIFO scheduler can transmit packets.

Exercise 1.3 Observing a FIFO scheduler at different loads

Use the traffic generator to evaluate the FIFO scheduler with compound Poisson traffic at different loads. Use the added feature in the traffic generator from Exercise 1.1 and run the re-scaled Poisson trace file with an average rate of N x 0.1 Mbps, where N = 1, 2, ..., 9. For each value of N, determine the following values:

- Maximum backlog and waiting time in the buffer;
- Average backlog and waiting time in the buffer;
- Percentage of time that the FIFO scheduler is transmitting;
- Percentage of time that a packet is waiting (i.e., in the buffer and not in transmission);
- Percentage of traffic that is discarded due to a buffer overflow.

Present plots that show the above values as a function of N. Designate ranges of N where the FIFO scheduler is in a regime of low load and high load. Justify your choice.
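Requirements (b) through (d) amount to a bounded FIFO buffer shared between a receiver thread and a transmitter thread. The sketch below shows one way the buffer logic could look in Java; the class and method names are illustrative, and the datagram-receiving thread and the transmission loop are omitted:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Sketch of the FIFO buffer logic for Exercise 1.2 (illustrative names).
 * Packets are kept in arrival order; an arrival that does not fit into the
 * remaining buffer space is discarded, as required in (d). Methods are
 * synchronized so a receiver thread and a transmitter thread can share the buffer.
 */
public class FifoBuffer {
    private final int capacityBytes;               // e.g., the 100 kb buffer limit
    private int usedBytes = 0;
    private final Queue<byte[]> queue = new ArrayDeque<>();

    public FifoBuffer(int capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    /** Adds a packet in FIFO order; returns false (drop) if it does not fit. */
    public synchronized boolean enqueue(byte[] packet) {
        if (usedBytes + packet.length > capacityBytes) {
            System.out.println("Packet dropped: buffer overflow");
            return false;
        }
        queue.add(packet);
        usedBytes += packet.length;
        return true;
    }

    /** Removes the packet with the earliest arrival time (head of the queue). */
    public synchronized byte[] dequeue() {
        byte[] packet = queue.poll();
        if (packet != null) {
            usedBytes -= packet.length;
        }
        return packet;
    }

    /** Current backlog in bytes. */
    public synchronized int backlogBytes() {
        return usedBytes;
    }
}
```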

Exercise 1.4 (Optional, 5% extra credit) Unfairness in FIFO

A limitation of FIFO is that it cannot distinguish different traffic sources. Suppose that there are many low-bandwidth sources and a single traffic source that sends data at a very high rate. If the high-bandwidth source causes an overload at the link, then all traffic sources will experience packet losses due to buffer overflows. From the perspective of a low-bandwidth source, this seems unfair. (The low-bandwidth source would like to see that packet losses experienced at the buffer are proportional to the traffic rate of a source.)

The following experiment tries to exhibit the unfairness issues of FIFO for two traffic sources. There are two traffic sources, which are each re-scaled compound Poisson sources as in Exercise 1.1:

- Source 1 sends at an average rate of N1 x 0.1 Mbps.
- Source 2 sends at an average rate of N2 x 0.1 Mbps.

Both sources send traffic into a FIFO scheduler (with rate C = 1 Mbps) and 100 kb of buffer. Run a series of experiments where the load of the two sources is set to (N1, N2) with N1 = 5 and N2 = 4, 6, 8, 10, 12, 14. Record the average throughput (output rate) of each traffic source (Note: You must be able to keep track of whether a packet transmission is due to Source 1 or Source 2).

Lab Report: Prepare a table that shows the average throughput values and interpret the result. Is it possible to write a formula that predicts the throughput as a function of the arrival rate? Provide the plots and a discussion of the plots. Also include answers to the questions.

Part 2. Priority Scheduling

FIFO scheduling gives all traffic the same type of service. If the network carries traffic types with different characteristics and service requirements, a FIFO scheduling algorithm is not sufficient. To differentiate traffic types and give each type a different grade of service, more sophisticated scheduling algorithms are needed. One of these algorithms is the priority scheduling algorithm.

For example, consider a mix of file sharing traffic and voice (e.g., voice-over-IP) traffic: File sharing traffic is high-volume and transmitted in large packets, whereas voice-over-IP traffic has a relatively low data rate and is transmitted in short packets. If the traffic is handled by a FIFO scheduler, as shown below, then voice packets may experience a poor grade of service depending on the arrivals of file sharing packets.

[Figure: A FIFO scheduler serving a mix of file sharing packets and voice-over-IP packets]

A priority scheduler can improve the service given to voice packets by giving voice packets higher priority. A priority scheduler always selects the packet with the highest priority level for transmission.

[Figure: A priority scheduler with a traffic classification component feeding a high priority queue and a low priority queue, served by a link with rate C]

A priority scheduler assumes that incoming traffic can be mapped to a priority level. A traffic classification component of the scheduler uses an identifier in incoming packets to perform this mapping. The identifier can be based on source and destination addresses, application type (e.g., via the port number), or other information in packet headers. Priority schedulers are also referred to as static priority (SP) or Head-of-Line (HOL) schedulers.

In this part of the lab, you will design and implement a priority scheduler with two priority levels, as shown in the figure below. Traffic is transmitted to the scheduler from two sources: a compound Poisson source and a video source. The sources label packets with an identifier: 1 for packets from the Poisson source and 2 for packets from the video source. A traffic classifier at the priority scheduler reads the identifier. Packets with label 1 are handled as low priority, and packets with label 2 are handled as high priority packets.

[Figure: Two traffic generators, one fed by a Poisson tracefile and one by a video tracefile, send packets labeled 1 and 2 to a traffic classification component with a high priority and a low priority queue and link rate C. Left: transmission of labeled packets. Right: classification and scheduling.]

Exercise 2.1 Transmission of labeled packets

Build a traffic generator as shown on the left hand side of the figure above. The requirements for the transmission of packets are as follows:

- Build a traffic generator for a video tracefile and for a Poisson tracefile. The transmissions of the video source and the Poisson source are performed by two distinct programs.
- The video source generates packets at a rate of 256 kbps. The tracefile for the video source can be downloaded from http://trace.eas.asu.edu/trace/pics/frametrace/h263/verbose_jurassic_256.dat

The format of the file is shown below:

#Time [ms]  Frametype  Length [byte]
#
0    I    687
40   P    345
120  PB   7584

A traffic generator for the video source can be built by re-using the code from Exercise 1.2 and Exercise 2.1 from Lab 2. As in Lab 2, the maximum amount of data that can be put into a single packet is 1480 bytes. Frames exceeding this length are divided and transmitted in multiple packets.

- The transmission of the Poisson source is determined by the Poisson traffic generator built in Exercise 1.1 of this lab (Lab 3). The traffic generator must be able to run the re-scaled Poisson trace file with an average rate of N x 0.1 Mbps, where N = 1, 2, ..., 9.
- Before a packet is transmitted, it must be labeled with an identifier, which is located in the first byte of the payload. The identifier is the number 0x01 for packets from the Poisson source and 0x02 for the video source.
- Packets are transmitted to a remote UDP port. Both sources transmit UDP datagrams to the same destination port on the same host (e.g., port 4444).

You may use a traffic sink as built for Exercise 2.2 of Lab 2 for testing the implementation.

Exercise 2.2 Packet classification and priority scheduling

Build a traffic classification and scheduling component as shown on the right hand side of the above figure.

- The priority scheduler consists of two FIFO queues: one FIFO queue for high priority traffic and one FIFO queue for low priority traffic. Set the maximum buffer size of each FIFO queue to 100 kb.
- The priority scheduler always transmits a high priority packet if the high priority FIFO queue contains a packet. Low priority packets are selected for transmission only when there are no high priority packets. The transmission rate of the link is C = 1 Mbps.
- The traffic classification component reads the first byte of the payload of an arriving packet and identifies the priority label. (The starting point for the traffic classification component can be the FIFO scheduler from Exercise 1.2.) Once classified, packets are assigned to the priority queues. Video traffic (with label 2) is assigned to the high priority queue and Poisson traffic (with label 1) is assigned to the low priority queue.
- If a new packet arrives when the link is idle (i.e., no packet is in transmission and no packet is waiting in the FIFO queues), the arriving packet is immediately transmitted. Otherwise, the packet is enqueued in the corresponding FIFO queue.
- The priority scheduler is work-conserving: As long as there is a packet waiting, the scheduler must transmit a packet.
- Packet transmission is non-preemptive: Once the transmission of a packet has started, the transmission cannot be interrupted. In particular, when a low priority packet is in transmission, an arriving high priority packet must wait until the transmission is completed.

Test the program with the traffic generator from Exercise 2.1.

Exercise 2.3 Evaluation of the priority scheduler

Evaluate the priority scheduler with the traffic generator from Exercise 2.1 using the following transmission scenarios:

- The video source transmits according to the data in the tracefile (see Exercise 2.1).
- As in Exercise 1.1, the Poisson source is re-scaled so that it transmits with an average rate of N x 0.1 Mbps, where N = 1, 2, ..., 9.

For each value of N, determine the following values for both high and low priority traffic:

- Maximum backlog in the buffers;
- Average backlog in the buffer;
- Percentage of time that the scheduler is transmitting;
- Percentage of time that a packet is waiting in the high (and low priority) queue.

Present plots that show the above values as a function of N. Compare the outcome to Exercise 1.3.

Exercise 2.4 (Optional, 5% extra credit) Starvation in priority schedulers

A limitation of SP scheduling is that it always gives preference to high priority traffic. If the load from high priority traffic is very high, it may completely pre-empt low priority traffic from the link. This is referred to as starvation. The following experiment tries to exhibit the starvation of low priority traffic. The experiment is similar to the last exercise of Part 1.

Consider the previously built priority scheduler with two priority classes. There are two traffic sources, which are each re-scaled compound Poisson sources as in Exercise 1.1:

- Source 1 transmits at an average rate of N1 x 0.1 Mbps.
- Source 2 transmits at an average rate of N2 x 0.1 Mbps.

Traffic from Source 1 is labelled with identifier 1 (low priority) and Source 2 is labelled with 2 (high priority).
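The labeling and classification steps of Exercises 2.1 and 2.2 can be sketched in Java as follows. The names are illustrative, and the surrounding UDP send/receive code is omitted; only the handling of the one-byte identifier in the first payload byte is shown:

```java
import java.net.DatagramPacket;

/**
 * Sketch of the labeling and classification logic (illustrative names).
 * The identifier sits in the first byte of the UDP payload:
 * 0x01 for the Poisson source (low priority), 0x02 for the video source (high priority).
 */
public class PriorityClassifier {

    public static final byte POISSON = 0x01; // low priority
    public static final byte VIDEO = 0x02;   // high priority

    /** Sender side: prepend the one-byte identifier to the packet data. */
    public static byte[] label(byte id, byte[] data) {
        byte[] payload = new byte[data.length + 1];
        payload[0] = id;
        System.arraycopy(data, 0, payload, 1, data.length);
        return payload;
    }

    /** Receiver side: read the identifier from the first payload byte. */
    public static boolean isHighPriority(DatagramPacket packet) {
        return packet.getData()[packet.getOffset()] == VIDEO;
    }
}
```

On the receiver side, the classified packet would then be added to the high or low priority queue of the scheduler.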

Both sources send traffic to the priority scheduler (with rate C = 1 Mbps) and 100 kb of buffer for each queue. Run a series of experiments where the load of the two sources is set to (N1, N2) with N1 = 5 and N2 = 4, 6, 8, 10, 12, 14. Record the average throughput (output rate) of each traffic source. Prepare a table that shows the average throughput of high and low priority traffic and interpret the result. Compare the outcome to Exercise 1.4.

Lab Report: Provide the plots and a discussion of the plots. Also include answers to the questions.

Part 3. Weighted Round Robin (WRR) Scheduler

Many scheduling algorithms attempt to achieve a notion of fairness by regulating the fraction of link bandwidth allocated to each traffic source. The objectives of a fair scheduler are as follows:

- If the link is not overloaded, a traffic source should be able to transmit all of its traffic.
- If the link is overloaded, each traffic source obtains the same rate guarantee, called the fair share, with the following rules: If the traffic from a source is less than the fair share, it can transmit all its traffic; if the traffic from a source exceeds the fair share, it can transmit at a rate equal to the fair share.

The fair share depends on the number of active sources and their traffic rate. Suppose we have a set of sources where the arrival rate of Source i is r_i, and a link with capacity C (bps). If the arrival rate exceeds the capacity, i.e., sum_i r_i > C, then the fair share is the number f that satisfies the equation:

sum_i min(r_i, f) = C

As an example, suppose we have a link with capacity 10 Mbps, and the arrival rates of the flows are r_1 = 8 Mbps, r_2 = 6 Mbps, and r_3 = 2 Mbps. Then the fair share is f = 4 Mbps, resulting in an allocated rate of 4 Mbps for Source 1, 4 Mbps for Source 2, and 2 Mbps for Source 3.

Since different sources have different resource requirements, it is often desirable to associate a weight (w_i) with each source, and allocate bandwidth proportionally to the weights. In other words, a source that has a weight twice as large as a second source should be able to obtain twice the bandwidth of the second source. With these weights, the fair share f at an overloaded link, i.e., a link with sum_i r_i > C, is obtained by solving

sum_i min(r_i, w_i * f) = C

For example, using the previous example and assigning weights w_1 = 3, w_2 = w_3 = 1, the allocated rates are 6, 2, and 2 Mbps for the three sources.
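The fair share examples above can be checked numerically. The sketch below solves sum_i min(r_i, w_i * f) = C for f by bisection; it is a verification aid with illustrative names, not part of the required lab code:

```java
import java.util.Arrays;

/** Numerical computation of the (weighted) fair share, for checking the examples. */
public class FairShare {

    /**
     * Solves sum_i min(r_i, w_i * f) = C for f by bisection.
     * Assumes the link is overloaded, i.e., sum_i r_i > capacity.
     */
    public static double fairShare(double[] r, double[] w, double capacity) {
        double lo = 0;
        double hi = Arrays.stream(r).sum(); // allocated rate is non-decreasing in f
        for (int iter = 0; iter < 100; iter++) {
            double f = (lo + hi) / 2;
            double allocated = 0;
            for (int i = 0; i < r.length; i++) {
                allocated += Math.min(r[i], w[i] * f);
            }
            if (allocated < capacity) lo = f; else hi = f;
        }
        return (lo + hi) / 2;
    }

    public static void main(String[] args) {
        double[] r = {8, 6, 2};
        // Equal weights: f = 4 Mbps, allocations 4, 4, 2 Mbps
        System.out.println(fairShare(r, new double[]{1, 1, 1}, 10));
        // Weights 3, 1, 1: f = 2 Mbps, allocations 6, 2, 2 Mbps
        System.out.println(fairShare(r, new double[]{3, 1, 1}, 10));
    }
}
```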

A scheduling algorithm that realizes this scheme without weights is called Processor Sharing (PS), and with weights Generalized Processor Sharing (GPS). Both PS and GPS are idealized algorithms, since they treat traffic as a fluid. Realizing fairness in a packet network turns out to be quite hard, since packets have different sizes (50 to 1500 bytes) and packet transmissions cannot be interrupted. Many commercial IP routers and Ethernet switches (not the cheap ones!) implement scheduling algorithms that approximate the GPS scheduling algorithm. A widely used (and easy to implement) scheduling algorithm that approximates GPS is the Weighted Round Robin (WRR) scheduler. The objective of this part of the lab is implementing and evaluating a WRR scheduling algorithm.

[Figure: A WRR scheduler with a traffic classification component feeding Queue 1, Queue 2, and Queue 3, served by a link with rate C]

A WRR scheduler, illustrated in the figure above, operates as follows:

- The scheduler has multiple FIFO queues. A traffic classification unit assigns an incoming packet to one of the FIFO queues.
- The WRR scheduler operates in rounds. In each round the scheduler visits each queue in a round-robin fashion, starting with Queue 1. During each visit of a queue, one or more packets may be serviced.
- The WRR scheduler assumes that one can estimate (or knows) the average packet size of the arrivals to Queue i, denoted by L_i. The WRR scheduler calculates the number of packets to be served in each round:

  For each Queue i: x_i = w_i / L_i
  x = min_i { x_i }
  For each Queue i: packets_per_round_i = x_i / x

- Once all packets of a round are transmitted, or if no more packets are left, the scheduler visits the next queue.
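The packets-per-round calculation above can be sketched in Java as follows. Rounding the ratio x_i / x up to an integer is an assumption of this sketch (the calculation above leaves the rounding rule open), and the names are illustrative:

```java
/**
 * Sketch of the packets-per-round calculation of the WRR scheduler.
 * Fractional ratios are rounded up so that every queue is served at least
 * once per round; the rounding rule is an assumption of this sketch.
 */
public class WrrRounds {

    /**
     * @param w weight of each queue
     * @param L average packet size (bytes) of the arrivals to each queue
     * @return number of packets served per round for each queue
     */
    public static int[] packetsPerRound(double[] w, double[] L) {
        double[] x = new double[w.length];
        double xMin = Double.POSITIVE_INFINITY;
        for (int i = 0; i < w.length; i++) {
            x[i] = w[i] / L[i];              // x_i = w_i / L_i
            xMin = Math.min(xMin, x[i]);     // x = min_i { x_i }
        }
        int[] perRound = new int[w.length];
        for (int i = 0; i < w.length; i++) {
            // packets_per_round_i = x_i / x, rounded up
            // (the small tolerance guards against floating-point round-off)
            perRound[i] = (int) Math.ceil(x[i] / xMin - 1e-9);
        }
        return perRound;
    }
}
```

For instance, with equal weights and average packet sizes of 800, 600, and 200 bytes (the scenario of Exercise 3.2), the three queues are served 1, 2, and 4 packets per round, approximating the 4:4:2 byte sharing ratio.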

Exercise 3.1 Build a WRR scheduler

Build a WRR scheduler as described above that supports at least three queues. The WRR scheduler will serve Poisson traffic sources as used in Parts 1 and 2 of this lab.

[Figure: Three traffic generators, each fed by a Poisson tracefile, send packets labeled 1, 2, and 3 to a traffic classification component, which feeds Queue 1, Queue 2, and Queue 3 of a link with rate C]

- There are three traffic generators that each transmit re-scaled Poisson traffic as done in Exercise 1.1. Each Poisson source is re-scaled so that it transmits with an average rate of N x 0.1 Mbps, where N = 1, 2, ..., 9.
- Each transmitted packet is labelled with 1, 2, or 3 as done in Part 2 of this lab. The label is one byte long.
- The traffic classification component reads the first byte of the payload of an arriving packet, and adds the packet to the corresponding queue (packets with label 1 are associated with Queue 1, etc.). If no packet is in transmission or in the queue, an arriving packet is transmitted immediately.
- The transmission rate of the link is C = 1 Mbps.
- The maximum buffer size of each queue is 100 kb. An arrival that cannot be stored in the queue is discarded.

Once the implementation is completed and tested, move on to the evaluation.

Exercise 3.2 Evaluation of a WRR scheduler: Equal weights

Evaluate the WRR scheduler with three Poisson sources. The rate parameter N of the traffic generators is set to N = 8 for the first source, N = 6 for the second source, and N = 2 for the third source. Set the weights of the queues to w_1 = w_2 = w_3 = 1. The average packet size of a source is set to N x 100 bytes.

Note: Compare this scenario to the first figure of Part 3. The average load on the link is 1.6 Mbps, i.e., the link is overloaded. We expect that the bandwidth at the link is shared among the sources at a ratio of 4:4:2.

Prepare plots that show the number of packet transmissions and the number of transmitted bytes from a particular source (y-axis) as a function of time (x-axis). Provide one plot for each source. Select a reasonable time scale for the x-axis, e.g., a time scale of 10 ms per data point.

Compare the plots with the theoretically expected values of a PS scheduler.

Exercise 3.3 Evaluation of a WRR scheduler: Different weights

This exercise re-creates a transmission scenario as in the second figure of Part 3. Evaluate the WRR scheduler with three Poisson sources. As before, the rate parameter N of the traffic generators is set to N = 8 for the first source, N = 6 for the second source, and N = 2 for the third source. Set the weights of the queues to w_1 = 3 and w_2 = w_3 = 1.

Note: Compare this scenario to the second figure of Part 3. We expect that the bandwidth at the link is shared among the sources at a ratio of 6:2:2.

Prepare plots that show the number of packet transmissions and the number of transmitted bytes from a particular source (y-axis) as a function of time (x-axis). Provide one plot for each source. Compare the plots with the theoretically expected values of a GPS scheduler.

Exercise 3.4 (Optional, 5% extra credit) No Unfairness and no Starvation in WRR

A limitation of SP scheduling is that it always gives preference to high priority traffic. If the load from high priority traffic is very high, it may completely pre-empt low priority traffic from the link. This is referred to as starvation. The following experiment tries to show that WRR does not suffer from the problems of FIFO (unfairness) and SP (starvation). The experiment retraces the steps of Exercises 1.4 and 2.4.

Consider the WRR scheduler from above with rate C = 1 Mbps and 100 kb of buffer for each queue. There are two traffic sources, which are each re-scaled compound Poisson sources as in Exercise 1.1:

- Source 1 transmits at an average rate of N1 x 0.1 Mbps.
- Source 2 transmits at an average rate of N2 x 0.1 Mbps.

Traffic from Source 1 is labelled with identifier 1 (low priority) and Source 2 is labelled with 2 (high priority). Both sources are assigned the same weight at the WRR scheduler (w_1 = w_2 = 1). Run a series of experiments where the load of the two sources is set to (N1, N2) with N1 = 5 and N2 = 4, 6, 8, 10, 12, 14. Record the average throughput (output rate) of each traffic source.

Lab Report: Prepare a table that shows the average throughput of the two sources and interpret the result. Compare the outcome to Exercises 1.4 and 2.4. Provide the plots and a discussion of the plots. Also include answers to the questions.

Feedback Form for Lab 3

Complete this feedback form at the completion of the lab exercises and submit the form when submitting your lab report. The feedback is anonymous. Do not put your name on this form and keep it separate from your lab report.

For each of Part 1 (FIFO Scheduling), Part 2 (Priority Scheduling), and Part 3 (Weighted Round Robin (WRR) Scheduler), please record the following:

- Difficulty (-2, -1, 0, 1, 2): -2 = too easy, 0 = just fine, 2 = too hard
- Interest Level (-2, -1, 0, 1, 2): -2 = low interest, 0 = just fine, 2 = high interest
- Time to complete (minutes)

Please answer the following questions:

- What did you like about this lab?
- What did you dislike about this lab?
- Make a suggestion to improve the lab.