Lecture 5: Performance Analysis I


CS 6323: Modeling and Inference. Lecture 5: Performance Analysis I. Prof. Gregory Provan, Department of Computer Science, University College Cork. Slides based on M. Yin (Performability Analysis).

Overview: Queuing Models; Birth-Death Process; Poisson Process; Task Graphs.

Network Modelling. We model the transmission of packets through a network using queueing theory. This enables us to determine several QoS metrics: throughput, delays, etc. It gives a predictive model. The properties of a queueing network depend on the network topology, the performance of each path in the network, and the network interactions.

Queuing Networks. A queuing network consists of service centers and customers (often called jobs). A service center consists of one or more servers and one or more queues to hold customers waiting for service. λ: customer arrival rate; μ: service rate. 1/24/2014

Interarrival. Interarrival time: the time between successive customer arrivals. Arrival process: the process that determines the interarrival times. It is common to assume that interarrival times are exponentially distributed random variables; in this case, the arrival process is a Poisson process. λ: customer arrival rate; μ: service rate.

Service Time. The service time depends on how much work the customer needs and how fast the server is able to perform the work. Service times are commonly assumed to be independent, identically distributed (iid) random variables.

A Queuing Model Example. [Figure: arriving customers join a queue feeding three servers (cpu1, cpu2, cpu3) in Service Center 1, which in turn feeds Disk1 (Service Center 2) and Disk2 (Service Center 3). Response time = queuing delay + service time.]

Terminology. Buffer size: the total number of customers allowed at a service center; it usually includes the ones waiting for service and those receiving service. Population size: the total number of potential customers that may want to enter the service center; when it is large enough, it is usually assumed to be infinite. Queuing discipline: the algorithm that determines the order in which customers are served; some commonly recognized queuing disciplines are FCFS, priority, round robin (time sharing), etc. Preemptive: a queuing discipline is called preemptive when the service being provided to a customer can be suspended if a higher-priority customer arrives.

Open vs. Closed Queues. Open: customers arrive from an external source. Closed: no external source of customers and no departures.

Measures of Interest. Queue length. Response time. Throughput: the number of customers served per unit of time. Utilization: the fraction of time that a service center is busy. A relationship between throughput and response time: Little's law.

Little's Law. The mean number of jobs in a queuing system in the steady state is equal to the product of the arrival rate and the mean response time: L = λW. L: the average number of customers in the queuing system; λ: the average arrival rate of customers admitted to the system; W: the average time spent by a customer in the queuing system.

Example using Little's law: Cork Tunnel. Observe 120 cars queued in the Cork Tunnel, and 32 cars/minute arriving at the tunnel. What is the average waiting time before and in the tunnel? W = L/λ = 120/32 = 3.75 min.
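The arithmetic above is simple enough to check in code. A minimal sketch (the function name is ours, chosen for illustration):

```python
def littles_law_wait(L, arrival_rate):
    """Little's law: L = lambda * W, rearranged as W = L / lambda."""
    return L / arrival_rate

# Cork Tunnel: 120 cars queued, 32 cars/minute arriving
w = littles_law_wait(120, 32)
print(w)  # 3.75 (minutes)
```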

Model Queuing System. A queuing system = queue + server. Strategy: use Little's law on both the complete system and its parts to reason about average time in the queue.

Queuing Notation. Queue + server: called an M/M/1 queue (Arrival Process / Service Process / # Servers).

M/M/1 Queue. The first M means the arrival process is exponentially distributed. The second M means the service process is exponentially distributed. 1 means the number of servers is 1. Buffer size and population size are assumed infinite, and the First-Come First-Served discipline is applied.

M/M/2 Queue. Arriving customers join a single queue served by two servers: exponential arrival process, exponential service process, 2 servers.

General (Kendall) Notation. A queuing model is usually described as X/Y/Z/K/L/D (M denotes the exponential distribution, E Erlang, H hyperexponential, D deterministic, and G general). X: arrival process. Y: service process. Z: number of servers at the service center. K: buffer size. L: population size. D: the queuing discipline. K, L and D are often omitted, in which case K and L are infinite and D is FCFS.

Distributions. M: exponential. D: deterministic (e.g. a fixed constant). E_k: Erlang with parameter k. H_k: hyperexponential with parameter k. G: general (anything). M/M/1 is the simplest realistic queue.

Kendall Notation Examples. M/M/1: exponential arrivals and service, 1 server, infinite capacity and population, FCFS (FIFO). M/M/m: same, but m servers. G/G/3/20/1500/SPF: general arrival and service distributions, 3 servers, 17 queue slots (20−3), 1500 total jobs, Shortest Packet First.

M/M/1 queue model. The quantities of interest are L and W for the whole system, and L_q and W_q for the queue alone, given arrival rate λ and service rate μ.

Analysis of M/M/1 queue. Goal: a closed-form expression for the probability of i jobs in the queue (P_i), given only λ and μ.

Solving queuing systems. Given: λ, the arrival rate of jobs (packets on the input link), and μ, the service rate of the server (output link). Solve for: L, the average number in the queuing system; L_q, the average number in the queue; W, the average waiting time in the whole system; W_q, the average waiting time in the queue. 4 unknowns: need 4 equations.

Solving queuing systems. 4 unknowns: L, L_q, W, W_q. Relationships using Little's law: L = λW; L_q = λW_q (steady-state argument); W = W_q + 1/μ. If we know any one of them, we can find the others. Finding L is hard or easy depending on the type of system. In general: L = Σ_{n≥0} n·P_n.

Equilibrium conditions. In steady state, inflow = outflow at every state of the chain (…, n−1, n, n+1, …): (1) λP_0 = μP_1; (2) (λ + μ)P_n = λP_{n−1} + μP_{n+1}; (3) stability: ρ = λ/μ < 1.

Solving for P_0 and P_n. (1) P_1 = (λ/μ)P_0. (2) P_2 = (λ/μ)²P_0, and in general P_n = (λ/μ)ⁿP_0 = ρⁿP_0. (3) Σ_{n≥0} P_n = 1. (4) So P_0 Σ_{n≥0} ρⁿ = 1, and since Σ_{n≥0} ρⁿ = 1/(1−ρ) (geometric series), P_0 = 1 − ρ. (5) Therefore P_n = ρⁿ(1 − ρ).

Solving for L. L = Σ_{n≥0} n·P_n = (1−ρ) Σ_{n≥0} n·ρⁿ = (1−ρ)·ρ·Σ_{n≥1} n·ρⁿ⁻¹ = (1−ρ)·ρ·(d/dρ) Σ_{n≥0} ρⁿ = (1−ρ)·ρ·(d/dρ)[1/(1−ρ)] = (1−ρ)·ρ/(1−ρ)² = ρ/(1−ρ) = λ/(μ−λ).

Solving W, W_q and L_q. W = L/λ = 1/(μ−λ). W_q = W − 1/μ = λ/(μ(μ−λ)). L_q = λW_q = λ²/(μ(μ−λ)) = ρ²/(1−ρ).
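The closed forms above can be collected into one small helper; a sketch whose function and variable names (mm1_metrics, lam, mu) are ours, not from the slides:

```python
def mm1_metrics(lam, mu):
    """Closed-form M/M/1 results; valid only when lam < mu (rho < 1)."""
    if lam >= mu:
        raise ValueError("unstable queue: requires lam < mu")
    rho = lam / mu                  # utilization
    L = rho / (1 - rho)             # mean number in system
    W = 1 / (mu - lam)              # mean time in system
    Wq = lam / (mu * (mu - lam))    # mean time waiting in queue
    Lq = lam * Wq                   # mean number in queue (Little's law)
    return {"rho": rho, "L": L, "W": W, "Wq": Wq, "Lq": Lq}
```

Note the internal consistency: for any stable (λ, μ) the returned values satisfy L = λW and W = W_q + 1/μ.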

Response Time vs. Arrivals. [Figure: W = 1/(μ−λ) plotted against utilization ρ; waiting time grows without bound as ρ → 1.]

Stable Region. [Figure: the same curve at low utilization, the linear region, where waiting time grows almost linearly with ρ.]

Empirical Example M/M/m system

Key Parameters of Interest. Utilization: ρ = λ/μ. P(n packets in the gateway): P_n = P_0·ρⁿ = (1−ρ)ρⁿ. Mean number in the gateway: L = ρ/(1−ρ).

Example. Measurement of a network gateway: mean arrival rate (λ): 125 packets/s; mean service time per packet: 2 ms. Assuming exponential arrivals & departures: What is the service rate, μ? What is the gateway's utilization? What is the probability of n packets in the gateway? What is the mean number of packets in the gateway? How many buffers are needed so that P(overflow) < 10⁻⁶?

Example (cont). The service rate: μ = 1/0.002 = 500 packets/s. Utilization: ρ = λ/μ = 125/500 = 25%. P(n packets in the gateway): P_n = (1−ρ)ρⁿ = (0.75)(0.25ⁿ).

Example (cont). Mean number in the gateway: L = ρ/(1−ρ) = 0.25/0.75 ≈ 0.33. To limit loss probability to less than 1 in a million we need ρⁿ < 10⁻⁶, i.e. n > log(10⁻⁶)/log(0.25) ≈ 9.97, so about 10 buffers.
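The whole gateway example can be reproduced numerically (a sketch; the 10⁻⁶ overflow target is the one stated above):

```python
import math

lam = 125.0              # packets/s
service_time = 0.002     # s per packet
mu = 1 / service_time    # 500 packets/s
rho = lam / mu           # 0.25

L = rho / (1 - rho)      # mean number in gateway, ~0.33

# Smallest buffer count n with rho**n < 1e-6
n = math.ceil(math.log(1e-6) / math.log(rho))
print(mu, rho, round(L, 2), n)  # 500.0 0.25 0.33 10
```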


Solving the M/M/1 queue. Solve for: the steady-state probability of each state; server utilization; the expected number of customers in the system; the average response time. Construct the corresponding Markov chain over states 0, 1, 2, …, k, …: a typical birth-death process.

Birth-Death Process. A birth-death process is a Markov chain in which transitions can occur only between adjacent states: state k has birth rate λ_k (to state k+1) and death rate μ_k (to state k−1). You can solve the Markov chain analytically by solving the set of balance equations: P_k = P_0 · Π_{i=0}^{k−1} (λ_i/μ_{i+1}), with P_0 = [1 + Σ_{k≥1} Π_{i=0}^{k−1} (λ_i/μ_{i+1})]⁻¹.
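The balance-equation solution works for any finite birth-death chain. A sketch (the function name is ours; the infinite M/M/1 chain is truncated at a finite number of states for the check):

```python
def birth_death_steady_state(births, deaths):
    """Steady-state probabilities of a finite birth-death chain.

    births[k] is the rate from state k to k+1; deaths[k] is the rate
    from state k+1 back to k.  P_k / P_0 = prod_i births[i] / deaths[i].
    """
    weights = [1.0]                      # unnormalized P_k / P_0
    for lam_i, mu_i in zip(births, deaths):
        weights.append(weights[-1] * lam_i / mu_i)
    total = sum(weights)                 # normalization constant 1/P_0
    return [w / total for w in weights]

# M/M/1 with lam=1, mu=2 truncated at 50 states: geometric with rho=0.5
p = birth_death_steady_state([1.0] * 49, [2.0] * 49)
```

With constant rates the result is the geometric distribution P_k = (1−ρ)ρᵏ derived on the next slide (up to a negligible truncation error).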

Solving M/M/1 Queue. Define the ratio ρ = λ/μ, called the utilization rate, or traffic intensity. When ρ < 1 (meaning λ < μ), the system is called stable, and the steady-state probabilities can be determined: P_k = ρᵏP_0 with P_0 = 1 − ρ, hence P_k = (1−ρ)ρᵏ.

M/M/1 Queue Properties. Steady-state probability of state k (k ≥ 0): P_k = (1−ρ)ρᵏ. Server utilization: 1 − P_0 = 1 − (1−ρ) = ρ. Mean number of customers in the system: E[N] = Σ_{k≥0} k·P_k = ρ/(1−ρ).

M/M/1 Queue Property: Average Response Time. Little's formula: L = λW. From the previous discussion, L = ρ/(1−ρ), so the average response time is W = L/λ = ρ/(λ(1−ρ)) = 1/(μ−λ).

M/M/1 Performance, Figure 1. [Figure: M/M/1 response time for λ = 1 as μ (= 1/service time) varies from 1.11 to 5; the response time 1/(μ−λ) falls steeply as μ grows.]

M/M/1 Performance, Figure 2. [Figure: E[L] and E[W] for μ = 1 as λ (1/interarrival time) varies from 0.5 down to 0.13; both fall as λ decreases.]

Average Response Time. Example: how does a Fast Lane work in Disneyland? Construct an M/M/1 queue; solve for the average response time as a function of the interarrival time; control the people flow to assure the response time. [Figure: average response time vs. interarrival time (6–30 min).]

Properties of a Poisson process. Poisson process = exponential distribution between arrivals/departures/services: P(arrival ≤ t) = 1 − e^(−λt). Key properties: memoryless (the past state does not help predict the next arrival); closed under addition and subtraction (merging and splitting of streams).

A special birth-death process: the Poisson process. λ(n) = λ for all n ≥ 0 and μ(n) = 0 for all n ≥ 0: a pure birth process. Definition of the Poisson process: the counting process {N(t), t ≥ 0} is said to be a Poisson process having rate λ, λ > 0, if: N(0) = 0; the process has independent increments; and the number of events in any interval of length t is Poisson distributed with mean λt.

Poisson Process (continued). For all s, t ≥ 0: P{N(t+s) − N(s) = n} = e^(−λt)(λt)ⁿ/n!, and E[N(t)] = λt. Poisson properties: interarrival times are exponentially distributed; memoryless property: the probability of occurrence of an event is independent of how many events have occurred in the past and of the time since the last event.

Queuing Networks: Rate Addition and Subtraction. Merge: two Poisson streams with arrival rates λ₁ and λ₂ merge into a new Poisson stream with rate λ₃ = λ₁ + λ₂. Split: if any given item leaves a stream of rate λ₁ with probability P₁, the remaining stream has rate λ₂ = (1−P₁)λ₁.

Queuing Networks. [Figure: an example network of seven queues in which the output of each queue splits probabilistically (e.g. 0.7/0.3 and 0.5/0.5) among downstream queues.]

Open Queuing Networks. Example: messages arrive at a concentrator according to a Poisson process of rate λ_a. The time required to transmit a message and receive an acknowledgment is exponentially distributed with mean 1/μ. Suppose that a message needs to be retransmitted with probability p. Find the steady-state pmf of the number of messages in the concentrator. Solution: the overall system can be represented by a simple queue with feedback, in which a fraction p of departures (0.1 in the figure) rejoins the arrival stream.

Open Queuing Networks. The net arrival rate into the queue is λ = λ_a + pλ, that is, λ = λ_a/(1−p). Thus the pmf of the number of messages in the concentrator is P[N = n] = (1−ρ)ρⁿ, n = 0, 1, …, where ρ = λ/μ = λ_a/((1−p)μ).
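A numeric sketch of the concentrator result (the rates λ_a = 90, μ = 200 and p = 0.1 are made-up illustration values, not from the slides):

```python
def concentrator(lam_a, mu, p):
    """M/M/1 with feedback: a fraction p of departures is retransmitted."""
    lam = lam_a / (1 - p)      # net arrival rate into the queue
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable")
    def pmf(n):                # P[N = n] = (1 - rho) * rho^n
        return (1 - rho) * rho ** n
    return lam, rho, pmf

lam, rho, pmf = concentrator(lam_a=90.0, mu=200.0, p=0.1)
# lam = 100, rho = 0.5, P[N = 0] = 0.5
```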

Open Queuing Networks. Example: new programs arrive at a CPU according to a Poisson process of rate λ_a. A program spends an exponentially distributed execution time of mean 1/μ₁ in the CPU. At the end of this service time, the program execution is complete with probability p, or it requires retrieving additional information from secondary storage with probability 1−p. Suppose that the retrieval of information from secondary storage requires an exponentially distributed amount of time with mean 1/μ₂. Find the mean time that each program spends in the system. [Figure: arrivals λ_a enter the CPU (rate μ₁); with probability p the program departs, with probability 1−p it visits I/O (rate μ₂) and returns to the CPU.]

Open Queuing Networks. Solution: the net arrival rates into the two queues are λ₁ = λ_a/p and λ₂ = (1−p)λ_a/p, with ρ₁ = λ₁/μ₁ and ρ₂ = λ₂/μ₂. Each queue behaves like an M/M/1 system, so E[L₁] = ρ₁/(1−ρ₁) and E[L₂] = ρ₂/(1−ρ₂).

Open Queuing Networks. Little's formula gives the per-visit times E[W₁] = E[L₁]/λ₁ and E[W₂] = E[L₂]/λ₂. Applying Little's formula to the whole system, with external arrival rate λ_a, gives the mean total time spent in the system: E[W] = (E[L₁] + E[L₂])/λ_a = 1/(pμ₁ − λ_a) + (1−p)/(pμ₂ − (1−p)λ_a).
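The CPU/I-O example can be coded the same way (a sketch; the numeric arguments in the usage line are illustrative, not from the slides):

```python
def cpu_io_mean_time(lam_a, mu1, mu2, p):
    """Mean total time in the CPU + I/O network with feedback.

    lam_a: external arrival rate; p: completion probability after CPU.
    """
    lam1 = lam_a / p                 # net CPU arrival rate
    lam2 = (1 - p) * lam_a / p       # net I/O arrival rate
    rho1, rho2 = lam1 / mu1, lam2 / mu2
    if rho1 >= 1 or rho2 >= 1:
        raise ValueError("unstable")
    EL1 = rho1 / (1 - rho1)          # mean number at the CPU
    EL2 = rho2 / (1 - rho2)          # mean number at the I/O device
    # Little's law on the whole system, with external rate lam_a
    return (EL1 + EL2) / lam_a

ew = cpu_io_mean_time(lam_a=1.0, mu1=4.0, mu2=4.0, p=0.5)
```

With these numbers λ₁ = 2, λ₂ = 1, ρ₁ = 0.5, ρ₂ = 0.25, so E[W] = (1 + 1/3)/1 = 4/3.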

Applications: Queues and Traffic Shaping. Big ideas. Traffic shaping: modify traffic at the entrance points of the network so we can reason about the states of the interior and about link properties and loads (e.g. leaky bucket, token bucket); modify traffic in the routers; enforce policies on flows. Queuing theory: analytic analysis of the properties of queues as functions of the inputs and service types.

Congestion Control. Too many packets in some part of the system: congestion.

Simplified Network Model. Input arrivals enter the system; output departures leave it. Goal: move packets across the system from the inputs to the outputs. The system could be a single switch or an entire network.

Problems with Congestion. Performance: low throughput, long latency, high variability (jitter), lost data. Policy enforcement: user X pays more than user Y, so X should get more bandwidth than Y. Fairness: one user/flow getting much more throughput or better latency than the other flows.

Congestion Control Causes of congestion Types of congestion control schemes Solving congestion

Causes of Congestion Many ultimate causes: New applications, new users, bugs, faults, viruses, spam, stochastic (time based) variations, unknown randomness. Congestion can be long-lived or transient Timescale of the congestion is important Microseconds vs seconds vs hours vs days Different solutions to all the above!

Exhaustion of Buffer Space (cont'd). [Figure: several 50 Mbps input links converge on a router whose output link is also 50 Mbps; the excess packets accumulate in the router's buffer.]

Types of Congestion Control Strategies. Terminate existing resources: drop packets, drop circuits. Limit entry into the system: at the packet level (layer 3) with leaky bucket, token bucket, WFQ; at the flow/conversation level (layer 4) with resource reservation or TCP backoff/window reduction; at the application level (layer 7) by limiting the types/kinds of applications.

Leaky Bucket. Across a single link, only allow packets through at a constant rate. Packets may be generated in a bursty manner, but after they pass through the leaky bucket, they enter the network evenly spaced. If all inputs enforce a leaky bucket, it's easy to reason about the total resource demand on the rest of the system.
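A discrete-time leaky-bucket sketch makes the smoothing concrete (all names and parameters here are ours, chosen for illustration):

```python
from collections import deque

def leaky_bucket(arrivals, rate_interval, bucket_size, horizon):
    """Simulate a leaky bucket in discrete time.

    arrivals: dict {tick: packet count}.  While the bucket is non-empty,
    one packet leaves every rate_interval ticks; packets arriving to a
    full bucket are dropped.
    """
    bucket, out_times, dropped = deque(), [], 0
    for t in range(horizon):
        for _ in range(arrivals.get(t, 0)):
            if len(bucket) < bucket_size:
                bucket.append(t)
            else:
                dropped += 1
        if bucket and t % rate_interval == 0:
            bucket.popleft()
            out_times.append(t)
    return out_times, dropped

# A burst of 4 packets at t=0 leaves evenly spaced, one every 2 ticks
out, dropped = leaky_bucket({0: 4}, 2, 4, 10)
print(out, dropped)  # [0, 2, 4, 6] 0
```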

Leaky Bucket: Analogy Packets from input Leaky Bucket Output

Leaky Bucket (cont'd). The leaky bucket is a traffic shaper: it changes the characteristics of a packet stream. Traffic shaping makes the network more manageable and predictable. Usually the network tells the leaky bucket the rate at which it may send packets when a connection is established.

Leaky Bucket: Doesn t allow bursty transmissions In some cases, we may want to allow short bursts of packets to enter the network without smoothing them out For this purpose we use a token bucket, which is a modified leaky bucket

Summary. Key assumptions: steady-state conditions; exponential arrival and service rates. Little's law. M/M/1 queue: fundamental equations for W, L. Networks of queues.

Additional Notes: Traffic Shaping. This material is for general interest; you will not be tested on it.

Token Bucket The bucket holds logical tokens instead of packets Tokens are generated and placed into the token bucket at a constant rate When a packet arrives at the token bucket, it is transmitted if there is a token available. Otherwise it is buffered until a token becomes available. The token bucket holds a fixed number of tokens, so when it becomes full, subsequently generated tokens are discarded Can still reason about total possible demand
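The same discrete-time style works for a token bucket; note how, unlike the leaky bucket, stored tokens let an initial burst through immediately (a sketch with made-up parameters):

```python
def token_bucket(arrivals, token_interval, bucket_size, horizon):
    """Discrete-time token bucket.

    A token is generated every token_interval ticks, up to bucket_size
    (extra tokens are discarded).  A packet departs immediately if a
    token is available; otherwise it waits for the next token.
    """
    tokens, waiting, out_times = bucket_size, 0, []   # start with a full bucket
    for t in range(horizon):
        if t % token_interval == 0 and tokens < bucket_size:
            tokens += 1
        waiting += arrivals.get(t, 0)
        while waiting and tokens:                     # spend tokens on queued packets
            tokens -= 1
            waiting -= 1
            out_times.append(t)
    return out_times

out = token_bucket({0: 4}, token_interval=2, bucket_size=2, horizon=8)
print(out)  # [0, 0, 2, 4]: first 2 packets pass at once, the rest are paced
```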

Token Bucket Packets from input Token Generator (Generates a token once every T seconds) output

Token Bucket vs. Leaky Bucket. Case 1: short burst arrivals. [Figure: packets arrive in a short burst over time units 0–6. A leaky bucket (rate = 1 packet / 2 time units, size = 4 packets) releases them evenly spaced; a token bucket (rate = 1 token / 2 time units, size = 2 tokens) lets the first packets out immediately using stored tokens, then paces the rest.]

Token Bucket vs. Leaky Bucket. Case 2: large burst arrivals. [Figure: the same comparison for a large burst, with leaky bucket rate = 1 packet / 2 time units and size = 2 packets, and token bucket rate = 1 token / 2 time units and size = 2 tokens.]

Multi-link congestion management Token bucket and leaky bucket manage traffic across a single link. But what if we do not trust the incoming traffic to behave? Must manage across multiple links Round Robin Fair Queuing

Multi-queue management If one source is sending too many packets, can we allow other sources to continue and just drop the bad source? First cut: round-robin Service input queues in round-robin order What if one flow/link has all large packets, another all small packets? Smaller packets get more link bandwidth

Idealized flow model. For N sources, we would like to give each host or input source 1/N of the link bandwidth. Imagine we could squeeze fractions of bits onto the link at once: a fluid model, in which the fluid would interleave the bits on the link. But we must work with packets. We want to approximate the fairness of the fluid-flow model while still using packets.

Fluid Model vs. Packet Model. [Figure: the arrival times of packets from two flows; the fluid system serves both flows simultaneously, interleaving their bits, while the packet system transmits whole packets one at a time in the order they would finish in the fluid system.]

Fair Queuing vs. Round Robin Advantages: protection among flows Misbehaving flows will not affect the performance of well-behaving flows Misbehaving flow a flow that does not implement congestion control Disadvantages: More complex: must maintain a queue per flow per output instead of a single queue per output Biased toward large packets a flow receives service proportional to the number of packets

Virtual Time. How do we keep track of the service delivered on each queue? Virtual time is the number of rounds of queue service completed by a bit-by-bit round robin (RR) scheduler. It may not be an integer, and its rate of advance depends on the number of active queues.

Approximate bit-by-bit RR. Virtual time is incremented each time a bit is transmitted for all flows: if we have 3 active flows and transmit 3 bits, we increment virtual time by 1; if we had 4 flows and transmitted 2 bits, we would increment Vt by 0.5. At each packet arrival, compute the virtual time at which the packet would have exited the router. This is the packet's finish number.
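The finish-number rule can be written directly. This is a simplified sketch of the update for one flow, taking virtual arrival times as given (computing virtual time itself requires tracking the set of active flows, which is omitted here):

```python
def finish_number(prev_finish, virtual_arrival, size):
    """Finish number of a packet under bit-by-bit round robin.

    The packet begins service (in virtual time) once its flow's previous
    packet is done AND it has arrived; it finishes `size` rounds later.
    """
    return max(prev_finish, virtual_arrival) + size

# Two packets of one flow, sizes 3 and 2, virtual arrival times 0 and 1:
f1 = finish_number(0, 0, 3)    # finishes at virtual time 3
f2 = finish_number(f1, 1, 2)   # queued behind the first, finishes at 5
```

The scheduler then transmits packets in increasing finish-number order, which is how fair queuing approximates the fluid model.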