Lecture 14: M/G/1 Queueing System with Priority


Dr. Mohammed Hawa, Electrical Engineering Department, University of Jordan. EE723: Telephony.

Priority Queueing Systems

Until now, we have assumed identical customers arriving at the queueing system (identical arrival statistics, identical service-time statistics, and identical preference within the system). This is called a homogeneous customer population. A heterogeneous customer population, on the other hand, consists of different classes of customers. For example, one group of customers can be given priority over the others because it has different performance requirements (e.g., urgent management packets versus regular data packets).

Definitions

In a priority queueing system, each arriving customer is treated differently based on its assigned priority. We will assume that the queueing system provides a different queue for each group of customers with a specific priority $p$, where $p = 1, 2, \dots, P$. All the queues are served by one server according to a certain scheduling mechanism. This model is called M/G/1 with Priority.

System Description

[Figure: the priority queueing system, with one queue per priority class feeding a single server.]
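To make the model concrete, the sketch below (not from the lecture; a minimal Python illustration with made-up names) keeps one FIFO queue per priority class and lets a single server always take the head of the highest-priority non-empty queue, which corresponds to the non-preemptive strict-priority scheduling mechanism analyzed later in this lecture.

```python
from collections import deque

class PriorityQueueingSystem:
    """One FIFO queue per priority class, served by a single server.
    Class 1 is the highest priority (matching the lecture's convention)."""

    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]  # index 0 <-> class 1

    def arrive(self, priority, customer):
        """A customer of the given priority (1..P) joins its own class queue."""
        self.queues[priority - 1].append(customer)

    def next_customer(self):
        """Non-preemptive dispatch: when the server becomes free, it takes the
        head of the highest-priority non-empty queue (None if all are empty)."""
        for q in self.queues:
            if q:
                return q.popleft()
        return None
```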

Scheduling Mechanisms (disciplines or strategies)

Conventional (simple):
- First-In First-Out (FIFO)
- Last-In First-Out (LIFO)

Strict Priority scheduling:
- Preemptive Resume Strategy
- Preemptive Non-Resume Strategy
- Non-Preemptive Strategy

Fair Queueing:
- Weighted Fair Queueing (WFQ)
- Self-Clocked Fair Queueing (SCFQ)
- Start-Time Fair Queueing (STFQ)
- Weighted Round Robin (WRR)
- Deficit Round Robin (DRR)

Preemptive Resume Strategy

In this strategy, higher-priority customers are always served regardless of whatever else is happening with lower-priority customers. If a low-priority customer is being served when a higher-priority customer arrives, the server immediately stops serving the low-priority customer and moves to serve the higher-priority customer. After finishing the service of the higher-priority customer (and assuming no further high-priority customers arrive in the meantime), the server goes back and resumes serving the low-priority customer from the point where it stopped.

Preemptive Resume Strategy [2]

In this strategy, higher-priority customers do not feel the existence of lower-priority customers, because no class 2 customers can be served while class 1 customers exist, no class 3 customers can be served while class 2 (or class 1) customers exist, etc. Clearly, this strategy cannot be used in practical packet-switched networks, since a packet transmission cannot be divided into arbitrary parts (cut at whatever point the server has reached when the preemption occurs). However, it can be used to describe a processor running a low-priority thread that is stopped by a high-priority interrupt.

Preemptive Non-Resume Strategy

In this strategy, if a low-priority customer is being served when a higher-priority customer arrives, the server immediately stops serving the low-priority customer and moves to serve the higher-priority customer. After finishing the service of the higher-priority customer (and assuming no further high-priority customers arrive in the meantime), the server restarts the service of the low-priority customer from the beginning. In other words, it treats that customer as if it were a new customer, i.e., it makes no distinction between a new customer and one that has been preempted (in terms of service time).

Preemptive Non-Resume Strategy [2]

In this strategy, low-priority customers are transparent to high-priority customers, since the high-priority customers' performance is unaffected by the low-priority ones. Even though this strategy can be used in packet-switched networks, it results in plenty of wasted server time and partially transmitted packets; hence, it is not used in practical systems.

Non-Preemptive Strategy

In this strategy, if a high-priority customer arrives while a low-priority customer is being served, the low-priority customer is allowed to finish its service before the server moves to serve the high-priority customer (i.e., the low-priority customer is not preempted, but no other low-priority customer enters the server). The server does not go back to the low-priority customers until the high-priority queue is empty.
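As a small numerical illustration of how the three strict-priority strategies differ (this example is not from the lecture; the numbers are hypothetical), suppose a low-priority customer with a fixed 5-second service requirement starts service at time 0 in an otherwise empty system, and a high-priority customer with a 2-second service requirement arrives at time 1:
- Preemptive resume: the low-priority customer is interrupted at time 1 with 4 seconds of work remaining, the high-priority customer is served from time 1 to 3, and the low-priority customer then resumes and departs at time 7.
- Preemptive non-resume: the same as above, except that the low-priority service restarts from scratch at time 3, so the low-priority customer departs at time 8 and the first second of service is wasted.
- Non-preemptive: the low-priority customer finishes at time 5, and the high-priority customer waits and is served from time 5 to 7.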

Non-Preemptive Strategy [2]

In this strategy, low-priority customers affect (slightly) the performance of high-priority customers, since a high-priority customer arriving at an empty queue still has to wait for any low-priority customer currently in service to finish. This is the strategy that is used in practical packet-switched networks, and the one that we analyze in this course.

M/G/1 system with non-preemptive strict priority

$P$: Number of traffic classes (priorities), where class 1 traffic is the highest priority and class $P$ is the lowest priority.

$\lambda_p$: Mean arrival rate of customers of class $p$ (arrivals are Poisson).

$1/\mu_p$: Mean service time of customers of class $p$ (arbitrary distribution for the service time $X_p$).

$\overline{X_p^2}$: Second moment of the service time of customers of class $p$.

$\rho_p = \lambda_p / \mu_p$: Traffic intensity of customers of class $p$.
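The lecture analyzes this model mathematically; as a complementary check, the following discrete-event simulation sketch (my own illustration, not part of the lecture; the function and parameter names simulate_nonpreemptive_priority, lam, sample_service and horizon are made up) implements the non-preemptive strict-priority discipline described above, with Poisson arrivals per class and an arbitrary user-supplied service-time sampler, and estimates the mean queueing delay of each class.

```python
import random

def simulate_nonpreemptive_priority(lam, sample_service, horizon, seed=1):
    """Single-server queue with non-preemptive strict priority.

    lam[p]                 : Poisson arrival rate of class p (index 0 = class 1, highest priority)
    sample_service(rng, p) : draws one service time for class p (arbitrary distribution)
    horizon                : simulated time length

    Returns the estimated mean queueing delay (waiting time before service) per class.
    """
    rng = random.Random(seed)
    P = len(lam)
    next_arr = [rng.expovariate(lam[p]) for p in range(P)]   # next arrival time per class
    queues = [[] for _ in range(P)]                          # FIFO of arrival times per class
    waits = [[] for _ in range(P)]                           # observed waiting times per class
    busy = False
    completion = float('inf')                                # time the current service ends
    t = 0.0

    while t < horizon:
        p_arr = min(range(P), key=lambda p: next_arr[p])     # class with the earliest arrival
        if next_arr[p_arr] < completion:
            # Arrival event: the customer joins its own class queue.
            t = next_arr[p_arr]
            next_arr[p_arr] = t + rng.expovariate(lam[p_arr])
            if not busy:
                # Idle server: the arriving customer starts service at once (zero wait).
                waits[p_arr].append(0.0)
                busy, completion = True, t + sample_service(rng, p_arr)
            else:
                queues[p_arr].append(t)
        else:
            # Departure event: non-preemptive dispatch of the head of the
            # highest-priority non-empty queue, if any.
            t = completion
            busy, completion = False, float('inf')
            for p in range(P):
                if queues[p]:
                    waits[p].append(t - queues[p].pop(0))
                    busy, completion = True, t + sample_service(rng, p)
                    break

    return [sum(w) / len(w) if w else 0.0 for w in waits]

# Hypothetical usage: two classes with exponential service of mean 1 second.
W_est = simulate_nonpreemptive_priority(
    lam=[0.2, 0.4],
    sample_service=lambda rng, p: rng.expovariate(1.0),
    horizon=200000)
```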

Total Traffic

Hence, the total traffic intensity offered to the system is:

$$\rho = \sum_{p=1}^{P} \rho_p = \sum_{p=1}^{P} \frac{\lambda_p}{\mu_p}$$

We require that $\rho < 1$ to prevent the buffers from accumulating to infinity (since this is the infinite-buffer case).

We define

$W_p$: Mean waiting time for a customer of priority $p$ in the queue (not in the whole system).

$$W_p = W_0 + W_p' + W_p''$$

$W_0$: Mean time until the next customer in line for service starts service, measured from an arbitrary arrival instant (i.e., if any customer is in service, it has to finish its service first); this is the mean residual service time seen by an arrival.
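As a quick numerical check (hypothetical numbers, not from the lecture): with two classes where $\lambda_1 = 2$ customers/s, $1/\mu_1 = 0.1$ s, $\lambda_2 = 4$ customers/s and $1/\mu_2 = 0.15$ s, we get $\rho_1 = 0.2$ and $\rho_2 = 0.6$, so $\rho = 0.8 < 1$ and the queues remain stable.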

Definitions

$W_p'$: Mean waiting time for a customer of priority $p$ due to customers of a higher or the same priority as $p$ (i.e., priority $\le p$) which already exist in the queue at the time of the customer's arrival.

$W_p''$: Mean waiting time for a customer of priority $p$ due to customers of a higher priority than $p$ (i.e., priority $< p$) that arrive after the priority-$p$ customer's arrival but before it starts its service (i.e., steps into the server).

Solution

$$W_0 = \frac{1}{2} \sum_{i=1}^{P} \lambda_i \overline{X_i^2}$$

$$W_p = \frac{W_0}{\left(1 - \sum_{i=1}^{p-1} \rho_i\right)\left(1 - \sum_{i=1}^{p} \rho_i\right)}$$

For the highest-priority class this reduces to $W_1 = \dfrac{W_0}{1 - \rho_1}$.
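A small Python helper (my own sketch, not from the lecture; the names mg1_priority_waits, lam, mean_service and second_moment are made up) that evaluates these expressions; its output can be cross-checked against the simulation sketch given earlier.

```python
def mg1_priority_waits(lam, mean_service, second_moment):
    """Mean queueing delay W_p per class for M/G/1 with non-preemptive
    strict priority (index 0 = class 1, the highest priority).

    lam[p]           : arrival rate lambda_p
    mean_service[p]  : mean service time 1/mu_p
    second_moment[p] : second moment of the service time E[X_p^2]
    """
    P = len(lam)
    rho = [lam[p] * mean_service[p] for p in range(P)]
    W0 = 0.5 * sum(lam[p] * second_moment[p] for p in range(P))  # mean residual service time
    waits, sigma_prev = [], 0.0
    for p in range(P):
        sigma = sigma_prev + rho[p]          # sum of rho_i for i <= p
        waits.append(W0 / ((1.0 - sigma_prev) * (1.0 - sigma)))
        sigma_prev = sigma
    return waits

# Hypothetical two-class example with exponential service of mean 1 second
# (so E[X^2] = 2), matching the parameters used in the simulation sketch above.
print(mg1_priority_waits(lam=[0.2, 0.4], mean_service=[1, 1], second_moment=[2, 2]))
```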