
LAST LECTURE

Scheduling algorithms:
  FIFO
  Shortest job next
  Shortest remaining job next
  Highest Response Rate Next (HRRN)

ROUND-ROBIN

Scheduled thread is given a time slice
Running thread is preempted upon clock interrupt and returned to the ready queue
[Figure: Gantt charts of processes A-E under round-robin with quantum=1 and with quantum=4]

Performance of round-robin scheduling:
  Average waiting time: not optimal
  Performance depends heavily on the size of the time quantum:
  - too short: overhead for context switches becomes too expensive
  - too large: degenerates to FCFS policy
  - rule of thumb: about 80% of all bursts should be shorter than the time quantum
  No starvation

PRIORITIES

Each thread is associated with a priority
Basic mechanism to influence scheduler decisions:
  Scheduler will always choose a thread of higher priority over one of lower priority
  Implemented via multiple FCFS ready queues (one per priority)
  Lower-priority threads may suffer starvation
  - adapt priority based on the thread's age or execution history
Priorities can be defined internally or externally
  - internal: e.g., memory requirements, I/O-bound vs CPU-bound
  - external: e.g., importance of thread, importance of user
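
Returning to the quantum-size trade-off above, here is a minimal round-robin simulation sketch (Python; the thread names and burst lengths are invented for illustration) showing how a smaller quantum increases the number of dispatches, and hence the context-switch overhead:

    from collections import deque

    def round_robin(bursts, quantum):
        """Simulate RR over CPU bursts; returns (average waiting time, number of dispatches)."""
        remaining = dict(bursts)           # thread -> remaining burst time
        queue = deque(remaining)           # all threads ready at time 0
        wait = {t: 0 for t in remaining}   # accumulated waiting time per thread
        dispatches = 0
        while queue:
            t = queue.popleft()
            run = min(quantum, remaining[t])
            for other in queue:            # everyone still queued waits while t runs
                wait[other] += run
            remaining[t] -= run
            if remaining[t] > 0:
                queue.append(t)            # quantum expired: back to the ready queue
            dispatches += 1
        return sum(wait.values()) / len(wait), dispatches

    bursts = [("A", 8), ("B", 4), ("C", 9), ("D", 5), ("E", 2)]
    for q in (1, 4):
        print(q, round_robin(bursts, q))

With these made-up bursts, quantum=1 dispatches the CPU 28 times for the same workload that quantum=4 handles with 9 dispatches.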

FEEDBACK SCHEDULING

Penalize jobs that have been running longer
[Figure: per-priority ready queues with queue headers and runnable processes]
[Figure: feedback queueing - Admit -> RQ0 (highest priority) ... RQn (lowest priority) -> Processor -> Release]
[Figure: Gantt charts of processes A-E under feedback scheduling with q = 1 and with q = 2^i]
q = 2^i: longer time slice for lower priority (reduces starvation)

Priorities influence access to resources, but do not guarantee a certain fraction of the resource (CPU etc.)!

LOTTERY SCHEDULING

A process gets lottery tickets for various resources
More lottery tickets imply better access to the resource
Advantages:
  Simple
  Highly responsive
  Allows cooperating processes/threads to implement individual scheduling policies (exchange of tickets)
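
As a companion to the feedback diagrams above, a small illustrative sketch (Python; the process names, burst lengths and number of levels are made up) of feedback scheduling in which a job is demoted one queue level each time it uses up its slice, and the slice at level i is 2^i ticks:

    from collections import deque

    def feedback(bursts, levels=4):
        """Multilevel feedback: demote after each full slice; slice at level i = 2**i."""
        queues = [deque() for _ in range(levels)]
        remaining = dict(bursts)
        for name in remaining:
            queues[0].append(name)                # everyone starts at the highest level
        order = []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)
            name = queues[level].popleft()
            run = min(2 ** level, remaining[name])
            order.append((name, run))
            remaining[name] -= run
            if remaining[name] > 0:
                # used its slice without finishing: demote (but never below the last level)
                queues[min(level + 1, levels - 1)].append(name)
        return order

    print(feedback([("A", 6), ("B", 3), ("C", 10)]))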

Example (taken from Embedded Systems Programming): four processes are running concurrently
  Process A: 15% of CPU time
  Process B: 25% of CPU time
  Process C: 55% of CPU time
  Process D: 5% of CPU time
How many tickets should each process get to achieve this?
Number of tickets in proportion to CPU time, e.g., if we have 20 tickets overall
  Process A: 15% of tickets: 3
  Process B: 25% of tickets: 5
  Process C: 55% of tickets: 11
  Process D: 5% of tickets: 1

TRADITIONAL UNIX SCHEDULING (SVR3, 4.3 BSD)

Objectives:
  support for time sharing
  good response time for interactive users
  support for low-priority background jobs
Strategy: multilevel feedback using round robin within priorities
  Priorities are recomputed once per second
  Base priority divides all processes into fixed bands of priority levels
  - adjustment capped to keep processes within their bands
Favours I/O-bound over CPU-bound processes
Note: UNIX traditionally uses a counter-intuitive priority representation (higher value = less priority)
Bands, in decreasing order of priority:
  Swapper
  Block I/O device control
  File manipulation
  Character I/O device control
  User processes
Advantages:
  relatively simple, effective
  works well for single-processor systems
Disadvantages:
  significant overhead in large systems (recomputing priorities)
  response time not guaranteed
  non-preemptive kernel: a lower-priority process in kernel mode can delay a high-priority process
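
A short sketch (Python) of how a lottery scheduler might pick the next process: one weighted draw per time slice. The ticket counts follow the example above; over many slices each process receives roughly its intended share of the CPU:

    import random

    def lottery_pick(tickets):
        """tickets: dict process -> ticket count; returns the winning process."""
        total = sum(tickets.values())
        draw = random.randrange(total)            # winning ticket number
        for proc, count in tickets.items():
            if draw < count:
                return proc
            draw -= count

    tickets = {"A": 3, "B": 5, "C": 11, "D": 1}
    wins = {p: 0 for p in tickets}
    for _ in range(100_000):                      # simulate many time slices
        wins[lottery_pick(tickets)] += 1
    print({p: round(w / 1000, 1) for p, w in wins.items()})   # approx. % of CPU per process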

NON-PREEMPTIVE VS PREEMPTIVE KERNEL

Kernel data structures have to be protected
Basically, two ways to solve the problem:
Non-preemptive: disable all (most) interrupts while in kernel mode, so no other thread can enter kernel mode while in a critical section
  - priority inversion possible
  - coarse grained
  - works only for uniprocessors
Preemptive: just lock the kernel data structure which is currently being modified
  - more fine-grained
  - introduces additional overhead, can reduce throughput

MULTIPROCESSOR SCHEDULING

What kind of systems and applications are there?
Classification of multiprocessor systems:
[Figure: (a) CPUs with shared memory, (b) CPUs with local memory (C+M) connected by an interconnect, (c) complete systems (C+M) connected via the Internet]
(a) Tightly coupled multiprocessing
  Processors share main memory, controlled by a single operating system
  Called a symmetric multiprocessor (SMP) system
(b) Loosely coupled multiprocessor
  Each processor has its own memory and I/O channels
  Generally called a distributed-memory multiprocessor
(c) Distributed system
  Complete computer systems connected via a wide-area network
  Communicate via message passing

PARALLELISM

Parallelism improves throughput and responsiveness
Parallelism doesn't affect the execution time of (single-threaded) programs
Independent parallelism:
  Separate applications/jobs
  No synchronization
Coarse and very coarse-grained parallelism:
  Synchronization among processes is infrequent
  Good for loosely coupled multiprocessors
  Can be ported to a multiprocessor with little change

Medium-grained parallelism:
  Parallel processing within a single application
  Application runs as a multithreaded process
  Threads usually interact frequently
  Good for SMP systems
  Unsuitable for loosely-coupled systems
Fine-grained parallelism:
  Highly parallel applications, e.g., parallel execution of loop iterations
  Very frequent synchronisation
  Works well only on special hardware: vector computers, symmetric multithreading (SMT) hardware

MULTIPROCESSOR SCHEDULING

Which process should be run next, and where?
We discuss:
  Tightly coupled multiprocessing
  Very coarse to medium-grained parallelism
  Homogeneous systems (all processors have the same specs and access to devices)
Design issues:
  How to assign processes/threads to the available processors?
  Multiprogramming on individual processors?
  Which scheduling strategy?
  Scheduling dependent processes

ASSIGNMENT OF THREADS TO PROCESSORS

Treat processors as a pooled resource and assign threads to processors on demand
Permanently assign threads to a processor
  - dedicated short-term queue for each processor
  - low overhead
  - processor could be idle while another processor has a backlog
Dynamically assign processes to processors
  - higher overhead
  - poor locality
  - better load balancing

Who decides which thread runs on which processor?
Master/slave architecture:
  Key kernel functions always run on a particular processor
  Master is responsible for scheduling
  Slave sends service requests to the master
  Simple: one processor has control of all resources, no synchronisation needed
  Failure of the master brings down the whole system
  Master can become a performance bottleneck

Peer architecture:
  Operating system can execute on any processor
  Each processor does self-scheduling
  Complicates the operating system
  - make sure no two processors schedule the same thread
  - synchronise access to resources
  Proper symmetric multiprocessing

LOAD SHARING: TIME SHARING

[Figure: a single global ready queue of threads A-N ordered by priority - (a) CPU 4 goes idle and picks the highest-priority thread, (b) CPU 12 goes idle and picks the next one, (c) the queue after both picks]
Load is distributed evenly across the processors
Use a global ready queue
  Threads are not assigned to a particular processor
  Scheduler picks any ready thread (according to the scheduling policy)
  Actual scheduling policy is less important than on a uniprocessor
  No centralized scheduler required

Disadvantages of time sharing:
  Central queue needs mutual exclusion
  - potential race condition when several CPUs try to pick a thread from the ready queue
  - may be a bottleneck, blocking processors
  Preempted threads are unlikely to resume execution on the same processor
  - cache use is less efficient, bad locality
  Different threads of the same process are unlikely to execute in parallel
  - potentially high intra-process communication latency
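
To make the mutual-exclusion point concrete, a tiny sketch (Python; thread names invented) of the single global ready queue that time-sharing load sharing implies: every CPU must take the same lock for every scheduling decision, which is exactly the potential bottleneck listed among the disadvantages above:

    import threading
    from collections import deque

    class GlobalReadyQueue:
        """One ready queue shared by all CPUs, protected by a single lock."""
        def __init__(self):
            self._lock = threading.Lock()
            self._queue = deque()

        def enqueue(self, thread_id):
            with self._lock:
                self._queue.append(thread_id)

        def pick_next(self):
            # Every CPU contends for this lock on every scheduling decision.
            with self._lock:
                return self._queue.popleft() if self._queue else None

    rq = GlobalReadyQueue()
    for t in ("A", "B", "C"):
        rq.enqueue(t)
    print(rq.pick_next(), rq.pick_next())   # "A", "B" - FIFO order off the shared queue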

GANG SCHEDULING

[Figure: threads of processes A and B time-sliced on CPU 0 and CPU 1; request 1/reply 1 and request 2/reply 2 between the A threads are delayed whenever the partner thread is not running]
Combined time and space sharing:
  Simultaneous scheduling of the threads that make up a single process
  Useful for applications whose performance severely degrades when any part of the application is not running
  - e.g., threads that often need to synchronise with each other
[Figure: gang-scheduling time-slot table - in each time slot, all threads of one process group (A, B, C, D, E) run together across the CPUs]

LOAD SHARING: SPACE SHARING

Scheduling multiple threads of the same process across multiple CPUs
[Figure: a 32-CPU machine divided into an 8-CPU, a 6-CPU, a 4-CPU, and a 12-CPU partition, with some CPUs unassigned]
Threads statically assigned to CPUs at creation time (figure), or dynamic assignment using a central server

SMP SUPPORT IN MODERN GENERAL-PURPOSE OSs

Solaris 8.0: up to 128
Linux 2.4: up to 32
Windows 2000 Datacenter: up to 32
OS/2 Warp: up to 64
(source: http://www.2cpu.com)

SMP scheduling in Linux 2.4:
  tries to schedule a process on the same CPU
  if that CPU is busy, assigns it to an idle CPU
  otherwise, checks if the process priority allows interrupting the preferred CPU
  uses spin locks to protect kernel data structures

WINDOWS 2000 SCHEDULING

Priority-driven, preemptive scheduling system
  if a thread with higher priority becomes ready to run, the current thread is preempted
Scheduled at thread granularity
Priorities: 0 (zero-page thread), 1-15 (variable levels), 16-31 (real-time levels, soft)
Each thread has a quantum value; the clock-interrupt handler deducts 3 from the running thread's quantum
Default value of the quantum: 6 on Windows 2000 Professional, 36 on Windows 2000 Server
Most wait operations result in a temporary priority boost, favouring I/O-bound threads

REAL-TIME SYSTEMS

Overview:
  Real-time systems
  - hard and soft real-time systems
  - real-time scheduling
  - a closer look at some real-time operating systems

What is a real-time system?
  A real-time system is a system whose correctness includes its response time as well as its functional correctness.
What is a hard real-time system?
  A real-time system with guaranteed worst-case response times.
  Hard real-time systems fail if deadlines cannot be met
  Service of soft real-time systems degrades if deadlines cannot be met

Real-time systems:
  no clear separation
  - a system may meet the hard deadlines of one application, but not of another
  depending on the application, the time scale may vary from microseconds to seconds
  most systems have some real-time requirements

CHARACTERISTICS OF REAL-TIME OPERATING SYSTEMS

Soft real-time applications:
  Many multimedia apps, e.g., DVD or MP3 players
  Many real-time games, networked games
Hard real-time applications:
  Control of laboratory experiments
  Embedded devices
  Process control plants
  Robotics
  Air traffic control
  Telecommunications
  Military command and control systems

Deterministic:
  How long does it take to acknowledge an interrupt?
  Operations are performed at fixed, predetermined times or within predetermined time intervals
  Depends on
  - the response time of the system for interrupts
  - the capacity of the system
  Cannot be fully deterministic when processes are competing for resources
  Requires a preemptive kernel
Responsive:
  How long does it take to service the interrupt?
  Includes the amount of time to begin execution of the interrupt handler
  Includes the amount of time to perform the interrupt handling

Hard real-time systems:
  often lack the full functionality of a modern OS
  secondary memory usually limited or missing
  - data stored in short-term or read-only memory
  no time sharing
Modern operating systems provide support for soft real-time applications
A hard real-time OS is either a specially tailored OS, a modular system, or a customized version of a general-purpose OS

User control:
  User has much more control compared to ordinary OSs
  User specifies priority
  Specify paging
  Which processes must always reside in main memory
  Which disk algorithms to use
  Rights of processes
Reliability:
  Failure, loss, or degradation of performance may have catastrophic consequences
  Attempt either to correct the problem or to minimize its effects while continuing to run
  Most critical, high-priority tasks execute

General-purpose OS objectives like
  speed
  fairness
  maximising throughput
  minimising average response time
are not priorities in real-time OSs!

Features of real-time operating systems:
  Fast context switch
  Small size
  Ability to respond to external interrupts quickly
  Predictability of system performance!
  Use of special sequential files that can accumulate data at a fast rate
  Preemptive scheduling based on priority
  Minimization of intervals during which interrupts are disabled
  Delay tasks for a fixed amount of time

REAL-TIME SCHEDULING

Preemptive round-robin:
[Figure: on a request from a real-time process, the real-time process is added to the run queue to await its next slice; scheduling time spans the remaining round of the queue, driven by clock ticks]
Non-preemptive priority:
[Figure: on a request from a real-time process, the real-time process is added to the head of the run queue, but runs only once the current process blocks or completes]

Preemption points:
[Figure: on a request from a real-time process, the current process continues until the next preemption point; only then is the real-time process scheduled]
Immediate preemptive:
[Figure: on a request from a real-time process, the real-time process preempts the current process and executes immediately]

REAL-TIME SCHEDULING

Classes of algorithms:
  Static table-driven
  - suitable for periodic tasks
  - input: periodic arrival, ending, and execution times
  - output: a schedule that allows all processes to meet their requirements (if at all possible)
  - determines at which points in time a task begins execution
  Static priority-driven preemptive
  - static analysis determines priorities
  - a traditional priority-driven scheduler is used
  Dynamic planning-based
  - feasibility of integrating a new task is determined dynamically
  Dynamic best effort
  - no feasibility analysis
  - typically aperiodic tasks, no static analysis possible
  - does its best; processes that missed their deadline are aborted

When are periodic events schedulable?
  P_i: period with which event i occurs
  C_i: CPU time required to handle event i
  A set of events e_1 to e_m is schedulable if

    sum over i = 1..m of C_i / P_i <= 1

Example: three periodic events with periods of 100, 200, and 500 msec require 50, 30, and 100 msec of CPU time. Schedulable?

    50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85 <= 1
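
The schedulability test above is straightforward to compute; a small sketch (Python) using the worked example (periods 100, 200, and 500 ms with 50, 30, and 100 ms of CPU time):

    def utilisation(tasks):
        """tasks: list of (C_i, P_i) pairs; returns the total CPU utilisation."""
        return sum(c / p for c, p in tasks)

    tasks = [(50, 100), (30, 200), (100, 500)]
    u = utilisation(tasks)
    print(round(u, 2), "schedulable" if u <= 1 else "not schedulable")   # 0.85 schedulable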

DEADLINE SCHEDULING

Current systems often try to provide real-time support by
  starting real-time tasks as quickly as possible
  speeding up interrupt handling and task dispatching
Not necessarily appropriate, since real-time applications are not concerned with speed but with reliably completing tasks
  priorities alone are not sufficient

Additional information used:
  Ready time
  - sequence of times for periodic tasks; may or may not be known statically
  Starting deadline
  Completion deadline
  Processing time
  - may or may not be known; can be approximated
  Resource requirements
  Subtask scheduler

The earliest-deadline-first strategy is provably optimal. It
  minimises the number of tasks that miss their deadline
  - if there is a schedule for a set of tasks, earliest deadline first will find it
Earliest deadline first can be used for dynamic or static scheduling
Works with starting or completion deadlines for any given preemption strategy
  - starting deadlines given: nonpreemptive
  - completion deadlines given: preemptive

Two tasks:
  Sensor A: data arrives every 20 ms, processing takes 10 ms
  Sensor B: data arrives every 50 ms, processing takes 25 ms
  Scheduling decision every 10 ms

  Task   Arrival time   Execution time   Deadline
  A(1)        0               10              20
  A(2)       20               10              40
  A(3)       40               10              60
  ...
  B(1)        0               25              50
  B(2)       50               25             100
  ...
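
A compact simulation sketch (Python; the 1 ms time step and 60 ms horizon are arbitrary choices) of earliest-deadline-first scheduling for the two sensor tasks above, always running the ready job whose completion deadline (taken here as the end of its period) is earliest:

    def edf(tasks, horizon):
        """tasks: list of (name, period, exec_time); deadline = end of the period."""
        jobs = []                                     # each job: [deadline, name, remaining]
        timeline = []
        for t in range(horizon):
            for name, period, exec_time in tasks:     # release new jobs at period boundaries
                if t % period == 0:
                    jobs.append([t + period, name, exec_time])
            ready = [j for j in jobs if j[2] > 0]
            if ready:
                job = min(ready, key=lambda j: j[0])  # earliest deadline first
                job[2] -= 1
                timeline.append(job[1])
            else:
                timeline.append(".")
        return "".join(timeline)

    print(edf([("A", 20, 10), ("B", 50, 25)], horizon=60))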

Periodic threads with completion deadlines:
[Figure: arrival times, execution times, and deadlines for the periodic tasks A and B, scheduled with (a) fixed-priority scheduling where A has priority, (b) fixed-priority scheduling where B has priority, and (c) earliest-deadline scheduling using completion deadlines; time in ms]

Aperiodic threads with starting deadlines:
[Figure: arrival times, requirements, and starting deadlines for tasks A-E, scheduled with (a) earliest deadline, (b) earliest deadline with unforced idle times, and (c) first-come first-served (FCFS)]

RATE MONOTONIC SCHEDULING

Works for processes which
  are periodic
  need the same amount of CPU time on each burst
Optimal static scheduling algorithm
  guaranteed to succeed if

    sum over i = 1..m of C_i / P_i <= m (2^(1/m) - 1)

  for m = 1, 10, 100, 1000: 1, 0.7, 0.69, 0.693
Works by assigning priorities to threads on the basis of their periods
  the highest-priority task is the one with the shortest period

Periodic task timing diagram:
[Figure: task P over two cycles - C is task P's execution (processing) time, T is task P's period]
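
A small sketch (Python) of the rate-monotonic bound: it prints the bound for growing m and applies the sufficient (but not necessary) test to the earlier periodic example, whose utilisation of 0.85 exceeds the three-task bound:

    import math

    def rms_bound(m):
        return m * (2 ** (1 / m) - 1)

    def rms_test(tasks):
        """tasks: list of (C_i, P_i); sufficient (not necessary) RMS schedulability test."""
        u = sum(c / p for c, p in tasks)
        return u, rms_bound(len(tasks)), u <= rms_bound(len(tasks))

    for m in (1, 10, 100, 1000):
        print(m, round(rms_bound(m), 3))     # 1.0, 0.718, 0.696, 0.693
    print(round(math.log(2), 3))             # the limit ln 2 = 0.693
    print(rms_test([(50, 100), (30, 200), (100, 500)]))   # utilisation 0.85 > bound ~0.78 -> not guaranteed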

Task set with RMS:
[Figure: tasks ordered by rate (Hz) - the highest-rate task gets the highest priority, the lowest-rate task the lowest priority]

WHY USE RMS?

Despite some obvious disadvantages of RMS over EDF, RMS is sometimes used
  it has a lower overhead
  simple
  in practice, performance is similar
  greater stability, predictability
[Figure: Gantt charts of a three-task set (A, B, C) scheduled with RMS, which misses a deadline ("failed"), and with EDF, which meets all deadlines; time in msec]

LINUX 2.4 SCHEDULING - SOFT REAL-TIME SUPPORT

User assigns a static priority to real-time processes (1-99), never changed by the scheduler
Conventional processes have a dynamic priority, always lower than that of real-time processes
  - sum of the base priority and the number of clock ticks left of the quantum for the current epoch
Scheduling classes
  SCHED_FIFO: first-in-first-out real-time threads
  SCHED_RR: round-robin real-time threads
  SCHED_OTHER: other, non-real-time threads
Within each class multiple priorities may be used
Deadlines cannot be specified, no guarantees are given
Due to the non-preemptive kernel, latency can be too high for real-time systems
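
For illustration, a minimal user-space sketch (Python on Linux; it needs appropriate privileges, and the priority value 50 is an arbitrary choice) of putting the calling process into one of these scheduling classes through the POSIX interface exposed by the os module:

    import os

    # Request round-robin real-time scheduling with static priority 50 (valid range 1-99).
    # Raises PermissionError unless run with sufficient privileges (e.g. CAP_SYS_NICE).
    os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(50))   # 0 = the calling process

    print(os.sched_getscheduler(0) == os.SCHED_RR)              # True if the call succeeded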

Linux scheduling:
[Figure: (a) relative thread priorities - A: minimum, B: middle, C: middle, D: maximum; (b) flow with FIFO scheduling; (c) flow with RR scheduling]
[Figure: traditional UNIX priority bands - from highest priority: waiting for disk I/O, waiting for disk buffer, waiting for terminal input, waiting for terminal output, waiting for child to exit (process waiting in kernel mode), then user priorities 0-3 (process waiting in user mode) down to the lowest priority; processes are queued per priority level]

UNIX SVR4 SCHEDULING

Highest preference to real-time processes
Next-highest to kernel-mode processes
Lowest preference to other user-mode processes
Real-time processes may block system services
SVR4 dispatch queues:
[Figure: per-priority dispatch queues]

WINDOWS 2000 SCHEDULING

Priorities organized into two bands or classes
  real-time
  variable
Priority-driven preemptive scheduler
Also: no deadlines, no guarantees

[Figure: Windows 2000 priority levels - system priorities 16-31 (the next thread to run is taken from the highest non-empty queue), user priorities 1-15, the zero-page thread at priority 0, and the idle thread]
[Figure: how a thread's priority is derived - process base priority, thread's base priority, and thread's dynamic priority for the classes highest, above normal, normal, below normal, lowest]

Problem: in real-life applications, many tasks are not always periodic
  static priorities may not work
If real-time threads run periodically with the same length, fixed priorities are no problem:
[Figure: timeline with a: a periodic real-time thread at the highest priority, b: another periodic real-time thread, and various different low-priority tasks (e.g., user I/O)]
But if the frequency of the high-priority task increases temporarily, the system may encounter overload:
  - system not able to respond
  - system may not be able to perform the requested service

Example: network interface control driver, requirements:
  avoid dropping packets if possible
  definitely avoid overload
If the receiver thread gets the highest priority permanently, the system may go into overload if the incoming rate exceeds a certain value.
  expected frequency: one packet every 64µs
  CPU time required to process a packet: 25µs
  32-entry ring buffer, at most 50% full
[Figure: a packet arrives every 64µs and is handled by the receiver thread at 25µs per packet]

SPORADIC SCHEDULING

POSIX standard to handle aperiodic or sporadic events with a static-priority, preemptive scheduler
Implemented in hard real-time systems such as QNX, some real-time versions of Linux, and (partially) the Real-Time Specification for Java (RTSJ)
Can be used to avoid overloading a system

Basic idea: simulate the periodic behaviour of a thread by assigning
  a real-time priority P_r
  a background priority P_b
  an execution budget E
  a replenishment interval R
to the thread.
Whenever the thread exhausts its execution budget, its priority is set to the background priority P_b
When the thread blocks after n units, n will be added to the execution budget R units after the execution started
When the execution budget is incremented, the thread priority is reset to P_r

Example:
  execution budget: 5
  replenishment interval: 13
Thread does not block:
[Figure: timeline showing the execution budget being consumed and restored after each replenishment interval]
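
An illustrative sketch (Python; the class, its names, and the example values are invented) of the sporadic-server bookkeeping described above: the thread keeps its real-time priority while it has budget, drops to the background priority when the budget is exhausted, and gets the consumed amount back one replenishment interval after the corresponding execution chunk started:

    class SporadicServer:
        def __init__(self, budget, interval, prio_rt, prio_bg):
            self.capacity = budget              # full execution budget E
            self.budget = budget                # remaining budget
            self.interval = interval            # replenishment interval R
            self.prio_rt, self.prio_bg = prio_rt, prio_bg
            self.pending = []                   # (replenish_time, amount)

        def priority(self):
            return self.prio_rt if self.budget > 0 else self.prio_bg

        def run(self, now, units):
            """The thread executes `units` time units of work starting at time `now`."""
            used = min(units, self.budget)
            self.budget -= used
            # the consumed amount comes back R units after this execution chunk started
            self.pending.append((now + self.interval, used))
            return used

        def tick(self, now):
            """Apply all replenishments that are due by time `now`."""
            due = [p for p in self.pending if p[0] <= now]
            for p in due:
                self.pending.remove(p)
                self.budget = min(self.capacity, self.budget + p[1])

    s = SporadicServer(budget=5, interval=10, prio_rt=90, prio_bg=10)
    s.run(now=0, units=5)
    print(s.priority())     # budget exhausted -> background priority (10)
    s.tick(now=10)
    print(s.priority())     # budget replenished -> real-time priority (90)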

Thread blocks:
[Figure: timeline with budget and replenishment intervals - (0) execution starts, 1st replenishment interval starts; (3) thread blocks; (5) thread continues execution, 2nd replenishment interval starts; (7) budget exhausted; (13) budget set to 3, thread continues execution; (16) budget exhausted; (18) budget set to 2, thread continues execution]

Example: network interface control driver
  use the expected incoming rate and the desired maximum CPU utilisation of the thread to compute the execution budget and the replenishment period
  if no other threads wait for execution, packets can be processed even if the load is higher
  otherwise, packets may be dropped
  a packet every 64µs, receiver thread needs 25µs per packet
  period: 64µs * 16 = 1024µs
  execution time: 25µs * 16 = 400µs
  CPU load caused by the receiver thread: 400/1024 = 0.39, about 39%

HARD REAL-TIME OS

We look at examples of two types of systems:
  configurable hard real-time systems
  - system designed as a real-time OS from the start
  hard real-time variants of general-purpose OSs
  - try to alleviate the shortcomings of the OS with respect to real-time apps

REAL-TIME SUPPORT IN LINUX

Scheduling:
  - POSIX SCHED_FIFO, SCHED_RR
  - ongoing efforts to improve scheduler efficiency
Virtual memory:
  - no VM for real-time apps
  - mlock() and mlockall() to switch off paging
Timer: resolution 10 ms, too coarse-grained for real-time apps
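
The parameter calculation above is easy to reproduce; a short sketch (Python) deriving the replenishment period, execution budget, and resulting CPU load from the packet inter-arrival time, the per-packet processing time, and the number of packets batched per interval (16, i.e. half of the 32-entry ring buffer):

    def sporadic_params(interarrival_us, service_us, batch):
        period = interarrival_us * batch       # replenishment interval
        budget = service_us * batch            # execution budget per interval
        return period, budget, budget / period

    period, budget, load = sporadic_params(64, 25, 16)
    print(period, budget, round(load, 2))      # 1024 400 0.39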

HIGH KERNEL LATENCY IN LINUX

Possible solutions:
  Low-latency Linux
  - a thread in kernel mode yields the CPU
  - reduces the size of non-preemptable sections
  - used in some real-time variants of Linux
  Preemptable Linux
  - kernel data protected using mutexes/spinlocks
  Lock-breaking preemptable Linux
  - combination of the previous two approaches

QNX

Microkernel-based architecture
POSIX standard API
Modular: can be customised for very small size (e.g., embedded systems) or for large systems
Memory protection for user applications and OS components
Scheduling:
  FIFO scheduling
  Round-robin
  Adaptive scheduling
  - when a thread consumes its timeslice, its priority is reduced by one
  - when a thread blocks, it immediately comes back to its base priority
  POSIX sporadic scheduling

RTLINUX

Abstract machine layer between the actual hardware and the Linux kernel
Takes control of
  - hardware interrupts
  - timer hardware
  - the interrupt disable mechanism
Offers a real-time scheduler
  runs with no interference from the Linux kernel
  guaranteed upper bound on high-priority thread scheduling
  guaranteed upper bound on the delay for interrupt service routines
Programmer must utilise the RTLinux API for real-time applications

WINDOWS CE 3.0

Componentised OS designed for embedded systems with hard real-time support
Handles nested interrupts
Handles priority inversion based on priority inheritance

Linux scheduling:
[Figure: (a) relative thread priorities - A: minimum, B: middle, C: middle, D: maximum; (b) flow with FIFO scheduling; (c) flow with RR scheduling]

UNIX SVR4 SCHEDULING

Highest preference to real-time processes
Next-highest to kernel-mode processes
Lowest preference to other user-mode processes
SVR4 dispatch queues:
[Figure: dispq - one dispatch queue of runnable processes (P) per priority level, with the dqactmap bit vector marking the non-empty queues]

WINDOWS 2000 SCHEDULING

Priorities organized into two bands or classes
  real-time classes: highest (31) down to lowest (16)
  variable classes: highest (15) down to lowest (0)
Priority-driven preemptive scheduler

[Figure: Windows 2000 priority relationships - priority levels 15 down to 1, showing the process base priority, the thread's base priority (highest, above normal, normal, below normal, lowest), and the thread's dynamic priority]