COSC243 Part 2: Operating Systems
Lecture 17: CPU Scheduling
Zhiyi Huang
Dept. of Computer Science, University of Otago
Zhiyi Huang (Otago) COSC243 Lecture 17 1 / 30
Overview

Last lecture: Cooperating Processes and Data-Sharing

This lecture:
- Criteria for scheduling algorithms
- Some scheduling algorithms: first-come-first-served, shortest-job-first, priority scheduling, round-robin scheduling, multilevel queue scheduling

Note: you will have a TUTORIAL EXAM on CPU scheduling in tutorial 11A (14-15 May). It's worth 10%. The questions in Tutorial 10A are practice for this exam.
CPU Scheduling: A Recap

A CPU scheduler is the kernel process which determines how to move processes between the ready queue and the CPU.

[Figure: processes move from the ready queue to the CPU. A running process leaves the CPU when it makes an I/O request (joining an I/O device queue until the interrupt occurs), forks a child and waits for it, or has its time slice expire; in each case it eventually returns to the ready queue.]
Context Switching

When the operating system switches between processes, it has a fair amount of housekeeping to do. This housekeeping is known as context switching.

[Figure: process P0 is executing; on an interrupt or system call the operating system saves its state into PCB0 and reloads the state from PCB1, after which P1 executes. On the next interrupt or system call, P1's state is saved into PCB1 and P0's state is reloaded from PCB0. Each process is idle while the other executes.]
Terminology: Scheduler and Dispatcher

The scheduler decides which process to give to the CPU next, and when to give it. Its decisions are carried out by the dispatcher.

Dispatching involves:
- Switching context
- Switching to user mode
- Jumping to the proper location in the new program

Dispatch latency: the time it takes the dispatcher to do this.
Why Do We Want a Scheduler? (1)

One key motivation behind CPU scheduling is to keep the CPU busy. This means removing processes from the CPU while they're waiting. If processes never had to wait, then scheduling wouldn't increase CPU utilisation. However, it's a fact about processes that they tend to exhibit a CPU burst cycle: a CPU burst, then an I/O burst, then another CPU burst, and so on.
How Long is a CPU Burst?

This is the kind of frequency curve we can expect:

[Figure: frequency (y-axis) against CPU burst duration in ms, 0-40 ms (x-axis). Short bursts are far more frequent than long ones.]
Why Do We Want a Scheduler? (2)

Another reason for having a scheduler is so that processes don't have to spend too much time waiting for the CPU. Even if the CPU is always busy, executing processes in different orders can change the average amount of time a process spends queueing for the CPU.

[Figure: bar chart of the total CPU time needed by processes P1-P4.]
Why Do We Want a Scheduler? (3)

Another reason for having a scheduler is so that interactive processes always respond quickly.
- One question is how long a process spends waiting for the CPU in total.
- A different question is how long on average it waits in between visits to the CPU. (Important for interactive processes.)

[Figure: with fast CPU switching, P1 and P2 alternate frequently over a given time span; with slower switching, each gets longer but less frequent turns on the CPU over the same span.]
Criteria for Scheduling Algorithms

- CPU utilisation: the percentage of time that the CPU is busy.
- Throughput: the number of processes that are completed per time unit.
- Turnaround time (for a single process): the length of time from when the process was submitted (arrived) to when it is completed.
- Waiting time (for a single process): the total amount of time the process spends waiting for the CPU.
- Response time (for a single process): the average time from the submission of a request to a process until the first response is produced.
Terminology: The Ready Queue

[Figure: the ready queue as a linked list of PCBs between head and tail pointers. Each PCB holds the process ID number, process state, program counter, contents of the CPU registers, memory management information, I/O status information and accounting information, plus a pointer to the next PCB.]

Remember: there are two kinds of waiting:
- Waiting for the CPU (in the ready queue);
- Waiting for an I/O device (in a device queue).

Don't be confused when you hear about processes waiting in the ready queue!
Terminology: Preemption

There are four situations in which scheduling decisions take place:
1. a process switches from running to waiting state;
2. a process switches from running to ready (due to an interrupt);
3. a process switches from waiting to ready state (due to completion of I/O);
4. a process terminates.

In a non-preemptive scheduling system, scheduling takes place only under 1 and 4, where there is no choice. In a preemptive scheduling system, scheduling can take place under 2 and 3 as well.
Implementing a Preemptive System

Implementing preemption is hard. What if a process is preempted while a system call is being executed? Kernel data (e.g. I/O queues) might be left in an inconsistent state. Earlier versions of UNIX dealt with this problem by waiting until system calls were completed before switching context.

Some systems: MS Windows 3.1 and below are non-preemptive. Windows 95, NT, XP etc. are preemptive. Linux is fully preemptive as of kernel 2.6.
1) First-Come-First-Served Scheduling

The simplest method is to execute the processes in the ready queue on a first-come-first-served (FCFS) basis. When a process becomes ready, it is put at the tail of the queue. When the currently executing process terminates, or waits for I/O, the process at the front of the queue is selected next. This algorithm is non-preemptive.
Gantt Charts

The operation of a scheduling algorithm is commonly represented in a Gantt chart. Consider the following process information:

Process  Arrival Time  Burst Time
P1       0             24
P2       1             3
P3       2             3

(N.B. We're just looking at the initial CPU burst for each process.)

The Gantt chart for FCFS with the above data is:

| P1 | P2 | P3 |
0    24   27   30
Gantt Charts for Algorithm Evaluation

Process  Arrival Time  Burst Time
P1       0             24
P2       1             3
P3       2             3

| P1 | P2 | P3 |
0    24   27   30

Waiting times:
P1? 0 ms
P2? 24 - 1 = 23 ms
P3? 27 - 2 = 25 ms

Average waiting time? (0 + 23 + 25)/3 ms = 16 ms
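The waiting-time calculation above is easy to automate. The following is a minimal sketch (the function name and the (arrival, burst) representation are my own, not from the lecture):

```python
def fcfs_waiting_times(procs):
    """procs: list of (arrival, burst) pairs, in arrival order.
    Returns the FCFS waiting time of each process."""
    waits = []
    clock = 0
    for arrival, burst in procs:
        start = max(clock, arrival)   # CPU may be idle until the process arrives
        waits.append(start - arrival)
        clock = start + burst
    return waits

# The example from the slide:
waits = fcfs_waiting_times([(0, 24), (1, 3), (2, 3)])
print(waits, sum(waits) / len(waits))   # [0, 23, 25] 16.0
```

Feeding it the data from the next slide gives the much smaller average of 2 ms.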
Gantt Charts for Algorithm Evaluation

Process  Arrival Time  Burst Time
P1       2             24
P2       0             3
P3       1             3

| P2 | P3 | P1 |
0    3    6    30

Waiting times:
P1? 6 - 2 = 4 ms
P2? 0 ms
P3? 3 - 1 = 2 ms

Average waiting time? (4 + 0 + 2)/3 ms = 2 ms
FCFS: Advantages and Disadvantages

Advantages:
- Easy to implement.
- Easy to understand.

Disadvantages:
- Waiting time not likely to be minimal.
- Convoy effect: lots of small processes can get stuck behind one big one. Q: could throughput (the number of processes per time unit going through the system) be improved?
- Bad response time (so it is bad for time-sharing systems).
2) Shortest-Job-First Scheduling

If we knew in advance which process on the list had the shortest burst time, we could choose to execute that process next. This method is called shortest-job-first (SJF) scheduling. Example:

Process  Burst Time
P1       6
P2       8
P3       7
P4       3

| P4 | P1 | P3 | P2 |
0    3    9    16   24

N.B. processes with equal burst times are executed in FCFS order.
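Non-preemptive SJF can be sketched with a priority queue keyed on burst time. This is an illustrative implementation, not the lecture's own code; the (name, arrival, burst) tuples and FCFS tie-breaking via the original queue position are my assumptions:

```python
import heapq

def sjf_schedule(procs):
    """procs: list of (name, arrival, burst). Whenever the CPU is free,
    run the ready process with the shortest burst (arrival order breaks
    ties). Returns the order in which processes execute."""
    procs = sorted(enumerate(procs), key=lambda p: (p[1][1], p[0]))
    order, ready, clock, i = [], [], 0, 0
    while i < len(procs) or ready:
        # admit everything that has arrived by now
        while i < len(procs) and procs[i][1][1] <= clock:
            idx, (name, arrival, burst) = procs[i]
            heapq.heappush(ready, (burst, idx, name))
            i += 1
        if not ready:                       # CPU idle: jump to next arrival
            clock = procs[i][1][1]
            continue
        burst, _, name = heapq.heappop(ready)
        order.append(name)
        clock += burst
    return order

# The example from the slide (all four processes ready at time 0):
print(sjf_schedule([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
# ['P4', 'P1', 'P3', 'P2']
```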
SJF: Advantages and Disadvantages

Advantages:
- Provably optimal average waiting time.

Disadvantages:
- You never know in advance what the length of the next CPU burst is going to be.
- Possibility of long processes never getting executed?
Predicting the next CPU burst length

It's possible to approximate the length of the next CPU burst: it's likely to be similar in length to the previous CPU bursts. A commonly-used formula: the exponential average CPU burst length.

τ_{n+1} = α · t_n + (1 − α) · τ_n

- τ_n: the predicted length of CPU burst n.
- t_n: the actual length of CPU burst n.
- α: a value between 0 and 1.
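The formula unrolls into a simple loop over the burst history. In this sketch the default α = 0.5 and initial guess τ_0 = 10 are illustrative choices, not values from the lecture:

```python
def exp_average(bursts, alpha=0.5, tau0=10):
    """Predict the next CPU burst from the measured history `bursts`
    using tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(exp_average([6]))      # 0.5*6 + 0.5*10 = 8.0
print(exp_average([6, 4]))   # 0.5*4 + 0.5*8  = 6.0
```

Note the two extremes: with α = 1 only the most recent burst counts, while with α = 0 the measured history is ignored entirely.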
Preemption and SJF Scheduling

Scenario:
- A process P1 is currently executing.
- A new process P2 arrives before P1 is finished.
- P2's burst time is shorter than the remaining burst time of P1.

Non-preemptive SJF: P1 keeps the CPU. Preemptive SJF: P2 takes the CPU.

Tiny example:

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4

| P1 | P2 | P1 |
0    1    5    12
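Preemptive SJF (also known as shortest-remaining-time-first) can be sketched by simulating one time unit at a time. This is my own illustrative simulation, assuming a dict of name → (arrival, burst) and alphabetical tie-breaking:

```python
def srtf(procs):
    """Preemptive SJF, one time unit at a time.
    procs: dict name -> (arrival, burst).
    Returns a list of (name, start, end) Gantt segments."""
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    clock, gantt = 0, []
    while any(remaining.values()):
        ready = [name for name, (arrival, burst) in procs.items()
                 if arrival <= clock and remaining[name] > 0]
        if not ready:
            clock += 1
            continue
        # run the process with the shortest remaining time for one unit
        name = min(ready, key=lambda p: (remaining[p], p))
        if gantt and gantt[-1][0] == name:
            gantt[-1] = (name, gantt[-1][1], clock + 1)  # extend last segment
        else:
            gantt.append((name, clock, clock + 1))
        remaining[name] -= 1
        clock += 1
    return gantt

# The tiny example from the slide:
print(srtf({"P1": (0, 8), "P2": (1, 4)}))
# [('P1', 0, 1), ('P2', 1, 5), ('P1', 5, 12)]
```

At time 1, P2's burst of 4 beats P1's remaining 7, so P2 preempts; P1 resumes at time 5.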
3) Priority Scheduling

In priority scheduling, each process is allocated a priority when it arrives; the CPU is allocated to the process with highest priority. Priorities are represented by numbers, with low numbers being highest priority.

Can priority scheduling be preemptive? Yes, no reason why not.

What's the relation of SJF scheduling to priority scheduling? SJF is a type of priority scheduling: specifically, one where the priority of a process is set to be its estimated next CPU burst (given that low numbers denote high priorities).
Starvation and Aging

Starvation occurs when a process waits indefinitely to be allocated the CPU. Priority scheduling algorithms are susceptible to starvation. Imagine a process P1 is waiting for the CPU, and a stream of higher-priority processes is arriving. If these processes arrive sufficiently fast, P1 will never get a chance to execute.

A solution to the starvation problem is to increase the priority of processes as a function of how long they've been waiting for the CPU. (This is called aging.)
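One simple way to realise aging is to subtract a waiting-time bonus from each process's priority number (remember: lower number = higher priority). The function, the (name, priority, enqueue_time) tuples and the linear aging_rate are all hypothetical choices for illustration:

```python
def pick_next(ready, clock, aging_rate=1):
    """Priority scheduling with linear aging.
    ready: list of (name, base_priority, enqueue_time); lower number
    means higher priority. The effective priority number drops by
    aging_rate for every time unit a process has waited."""
    def effective(p):
        name, priority, enqueued = p
        return priority - aging_rate * (clock - enqueued)
    return min(ready, key=effective)[0]

# P1 has low priority (5) but has waited since time 0;
# P2 has high priority (1) but only just arrived at time 9.
ready = [("P1", 5, 0), ("P2", 1, 9)]
print(pick_next(ready, clock=10))                 # P1: its long wait wins
print(pick_next(ready, clock=10, aging_rate=0))   # P2: no aging, pure priority
```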
4) Round-Robin Scheduling

Round-robin (RR) scheduling is designed for time-sharing systems.
- A small unit of time (time quantum) is defined.
- The ready queue is treated as a circular list.
- The CPU scheduler goes round the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

[Figure: processes leave the ready queue for the CPU and return to its tail when their time quantum expires; a process that starts an I/O operation leaves the CPU for a device queue instead.]
An example of Round-Robin

Say we have an RR algorithm with a quantum of 3, and the following process info:

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       4             4

The Gantt chart will look like this (with the remaining burst time after each slot shown in brackets):

| P1   | P2   | P1   | P3   | P2   | P1   | P3   |
0      3      6      9      12     13     15     16
 (P1=5) (P2=1) (P1=2) (P3=1) (P2=0) (P1=0) (P3=0)

Note: in RR it's possible that a process re-enters the ready queue at the same time that another process arrives in the queue. We'll use FCFS as a tie-breaker here too.
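The example above can be reproduced with a queue-based simulation. This is an illustrative sketch (the function and tuple layout are my own); a process arriving at the same instant another's quantum expires is queued first, which matches the slide's tie-break:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst), sorted by arrival time.
    Returns the Gantt chart as (name, start, end) segments."""
    remaining = {name: burst for name, arrival, burst in procs}
    queue, gantt, clock, i = deque(), [], 0, 0
    while i < len(procs) or queue:
        if not queue:                         # CPU idle until next arrival
            clock = max(clock, procs[i][1])
            while i < len(procs) and procs[i][1] <= clock:
                queue.append(procs[i][0]); i += 1
        name = queue.popleft()
        run = min(quantum, remaining[name])
        gantt.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        # arrivals during this slot join ahead of the preempted process
        while i < len(procs) and procs[i][1] <= clock:
            queue.append(procs[i][0]); i += 1
        if remaining[name] > 0:
            queue.append(name)
    return gantt

# The example from the slide, quantum = 3:
for segment in round_robin([("P1", 0, 8), ("P2", 1, 4), ("P3", 4, 4)], 3):
    print(segment)
```

Running it prints exactly the seven slots of the Gantt chart above, from (P1, 0, 3) through to (P3, 15, 16).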
Changing the Time Quantum in RR Scheduling

If the time quantum is set to be infinitely large, RR scheduling reduces to FCFS scheduling. If the time quantum is set to be very small, we can talk of processor sharing. However, there are drawbacks to making the time quantum very small. Let's say we make the time quantum the same as the context switch time. Then we'll spend half our time on context switching!
RR: Advantages and Disadvantages

It all depends on the size of the time quantum.
- If it's big, we get the advantages/disadvantages of FCFS scheduling.
- If it's very small, we get faster response time, but slower throughput. (Why? Because there's more time spent context-switching.)
- Even if you ignore context-switch time, turnaround time goes down if most processes complete within a single quantum. For instance, say there are 3 processes, each with a next CPU burst of 10. If the quantum size is 1, the average turnaround time is 29. But if the quantum size is 10, each process finishes within a single quantum, and the average turnaround time is 20.
- If we take context-switch time into account, making the time quantum small also has the effect of increasing turnaround time.
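The turnaround figures in the example can be checked with a short simulation of identical processes under RR, ignoring context-switch time as the slide does (the function is my own sketch):

```python
from collections import deque

def rr_turnaround(n_procs, burst, quantum):
    """Average turnaround time for n identical processes (all arriving
    at time 0 with the same burst) under RR, ignoring context switches."""
    queue = deque((p, burst) for p in range(n_procs))
    clock, finish = 0, {}
    while queue:
        p, rem = queue.popleft()
        run = min(quantum, rem)
        clock += run
        if rem == run:
            finish[p] = clock          # arrived at 0, so turnaround = finish
        else:
            queue.append((p, rem - run))
    return sum(finish.values()) / n_procs

print(rr_turnaround(3, 10, 1))    # 29.0
print(rr_turnaround(3, 10, 10))   # 20.0
```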
5) Multilevel Queue Scheduling

Let's say you have two groups of processes: interactive processes and batch processes. What you really want is to run different scheduling algorithms for the two different groups.

Multilevel queue scheduling:
- Split the ready queue into a number of different queues, each with its own scheduling algorithm.
- Implement a scheduling algorithm to decide which queue is next allocated the CPU. (Preemptive priority scheduling is often used.)
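The skeleton of this idea is small: keep the queues in priority order, and let each queue supply its own selection function. Everything here is a hypothetical sketch — the lecture does not prescribe an implementation, and the per-queue schedulers below (take-front for the interactive RR queue, shortest-burst for the batch queue) are just example choices:

```python
from collections import deque

def multilevel_pick(queues):
    """queues: list of (ready_queue, scheduler), highest-priority queue
    first. Preemptive priority between queues: always serve the first
    non-empty queue, using that queue's own scheduling algorithm."""
    for ready, scheduler in queues:
        if ready:
            return scheduler(ready)
    return None                       # nothing ready anywhere

interactive = deque()                 # served round-robin: take the front
batch = [("B1", 40), ("B2", 5)]       # (name, burst), served shortest-job-first
pick = multilevel_pick([
    (interactive, lambda q: q[0]),
    (batch, lambda q: min(q, key=lambda p: p[1])[0]),
])
print(pick)   # B2: the interactive queue is empty, so SJF picks from batch
```

As soon as an interactive process appears, it is chosen ahead of every batch process, however short their bursts.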
Exercises

For this lecture, you should have read Chapter 5 (Sections 1, 2, 3 and 7) of Silberschatz et al.

For next lecture:
1. Read Chapter 6 (Sections 1-7).
2. The Unix command nice can be used to run processes at different priorities. Read the man page for nice. Using one of the programs you have written on UNIX, try the following:
/bin/nice -n 19 [program]
/bin/nice -n 10 [program]
Notice any difference in speed?