Last Class: CPU Scheduling! Adjusting Priorities in MLFQ!


Last Class: CPU Scheduling!
Scheduling algorithms:
- FCFS
- Round Robin
- SJF
- Multilevel Feedback Queues
- Lottery Scheduling
Review questions: How does each work? Advantages? Disadvantages?
Lecture 7, page 1

Adjusting Priorities in MLFQ!
- A job starts in the highest-priority queue.
- If a job's time slice expires, drop its priority one level.
- If a job's time slice does not expire (the context switch comes from an I/O request instead), raise its priority one level, up to the top priority level.
- As a result, CPU-bound jobs drop like a rock in priority and I/O-bound jobs stay at a high priority.
Lecture 7, page 2
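
A minimal sketch of this adjustment rule (not from the slides; the level numbering and the onQuantumExpired/onBlockedForIO names are illustrative assumptions):

    // Level 0 is the highest priority; NUM_LEVELS - 1 is the lowest.
    const int NUM_LEVELS = 3;

    struct Job {
        int level = 0;                  // every job starts in the highest-priority queue
    };

    // The job used up its whole time slice: drop it one level.
    void onQuantumExpired(Job& job) {
        if (job.level < NUM_LEVELS - 1) job.level++;
    }

    // The job gave up the CPU early to wait for I/O: raise it one level.
    void onBlockedForIO(Job& job) {
        if (job.level > 0) job.level--;
    }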

Multilevel Feedback Queues: Example 1!
3 jobs, of length 30, 20, and 10 seconds each; initial time slice 1 second; context switch time of 0 seconds; all CPU bound (no I/O); 3 queues. Fill in the table:

    Job       Length   Completion Time (RR)   Completion Time (MLFQ)   Wait Time (RR)   Wait Time (MLFQ)
    1         30
    2         20
    3         10
    Average

    Queue   Time Slice
    1       1
    2       2
    3       4
Lecture 7, page 3

Multilevel Feedback Queues: Example 1!
3 jobs, of length 30, 20, and 10 seconds each; initial time slice 1 second; context switch time of 0 seconds; all CPU bound (no I/O); 3 queues.

    Job       Length   Completion Time (RR)   Completion Time (MLFQ)   Wait Time (RR)   Wait Time (MLFQ)
    1         30       60                     60                       30               30
    2         20       50                     53                       30               33
    3         10       30                     32                       20               22
    Average            46 2/3                 48 1/3                   26 2/3           28 1/3

    Queue   Time Slice
    1       1
    2       2
    3       4

MLFQ trace, as (job, time its slice ends, total CPU time so far):
    (1, 1, 1) (2, 2, 1) (3, 3, 1)
    (1, 5, 3) (2, 7, 3) (3, 9, 3)
    (1, 13, 7) (2, 17, 7) (3, 21, 7)
    (1, 25, 11) (2, 29, 11) (3, 32, 10 - done) ...
Lecture 7, page 4
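
To reproduce the MLFQ column above, here is a small simulator sketch (my own code, not the lecture's), assuming all jobs arrive at time 0, are purely CPU bound, and context switches are free. It prints completion times 60, 53, and 32 and wait times 30, 33, and 22 for the three jobs:

    // Minimal MLFQ simulator for the worked example above.
    #include <algorithm>
    #include <cstdio>
    #include <deque>
    #include <vector>

    int main() {
        const std::vector<int> length = {30, 20, 10};   // job lengths from the example
        std::vector<int> remaining = length;
        std::vector<int> completion(length.size(), 0);
        const int slice[3] = {1, 2, 4};                 // time slice per queue level
        std::deque<int> queue[3];
        for (int j = 0; j < (int)length.size(); ++j) queue[0].push_back(j);

        int now = 0;
        for (;;) {
            int level = 0;
            while (level < 3 && queue[level].empty()) ++level;
            if (level == 3) break;                      // every job has finished
            int job = queue[level].front();
            queue[level].pop_front();
            int run = std::min(slice[level], remaining[job]);
            now += run;
            remaining[job] -= run;
            if (remaining[job] == 0) {
                completion[job] = now;                  // finished before its slice expired
            } else {
                queue[std::min(level + 1, 2)].push_back(job);  // used its full slice: drop a level
            }
        }
        for (size_t j = 0; j < length.size(); ++j)
            std::printf("job %zu: completion %d, wait %d\n",
                        j + 1, completion[j], completion[j] - length[j]);
    }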

Multilevel Feedback Queues: Example 2!
3 jobs, of length 30, 20, and 10 seconds; the 10-second job does 1 second of I/O every other second; initial time slice 1 second; context switch time of 0 seconds; 2 queues. Fill in the table:

    Job       Length   Completion Time (RR)   Completion Time (MLFQ)   Wait Time (RR)   Wait Time (MLFQ)
    1         30
    2         20
    3         10
    Average

    Queue   Time Slice
    1       1
    2       2
Lecture 7, page 5

Multilevel Feedback Queues: Example 2!
3 jobs, of length 30, 20, and 10 seconds; the 10-second job does 1 second of I/O every other second; initial time slice 1 second; context switch time of 0 seconds; 2 queues.

    Job       Length   Completion Time (RR)   Completion Time (MLFQ)   Wait Time (RR)   Wait Time (MLFQ)
    1         30       60                     60                       30               30
    2         20       50                     50                       30               30
    3         10       30                     18                       20               8
    Average            46 2/3                 45                       26 2/3           25 1/3
Lecture 7, page 6

Improving Fairness!
Since SJF is optimal but unfair, any increase in fairness gained by giving long jobs a fraction of the CPU while shorter jobs are available will degrade average waiting time. Possible solutions:
- Give each queue a fraction of the CPU time. This is only fair if jobs are evenly distributed among the queues.
- Raise the priority of jobs that are not getting serviced (Unix originally did this). This ad hoc solution avoids starvation, but average waiting time suffers when the system is overloaded because all of the jobs end up at a high priority.
Lecture 7, page 7

Lottery Scheduling!
- Give every job some number of lottery tickets.
- On each time slice, randomly pick a winning ticket. On average, the CPU time a job receives is proportional to the number of tickets it holds.
- Assign tickets by giving the most to short-running jobs and fewer to long-running jobs (approximating SJF). To avoid starvation, every job gets at least one ticket.
- Degrades gracefully as load changes: adding or deleting a job affects all jobs proportionately, independent of how many tickets each job has.
Lecture 7, page 8
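
A minimal sketch of one lottery draw (my example, not the lecture's code; the Job struct and the use of rand() are illustrative): pick a uniformly random ticket and walk the job list until the running ticket count covers it.

    #include <cstdlib>
    #include <vector>

    struct Job {
        int tickets;        // every job holds at least one ticket
    };

    // Returns the index of the job that wins this time slice.
    int pickWinner(const std::vector<Job>& jobs) {
        int total = 0;
        for (const Job& j : jobs) total += j.tickets;
        int winning = std::rand() % total;         // winning ticket in [0, total)
        int counted = 0;
        for (size_t i = 0; i < jobs.size(); ++i) {
            counted += jobs[i].tickets;
            if (winning < counted) return (int)i;  // this job owns the winning ticket
        }
        return (int)jobs.size() - 1;               // not reached when total > 0
    }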

Lottery Scheduling: Example!
Short jobs get 10 tickets each; long jobs get 1 ticket each. Fill in the table:

    # short jobs / # long jobs   % of CPU each short job gets   % of CPU each long job gets
    1/1                          91%                            9%
    0/2
    2/0
    10/1
    1/10
Lecture 7, page 9

Lottery Scheduling: Example!
Short jobs get 10 tickets each; long jobs get 1 ticket each.

    # short jobs / # long jobs   % of CPU each short job gets   % of CPU each long job gets
    1/1                          91% (10/11)                    9% (1/11)
    0/2                          --                             50% (1/2)
    2/0                          50% (10/20)                    --
    10/1                         10% (10/101)                   < 1% (1/101)
    1/10                         50% (10/20)                    5% (1/20)
Lecture 7, page 10

Today: Synchronization!
What knowledge and mechanisms do we need to get independent processes to communicate and to get a consistent view of the world (the computer's state)?

Example: Too Much Milk

    Time   You                               Your roommate
    3:00   Arrive home
    3:05   Look in fridge, no milk
    3:10   Leave for grocery
    3:15                                     Arrive home
    3:20   Arrive at grocery                 Look in fridge, no milk
    3:25   Buy milk                          Leave for grocery
    3:35   Arrive home, put milk in fridge
    3:45                                     Buy milk
    3:50                                     Arrive home, put milk in fridge. Oh no!
Lecture 7, page 11

Synchronization Terminology!
- Synchronization: the use of atomic operations to ensure cooperation between threads.
- Mutual exclusion: ensuring that only one thread does a particular activity at a time and excluding all other threads from doing it at that time.
- Critical section: a piece of code that only one thread can execute at a time.
- Lock: a mechanism to prevent another process from doing something. Lock before entering a critical section or before accessing shared data; unlock when leaving the critical section or when access to the shared data is complete; wait if it is locked. => All synchronization involves waiting.
Lecture 7, page 12

Too Much Milk: Solution 1!
What are the correctness properties for this problem?
- Only one person buys milk at a time.
- Someone buys milk if you need it.
Restrict ourselves to atomic loads and stores as building blocks:
- Leave a note (a version of lock).
- Remove the note (a version of unlock).
- Do not buy any milk if there is a note (wait).

Thread A and Thread B (identical code):
    if (noMilk && noNote) {
        leave note;
        buy milk;
        remove note;
    }
Does this work?
Lecture 7, page 13

Too Much Milk: Solution 2!
How about using labeled notes, so we can leave a note before checking the milk?

Thread A:
    leave note A;
    if (noNote B) {
        if (noMilk) {
            buy milk;
        }
    }
    remove note A;

Thread B:
    leave note B;
    if (noNote A) {
        if (noMilk) {
            buy milk;
        }
    }
    remove note B;

Does this work?
Lecture 7, page 14

Too Much Milk: Solution 3!
Thread A:
    leave note A;
X:  while (note B) {
        do nothing;
    }
    if (noMilk) {
        buy milk;
    }
    remove note A;

Thread B:
    leave note B;
Y:  if (noNote A) {
        if (noMilk) {
            buy milk;
        }
    }
    remove note B;

Does this work? (A runnable version of this solution is sketched after the correctness argument below.)
Lecture 7, page 15

Correctness of Solution 3!
At point Y, either there is a note A or there is not:
1. If there is no note A, it is safe for thread B to check and buy milk if needed (thread A has not started yet).
2. If there is a note A, then thread A is checking and buying milk as needed, or is waiting for B to quit; so B quits by removing note B.
At point X, either there is a note B or there is not:
1. If there is no note B, it is safe for A to buy, since B has either not started or has already quit.
2. If there is a note B, A waits until there is no longer a note B, and then either finds the milk that B bought or buys it if needed.
Thus thread B either buys milk (which thread A then finds) or does not, but either way it removes note B. Since thread A loops, it waits for B to buy milk or not, and then, if B did not buy, A buys the milk.
Lecture 7, page 16
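
Here is a runnable sketch of Solution 3 (my own translation to C++, not the lecture's code). The slides assume that loads and stores are atomic; std::atomic<bool> is used below to obtain that guarantee from the compiler and hardware. The milk counter and thread names are illustrative.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> noteA{false}, noteB{false};
    int milk = 0;                        // shared state guarded by the note protocol

    void threadA() {
        noteA.store(true);               // leave note A
        while (noteB.load()) {           // X: busy-wait while note B is up
            /* do nothing */
        }
        if (milk == 0) { milk++; std::puts("A bought milk"); }
        noteA.store(false);              // remove note A
    }

    void threadB() {
        noteB.store(true);               // leave note B
        if (!noteA.load()) {             // Y: only buy if there is no note A
            if (milk == 0) { milk++; std::puts("B bought milk"); }
        }
        noteB.store(false);              // remove note B
    }

    int main() {
        std::thread a(threadA), b(threadB);
        a.join();
        b.join();
        std::printf("milk in fridge: %d\n", milk);   // always exactly 1
    }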

Is Solution 3 a good solution?!
- It is too complicated: it was hard to convince ourselves that this solution works.
- It is asymmetrical: threads A and B run different code. Adding more threads would require different code for each new thread and modifications to the existing threads.
- A is busy-waiting: A consumes CPU cycles even though it is not doing any useful work.
- The solution relies on loads and stores being atomic.
Lecture 7, page 17

Language Support for Synchronization!
Have your programming language provide atomic routines for synchronization:
- Locks: one thread holds the lock at a time, executes its critical section, then releases the lock.
- Semaphores: a more general version of locks.
- Monitors: connect shared data to synchronization primitives.
=> All of these require some hardware support, and waiting.
Lecture 7, page 18

Locks!
Locks provide mutual exclusion for shared data with two atomic routines:
- Lock.Acquire: wait until the lock is free, then grab it.
- Lock.Release: unlock, and wake up any thread waiting in Acquire.
Rules for using a lock:
- Always acquire the lock before accessing shared data.
- Always release the lock after finishing with the shared data.
- The lock is initially free.
Lecture 7, page 19

Implementing Too Much Milk with Locks!
Thread A and Thread B (identical code):
    Lock.Acquire();
    if (noMilk) {
        buy milk;
    }
    Lock.Release();
This solution is clean and symmetric. How do we make Lock.Acquire and Lock.Release atomic?
Lecture 7, page 20
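
The same structure in runnable C++ (a sketch, assuming a standard mutex stands in for the slides' Lock): std::mutex provides Acquire/Release as lock()/unlock(), and std::lock_guard pairs them automatically on scope exit.

    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex fridgeLock;
    int milk = 0;

    void buyer(const char* who) {
        std::lock_guard<std::mutex> guard(fridgeLock);  // Lock.Acquire()
        if (milk == 0) {
            milk++;                                     // buy milk
            std::printf("%s bought milk\n", who);
        }
    }                                                   // Lock.Release() when guard goes out of scope

    int main() {
        std::thread a(buyer, "A"), b(buyer, "B");
        a.join();
        b.join();
        std::printf("milk in fridge: %d\n", milk);      // always exactly 1
    }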

Hardware Support for Synchronization!
Implementing high-level synchronization primitives requires low-level hardware support. What we have and what we want:
- Concurrent programs, built on top of...
- High-level atomic operations (software): locks, semaphores, monitors, send & receive, built on top of...
- Low-level atomic operations (hardware): load/store, interrupt disable, test&set.
Lecture 7, page 21

Implementing Locks by Disabling Interrupts!
There are two ways the CPU scheduler gets control:
- Internal events: the thread does something to relinquish control (e.g., I/O).
- External events: interrupts (e.g., the time slice expiring) cause the scheduler to take control away from the running thread.
On uniprocessors, we can prevent the scheduler from getting control as follows:
- Internal events: prevent these by not requesting any I/O operations during a critical section.
- External events: prevent these by disabling interrupts (i.e., tell the hardware to delay handling any external events until the thread is finished with the critical section).
Why not have the OS support Lock::Acquire() and Lock::Release() as system calls?
Lecture 7, page 22
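
As a side note, a lock can also be built directly on the test&set primitive listed above. The sketch below is my example, not the lecture's kernel implementation, and it busy-waits instead of putting the caller to sleep; std::atomic_flag::test_and_set compiles down to an atomic read-modify-write instruction.

    #include <atomic>

    std::atomic_flag lockFlag = ATOMIC_FLAG_INIT;       // clear means the lock is free

    void Acquire() {
        // test&set: atomically set the flag and return its previous value.
        // If the previous value was already set, someone else holds the lock: keep spinning.
        while (lockFlag.test_and_set(std::memory_order_acquire)) {
            /* busy-wait */
        }
    }

    void Release() {
        lockFlag.clear(std::memory_order_release);      // mark the lock free again
    }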

Implementing Locks by Disabling Interrupts!
For uniprocessors, we can disable interrupts for high-level primitives like locks, whose implementations are private to the kernel. The kernel ensures that interrupts are not disabled forever, just as it already does during interrupt handling.

    class Lock {
      public:
        void Acquire(Thread *T);
        void Release();
      private:
        int value;
        Queue Q;
    };

    Lock::Lock() {
        value = FREE;                // lock is free
        Q = empty;                   // queue is empty
    }

    Lock::Acquire(Thread *T) {
        disable interrupts;
        if (value == BUSY) {
            add T to Q;
            T->Sleep();
        } else {
            value = BUSY;
        }
        enable interrupts;
    }

    Lock::Release() {
        disable interrupts;
        if (Q is not empty) {
            take a thread T off Q;
            put T on the ready queue;
        } else {
            value = FREE;
        }
        enable interrupts;
    }
Lecture 7, page 23

Wait Queues!
When should Acquire re-enable interrupts as it goes to sleep?
- Before putting the thread on the wait queue? No: the thread is not yet on the queue, so Release could check the queue, find it empty, and not wake the thread up.
- After putting the thread on the wait queue, but before going to sleep? No: Release could move the thread to the ready queue even though it is, in effect, already runnable (it has not actually slept yet). When the thread later calls Sleep, it goes to sleep anyway and misses the wakeup from Release.
=> We still have a problem with multiprocessors.
Lecture 7, page 24

Example!
When the sleeping thread wakes up, it returns from Sleep back into Acquire. Interrupts are still disabled, so it is OK to check the lock value and, if the lock is free, grab it and turn interrupts back on.
Lecture 7, page 25

Summary!
- Communication among threads is typically done through shared variables.
- Critical sections identify pieces of code that cannot be executed in parallel by multiple threads, typically code that accesses and/or modifies shared variables.
- Synchronization primitives are required to ensure that only one thread executes in a critical section at a time.
- Achieving synchronization directly with loads and stores is tricky and error-prone.
- Solution: use high-level primitives such as locks, semaphores, and monitors.
Lecture 7, page 26