Tirgul 4 Scheduling:


Question 1 [Silberschatz 5.7]
Consider the following preemptive priority scheduling algorithm, based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at rate alpha; when it is running, its priority changes at rate beta. All processes are given a priority of 0 when they enter the ready queue. The parameters alpha and beta can be set to give many different scheduling algorithms.
1. What is the algorithm that results from beta > alpha > 0?
2. What is the algorithm that results from alpha < beta < 0?
3. Is there a starvation problem in 1? In 2? Explain.
4. Can you think of an expression that determines priorities and takes into account both running time (preference to short) and waiting time (preference to long)?

Answer to Question 1
Reminder: with dynamically changing priorities, each process has its base priority. At each clock tick we add the appropriate rate to each process' priority, and we always run the process with the highest current priority.

1. beta > alpha > 0
Let's try an example first: beta = 2, alpha = 1, and 3 processes P1, P2, P3 arriving one after the other, each lasting 3 seconds. P1 starts (the running process is marked with *):

Time  1    2    3    4    5    6    7    8    9
P1    *0   *2   *4
P2         0    1    *2   *4   *6
P3              0    1    2    3    *4   *6   *8

What we have here is the First Come First Served (FCFS) algorithm.
Proof: if a process is running, it had the highest priority among all processes, and while it runs its priority increases at a greater rate than that of every waiting process, so it keeps the CPU until it finishes. If two processes are waiting, their priorities increase at the same rate, so the one that entered the ready queue first has the higher value and will get the CPU first.

2. alpha < beta < 0
We'll use the same example as before, with alpha = -2, beta = -1:

Time  1    2    3    4    5    6    7    8     9
P1    *0   -1   -3   -5   -7   -9   -11  *-13  *-14
P2         *0   -1   -3   -5   *-7  *-8
P3              *0   *-1  *-2

We get Last In First Out (LIFO).
Proof: the priority of a running process decreases much less than that of a waiting one, so a running process continues to run: when it was chosen it had the highest priority, and its priority decreases the least. Since a new process gets a priority of 0, it will run next; since it is then running, it will continue to run until it finishes. Then the process in line before it will run, and so on.

3. Is there a starvation problem?
In the first case there is no starvation: every process runs in its turn. Assume process P1 arrives before process P2; then when P2 arrives, P1's priority is larger. While they are both waiting, their priorities increase at the same rate, so P1's priority stays larger than P2's. After all the processes that arrived before P1 finish, P1 will get the CPU and will release it only when it finishes.
In the second case there is a starvation problem. Since every new process gets the CPU, if a new process keeps arriving before the first process finishes, the first process will never get CPU time.

4. An expression that determines priority and takes into account both running time (preference to short) and waiting time (preference to long): use priority = waiting time / running time.
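The rule above can be checked mechanically. The following sketch (not part of the original answer) simulates the dynamic-priority scheduler tick by tick on the example's three processes; breaking priority ties by arrival order is an assumption not stated in the text.

```python
# Illustrative sketch: simulate dynamically changing priorities and see which
# classic policy emerges for each (alpha, beta) setting.
def finish_order(jobs, alpha, beta):
    """jobs: list of (name, arrival_tick, burst_ticks); returns completion order."""
    prio, order = {}, []
    left = {name: burst for name, _, burst in jobs}
    arrival = {name: a for name, a, _ in jobs}
    t = 0
    while len(order) < len(jobs):
        t += 1
        for name, a, _ in jobs:
            if a == t:
                prio[name] = 0               # priority 0 on entering the ready queue
        ready = [n for n in prio if left[n] > 0]
        if not ready:
            continue
        # highest priority wins; ties broken by earlier arrival (assumption)
        run = max(ready, key=lambda n: (prio[n], -arrival[n]))
        for n in ready:                      # waiting: +alpha, running: +beta
            prio[n] += beta if n == run else alpha
        left[run] -= 1
        if left[run] == 0:
            order.append(run)
    return order

jobs = [("P1", 1, 3), ("P2", 2, 3), ("P3", 3, 3)]
print(finish_order(jobs, alpha=1, beta=2))    # beta > alpha > 0: FCFS order
print(finish_order(jobs, alpha=-2, beta=-1))  # alpha < beta < 0: LIFO order
```

With beta > alpha > 0 the processes finish in arrival order (FCFS); with alpha < beta < 0 each new arrival preempts and finishes first (LIFO), matching the tables above.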

Question 2 [Tanenbaum 2/22]
Five batch jobs A, B, C, D and E arrive at a computer center at almost the same time (A came first, E last, but all arrived at the same clock tick). They have estimated running times of 10, 6, 2, 4 and 8 seconds. Their (externally determined) priorities are 3, 5, 2, 1 and 4, respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process switching overhead. All jobs are completely CPU bound.
1. Round robin with a 1-second time quantum (preemptive)
2. Priority scheduling (non-preemptive)
3. First come first served (in order 10, 6, 2, 4, 8) (non-preemptive)
4. Shortest job first (non-preemptive)

Answer to Question 2
Reminder: the turnaround time of a process is how much time passes from when the process arrived (was submitted) until it finished.

Process  Running Time  Priority
A        10            3
B        6             5
C        2             2
D        4             1
E        8             4

1. Round Robin (RR) with a 1-second time quantum:

Time  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Job   A  B  C  D  E  A  B  C  D  E  A  B  D  E  A

Time  16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Job   B  D  E  A  B  E  A  B  E  A  E  A  E  A  A

C finishes at 8, D at 17, B at 23, E at 28 and A at 30.
Mean turnaround = (8 + 17 + 23 + 28 + 30) / 5 = 21.2

2. Priority scheduling: B(6), E(8), A(10), C(2), D(4), finishing at 6, 14, 24, 26, 30.
Mean turnaround = (6 + 14 + 24 + 26 + 30) / 5 = 20

3. First Come First Served (FCFS): A(10), B(6), C(2), D(4), E(8), finishing at 10, 16, 18, 22, 30.
Mean turnaround = (10 + 16 + 18 + 22 + 30) / 5 = 19.2

4. Shortest Job First (SJF): C(2), D(4), B(6), E(8), A(10), finishing at 2, 6, 12, 20, 30.
Mean turnaround = (2 + 6 + 12 + 20 + 30) / 5 = 14
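The four means are easy to cross-check in code. This sketch (not from the original answer) assumes all jobs arrive at t = 0, as the question states:

```python
# Illustrative sketch: compute mean turnaround times for the four policies.
def mean_turnaround(schedule_order, bursts):
    """Non-preemptive policies: run jobs back to back in the given order."""
    t, total = 0, 0
    for job in schedule_order:
        t += bursts[job]      # job finishes at the running total
        total += t
    return total / len(schedule_order)

def round_robin_mean(arrival_order, bursts, quantum=1):
    """Preemptive RR: a preempted job goes to the back of the queue."""
    left = dict(bursts)
    queue = list(arrival_order)
    t, finish = 0, {}
    while queue:
        job = queue.pop(0)
        run = min(quantum, left[job])
        t += run
        left[job] -= run
        if left[job] == 0:
            finish[job] = t
        else:
            queue.append(job)
    return sum(finish.values()) / len(finish)

bursts = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}
print(round_robin_mean("ABCDE", bursts))   # RR, quantum 1: 21.2
print(mean_turnaround("BEACD", bursts))    # priority order: 20.0
print(mean_turnaround("ABCDE", bursts))    # FCFS: 19.2
print(mean_turnaround("CDBEA", bursts))    # SJF: 14.0
```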

Question 3 (CS exam, 2004)
The OS keeps two queues. Each queue implements round robin (RR). The OS always prefers to run a process from Q1 (queue 1) over a process from Q2. When a process is created or returns from I/O, it enters Q1. A process enters Q2 if it just finished running and used up its whole time quantum. A process returning from I/O enters Q1 and has precedence over a process which has not started running yet. The system has the following processes:
Process P1: arrival time = 0, wants 1 sec CPU, 1 sec I/O, 3 sec CPU.
Process P2: arrival time = 2, wants 2 sec CPU, 2 sec I/O, 2 sec CPU.
Process P3: arrival time = 3, wants 1 sec CPU, 3 sec I/O, 3 sec CPU.
Draw the Gantt chart and compute the mean TA and RT (turnaround and response time). The time quantum in Q1 is 1 sec; the time quantum in Q2 is 2 sec. The system has preemption. For computing the RT, assume I/O is always printing to stdout and the user is waiting for this printing, so the time of the first print is the start of the I/O.

Answer to Question 3
The Gantt chart ("-" marks an idle CPU; each column is one second):

Time  0  1  2  3  4  5  6  7  8  9  10 11 12
CPU   P1 -  P1 P2 P3 P1 P1 P2 P3 P3 P2 P3 P2

P1 does I/O during [1,2), P3 during [5,8) and P2 during [8,10); P1 finishes at time 7, P3 at 12 and P2 at 13.

Reminder: TA = process finish time - process arrival time. RT = time of the process' first I/O - process arrival time.
Mean TA = (7 + 11 + 9) / 3 = 27 / 3 = 9
Mean RT = (1 + 6 + 2) / 3 = 9 / 3 = 3

Question 4 (CS exam, 2004)
Reminder: in guaranteed scheduling, we run the process whose CPU time received so far is lowest relative to the equal share it is supposed to have.

Answer 4


Tirgul 5 Scheduling:

Question 3
Four jobs A, B, C, D arrive at the same time. They have estimated running times of 6, 10, 8 and 4 seconds respectively. Jobs B, C have priority 1 upon arrival, and A, D have priority 2. Draw the scheduling diagram and compute the mean turnaround time if the scheduling algorithm uses:
a) Multilevel queue scheduling with two queues: queue #1 uses Shortest Job First with a 2-second time slice and queue #2 uses Round Robin with a 1-second time slice;
b) Same as a), except that a process is moved to the other queue if it has been kept in its present queue for more than 10 seconds.
Note: in this version of multilevel queue scheduling the scheduler alternates between the queues, meaning it first picks a process from Q1, then a process from Q2, then again from Q1 and so on. In class you learned a different version, in which Q1 is first emptied of processes before picking processes from Q2.

Answer to Question 3
a) The schedule (2-second SJF slices from Q1 alternating with 1-second RR slices from Q2) is:

Time  0-2 2-3 3-5 5-6 6-8 8-9 9-11 11-12 12-14 14-15 15-17 17-18 18-20 20-21 21-23 23-24 24-26 26-27 27-28
Job   C   A   C   D   C   A   C    D     B     A     B     D     B     A     B     D     B     A     A

C finishes at 11, D at 24, B at 26 and A at 28.
Average turnaround time = (11 + 24 + 26 + 28) / 4 = 22.25

b) With the 10-second limit, the schedule is:

Time  0-2 2-3 3-5 5-6 6-8 8-9 9-11 11-12 12-14 14-15 15-16 16-17 17-19 19-20 20-21 21-23 23-24 24-26 26-28
Job   C   A   C   D   C   A   C    B     D     B     D     B     A     B     A     B     A     B     B

At time 10, B migrates from Q1 to Q2 while D and A migrate from Q2 to Q1; at time 21, A migrates from Q1 to Q2 and B migrates back from Q2 to Q1.
C finishes at 11, D at 16, A at 24 and B at 28.
Average turnaround time = (11 + 16 + 24 + 28) / 4 = 19.75
D migrates once; B and A migrate twice.

Tirgul 6 Synchronization: Bakery Algorithm

var choosing: shared array[0..n-1] of boolean;
    number:   shared array[0..n-1] of integer;
...
repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] := false;
    for j := 0 to n-1 do
    begin
        while choosing[j] do (* nothing *);
        while number[j] <> 0 and
              (number[j], j) < (number[i], i) do
            (* nothing *);
    end;
    (* critical section *)
    number[i] := 0;
    (* remainder section *)
until false;
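The pseudocode above can be exercised directly. The following sketch (not part of the original handout) runs the bakery algorithm for two Python threads protecting a shared counter; it assumes CPython's GIL provides the sequentially consistent memory the algorithm requires, and `count += 1` is not atomic in Python, so the lock is doing real work:

```python
# Illustrative sketch: the bakery algorithm guarding a shared counter.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often; busy-waiting otherwise burns whole slices

N = 2
choosing = [False] * N
number = [0] * N
count = 0

def lock(i):
    choosing[i] = True
    number[i] = max(number) + 1          # take a ticket
    choosing[i] = False
    for j in range(N):
        while choosing[j]:               # wait while j is picking a number
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                         # smaller (ticket, id) pair goes first

def unlock(i):
    number[i] = 0

def worker(i, iterations):
    global count
    for _ in range(iterations):
        lock(i)
        count += 1                       # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i, 500)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                             # 1000: no update was lost
```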

Question 3
In Peterson's algorithm for achieving mutual exclusion, what would happen if we swap the two lines marked with X?

int turn;                     /* whose turn is it? */
int interested[2];            /* all initially 0 (FALSE) */

void enter_region(int process) {     /* who is entering, 0 or 1? */
    int other = 1 - process;         /* the opposite of process */
    interested[process] = TRUE;  /*X*/  /* signal that you're interested */
    turn = process;              /*X*/  /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                            /* null statement */
}

void leave_region(int process) {     /* who is leaving, 0 or 1? */
    interested[process] = FALSE;     /* departure from critical region */
}

Answer to Question 3
We get a mutual exclusion violation (after the swap, turn = process runs before interested[process] = TRUE):
P1 does turn := 1;
P0 does turn := 0;
P0 does interested[0] := true;
P0 evaluates while (turn == 0 && interested[1]);   // true && false -> false
P0 enters the CS.
P1 does interested[1] := true;
P1 evaluates while (turn == 1 && interested[0]);   // false && true -> false
P1 enters the CS.
Now both processes are in the critical section.
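The bad interleaving is fully deterministic, so it can be replayed without threads. This sketch (not from the handout) executes the steps of the answer one by one and checks that both processes pass the entry test:

```python
# Illustrative sketch: replay the swapped-Peterson interleaving step by step.
turn = 0
interested = [False, False]

def entry_test(process):
    other = 1 - process
    return not (turn == process and interested[other])  # True -> falls through the while

# the failing interleaving, with the two X-lines swapped (turn is set first):
turn = 1                   # P1: turn = process
turn = 0                   # P0: turn = process
interested[0] = True       # P0: interested[process] = TRUE
p0_enters = entry_test(0)  # P0 falls through the while loop and enters
interested[1] = True       # P1: interested[process] = TRUE
p1_enters = entry_test(1)  # P1 also falls through: turn == 0, not 1
print(p0_enters, p1_enters)   # True True: mutual exclusion is violated
```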

Question 4
Assume the while statement in Peterson's solution was changed to:
while (turn != process && interested[other] == TRUE)
Describe a scheduling scenario in which there is a mutual exclusion violation, and a scenario in which there is no mutual exclusion violation.

Answer to Question 4
Mutex violation: process A executes interested[process] = TRUE and turn = process, and falls through the while loop shown above (turn == process, so the first condition fails). While A is in the critical section, the timer fires and process B executes interested[process] = TRUE and turn = process, and also falls through the while loop. Both processes are in the critical section.
No mutex violation: P0 does interested[0] = true, P1 does interested[1] = true, P0 does turn = 0, P1 does turn = 1; P1 passes the while loop and enters the CS, while P0 is stuck in the while loop.
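The violation scenario from the answer can also be replayed deterministically. A sketch (not from the handout), using the broken test:

```python
# Illustrative sketch: with (turn != process && interested[other]), a process
# that has just set turn to its own id always passes the test.
turn = 0
interested = [False, False]

def broken_entry_test(process):
    other = 1 - process
    return not (turn != process and interested[other])  # True -> may enter

interested[0] = True
turn = 0
a_enters = broken_entry_test(0)   # A enters the critical section
# timer interrupt: B runs its entry code while A is still inside
interested[1] = True
turn = 1
b_enters = broken_entry_test(1)   # B enters too: violation
print(a_enters, b_enters)         # True True
```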

Question 5
Given below is the code of the producer-consumer problem. What might happen in each of the following three cases?
a) If we switch lines 6 and 7.
b) If we switch lines 11 and 12.
c) If we switch lines 7 and 8.

#define N 100                    /* buffer size */
semaphore mutex = 1;             /* mutual exclusion semaphore */
semaphore empty = N;             /* count of empty buffer slots */
semaphore full = 0;              /* count of full buffer slots */

void producer() {
1.    int item;
2.    while (TRUE) {
3.        produce_item(&item);
4.        down(&empty);          /* one less empty buffer */
5.        down(&mutex);          /* enter critical section */
6.        enter_item(item);      /* modify shared buffer pool */
7.        up(&mutex);            /* leave critical section */
8.        up(&full);             /* one more full buffer */
      }
}

void consumer() {
9.    int item;
10.   while (TRUE) {
11.       down(&full);           /* one less full buffer */
12.       down(&mutex);          /* enter critical section */
13.       remove_item(&item);    /* modify shared buffer pool */
14.       up(&mutex);            /* leave critical section */
15.       up(&empty);            /* one more empty buffer */
16.       consume_item(item);
      }
}

Answer to Question 5
a) Access to the shared buffer is no longer synchronized (no mutex): enter_item runs after up(&mutex), i.e. outside the critical section.
b) Deadlock: if the buffer is empty, the consumer takes mutex and then blocks on down(&full), while the producer blocks on down(&mutex); neither can proceed.
c) OK: up(&full) and up(&mutex) never block, so their order does not matter for correctness.
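The correct ordering from the code above maps directly onto Python's counting semaphores. A sketch (not from the handout), with a list standing in for the buffer pool:

```python
# Illustrative sketch: producer-consumer with the original (correct) ordering.
import threading

N = 10
mutex = threading.Semaphore(1)   # mutual exclusion for the buffer
empty = threading.Semaphore(N)   # count of empty slots
full = threading.Semaphore(0)    # count of full slots
buffer, consumed = [], []

def producer(items):
    for item in items:
        empty.acquire()          # down(&empty)
        with mutex:              # down(&mutex) ... up(&mutex)
            buffer.append(item)
        full.release()           # up(&full)

def consumer(n):
    for _ in range(n):
        full.acquire()           # down(&full)
        with mutex:
            item = buffer.pop(0)
        empty.release()          # up(&empty)
        consumed.append(item)

p = threading.Thread(target=producer, args=(list(range(100)),))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(100)))   # True: all items arrive, in FIFO order
```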

Question 6 (CS exam 2004)
Prove Dekker's solution for the critical section problem:

bool flag[2] = {false, false};
int turn = 0;

void p0() {
    while (true) {
        flag[0] = true;
        while (flag[1]) {
            if (turn == 1) {
                flag[0] = false;
                while (turn == 1) { /* nothing */ }
                flag[0] = true;
            }
        }
        /* critical section */
        turn = 1;
        flag[0] = false;
        /* non-critical section */
    }
}

void p1() {
    while (true) {
        flag[1] = true;
        while (flag[0]) {
            if (turn == 0) {
                flag[1] = false;
                while (turn == 0) { /* nothing */ }
                flag[1] = true;
            }
        }
        /* critical section */
        turn = 0;
        flag[1] = false;
        /* non-critical section */
    }
}

Answer to Question 6
Conditions needed for a solution of the critical section problem:
1. Mutual exclusion - no two processes are in their critical sections simultaneously (prevents race conditions).
2. Progress - a process outside its critical section may not block another process.
3. No starvation - there is a limit on the number of times other processes can enter the critical section while a process waits.

Progress - a process outside its CS does not stop a process that wants to enter: a process that does not want to enter turns off its flag. Hence a process that does want to enter finds the opposite flag false and enters the CS directly, without even checking the turn variable.

Mutual exclusion - the two processes are never in the CS simultaneously: assume P0 is inside; the last command that changed flag[0] set it to true. Assume P1 is also inside; the last command that changed flag[1] set it to true. But then, the last time each process tested its outer while condition before entering the CS, both flags were already true, so by the assumption both processes would have been "stuck" in the while loop and neither would have entered - a contradiction!

No starvation - a process that wants to enter will not wait forever: WLOG assume P0 wants to enter and is inside its while loop (otherwise it is already in). P1 can be in one of three places:
1) In the CS.
2) Inside its while-loop code with turn = 0.
3) Inside its while-loop code with turn = 1.
Case 1: if P1 is in the CS, it will eventually leave and set turn = 0. If it then stays outside, we are back to the progress case; so assume P1 is quick and re-enters its while loop, which puts us in case 2 with turn = 0.
Case 2: since turn = 0 and nobody changes it, P1 will eventually get stuck in its inner loop, after setting flag[1] = false. Now, when P0 gets the CPU, since turn = 0 it exits its inner loop and reaches its outer while test again. But flag[1] = false, so it enters the CS.
Case 3: if turn = 1 and P1 is executing its outer while loop, it keeps looping only as long as flag[0] = true. Eventually P0 gets the CPU; since turn = 1, P0 sets flag[0] = false and waits for turn to change, so P1's outer while condition fails and P1 enters the CS, and we are back to case 1.
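The proof can be accompanied by a running instance. This sketch (not in the handout) implements Dekker's algorithm in Python guarding a shared counter; it assumes CPython's GIL supplies the sequentially consistent memory the proof relies on:

```python
# Illustrative sketch: Dekker's algorithm protecting `count += 1`.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often while busy-waiting

flag = [False, False]
turn = 0
count = 0

def enter(i):
    other = 1 - i
    flag[i] = True
    while flag[other]:
        if turn == other:
            flag[i] = False           # back off: it is the other's turn
            while turn == other:
                pass                  # busy-wait for our turn
            flag[i] = True

def leave(i):
    global turn
    turn = 1 - i                      # hand the turn to the other process
    flag[i] = False

def worker(i, iterations):
    global count
    for _ in range(iterations):
        enter(i)
        count += 1                    # critical section
        leave(i)

threads = [threading.Thread(target=worker, args=(i, 500)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                          # 1000: mutual exclusion held
```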

Tirgul 7 Synchronization Continued: Critical Regions
(Side note: in semaphores, down is sometimes called wait (P), and up is called signal (V).)
"Concurrency is still in the spaghetti stage: everything is possible, but it is unreasonably difficult to get it right."

1. Introduction
Given that we have semaphores, why do we need another programming-language feature for dealing with concurrency? Semaphores are low-level. Omitting a wait breaches safety - we can end up with more than one process inside a critical section. Omitting a signal can lead to deadlock. Semaphore code is distributed throughout a program, which causes maintenance problems. It is better to use a high-level construct/abstraction like critical regions (CR) or monitors.
Note: do not confuse critical regions with critical sections (even though some literature refers to a CR as a CS).

2. Critical Regions
A critical region is a section of code that is always executed under mutual exclusion. Critical regions shift the responsibility for enforcing mutual exclusion from the programmer (where it resides when semaphores are used) to the compiler. They consist of two parts:
1. Variables that must be accessed under mutual exclusion.
2. A new language statement that identifies a critical region in which the variables are accessed.

Example (this is only pseudo Pascal-FC; Pascal-FC doesn't support critical regions):

var v : shared T;
...
region v do
begin
    ...
end;

All critical regions that are 'tagged' with the same variable have compiler-enforced mutual exclusion, so that only one of them can be executed at a time:

Process A:
region V1 do
begin
    { Do some stuff }
end;
region V2 do
begin
    { Do more stuff }
end;

Process B:
region V1 do
begin
    { Do other stuff }
end;

Here process A can be executing inside its V2 region while process B is executing inside its V1 region, but if they both want to execute inside their respective V1 regions, only one will be permitted to proceed. Each shared variable (V1 and V2 above) has a queue associated with it. Once one process is executing code inside a region tagged with a shared variable, any other processes that attempt to enter a region tagged with the same variable are blocked and put in the queue.

3. Conditional Critical Regions
Critical regions aren't equivalent to semaphores. As described so far, they lack condition synchronization. We can use semaphores to put a process to sleep until some condition is met (e.g. see the bounded-buffer producer-consumer problem), but we can't do this with critical regions. Conditional critical regions provide condition synchronization for critical regions:

region v when B do
begin
    { Do some stuff }
end;

where B is a boolean expression (usually B will refer to v). Conditional critical regions work as follows:
1. A process wanting to enter a region for v must obtain the mutex lock. If it cannot, then it is queued.
2. Once the lock is obtained, the boolean expression B is tested. If B evaluates to true, the process proceeds; otherwise it releases the lock and is queued. When it next gets the lock it must retest B.
Note: because these processes must retest their condition, they are doing something akin to busy-waiting, although the frequency with which they retest the condition is much lower. Note also that the condition is only retested when there is reason to believe that it may have changed (another process has finished accessing the shared variable, potentially altering the condition). Though this is more controlled than busy-waiting, it may still be sufficiently close to it to be unattractive.

3.2. Limitations
Conditional critical regions are still distributed among the program code. There is no control over the manipulation of the protected variables - no information hiding or encapsulation. Once a process is executing inside a critical region it can do whatever it likes to the variables it has exclusive access to. Conditional critical regions are more difficult to implement efficiently than semaphores.
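The two-step protocol described above (take the lock, test B, release and requeue if false, retest on wakeup) is essentially what a monitor condition loop does. This sketch (not part of the handout) emulates `region v when B` with Python's `threading.Condition`; the `SharedCell` class and its names are illustrative inventions:

```python
# Illustrative sketch: emulating a conditional critical region with a Condition.
import threading

class SharedCell:
    def __init__(self):
        self._cond = threading.Condition()   # mutex lock + wait queue
        self.value = 0

    def region_when(self, predicate, body):
        """Emulates: region v when B do body (B is retested on every wakeup)."""
        with self._cond:                     # 1. obtain the mutex lock
            while not predicate(self):       # 2. test B; if false, wait and retest
                self._cond.wait()
            body(self)                       # the critical region body
            self._cond.notify_all()          # the condition may have changed

cell = SharedCell()

def producer():
    for _ in range(5):
        cell.region_when(lambda c: c.value < 1,
                         lambda c: setattr(c, "value", c.value + 1))

t = threading.Thread(target=producer)
t.start()
consumed = 0
for _ in range(5):                           # consume 5 units, waiting for value > 0
    cell.region_when(lambda c: c.value > 0,
                     lambda c: setattr(c, "value", c.value - 1))
    consumed += 1
t.join()
print(consumed)                              # 5
```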

Question 1
Implement a bounded buffer using critical regions.

Answer 1

var buffer: shared struct {
        pool: array [0..n-1] of item;
        count, in, out: integer;
    };

// producer
region buffer when count < n do
begin
    nextp = produceitem();
    insert(nextp);
    count++;
end;

// consumer
region buffer when count > 0 do
begin
    nextc = removefirst();
    count--;
end;

See more: http://www-ist.massey.ac.nz/csnotes/355/lectures/monitors.html

Java Synchronization:
A method or a code block might be synchronized.

class C {
    synchronized void f() {
        // only one thread in f
        compute_ack(5, 5);
    }
}

or

class C {
    void f() {
        do_something();
        synchronized (this) {
            // only one thread in this code block
            compute_ack(5, 5);
        }
    }
}

When a thread calls a synchronized method of an object, it tries to grab the object's monitor lock. If another thread is holding the lock, it waits until that thread releases it. A thread releases the monitor lock when it leaves the synchronized method.
The Object class (and therefore every other Java class) has the methods wait, notify and notifyAll. A call to wait() releases the monitor lock and puts the calling thread to sleep (i.e., it stops running). A subsequent call to notify on the same object wakes up a sleeping thread and lets it start running again; if more than one thread is sleeping, one is chosen arbitrarily, and if no threads are sleeping on this object, notify() does nothing. notifyAll is similar, but wakes up all sleeping threads. The awakened thread has to wait for the monitor lock before it starts; it competes on an equal basis with other threads trying to get into the monitor. Remember that a "true" monitor keeps a FIFO ordering of waiting processes/threads, whereas Java chooses an arbitrary thread to wake. Note that wait might throw InterruptedException.

Question 3
Implement a bounded buffer with Java monitors.

Answer 3

class Buffer {
    private Object[] _buffer;
    private int _max, _in, _out, _count;

    public Buffer(int max) {                 // constructor
        _max = max;
        _buffer = new Object[_max];
        _in = 0;
        _out = 0;
        _count = 0;
    }

    public synchronized void put(Object o) { // put an object in the queue
        while (_count == _max) {             // buffer full
            try {
                wait();
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
        _buffer[_in] = o;
        _count++;
        _in = (_in + 1) % _max;
        notify();                            // notify if a consumer is waiting
    }

    public synchronized Object get() {
        while (_count == 0) {                // buffer empty
            try {                            // Java forces exception catching
                wait();
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
        Object o = _buffer[_out];
        _count--;
        _out = (_out + 1) % _max;
        notify();                            // notify if a producer is waiting
        return o;
    }
}

However, this solution does not guarantee a FIFO wakeup order, and starvation can occur.

Question 4
Describe code which forces the waking order in Java to be FIFO (as in standard monitors).

Answer 4

/* A critical section that preserves FIFO order of waiting threads.
 * Since notify wakes up an arbitrary thread, we use one lock object per
 * waiting thread; this way we know which thread will be woken.
 * Author: MAC MAC
 */
import java.util.Vector;

/**
 * class WaitObject
 */
class WaitObject {
    boolean released = false;                // this flag avoids a race!!!

    synchronized void doWait() {
        try {
            if (!released)
                wait();
        } catch (InterruptedException ie) {
            // ignore it
        }
    }

    synchronized void doNotify() {
        if (!released) {                     // not really needed
            released = true;
            notify();
        }
    }
}

/**
 * class CriticalSection
 */
class CriticalSection {                      // critical section that preserves FIFO
    private Vector _waiting;                 // wait list
    private boolean _busy;                   // someone in the critical section

    public CriticalSection() {               // constructor
        _waiting = new Vector();             // create the wait list
        _busy = false;                       // no one is in the CS now
    }

    public void enter() {
        WaitObject my_lock = null;
        synchronized (this) {
            if (!_busy) {
                _busy = true;
                return;
            } else {
                my_lock = new WaitObject();  // create my unique lock
                _waiting.add(my_lock);       // add it to the waiting list
            }
        }
        my_lock.doWait();                    // wait on the lock, outside the monitor;
                                             // the released flag prevents a lost wakeup
    }

    public synchronized void leave() {
        if (_waiting.size() > 0) {           // someone is waiting: hand the CS over
            WaitObject o = (WaitObject) _waiting.elementAt(0);
            _waiting.removeElementAt(0);
            o.doNotify();
        } else {
            _busy = false;
        }
    }
}

Tirgul 8 Deadlocks
Deadlock - the ultimate form of starvation - occurs when two or more processes/threads are waiting on a condition that cannot be satisfied. Deadlock most often occurs when two (or more) processes/threads are each waiting for the other(s) to do something.
For deadlock to occur, four conditions must hold simultaneously in the system:
1. Mutual exclusion - each resource is used by only one process at a time.
2. Hold and wait - a process can request a resource while holding another resource.
3. No preemption - only the process holding a resource can release it.
4. Circular wait - two or more processes each wait for a resource held by another (waiting) process in the chain.

Question 1
Assume resources are ordered as R1, R2, ..., Rn. Prove formally (by contradiction) that if processes always request resources in order (i.e. if a process requests Rk after Rj then k > j), then deadlock will not occur (resources with one unit each, of course!).

Answer 1
Suppose that our system is prone to deadlocks. We number our processes P1, ..., Pn, and look at the fourth condition. Denote by Pi -(Rk)-> Pj the situation in which Pi requests a resource Rk held by Pj. For the circular wait condition to be satisfied in a state of deadlock, a subset Pi1, ..., Pim of {P1, ..., Pn} must exist such that:

Pi1 -(Rj1)-> Pi2 -(Rj2)-> Pi3 -(Rj3)-> ... -(Rj(m-1))-> Pim -(Rjm)-> Pi1    (*)

Each process Pis, s > 1, holds resource Rj(s-1) and requests resource Rjs, which means that j(s-1) < js for every such s (since these are resource numbers). We obtain the inequality j1 < j2 < ... < jm. But from (*), Pi1 holds Rjm and requests Rj1, so jm < j1 has to hold. We thus obtain both j1 < jm and j1 > jm as necessary conditions for the fourth deadlock condition, so it cannot be satisfied and the system is deadlock-free.
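The circular-wait condition amounts to a cycle in the waits-for graph, which is easy to test for. This sketch (not in the handout) checks a classic cycle and an ordered-request scenario; with one unit per resource, each process waits for at most one other, so every node has at most one outgoing edge:

```python
# Illustrative sketch: cycle detection in a waits-for graph.
def has_cycle(wait_for):
    """wait_for: dict mapping a process to the process it waits for (or None)."""
    for start in wait_for:
        seen, node = set(), start
        while node is not None and node not in seen:
            seen.add(node)
            node = wait_for.get(node)
        if node is not None:          # the walk revisited a node: cycle found
            return True
    return False

# Circular wait (condition 4): P1 waits for P2 and P2 waits for P1.
print(has_cycle({"P1": "P2", "P2": "P1"}))   # True

# With ordered requests, a process only ever waits for the holder of a
# higher-numbered resource, so waits-for chains strictly increase and must
# end, e.g. P1 (holds R1, wants R2) -> P2 (holds R2, wants R3, which is free).
print(has_cycle({"P1": "P2", "P2": None}))   # False
```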

Question 2 (7.9 from Silberschatz)
Consider a system consisting of m resources of the same type, shared by n processes. Resources can be requested and released by processes only one at a time. Show that the system is deadlock-free if the following two conditions hold:
1. The need of each process is between 1 and m resources.
2. The sum of all maximum needs is less than m + n.

Answer 2
Assume that the four conditions do hold in the system, and thus there is a group of processes involved in a circular wait. Let these processes be P1, ..., Pk, k <= n, let their current demands be D1, ..., Dk, and let the number of resources each of them holds be H1, ..., Hk. The wait condition here should look like P1 -> P2 -> ... -> Pk -> P1, but in fact it is simpler: let M1, ..., Mn be the total (maximum) demands of processes P1, ..., Pn. A circular wait can occur only if all resources are in use and every waiting process has not yet acquired all its resources:

H1 + ... + Hk = m, and Di >= 1 for every i.

Since Mi = Hi + Di, the sum of the maximum demands of the processes involved in the circular wait is:

M1 + ... + Mk >= m + k.

Note that each of the remaining processes P(k+1), ..., Pn has a maximum demand of at least 1, i.e. Mi >= 1 for k+1 <= i <= n, and thus:

M(k+1) + ... + Mn >= n - k.

Then the total sum of maximum demands is:

M1 + ... + Mn = (M1 + ... + Mk) + (M(k+1) + ... + Mn) >= m + k + (n - k) = m + n.

But it is given that the sum of all maximum needs is less than m + n - a contradiction.

Banker's Algorithm (multiple resources)
1. Look for a row R whose unmet resource needs are all smaller than or equal to A (the available vector). If no such row exists, the system will eventually deadlock.
2. Assume the process of the chosen row finishes (which is possible). Mark that process as terminated and add all its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated, which means the state is safe, or no such row exists, which means it is unsafe.

Question 3 (7.6 from Silberschatz)
If deadlock is controlled by the banker's algorithm, which of the following changes can be made safely, and under what circumstances?
1. Increase Available (add new resources)
2. Decrease Available (remove resources)
3. Increase Max for one process
4. Increase the number of processes

Answer 3
1. Increasing the number of available resources cannot create a deadlock, since it can only decrease the number of processes that have to wait for resources.
2. Decreasing the number of resources can lead to a deadlock in the following case: when the system decides on the next state using the old resource data, it may decide that this state is safe. If the change in the number of resources occurs right after this evaluation, the system may proceed with the allocation while the next state has actually become unsafe.
3. An error condition could be created for two possible reasons:
   - from the banker's algorithm's point of view, the process exceeds its maximum claim;
   - a state that was judged safe ceases to be safe if the change occurs after the safety evaluation.
4. If the number of processes is increased, the state remains safe: we can first run the old processes in the order the system found when verifying that the state is safe; they will finish their jobs and release all their resources. Now all the system's resources are free, so we can choose a new process and give it all its demands; it will finish, and again all the system's resources are free. Then we choose the second new process and give it all its demands, and so on, until all processes finish.

Question 4
Consider the following snapshot of a system with five processes (p1, ..., p5) and four resources (r1, ..., r4). There are no current outstanding queued unsatisfied requests.

Currently available resources:
r1 r2 r3 r4
 2  1  0  0

          current allocation   max demand    still needs
Process   r1 r2 r3 r4          r1 r2 r3 r4   r1 r2 r3 r4
p1         0  0  1  2           0  0  1  2
p2         2  0  0  0           2  7  5  0
p3         0  0  3  4           6  6  5  6
p4         2  3  5  4           4  3  5  6
p5         0  3  3  2           0  6  5  2

a. Compute what each process still might request and fill in the "still needs" columns.
b. Is this system currently deadlocked, or will any process become deadlocked? Why or why not? If not, give an execution order.
c. If a request from p3 arrives for (0, 1, 0, 0), can that request be safely granted immediately? In what state (deadlocked, safe, unsafe) would immediately granting the whole request leave the system? Which processes, if any, are or may become deadlocked if this whole request is granted immediately?

Answer 4

Currently available resources:
r1 r2 r3 r4
 2  1  0  0

          current allocation   max demand    still needs
Process   r1 r2 r3 r4          r1 r2 r3 r4   r1 r2 r3 r4
p1         0  0  1  2           0  0  1  2    0  0  0  0
p2         2  0  0  0           2  7  5  0    0  7  5  0
p3         0  0  3  4           6  6  5  6    6  6  2  2
p4         2  3  5  4           4  3  5  6    2  0  0  2
p5         0  3  3  2           0  6  5  2    0  3  2  0

a) See table.
b) Not deadlocked, and no process will become deadlocked. Using the Banker's algorithm, a possible finishing order is: p1, p4, p5, p2, p3.
c) Change available to (2, 0, 0, 0) and p3's "still needs" row to (6, 5, 2, 2). Now p1, p4, and p5 can finish, but with available then equal to (4, 6, 9, 8), neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request; the correct answer is NO. Processes p2 and p3 may deadlock.
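Parts b and c can be double-checked with a short Python sketch. The dictionaries mirror the table above; a None result means no safe finishing order exists.

```python
procs = ["p1", "p2", "p3", "p4", "p5"]
alloc = {"p1": [0, 0, 1, 2], "p2": [2, 0, 0, 0], "p3": [0, 0, 3, 4],
         "p4": [2, 3, 5, 4], "p5": [0, 3, 3, 2]}
need  = {"p1": [0, 0, 0, 0], "p2": [0, 7, 5, 0], "p3": [6, 6, 2, 2],
         "p4": [2, 0, 0, 2], "p5": [0, 3, 2, 0]}

def finish_order(available, need, alloc):
    """Banker's safety check: return a finishing order, or None if unsafe."""
    a, order, left = list(available), [], list(procs)
    while left:
        for p in left:
            if all(n <= av for n, av in zip(need[p], a)):
                break
        else:
            return None            # nobody left can finish: unsafe
        a = [av + x for av, x in zip(a, alloc[p])]   # p frees its allocation
        order.append(p)
        left.remove(p)
    return order
```

With the snapshot as given, the order comes out p1, p4, p5, p2, p3; after granting p3's request (0,1,0,0) the check fails, confirming the state would be unsafe.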

Question 5 (from exam 2003 moed b, q.1)

#define N        5
#define LEFT     (i-1) % N
#define RIGHT    (i+1) % N
#define THINKING 0
#define HUNGRY   1
#define EATING   2

void philosopher(int i)
{
    while(true) {
        think();
        pick_sticks(i);
        eat();
        put_sticks(i);
    }
}

monitor diningphilosophers
    condition self[N];
    integer state[N];

Correction: each philosopher must take exactly 3 servings.

procedure pick_sticks(i);
begin
    state[i] := HUNGRY;
    test(i);
    if state[i] != EATING then wait(self[i]);
end

procedure put_sticks(i);
begin
    state[i] := THINKING;
    test(LEFT);
    test(RIGHT);
end

non-entry procedure test(i);
begin
    if (state[LEFT] != EATING and state[RIGHT] != EATING
            and state[i] = HUNGRY) then
    begin
        state[i] := EATING;
        signal(self[i]);
    end;
end;

for i := 0 to 4 do
    state[i] := THINKING;
end monitor

Answer 5
a. We add 2 condition variables, full and empty, and an integer inplate = 0. We add 2 procedures to the monitor:

procedure fillplate
begin
    if (inplate < 5) {
        inplate++
        empty.signal()
    } else
        full.wait()
end

procedure takefromplate
begin
    if (inplate == 0) {
        // full.signal()
        empty.wait()
    }
    inplate--
    full.signal()
end

We update the philosopher's function:

void philosopher(int i)
{
    while(true) {
        think();
        pick_sticks(i);
        for (j = 1 to 3) {
            takefromplate()
            eat()
        }
        put_sticks(i);
    }
}

And add the waiter's function:

void waiter()
{
    while(true) {
        fillplate()
    }
}

b. We add another condition variable, still_eating, and an integer finished_eating = 0. Since we want the waiter to fill the plate only when a philosopher finishes eating, we have to assume that the plate is full at the beginning, i.e. inplate = 5 (we no longer need the full condition variable). We update the above functions as follows:

procedure fillplate
begin
    if (finished_eating == 0)
        still_eating.wait()
    inplate += 3
    finished_eating--
    empty.signal()
end

procedure takefromplate
begin
    if (inplate == 0)
        empty.wait()
    inplate--
    // the only change is that we no longer use full
end

And we add to the put_sticks function:

    finished_eating++
    still_eating.signal()

as the last commands in this function. (Alternatively, we can add another monitor procedure, finished_eating(), which does this, and the philosopher will call this

function after the eat() function.)
Deadlock can occur if none of the philosophers can finish its loop of takefromplate.
c. With 5 philosophers this cannot happen: as shown in class, at any moment at most 2 philosophers eat. Each asks for 3 servings and there are 5, so at least one of the eaters will get 3 and will wake the waiter.
d. With 6 philosophers, 3 philosophers may eat at once; each can be left waiting (after taking, say, 2-2-1 servings) and none wakes up the waiter.

Question 6
Here is the solution for the Tunnel problem with semaphores:

int count[2];
semaphore mutex = 1, busy = 1;
semaphore waiting[2] = {1, 1};

void arrive(int direction) {
    down(&waiting[direction]);
    down(&mutex);
    count[direction] += 1;
    if (count[direction] == 1) {
        up(&mutex);               // line 1
        down(&busy);              // line 2
    } else
        up(&mutex);
    up(&waiting[direction]);
}

void leave(int direction) {
    down(&mutex);
    count[direction] -= 1;
    if (count[direction] == 0)
        up(&busy);
    up(&mutex);
}

Assume that the system runs several processes. Each one represents a car which acts as follows:

arrive(direction)
travel
leave(direction)

(the value of direction can be 0 or 1)

a. In the given solution, can two cars travel at the same time in the same direction? In opposite directions? If they can travel at the same time in the same direction, can they bypass one another? Can starvation occur?
b. (From last year's exam, moed a, q.1,b)

Answer 6
a. The solution guarantees that several cars can move, at the same time, in one direction only. The cars can bypass one another, since there is no limitation on the speed of the cars; the order of arrival at the tunnel is not preserved. Starvation can occur: assume there is traffic in direction 0; all the cars that try to enter the tunnel from the other direction will be blocked on the semaphore busy. The only way to release them is for there to be no cars in the tunnel with direction 0. Therefore, if there is an unceasing stream of cars in direction 0, count[0] will always be greater than zero and the cars in direction 1 will wait forever.
b. For both sides, the function arrive will be:

void arrive(int direction) {
    down(&waiting[direction]);
    down(&mutex3);
    down(&mutex);
    count[direction]++;
    if (count[direction] == 1) {
        up(&mutex);
        down(&busy);
    } else
        up(&mutex);
    up(&mutex3);
    up(&waiting[direction]);
}

The leave function remains the same.
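The behavior described in part a can be observed by transcribing the solution into Python, with down/up mapped to the semaphore acquire/release operations. This is a sketch for experimentation, not part of the original exam answer.

```python
import threading

# Tunnel solution transcribed to Python semaphores (down = acquire, up = release).
count = [0, 0]
mutex = threading.Semaphore(1)
busy = threading.Semaphore(1)
waiting = [threading.Semaphore(1), threading.Semaphore(1)]

def arrive(direction):
    waiting[direction].acquire()
    mutex.acquire()
    count[direction] += 1
    if count[direction] == 1:
        mutex.release()          # line 1
        busy.acquire()           # line 2: first car in locks the tunnel
    else:
        mutex.release()
    waiting[direction].release()

def leave(direction):
    mutex.acquire()
    count[direction] -= 1
    if count[direction] == 0:
        busy.release()           # last car out frees the tunnel
    mutex.release()
```

Two cars in the same direction pass arrive() without blocking, while a car from the opposite direction blocks on busy until the tunnel empties.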

Tirgul 9 Scheduling

Question 2 (From 2000 quiz, q.2)
Note: FCFS is, by definition, non-preemptive, so scheduling decisions are made only when a process terminates or leaves for I/O. In addition, note that the I/O is performed on the same disk, so two processes cannot perform I/O in parallel; this happens at the ninth second.


Synchronization

Question 3 (From 2003 exam moed a, q.1)


Answer 3
a. A correct solution is one in which there is only one W in the system and no R, or there are several R's and no W.
b. The solution is correct. Proof: there is only one W, because if a W is inside then all the other W's are blocked on the semaphore db. If a W is inside, it has necessarily taken down rdb, so every R that arrives after it will get stuck on rdb; in addition, every R that arrived before it is necessarily in the CS or past it, so the W enters the CS only after acquiring the key db, i.e. only after all the readers that arrived before it have finished, while all the readers that arrived after it cannot enter because they are stuck on rdb. If an R is inside, then no W is inside, because the R has taken down db; several R's can enter, since they bypass db and go in.
Yes, starvation of R is possible if there is an unceasing stream of W's. Scenario: the first W grabs rdb; at that moment all the R's stop on rdb; now all the W's bypass rdb and go straight into the queue of db, so the R's starve forever.
c. 1. Both possibilities exist: when an R leaves and it is the last one inside and a W is waiting on db, the claim holds. When an R leaves and it is not the last, it does not hold, because the R will not do up on db.
   2. Both possibilities exist: when a W leaves and it is not the last, a W will enter, since all the R's necessarily wait on rdb and wc >= 2, so rdb will not be released while db will be released; hence a W enters and not an R. When a W leaves and it is the last, both rdb and db are released, so an R can enter.
   3. Correct. When a W is inside, all the R's are stuck on rdb; if there is another W then wc >= 2, so rdb will not be released while db will be released; hence a W enters and not an R.
d. The solution remains correct. In fact, it gives priority to a new W over R's that are already in the system. However, there is also a scenario in which the system gets stuck: W1 enters and does not touch rdb; W2 enters and takes down rdb; W1 leaves, wc = 1, without touching rdb; W3 enters, wc = 2, and stops on rdb; W2 does not touch rdb and gets stuck on mutex1; W3 is stuck, so every R after it gets stuck.
e. The solution is now wrong, because an R can enter even if a W is inside, since the db semaphore was not taken down.

Tirgul 10 Memory Management

Question 1
Consider a paged memory system with a two-level page table. If a memory reference takes 20 nanoseconds (ns), how long does a paged memory reference take? Assume that the second-level page table is always in memory, and:
a) There is no TLB, and the needed page is in main memory.
b) There is a TLB whose access takes 0.05 ns, the needed page is in main memory, and
   i) the TLB does not contain information about this page.
   ii) the TLB contains information about this page.

Answer 1
a) We need 2 accesses to memory: one to the second-level page table, to get the physical address of the needed page, and one to the page itself after getting its physical address. Total: 2 x 20 ns = 40 ns.
b) i) First we check the TLB. Since the TLB does not contain information about this page, we need the same 2 accesses as in a). Total: 0.05 + 2 x 20 = 40.05 ns.
   ii) First we check the TLB. Since the TLB contains information about this page, we can turn directly to the needed page. Total: 0.05 + 20 = 20.05 ns.
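The three cases reduce to a tiny formula, sketched below with the question's timings (the function name and parameters are illustrative):

```python
MEM = 20.0    # one memory reference, in ns
TLB = 0.05    # one TLB lookup, in ns

def access_time(tlb_present, tlb_hit):
    """Effective reference time, assuming (as in the question) that only
    one page-table access is charged because the second level is in memory."""
    if not tlb_present:
        return 2 * MEM                      # page table + data
    return TLB + (MEM if tlb_hit else 2 * MEM)
```

This reproduces the 40 ns, 40.05 ns and 20.05 ns results above.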

Question 2
Consider a paged virtual address space composed of 32 pages of 2 KB each, mapped into a 1 MB physical memory space.
a) What is the format of the logical address, i.e., which bits are the offset bits and which are the page-number bits? Explain.
b) What are the length and width of the page table? Explain.

Answer 2
a) VA: each page's size is 2 KB = 2^11 bytes, so we need 11 bits to represent the offset and be able to access each byte in the page. We have 32 pages, so we need 5 bits to represent all the page numbers and be able to access each page. The format of the logical address is therefore:

5 bits - page number | 11 bits - offset

b) Page table length = 32 rows, since the table can have at most 32 pages.
Page table width: PA: 1 MB = 2^20 bytes, so we need 20 bits to represent all the addresses in physical memory. Since 11 of them are needed for the offset, as we saw, the remaining 9 bits represent the frame number. So we need 9 bits for the PA, plus possible protection bits: 1 modified bit, 1 invalid bit.
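The bit counts follow directly from the sizes, as this small sketch shows (variable names are illustrative):

```python
import math

PAGE_SIZE = 2 * 1024           # 2 KB pages
NUM_PAGES = 32                 # 32-page virtual address space
PHYS_MEM = 1 * 1024 * 1024     # 1 MB physical memory

offset_bits = int(math.log2(PAGE_SIZE))               # bits to address a byte in a page
page_bits = int(math.log2(NUM_PAGES))                 # bits to name a page
frame_bits = int(math.log2(PHYS_MEM)) - offset_bits   # bits to name a physical frame

# Splitting a logical address into (page number, offset):
page, offset = divmod(200, PAGE_SIZE)
```

This yields 11 offset bits, 5 page-number bits, and 9 frame-number bits.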

Question 3
A computer has a 32-bit address space and a page size of 2 KB.
A. What is the maximal size of the page table, assuming each entry is 4 bytes? What is the maximal size of a program? Does it depend on the page size?
B. Assume that the page table has two levels and that 8 bits are used for the first level. What is the maximal size of a second-level table? Can you now run programs that demand more memory than in A?

Answer 3
A. We have a total of 2^32 addresses, so the virtual memory's size is 2^32 bytes. The size of each page is 2^11 bytes, so we have a total of 2^32 / 2^11 = 2^21 pages. We need an entry for each page, and the size of each entry is 4 bytes, so we get 4 * 2^21 bytes = 8 MB.
The maximal program size is 4 GB (the virtual memory's size), independent of the page size.
B. Since we use 8 bits for the first-level table, each second-level table has 2^13 entries, so the maximal size of a second-level table is 4 * 2^13 = 2^15 bytes = 32 KB.
The size of the virtual memory stays the same, so we cannot run bigger programs.
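The arithmetic above can be checked in a few lines (constant names are illustrative):

```python
ADDR_BITS = 32          # 32-bit address space
OFFSET_BITS = 11        # 2 KB pages
ENTRY_BYTES = 4         # page-table entry size
L1_BITS = 8             # bits used for the first-level index

pages = 2 ** (ADDR_BITS - OFFSET_BITS)            # 2^21 pages
flat_table_bytes = pages * ENTRY_BYTES            # one-level table size

second_level_entries = 2 ** (ADDR_BITS - OFFSET_BITS - L1_BITS)
second_level_bytes = second_level_entries * ENTRY_BYTES
```

The flat table comes out to 8 MB and each second-level table to 32 KB, matching the answer.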

Question 4
Program A runs the following code:

int i, j, a[100][100];
for (i = 0; i < 100; i++) {
    for (j = 0; j < 100; j++) {
        a[i][j] = 0;
    }
}

Program B runs the following code:

int i, j, a[100][100];
for (j = 0; j < 100; j++) {
    for (i = 0; i < 100; i++) {
        a[i][j] = 0;
    }
}

Assume that the array a is stored in memory in the order a[0][0], a[0][1], ... The virtual memory has a page size of 200 words. The program code is in addresses 0-199 in the virtual memory, and a[0][0] is in virtual address 200. We run both programs on a machine with a physical memory of 3 pages, where the code of the program is in the first page and the other two are empty. If the page replacement algorithm is LRU, how many page faults will there be in each of the programs? Explain.

Answer 4
Array a is stored as a[0][0], a[0][1], ... in virtual pages 1..50.
The reference string of program A will be: 0,1,0,2,0,3,...,0,50. With LRU, page 0 will be in memory all the time, and we get a total of 50 page faults.
The reference string of program B will be: 0,1,0,2,...,0,50, 0,1,0,2,...,0,50, ... (the pass repeated 100 times), for a total of 5000 page faults. Here too, due to LRU, page 0 will be in memory all the time.
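The two fault counts can be reproduced with a small LRU simulation over the simplified reference strings given in the answer (page 0 is preloaded, as the question states the code is already in the first frame):

```python
def lru_faults(refs, frames, preloaded=()):
    """Count LRU page faults; `mem` keeps pages ordered by recency,
    most recently used at the end."""
    mem = list(preloaded)
    faults = 0
    for p in refs:
        if p in mem:
            mem.remove(p)              # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)             # evict the least recently used page
        mem.append(p)
    return faults

# Program A: 0,1,0,2,...,0,50 (one pass over the data pages).
prog_a = [x for p in range(1, 51) for x in (0, p)]
# Program B: the same pass repeated 100 times (one pass per column).
prog_b = prog_a * 100
```

Running both with 3 frames and page 0 preloaded gives 50 and 5000 faults, as argued above.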

Question 5
Consider the following page reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
How many page faults would occur for each of the following algorithms, assuming that the memory's size is 3 frames? Remember that all frames are initially empty, so your first unique pages will all cost one fault each.
LRU
FIFO
Optimal

Answer 5
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Each table shows the frame contents after each page fault (hit columns are omitted).

Algorithm FIFO: (15 page faults)

7  7  7  2  2  2  4  4  4  0  0  0  7  7  7
   0  0  0  3  3  3  2  2  2  1  1  1  0  0
      1  1  1  0  0  0  3  3  3  2  2  2  1

Algorithm Optimal: (9 page faults)

7  7  7  2  2  2  2  2  7
   0  0  0  0  4  0  0  0
      1  1  3  3  3  1  1

Algorithm LRU: (12 page faults)

7  7  7  2  2  4  4  4  0  1  1  1
   0  0  0  0  0  0  3  3  3  0  0
      1  1  3  3  2  2  2  2  2  7
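All three fault counts can be verified with one small simulator; this sketch keeps a list ordered by load time (FIFO) or recency (LRU), and for Optimal evicts the page whose next use is farthest away:

```python
def simulate(refs, frames, policy):
    """Count page faults for FIFO, LRU, or OPT with `frames` frames."""
    mem, faults = [], 0
    for t, p in enumerate(refs):
        if p in mem:
            if policy == "LRU":
                mem.remove(p)
                mem.append(p)           # refresh recency on a hit
            continue
        faults += 1
        if len(mem) == frames:
            if policy == "OPT":
                # Evict the page whose next use is farthest in the future.
                def next_use(q):
                    return refs.index(q, t + 1) if q in refs[t + 1:] else float("inf")
                mem.remove(max(mem, key=next_use))
            else:
                mem.pop(0)              # FIFO: oldest load; LRU: least recent
        mem.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
```

With 3 frames this gives 15 (FIFO), 12 (LRU), and 9 (Optimal) faults.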

Tirgul 11 Memory Management (Cont.)

Question 1 (from exam 2001 moed a, q.3)
b. For the FIFO algorithm and a physical memory of 4 pages, compute the distance string and use it to compute the number of page faults.

Answer 1


Question 2 (from exam 2005 moed a, q.3)

Answer 2
The run of the FIFO-second-chance algorithm is described as follows: (A,B,C) means pages A, B and C are in memory now. The first to be taken out of the queue is always the first in the tuple (A) and the last to be taken out of the queue is the last in the tuple (C). In the original answer, a page with its reference bit on is marked in bold and a page with its reference bit off is written normally (the bold marking does not survive in this plain-text version). For example, (7,12,4) means pages 7, 12 and 4 are in memory; the first to be taken from the queue is 7, and the reference bit of 12 is the only one that is on.

A) Ref String - Memory Image - Page Fault
P12 (p12,-,-,-,-) PF
P11 (p12,p11,-,-,-) PF
P11 (p12,p11,-,-,-)
P12 (p12,p11,-,-,-)
P11 (p12,p11,-,-,-)
P11 (p12,p11,-,-,-)
P13 (p12,p11,p13,-,-) PF
P12 (p12,p11,p13,-,-)
P11 (p12,p11,p13,-,-)
P12 (p12,p11,p13,-,-)
P21 (p12,p11,p13,p21,-) PF
P22 (p12,p11,p13,p21,p22) PF
P23 (p11,p13,p21,p22,p23) PF
P21 (p11,p13,p21,p22,p23)
P22 (p11,p13,p21,p22,p23)
P21 (p11,p13,p21,p22,p23)
P22 (p11,p13,p21,p22,p23)
P21 (p11,p13,p21,p22,p23)
P12 (p13,p21,p22,p23,p12) PF
P12 (p13,p21,p22,p23,p12)
P11 (p21,p22,p23,p12,p11) PF
P12 (p21,p22,p23,p12,p11)
P11 (p21,p22,p23,p12,p11)
P22 (p21,p22,p23,p12,p11)
P14 (p22,p23,p12,p11,p14) PF
P12 (p22,p23,p12,p11,p14)
P11 (p22,p23,p12,p11,p14)
P11 (p22,p23,p12,p11,p14)

P12 (p22,p23,p12,p11,p14)
P21 (p23,p12,p11,p14,p21) PF
P22 (p12,p11,p14,p21,p22) PF
P24 (p11,p14,p21,p22,p24) PF
P22 (p11,p14,p21,p22,p24)
P21 (p11,p14,p21,p22,p24)
P22 (p11,p14,p21,p22,p24)
P21 (p11,p14,p21,p22,p24)

We can see that both processes usually refer to pages 1 and 2, and only rarely to the other pages (3 and 4). Since FIFO-second-chance is a global algorithm, we get for the above reference string that pages from the working set of a process are sometimes paged out. If you look at the string closely, you can see that whenever page 3 or 4 (of either process) is referenced, the last two pages used by each process are 1 and 2. So with a window of size 2, whenever page 3 or 4 is requested, pages 1 and 2 of both processes will be in the working set and will not be paged out by WSClock.

B) If you look at the string closely, you can see that the working set of each process is at least of size 2. The total memory size is 5. The processes run in sequence: first process 1, then 2, then 3, then again 1, 2, 3. So when it is some process's turn to run, two other processes have run before it, each needing at least 2 pages, so the memory contains at least 4 pages that do not belong to this process (since its last run). This is not enough for the process to run, and it needs to page its other pages back in. This causes the next process not to have its pages in memory, and so on. The total sum of the working sets is bigger than the size of the memory. This situation is called thrashing, and it is solved by the OS by swapping one of the processes out to the backing store (disk). The scheduler will then have only two processes to schedule and will run only them, and they can run efficiently. The swapper (the scheduling algorithm responsible for swapping processes in and out of the disk) will have to rotate the swapped-out process in order to give every process a chance to run.

Notes: You cannot write "LRU" as a solution, since the OS cannot implement it exactly. Writing LRU instead of "approximation to LRU" is a mistake; LRU is not the answer for either question. The idea in question 1 is to keep the working sets of both processes in memory, and LRU will not do that. A solution for the second question that has the OS "custom build" a page replacement algorithm for this reference string in advance is also wrong. The OS cannot know which pages will come next, so writing "will keep pages 1,2,3 locked in memory" is very problematic. How did the OS know to keep these pages? What you are doing is looking AHEAD at what the reference string is going to be, then custom-building the algorithm. Now imagine p1 was actually called p5 and p2 was actually called p1; your algorithm says to keep the pages of p1 in memory. This is not only unfair, but also a poorly designed system. A solution like "let's kill multiprocessing, run only one process at a time" is similarly bad.

Question 3
Consider the following virtual page reference string: 0, 1, 2, 3, 0, 0, 1, 2, 3.
Which page references will cause a page fault when a basic clock replacement algorithm is used? Assume that there are 3 page frames and that the memory is initially empty. Show all page faults (including frame loading).

Answer 3

Request:   0    1    2    3    0    0    1    2    3
Frame 0:  *0    0    0   *3    3    3    3   *2    2
Frame 1:       *1    1    1   *0   *0    0    0   *3
Frame 2:            *2    2    2    2   *1    1    1
          p.f  p.f  p.f  p.f  p.f       p.f  p.f  p.f

* - the placement of the clock's hand before the request.
The page numbers shown are after the request has been fulfilled.
We get a total of 8 page faults.
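The fault count can be checked with a short clock (second-chance) simulation. Hand-advancement details vary between presentations of the algorithm, but the fault count for this string is the same:

```python
def clock_faults(refs, nframes):
    """Basic clock / second chance: on a fault, sweep the hand past frames
    whose reference bit is set (clearing it) and replace the first frame
    whose bit is clear."""
    frames = [None] * nframes
    ref = [0] * nframes
    hand, faults = 0, 0
    for p in refs:
        if p in frames:
            ref[frames.index(p)] = 1          # hit: grant a second chance
            continue
        faults += 1
        while ref[hand]:                      # clear and skip referenced frames
            ref[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = p                      # replace the victim frame
        ref[hand] = 1
        hand = (hand + 1) % nframes
    return faults
```

For the string 0,1,2,3,0,0,1,2,3 with 3 frames this gives 8 faults; only the second request for page 0 is a hit.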

Question 4 (from exam 2001 moed c)
Fill in the attached table.

Answer 4
The page numbers shown are those after the request has been fulfilled.
* marks the position of the clock hand: before the request for the first 8 references, and after the request for the last 4 references.

Tirgul 12 File System I-Nodes

Question 1
What is the number of disk accesses when a user executes the command more /usr/tmp/a.txt?
Assumptions:
- The size of a.txt is 1 block.
- The i-node of the root directory is not in memory.
- The entries usr, tmp and a.txt are all located in the first block of their directories.

Answer 1
Accessing each directory requires at least 2 disk accesses: reading the i-node and reading the first block. In our case the entry we are looking for is always in the first block, so we need exactly 2 disk accesses per directory. According to assumption 2, the root directory's i-node is located on disk, so we need 6 disk accesses (3 directories) until we reach a.txt's i-node number. Since more displays the file's content, for a.txt we need its i-node plus all the blocks of the file (1 block, according to the assumption).
Total disk accesses: 6 + 2 = 8.
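Under these assumptions the count generalizes to a one-line formula (the function name and parameters are illustrative):

```python
def disk_accesses(path_dirs, file_blocks):
    """Disk reads to `more` a file: 2 per directory on the path
    (directory i-node + its first block, with the wanted entry assumed
    to be in that first block), plus the file's i-node, plus its data
    blocks. No i-node is cached in memory."""
    per_dir = 2
    return path_dirs * per_dir + 1 + file_blocks
```

For /usr/tmp/a.txt there are 3 directories on the path (/, usr, tmp) and 1 data block, giving 8 accesses.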

Question 2
The Ofer2000 operating system, based on UNIX, provides the system call rename(char *old, char *new), which changes a file's name from old to new. What is the difference between using this call and copying old to a new file, new, followed by deleting old? Answer in terms of disk access and allocation.

Answer 2
rename - simply changes the file's name in the entry of its directory.
copy - allocates a new i-node and new blocks for the new file, and copies the contents of the old file's blocks to the new ones.
delete - releases the i-node and blocks of the old file.
copy + delete is a much more complicated operation for the operating system; note that you would not even be able to execute it if you did not have enough free blocks or i-nodes left on your disk.

Question 3
Write an implementation (pseudo code) of the system call delete(i-node node) that deletes the file related to node.
Assumptions:
- node is related to a file, and delete is not recursive.
- The i-node has 10 direct block entries, 1 single-indirect entry and 1 double-indirect entry.
You may use the system calls:
- read_block(block b) - reads block b from the disk.
- free_block(block b) and free_i-node(i-node node).

Answer 3
delete(i-node node) {
    for each block b in node.direct do
        free_block(b);

    single <-- read_block(node.single_indirect)
    for each entry e in single do
        free_block(e);
    free_block(node.single_indirect);

    double <-- read_block(node.double_indirect)
    for each entry e in double do
        single <-- read_block(e)
        for each entry ee in single do
            free_block(ee);
        free_block(e);
    free_block(node.double_indirect);

    free_i-node(node);
}
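The same traversal can be exercised against a toy in-memory disk. This is an illustrative sketch: the Disk class, the dict-based i-node, and the block numbers are all made up for the example; an indirect block is modeled as a list of block numbers.

```python
class Disk:
    """Toy disk: a mapping from block number to contents, plus a log
    of the blocks that have been freed."""
    def __init__(self, blocks):
        self.blocks = blocks
        self.freed = []

    def read_block(self, b):
        return self.blocks[b]

    def free_block(self, b):
        self.freed.append(b)

def delete(disk, node):
    # Free the direct blocks.
    for b in node["direct"]:
        disk.free_block(b)
    # Free the blocks listed in the single-indirect block, then the block itself.
    for e in disk.read_block(node["single_indirect"]):
        disk.free_block(e)
    disk.free_block(node["single_indirect"])
    # Free the double-indirect tree: data blocks, then each inner
    # single-indirect block, then the double-indirect block itself.
    for e in disk.read_block(node["double_indirect"]):
        for ee in disk.read_block(e):
            disk.free_block(ee)
        disk.free_block(e)
    disk.free_block(node["double_indirect"])
    # free_i-node(node) would go here; the toy model has no i-node table.
```

Running it on a small file (2 direct blocks, one single-indirect block pointing at 2 data blocks, one double-indirect level with a single data block) frees every block exactly once.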