Tirgul 4 Scheduling:


Question 1 [Silberschatz 5.7]

Consider the following preemptive priority scheduling algorithm, based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at rate alpha; when it is running, its priority changes at rate beta. All processes are given a priority of 0 when they enter the ready queue. The parameters alpha and beta can be set to give many different scheduling algorithms.
1. What is the algorithm that results from beta > alpha > 0?
2. What is the algorithm that results from alpha < beta < 0?
3. Is there a starvation problem in 1? In 2? Explain.
4. Can you think of an expression which determines priorities and takes into account both running time (preference to short) and waiting time (preference to long)?

Answer to Question 1

Reminder: with dynamically changing priorities, each process has its base priority. At each clock tick we add the appropriate rate to each process' priority. We always run the process with the highest current priority.

1. beta > alpha > 0
Let's try an example first: beta = 2, alpha = 1, and 3 processes P1, P2, P3 that arrive one after the other, each needing 3 seconds of CPU. P1 starts running; its priority grows at rate 2 while the waiting processes grow at rate 1, so it keeps the highest priority until it finishes, then P2 runs to completion, and finally P3.
What we have here is the First Come First Served (FCFS) algorithm.
Proof: if a process is running, it had the highest priority among all processes, and while it is running its priority increases at a greater rate than that of all the waiting processes, so it keeps running until it finishes. If two processes are waiting, their priorities increase at the same rate, so the one that entered the ready queue first always has the higher value and will get the CPU first.

2. alpha < beta < 0
We'll use the same example as before, with alpha = -2, beta = -1. P1 starts running; when P2 arrives with priority 0 it immediately preempts P1 (whose priority is already negative), and when P3 arrives it preempts P2. P3 then runs to completion, followed by P2 and finally P1.
We get Last In First Out (LIFO).
Proof: the priority of a running process decreases more slowly than that of a waiting one, so a running process continues to run: when it was chosen it had the highest priority, and it is the one decreasing the least. Since a new process gets a priority of 0, it preempts the current process and runs next, and since it is then running it continues until it finishes. Then the process that arrived just before it runs, and so on.

3. Is there a starvation problem?
In the first case there is no starvation problem: every process will run in its turn. Assume that process P1 arrives before process P2; then when P2 arrives, P1's priority is already higher. While they are both waiting their priorities increase at the same rate, so P1's priority stays higher than P2's. After all the processes that arrived before P1 have finished, P1 will get the CPU and will release it only when it finishes.
In the second case there is a starvation problem. Since every new process gets the CPU, if new processes keep arriving before the first process finishes, it will never get CPU time.

4. An expression that determines priority and takes into account both running time (preference to short) and waiting time (preference to long): use

    priority = waittime - runtime

so a process' priority grows while it waits and shrinks while it runs.
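To see the two behaviors concretely, here is a small Java sketch (added for illustration, not part of the original tirgul) that simulates the dynamic-priority scheduler at one-second ticks. The arrival times 0, 1, 2 and the tie-breaking rule (earlier arrival wins on equal priority) are assumptions made for the example.

import java.util.*;

public class DynamicPriorityDemo {
    // Simulates the preemptive dynamic-priority scheduler:
    // waiting processes gain 'alpha' per tick, the running process gains 'beta' per tick.
    static List<Integer> simulate(double alpha, double beta, int[] arrival, int[] burst) {
        int n = arrival.length;
        double[] prio = new double[n];
        int[] left = burst.clone();
        List<Integer> order = new ArrayList<>();          // which process runs at each tick
        int done = 0;
        for (int t = 0; done < n; t++) {
            int run = -1;
            for (int i = 0; i < n; i++) {                 // pick highest priority among ready processes
                if (arrival[i] > t || left[i] == 0) continue;
                if (run == -1 || prio[i] > prio[run]) run = i;   // tie: earlier arrival wins
            }
            if (run == -1) continue;                      // CPU idle this tick
            order.add(run + 1);
            for (int i = 0; i < n; i++) {                 // update priorities for the next tick
                if (arrival[i] > t || left[i] == 0) continue;
                prio[i] += (i == run) ? beta : alpha;
            }
            if (--left[run] == 0) done++;
        }
        return order;
    }

    public static void main(String[] args) {
        int[] arrival = {0, 1, 2}, burst = {3, 3, 3};
        System.out.println("beta > alpha > 0: " + simulate(1, 2, arrival, burst));   // FCFS: 1,1,1,2,2,2,3,3,3
        System.out.println("alpha < beta < 0: " + simulate(-2, -1, arrival, burst)); // LIFO: 1,2,3,3,3,2,2,1,1
    }
}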

Question 2 [Tanenbaum 2/22]

Five batch jobs A, B, C, D and E arrive at a computer center at almost the same time (A came first, E last, but all arrived at the same clock tick). They have estimated running times of 10, 6, 2, 4 and 8 seconds. Their (externally determined) priorities are 3, 5, 2, 1 and 4, respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process switching overhead. All jobs are completely CPU bound.
1. Round robin with 1 second time quantum (preemptive)
2. Priority scheduling (non-preemptive)
3. First come first served (in order 10, 6, 2, 4, 8) (non-preemptive)
4. Shortest job first (non-preemptive)

Answer to Question 2

Reminder: the turnaround time of a process is how much time passes from the moment the process arrives (is submitted) until it finishes.

Process  Running Time  Priority
A        10            3
B        6             5
C        2             2
D        4             1
E        8             4

Round Robin (RR) with 1 sec time quantum:
The CPU cycles through the ready jobs in the order A, B, C, D, E, one second each. C finishes at time 8, D at 17, B at 23, E at 28 and A at 30.
Mean turnaround = (30 + 23 + 8 + 17 + 28) / 5 = 106 / 5 = 21.2

Priority scheduling: B(6), E(8), A(10), C(2), D(4)
Finish times: B = 6, E = 14, A = 24, C = 26, D = 30.
Mean turnaround = (6 + 14 + 24 + 26 + 30) / 5 = 100 / 5 = 20

First Come First Served (FCFS): A(10), B(6), C(2), D(4), E(8)
Finish times: A = 10, B = 16, C = 18, D = 22, E = 30.
Mean turnaround = (10 + 16 + 18 + 22 + 30) / 5 = 96 / 5 = 19.2

Shortest Job First (SJF): C(2), D(4), B(6), E(8), A(10)
Finish times: C = 2, D = 6, B = 12, E = 20, A = 30.
Mean turnaround = (2 + 6 + 12 + 20 + 30) / 5 = 70 / 5 = 14
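The non-preemptive cases are easy to check mechanically. The following Java sketch (an added illustration; the class and method names are invented) computes the mean turnaround time of jobs that all arrive at time 0 and run back to back in a given order.

public class TurnaroundDemo {
    // Mean turnaround time for jobs arriving at t = 0 and executed
    // non-preemptively in the given order of burst times.
    static double meanTurnaround(int[] burstsInRunOrder) {
        int time = 0, sum = 0;
        for (int burst : burstsInRunOrder) {
            time += burst;      // finish time of this job
            sum += time;        // its turnaround (arrival is 0)
        }
        return (double) sum / burstsInRunOrder.length;
    }

    public static void main(String[] args) {
        System.out.println("Priority: " + meanTurnaround(new int[]{6, 8, 10, 2, 4})); // 20.0
        System.out.println("FCFS:     " + meanTurnaround(new int[]{10, 6, 2, 4, 8})); // 19.2
        System.out.println("SJF:      " + meanTurnaround(new int[]{2, 4, 6, 8, 10})); // 14.0
    }
}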

Question 3 (CS exam, year 2004)

The OS keeps two queues. Each queue implements round robin (RR). The OS always prefers to run a process from Q1 (queue 1) over a process from Q2. When a process is created or returns from I/O it enters Q1. A process enters Q2 if it just finished running and it used up its whole time quantum. A process returning from I/O enters Q1 and has precedence over a process which has not started running yet.
The system has the following processes:
Process P1: arrive time = 0, wants: 1 sec CPU, 1 sec I/O, 3 sec CPU.
Process P2: arrive time = 2, wants: 2 sec CPU, 2 sec I/O, 2 sec CPU.
Process P3: arrive time = 3, wants: 1 sec CPU, 3 sec I/O, 3 sec CPU.
Draw the Gantt chart and compute the mean TA and RT (turnaround and response time). The time quantum in Q1 is 1 sec; the time quantum in Q2 is 2 sec. The system has preemption.
For computing the RT assume the I/O is always printing to stdout and the user is waiting for this printing, so the time of the first print is the start of the first I/O.

Answer to Question 3

Reminder: TA = process finish time - process arrive time. RT = process first I/O time - process arrive time.

CPU:  0-1 P1, 1-2 idle, 2-3 P1, 3-4 P2, 4-5 P3, 5-7 P1 (finishes at 7), 7-8 P2, 8-9 P3, 9-10 P3 (preempted), 10-11 P2, 11-12 P3 (finishes at 12), 12-13 P2 (finishes at 13)
I/O:  1-2 P1, 5-8 P3, 8-10 P2

TA: P1 = 7 - 0 = 7, P2 = 13 - 2 = 11, P3 = 12 - 3 = 9.
RT: P1 = 1 - 0 = 1, P2 = 8 - 2 = 6, P3 = 5 - 3 = 2.

Mean TA = (7 + 11 + 9) / 3 = 27 / 3 = 9
Mean RT = (1 + 6 + 2) / 3 = 9 / 3 = 3

Question 4 (CS exam 2004)

Reminder - guaranteed scheduling: run the process whose CPU time received so far, relative to the equal share it is supposed to have received, is the lowest.

Answer 4


Tirgul 5 Scheduling:

Question 3

Four jobs A, B, C, D arrive at the same time. They have estimated running times of 6, 10, 8 and 4 seconds respectively. Jobs B, C have priority 1 upon arrival, and A, D have priority 2. Draw the scheduling diagram and compute the mean turnaround time if the scheduling algorithm uses:
a) Multilevel queue scheduling with two queues: queue #1 uses Shortest Job First with a 2 sec time slice and queue #2 uses Round Robin with a 1 sec time slice;
b) Same as a), except that a process is moved to the other queue if it has been kept in its present queue for more than 10 seconds.
Note: in this version of multilevel queue scheduling the scheduler alternates between the queues, meaning it first picks a process from Q1, then a process from Q2, then again from Q1 and so on. In class you learned a different version, in which Q1 is first emptied of processes before processes from Q2 are picked.

Answer to Question 3

Priority 1 jobs B and C go to queue #1 (SJF, 2 sec slice); priority 2 jobs A and D go to queue #2 (RR, 1 sec slice). The scheduler alternates: 2 seconds from Q1, then 1 second from Q2.

a) The schedule is:
Q1: C 0-2, C 3-5, C 6-8, C 9-11 (C finishes at 11), B 12-14, B 15-17, B 18-20, B 21-23, B 24-26 (B finishes at 26)
Q2: A 2-3, D 5-6, A 8-9, D 11-12, A 14-15, D 17-18, A 20-21, D 23-24 (D finishes at 24), A 26-27, A 27-28 (A finishes at 28)
Average turnaround time is (11 + 24 + 26 + 28) / 4 = 89 / 4 = 22.25

b) After about 10 seconds the long-waiting processes migrate: B (which has been kept in Q1 for more than 10 seconds) moves to Q2, while A and D (which have been kept in Q2 for more than 10 seconds) move to Q1. Later B moves back to Q1 and A back to Q2. The schedule is:
Q1: C 0-2, C 3-5, C 6-8, C 9-11 (C finishes at 11), D 12-14, D 15-16 (D finishes at 16), A 16-17, A 18-20, B 21-23, B 24-26, B 26-28 (B finishes at 28)
Q2: A 2-3, D 5-6, A 8-9, B 11-12, B 14-15, B 17-18, B 20-21, A 23-24 (A finishes at 24)
Average turnaround time is (11 + 16 + 24 + 28) / 4 = 79 / 4 = 19.75
D migrates once; B and A migrate twice.

Tirgul 6 Synchronization:

Bakery Algorithm

var choosing: shared array[0..n-1] of boolean;
    number:   shared array[0..n-1] of integer;
...
repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] := false;
    for j := 0 to n-1 do begin
        while choosing[j] do (* nothing *);
        while number[j] <> 0 and
              (number[j], j) < (number[i], i) do
            (* nothing *);
    end;
    (* critical section *)
    number[i] := 0;
    (* remainder section *)
until false;
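For reference, here is a compact Java sketch of the same bakery lock (an illustration added here, not part of the original handout). It uses AtomicIntegerArray so that the choosing and number arrays behave like the shared arrays above; the class and method names are invented for the example.

import java.util.concurrent.atomic.AtomicIntegerArray;

public class BakeryLock {
    private final int n;
    private final AtomicIntegerArray choosing;  // 1 while thread i is picking a ticket
    private final AtomicIntegerArray number;    // ticket of thread i, 0 = not interested

    public BakeryLock(int nThreads) {
        n = nThreads;
        choosing = new AtomicIntegerArray(n);
        number = new AtomicIntegerArray(n);
    }

    public void lock(int i) {
        choosing.set(i, 1);
        int max = 0;
        for (int j = 0; j < n; j++) max = Math.max(max, number.get(j));
        number.set(i, max + 1);                 // take a ticket larger than all seen
        choosing.set(i, 0);
        for (int j = 0; j < n; j++) {
            while (choosing.get(j) == 1) { /* busy wait: j is still choosing */ }
            // wait while j holds a smaller ticket, ties broken by thread id
            while (number.get(j) != 0 &&
                   (number.get(j) < number.get(i) ||
                    (number.get(j) == number.get(i) && j < i))) { /* busy wait */ }
        }
    }

    public void unlock(int i) {
        number.set(i, 0);
    }
}

A thread with index i brackets its critical section with lock(i) and unlock(i).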

Question 3

In Peterson's algorithm for achieving mutual exclusion, what would happen if we swap the two lines marked with "X"?

int turn;                    /* whose turn is it? */
int interested[n];           /* all initially 0 (FALSE) */

void enter_region(int process) {     /* who is entering: 0 or 1? */
    int other = 1 - process;         /* opposite of process */
    interested[process] = TRUE;      /* X   signal that you're interested */
    turn = process;                  /* X   set flag */
    while (turn == process && interested[other] == TRUE)
        ;                            /* null statement */
}

void leave_region(int process) {     /* who is leaving: 0 or 1? */
    interested[process] = FALSE;     /* departure from critical region */
}

Answer to Question 3

We will get a mutual exclusion violation (after the swap, turn is set before interested):
P1 does turn := 1;
P0 does turn := 0;
P0 does interested[0] := true;
P0 does while (turn == 0 && interested[1]);   // (true && false) -> false
P0 enters the CS.
P1 does interested[1] := true;
P1 does while (turn == 1 && interested[0]);   // (false && true) -> false
P1 enters the CS.
Now both processes are in the critical section.

Question 4

Assume the while statement in Peterson's solution was changed to:
while (turn != process && interested[other] == TRUE)
Describe a scheduling scenario in which there is a mutual exclusion violation and a scenario in which there is no mutual exclusion violation.

Answer to Question 4

Mutex violation: process A executes `interested[process] = TRUE', `turn = process' and falls through the while loop shown above. While it is in the critical section, the timer fires and process B executes `interested[process] = TRUE', `turn = process' and also falls through the while loop. Both processes are in the critical section.

No mutex violation: P0 does interested[0] = true, P1 does interested[1] = true, P0 does turn = 0, P1 does turn = 1. P1 passes the while loop and enters the CS; P0 is stuck in the while loop.

Question 5

Given below is the code of the producer-consumer problem. What might happen in each of the following three cases?
a) If we switch lines 6 and 7.
b) If we switch lines 11 and 12.
c) If we switch lines 7 and 8.

#define N 100                /* buffer size */
semaphore mutex = 1;         /* mutual exclusion semaphore */
semaphore empty = N;         /* count of empty buffer slots */
semaphore full = 0;          /* count of full buffer slots */

void producer() {
1.    int item;
2.    while (TRUE) {
3.        produce_item(&item);
4.        down(&empty);        /* one less empty buffer */
5.        down(&mutex);        /* enter critical section */
6.        enter_item(item);    /* modify shared buffer pool */
7.        up(&mutex);          /* leave critical section */
8.        up(&full);           /* one more full buffer */
      }
}

void consumer() {
9.     int item;
10.    while (TRUE) {
11.        down(&full);         /* one less full buffer */
12.        down(&mutex);        /* enter critical section */
13.        remove_item(&item);  /* modify shared buffer pool */
14.        up(&mutex);          /* leave critical section */
15.        up(&empty);          /* one more empty buffer */
16.        consume_item(item);
       }
}

Answer to Question 5

a) The access to the shared buffer is no longer inside the critical section, so it is not synchronized (no mutual exclusion) and a race condition on the buffer is possible.
b) Deadlock: when the buffer is empty the consumer holds mutex and blocks on full, while the producer blocks on mutex, so neither can proceed.
c) OK: the consumer may be woken by up(&full) and then briefly block on mutex until the producer releases it, but there is no race and no deadlock.
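The same structure can be written with java.util.concurrent.Semaphore. The sketch below (added for illustration; the class and method names are invented) keeps the original down/up order, which is exactly what cases a)-c) perturb.

import java.util.concurrent.Semaphore;

public class BoundedBufferSem {
    private static final int N = 100;
    private final int[] buffer = new int[N];
    private int in = 0, out = 0;

    private final Semaphore mutex = new Semaphore(1);  // mutual exclusion on the buffer
    private final Semaphore empty = new Semaphore(N);  // count of empty slots
    private final Semaphore full  = new Semaphore(0);  // count of full slots

    public void produce(int item) throws InterruptedException {
        empty.acquire();            // down(&empty): one less empty slot
        mutex.acquire();            // down(&mutex): enter critical section
        buffer[in] = item;
        in = (in + 1) % N;
        mutex.release();            // up(&mutex): leave critical section
        full.release();             // up(&full): one more full slot
    }

    public int consume() throws InterruptedException {
        full.acquire();             // down(&full): one less full slot
        mutex.acquire();            // down(&mutex): enter critical section
        int item = buffer[out];
        out = (out + 1) % N;
        mutex.release();            // up(&mutex)
        empty.release();            // up(&empty): one more empty slot
        return item;
    }
}

Swapping full.acquire() and mutex.acquire() in consume() reproduces the deadlock of case b).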

Question 6 (CS exam 2004)

Prove that Dekker's solution for the Critical Section problem is correct:

bool flag[2] = {false, false};
int turn = 0;

void p0() {
    while (true) {
        flag[0] = true;
        while (flag[1]) {
            if (turn == 1) {
                flag[0] = false;
                while (turn == 1) { /* nothing */ }
                flag[0] = true;
            }
        }
        /* critical section */
        turn = 1;
        flag[0] = false;
        /* non-critical section */
    }
}

void p1() {
    while (true) {
        flag[1] = true;
        while (flag[0]) {
            if (turn == 0) {
                flag[1] = false;
                while (turn == 0) { /* nothing */ }
                flag[1] = true;
            }
        }
        /* critical section */
        turn = 0;
        flag[1] = false;
        /* non-critical section */
    }
}

Answer to Question 6

Conditions needed to maintain critical sections:
1. Mutual Exclusion - no two processes are in their critical sections simultaneously (prevents race conditions).
2. Progress - a process outside its critical section may not block another process.
3. No Starvation - there is a limit on the number of times other processes can enter the critical section while another process waits.

a. Mutual exclusion - there will never be two processes in the CS at the same time:
Suppose P0 is inside; the last instruction that changed flag[0] set it to true. Suppose P1 is also inside; the last instruction that changed flag[1] set it to true. But the last instruction both processes executed before entering the CS was the outer while statement, and with both flags true, by this assumption both processes would have been stuck in that loop and would not have entered the CS - a contradiction.

b. Progress - a process outside its CS does not block a process that wants to enter:
A process that does not want to enter turns off its flag. Hence a process that does want to enter finds the other process' flag with value false and enters the CS directly, without even checking the turn variable.

c. No starvation - a process that wants to enter does not wait forever:
Without loss of generality, assume P0 wants to enter and is inside its while loop (otherwise it is already inside). P1 can be in one of three places:
1) in the CS;
2) inside its while-loop code with turn = 0;
3) inside its while-loop code with turn = 1.
Case 1: if P1 is in the CS, it will eventually leave and set turn = 0. If it then stays outside, we are back to the progress argument. So assume P1 is quick and enters its while loop again, which brings us to case 2 with turn = 0.
Case 2: since turn = 0 and nobody changes it, P1 eventually gets stuck in its inner loop after setting flag[1] = false. Now, when P0 gets the CPU, since turn = 0 it exits its inner loop and reaches the outer while loop again; but flag[1] = false, so it enters the CS.
Case 3: if turn = 1 and P1 is executing its outer while loop, it keeps doing so only as long as flag[0] = true. Eventually P0 gets the CPU; since turn = 1, P0 sets flag[0] = false, so the condition of P1's outer loop fails and P1 enters the CS, and we are back to case 1.

Tirgul 7 Synchronization Continued:

Critical Regions

(Side note: in semaphores, down is sometimes called wait (P), and up is called signal (V).)

"Concurrency is still in the spaghetti stage - everything is possible, but it is unreasonably difficult to get it right."

1. Introduction
Given that we have semaphores, why do we need another programming language feature for dealing with concurrency? Semaphores are low-level. Omitting a wait breaches safety - we can end up with more than one process inside a critical section. Omitting a signal can lead to deadlock. Semaphore code is distributed throughout a program, which causes maintenance problems. It is better to use a high-level construct/abstraction like Critical Regions (CR) or Monitors.
Note: do not confuse Critical Regions with Critical Sections (even though some literature refers to a CR as a CS).

2. Critical Regions
A critical region is a section of code that is always executed under mutual exclusion. Critical regions shift the responsibility for enforcing mutual exclusion from the programmer (where it resides when semaphores are used) to the compiler. They consist of two parts:
1. Variables that must be accessed under mutual exclusion.
2. A new language statement that identifies a critical region in which the variables are accessed.

Example (this is only pseudo-Pascal-FC - Pascal-FC doesn't support critical regions):

var v : shared T;
...
region v do
begin
    ...
end;

All critical regions that are `tagged' with the same variable have compiler-enforced mutual exclusion, so that only one of them can be executed at a time:

Process A:
    region V1 do
    begin
        { Do some stuff. }
    end;
    region V2 do
    begin
        { Do more stuff. }
    end;

Process B:
    region V1 do
    begin
        { Do other stuff. }
    end;

Here process A can be executing inside its V2 region while process B is executing inside its V1 region, but if they both want to execute inside their respective V1 regions only one will be permitted to proceed. Each shared variable (V1 and V2 above) has a queue associated with it. Once one process is executing code inside a region tagged with a shared variable, any other processes that attempt to enter a region tagged with the same variable are blocked and put in the queue.

3. Conditional Critical Regions
Critical regions aren't equivalent to semaphores. As described so far, they lack condition synchronization. We can use semaphores to put a process to sleep until some condition is met (e.g. see the bounded-buffer Producer-Consumer problem), but we can't do this with critical regions. Conditional critical regions provide condition synchronization for critical regions:

region v when B do
begin
    { Do some stuff. }
end;

where B is a boolean expression (usually B will refer to v). Conditional critical regions work as follows:
1. A process wanting to enter a region for v must obtain the mutex lock. If it cannot, then it is queued.
2. Once the lock is obtained the boolean expression B is tested. If B evaluates to true then the process proceeds, otherwise it releases the lock and is queued. When it next gets the lock it must retest B.
Note - because these processes must retest their condition they are doing something akin to busy-waiting, although the frequency with which they retest the condition is much lower. Note also that the condition is only retested when there is reason to believe that it may have changed (another process has finished accessing the shared variable, potentially altering the condition). Though this is more controlled than busy-waiting, it may still be sufficiently close to it to be unattractive.

Limitations
Conditional critical regions are still distributed among the program code. There is no control over the manipulation of the protected variables - no information hiding or encapsulation. Once a process is executing inside a critical region it can do whatever it likes to the variables it has exclusive access to. Conditional critical regions are also more difficult to implement efficiently than semaphores.
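In Java a conditional critical region can be approximated with a synchronized block and wait()/notifyAll(), retesting the condition in a loop exactly as described above. This sketch is an added illustration (the class name and the count < MAX condition are invented for the example).

public class RegionWhenExample {
    private static final int MAX = 10;
    private final Object regionLock = new Object();    // plays the role of the shared variable's lock
    private int count = 0;                              // the protected shared state

    // Equivalent of: region v when (count < MAX) do begin count := count + 1 end;
    public void incrementWhenNotFull() throws InterruptedException {
        synchronized (regionLock) {
            while (count >= MAX) {       // retest the condition every time we are woken up
                regionLock.wait();       // release the lock and queue until it may have changed
            }
            count++;                     // body of the critical region
            regionLock.notifyAll();      // state changed: let waiters retest their conditions
        }
    }

    // Equivalent of: region v when (count > 0) do begin count := count - 1 end;
    public void decrementWhenNotEmpty() throws InterruptedException {
        synchronized (regionLock) {
            while (count <= 0) {
                regionLock.wait();
            }
            count--;
            regionLock.notifyAll();
        }
    }
}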

Question 1

Implement a bounded buffer using critical regions.

Answer 1

var buffer: shared struct {
    pool: array [0..n-1] of item;
    count, in, out: integer;
};

// producer
region buffer when count < n do
begin
    nextp = produceitem();
    insert(nextp);
    count++;
end;

// consumer
region buffer when count > 0 do
begin
    nextc = removefirst();
    count--;
end;

Java Synchronization:

A method or a code block might be synchronized.

class C {
    synchronized void f() {
        // only one thread in f
        compute_ack(5, 5);
    }
}

or

class C {
    void f() {
        do_something();
        synchronized (this) {
            // only one thread in this code block
            compute_ack(5, 5);
        }
    }
}

When a thread calls a synchronized method of an object, it tries to grab the object's monitor lock. If another thread is holding the lock, it waits until that thread releases it. A thread releases the monitor lock when it leaves the synchronized method.
The Object class (and therefore every other Java class) has the methods wait, notify and notifyAll. A call to wait() releases the monitor lock and puts the calling thread to sleep (i.e., it stops running). A subsequent call to notify() on the same object wakes up a sleeping thread and lets it start running again; if more than one thread is sleeping, one is chosen arbitrarily, and if no threads are sleeping on this object, notify() does nothing. notifyAll() is similar, but wakes up all sleeping threads. The awakened thread has to wait for the monitor lock before it starts; it competes on an equal basis with other threads trying to get into the monitor. Remember that a "true" monitor keeps a FIFO ordering of waiting processes/threads, whereas here Java chooses an arbitrary thread to wake. Note that wait() might throw InterruptedException.

Question 3

Implement a bounded buffer with Java monitors.

Answer 3

class Buffer {
    private Object[] _buffer;
    private int _max, _in, _out, _count;

    public Buffer(int max) {                    // constructor
        _max = max;
        _buffer = new Object[_max];
        _in = 0;
        _out = 0;
        _count = 0;
    }

    public synchronized void put(Object o) {    // put object in the queue
        while (_count == _max) {                // buffer full
            try {
                wait();
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
        _buffer[_in] = o;
        _count++;
        _in = (_in + 1) % _max;
        notify();                               // notify if a consumer is waiting
    }

    public synchronized Object get() {
        while (_count == 0) {                   // buffer empty
            try {                               // java forces exception catching
                wait();
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
        Object o = _buffer[_out];
        _count--;
        _out = (_out + 1) % _max;
        notify();                               // notify if a producer is waiting
        return o;
    }
}

However, this solution does not guarantee FIFO wakeup order, and starvation can occur.
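A minimal usage sketch (added here for illustration): one producer thread and one consumer thread sharing a Buffer of the hypothetical size 5.

public class BufferDemo {
    public static void main(String[] args) {
        Buffer buffer = new Buffer(5);          // the monitor-based buffer from the answer above

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 20; i++) {
                buffer.put(i);                  // blocks inside put() while the buffer is full
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 20; i++) {
                Object item = buffer.get();     // blocks inside get() while the buffer is empty
                System.out.println("consumed " + item);
            }
        });

        producer.start();
        consumer.start();
    }
}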

Question 4

Describe code which forces the waking order in Java to be FIFO (as in standard monitors).

Answer 4

/*
 * A critical section that preserves FIFO order of waiting threads.
 * Since notify() wakes up an arbitrary thread, we'll use one lock object per thread.
 * This way we know which thread will be woken.
 * Author: MAC MAC
 */
import java.util.Vector;

/**
 * class WaitObject
 */
class WaitObject {
    boolean released = false;                 // this flag avoids a race!!!

    synchronized void doWait() {
        try {
            if (!released)
                wait();
        } catch (InterruptedException ie) {
            // ignore it
        }
    }

    synchronized void doNotify() {
        if (!released) {                      // not really needed
            released = true;
            notify();
        }
    }
}

/**
 * class CriticalSection
 */
class CriticalSection {                       // critical section that preserves FIFO
    private Vector _waiting;                  // wait list
    private boolean _busy;                    // someone in critical section

    public CriticalSection() {                // constructor
        _waiting = new Vector();              // create wait list
        _busy = false;                        // no one is in the CS now
    }

    public void enter() {
        WaitObject my_lock = null;
        synchronized (this) {
            if (!_busy) {
                _busy = true;
                return;
            } else {
                my_lock = new WaitObject();   // create my unique lock
                _waiting.add(my_lock);        // add to waiting list
            }
        }
        my_lock.doWait();                     // wait on my own lock, outside the monitor
    }

    public synchronized void leave() {
        if (_waiting.size() > 0) {            // someone is waiting
            WaitObject o = (WaitObject) _waiting.elementAt(0);
            _waiting.removeElementAt(0);
            o.doNotify();
        } else {
            _busy = false;                    // no one waiting: free the CS
        }
    }
}

Tirgul 8 Deadlocks

Deadlock - the ultimate form of starvation - occurs when two or more processes/threads are waiting on a condition that cannot be satisfied. Deadlock most often occurs when two (or more) processes/threads are each waiting for the other(s) to do something.
For deadlock to occur in the system, four conditions must hold simultaneously:
1. Mutual exclusion - each resource is used by only one process at a time.
2. Hold and wait - a process can request a resource while holding another resource.
3. No preemption - only the process holding a resource can release it.
4. Circular wait - two or more processes are waiting for resources held by other (waiting) processes.

Question 1

Assume resources are ordered as R1, R2, ..., Rn. Prove formally (by contradiction) that if processes always request resources in order (i.e., if a process requests Rk after Rj then k > j), then deadlock will not occur (resources with one unit each, of course!).

Answer 1

Suppose that our system can deadlock. We number our processes P1, ..., Pn. Let's look at the last (circular wait) condition. Denote by Pi -Rk-> Pj the situation in which Pi requests a resource Rk held by Pj. For the circular wait condition to be satisfied in a state of deadlock, a subset Pi1, ..., Pim of {P1, ..., Pn} must exist such that:

Pi1 -Rj1-> Pi2 -Rj2-> Pi3 -Rj3-> ... -Rj(m-1)-> Pim -Rjm-> Pi1     (*)

For each process Pis with s > 1, Pis holds resource Rj(s-1) and requests resource Rjs. By the ordering assumption this means j(s-1) < js for every s, so we get the chain of inequalities j1 < j2 < ... < jm. But from (*), Pi1 holds Rjm and requests Rj1, so jm < j1 must also hold. We get both j1 < jm and j1 > jm as necessary conditions for the fourth deadlock condition. Thus it cannot be satisfied, and the system is deadlock-free.
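The same idea is what makes the common "always acquire locks in a fixed global order" rule work in practice. A small Java sketch of the rule (an added illustration; the account example and its fields are invented):

public class OrderedLocking {
    static class Account {
        final int id;            // global ordering key: the lower id is always locked first
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    // Transfers money while always locking the account with the smaller id first,
    // so no circular wait between two concurrent transfers can ever form.
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}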

Question 2 (7.9 from Silberschatz)

Consider a system consisting of m resources of the same type, shared by n processes. Resources can be requested and released by processes only one at a time. Show that the system is deadlock free if the following two conditions hold:
1. The need of each process is between 1 and m resources.
2. The sum of all maximum needs is less than m + n.

Answer 2

Assume that the four deadlock conditions do hold in the system and thus there is a group of processes involved in a circular wait. Let these processes be P1, ..., Pk, k <= n, let their current demands be D1, ..., Dk and let the number of resources each of them holds be H1, ..., Hk. The wait condition here should look like P1 -> P2 -> ... -> Pk -> P1, but in fact the argument is simpler: let M1, ..., Mn be the total (maximum) demands of processes P1, ..., Pn. A circular wait can occur only if all resources are in use and every process involved has not yet acquired all its resources:

H1 + ... + Hk = m   and   Di >= 1 for every i.

Since Mi = Hi + Di, the sum of the maximum demands of the processes involved in the circular wait is:

M1 + ... + Mk >= m + k.

Note that for the remaining processes Pk+1, ..., Pn the maximum demand is at least 1: Mi >= 1 for k+1 <= i <= n, and thus

Mk+1 + ... + Mn >= n - k.

Then the total sum of the maximum demands is:

M1 + ... + Mn = (M1 + ... + Mk) + (Mk+1 + ... + Mn) >= m + k + (n - k) = m + n.

But it is given that the sum of all maximum needs is less than m + n - a contradiction.

Banker's Algorithm (multiple resources)

1. Look for a row R whose unmet resource needs are all smaller than or equal to A (the available vector). If no such row exists, the system will eventually deadlock.
2. Assume the process of the chosen row finishes (which is possible). Mark that process as terminated and add all of its resources to the A vector.
3. Repeat steps 1 and 2 until either all processes are marked terminated, which means the state is safe, or a deadlock occurs, which means it is unsafe.

Question 3 (7.6 from Silberschatz)

If deadlock is controlled by the banker's algorithm, which of the following changes can be made safely and under what circumstances:
1. Increase Available (add new resources)
2. Decrease Available (remove resources)
3. Increase Max for one process
4. Increase the number of processes

Answer 3

1. Increasing the number of available resources can't create a deadlock, since it can only decrease the number of processes that have to wait for resources.
2. Decreasing the number of resources can lead to a deadlock in the following case: when the system decides on the next state using the old resource data, it may decide that this state will be safe. If the change in the number of resources occurs right after this evaluation, the system can proceed with the allocation while the next state has become unsafe.
3. An error condition could be created for two possible reasons:
   o from the Banker's algorithm point of view, the process exceeds its maximum claim;
   o a state that was decided to be safe ceases to be safe if the change occurs after the safety evaluation.
4. If the number of processes is increased, the state remains safe, since we can first run the old processes according to the schedule the system found when it verified that the state is safe; they will finish their jobs and release all their resources. Now all the system's resources are free, so we can choose a new process and give it all its demands; it will finish, and again all the system's resources are free. We then choose the second new process and give it all its demands, and so on, until all processes finish.

Question 4

Consider the following snapshot of a system with five processes (p1, ..., p5) and four resources (r1, ..., r4). There are no currently outstanding queued unsatisfied requests.

currently available resources:
r1 r2 r3 r4
2  1  0  0

         current allocation    max demand        still needs
Process  r1 r2 r3 r4           r1 r2 r3 r4       r1 r2 r3 r4
p1       0  0  1  2            0  0  1  2
p2       2  0  0  0            2  7  5  0
p3       0  0  3  4            6  6  5  6
p4       2  3  5  4            4  3  5  6
p5       0  3  3  2            0  6  5  2

a. Compute what each process still might request and fill in the "still needs" columns.
b. Is this system currently deadlocked, or will any process become deadlocked? Why or why not? If not, give an execution order.
c. If a request from p3 arrives for (0, 1, 0, 0), can that request be safely granted immediately? In what state (deadlocked, safe, unsafe) would immediately granting the whole request leave the system? Which processes, if any, are or may become deadlocked if this whole request is granted immediately?

Answer 4

currently available resources:
r1 r2 r3 r4
2  1  0  0

         current allocation    max demand        still needs
Process  r1 r2 r3 r4           r1 r2 r3 r4       r1 r2 r3 r4
p1       0  0  1  2            0  0  1  2        0  0  0  0
p2       2  0  0  0            2  7  5  0        0  7  5  0
p3       0  0  3  4            6  6  5  6        6  6  2  2
p4       2  3  5  4            4  3  5  6        2  0  0  2
p5       0  3  3  2            0  6  5  2        0  3  2  0

a) See the table ("still needs" = max demand - current allocation).
b) Not deadlocked, and no process will become deadlocked. Using the Banker's algorithm, a possible finishing order is: p1, p4, p5, p2, p3.
c) Change available to (2, 0, 0, 0) and p3's row of "still needs" to (6, 5, 2, 2). Now p1, p4 and p5 can finish, but with available then equal to (4, 6, 9, 8), neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request: the correct answer is NO. Processes p2 and p3 may deadlock.
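The safety check used above is easy to mechanize. Below is a short Java sketch (added for illustration, with the snapshot from this question hard-coded) that runs the Banker's safety algorithm and prints a safe finishing order if one exists.

import java.util.*;

public class BankersSafety {
    // Returns a safe order of process indices (1-based), or null if the state is unsafe.
    static List<Integer> safeOrder(int[] available, int[][] stillNeeds, int[][] allocation) {
        int n = stillNeeds.length, m = available.length;
        int[] work = available.clone();
        boolean[] finished = new boolean[n];
        List<Integer> order = new ArrayList<>();
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canRun = true;
                for (int r = 0; r < m; r++)
                    if (stillNeeds[p][r] > work[r]) { canRun = false; break; }
                if (canRun) {                       // pretend p runs to completion
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r];
                    finished[p] = true;
                    order.add(p + 1);               // report as p1..pn
                    progress = true;
                }
            }
        }
        return order.size() == n ? order : null;
    }

    public static void main(String[] args) {
        int[] available = {2, 1, 0, 0};
        int[][] allocation = {{0,0,1,2}, {2,0,0,0}, {0,0,3,4}, {2,3,5,4}, {0,3,3,2}};
        int[][] stillNeeds = {{0,0,0,0}, {0,7,5,0}, {6,6,2,2}, {2,0,0,2}, {0,3,2,0}};
        System.out.println("Safe order: " + safeOrder(available, stillNeeds, allocation));
        // Expected to print an order starting with p1, p4, p5 (e.g. [1, 4, 5, 2, 3]).
    }
}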

Question 5 (from exam 2003 moed b, q.1)

#define N 5
#define LEFT  ((i + N - 1) % N)
#define RIGHT ((i + 1) % N)
#define THINKING 0
#define HUNGRY 1
#define EATING 2

void philosopher(int i) {
    while (TRUE) {
        think();
        pick_sticks(i);
        eat();
        put_sticks(i);
    }
}

monitor diningphilosophers
    condition self[N];
    integer state[N];

Correction: each philosopher must take exactly 3 portions.

procedure pick_sticks(i);
begin
    state[i] := HUNGRY;
    test(i);
    if state[i] != EATING then wait(self[i]);
end;

procedure put_sticks(i);
begin
    state[i] := THINKING;
    test(left);
    test(right);
end;

non-entry-procedure test(i);
begin
    if (state[left] != EATING and state[right] != EATING and state[i] = HUNGRY) then
    begin
        state[i] := EATING;
        signal(self[i]);
    end;
end;

for i := 0 to 4 do
    state[i] := THINKING;
end monitor

Answer 5

a. We add two condition variables, full and empty, and an integer inplate = 0, and we add two procedures to the monitor:

procedure fillplate
begin
    if (inplate < 5) {
        inplate++
        empty.signal()
    } else
        full.wait()
end

procedure takefromplate
begin
    if (inplate == 0) {
        // full.signal()
        empty.wait()
    }
    inplate--
    full.signal()
end

We update the philosopher's function:

void philosopher(int i) {
    while (true) {
        think();
        pick_sticks(i);
        for (j = 1 to 3) {
            takefromplate()
            eat()
        }
        put_sticks(i);
    }
}

and add the waiter's function:

void waiter() {
    while (true) {
        fillplate()
    }
}

b. We add another condition variable, still_eating, and an integer finished_eating = 0. Since we want the waiter to fill the plate only when a philosopher finishes eating, we have to assume that the plate is full at the beginning, i.e. inplate = 5 (we no longer need the full condition variable). We update the above procedures as follows:

procedure fillplate
begin
    if (finished_eating == 0)
        still_eating.wait()
    inplate += 3
    finished_eating--
    empty.signal()
end

procedure takefromplate
begin
    if (inplate == 0) {
        empty.wait()
    }
    inplate--
    // the only change is that we don't use full anymore
end

and we add to the put_sticks function:

    finished_eating++
    still_eating.signal()

as the last commands of that function. (We could instead create another monitor procedure, finished_eating(), which does this, and have the philosopher call it after the eat() function.)

Deadlock can occur if none of the philosophers can finish its loop of takefromplate.

c. For 5 philosophers this cannot happen: as shown in class, at any moment at most 2 philosophers eat. Each asks for 3 portions and there are 5, so at least one of the eaters will get its 3 portions and will wake up the waiter.

d. For 6 philosophers, there may be 3 philosophers eating at the same time; each of them can be left waiting (after taking, say, 2-2-1 portions) and none of them wakes up the waiter.

Question 6

Here is the solution for the Tunnel problem with semaphores:

int count[2];
semaphore mutex = 1, busy = 1;
semaphore waiting[2] = {1, 1};

void arrive(int direction) {
    down(&waiting[direction]);
    down(&mutex);
    count[direction] += 1;
    if (count[direction] == 1) {
        up(&mutex);              // line 1
        down(&busy);             // line 2
    } else
        up(&mutex);
    up(&waiting[direction]);
}

void leave(int direction) {
    down(&mutex);
    count[direction] -= 1;
    if (count[direction] == 0)
        up(&busy);
    up(&mutex);
}

Assume that the system runs several processes. Each one represents a car which acts as follows:

arrive(direction)
Travel
leave(direction)

(the value of direction can be 0 or 1)

a. In the given solution, can two cars travel at the same time in the same direction? In opposite directions? In case they can travel at the same time in the same direction, can they bypass one another? Can there be starvation?
b. (From last year's exam, moed a, q.1,b)

Answer 6

a. The solution guarantees that several cars can move at the same time in one direction only. The cars can bypass one another, since there is no limitation on the speed of the cars, and the order of arrival to the tunnel isn't kept. Starvation can occur: assume there is traffic in direction 0; all the cars that try to access the tunnel from the other direction will be blocked on the semaphore busy. The only way to release them is to have no cars in the tunnel with direction 0. Therefore, if there is an unstoppable stream of cars in direction 0, count[0] will always be bigger than zero and the cars in direction 1 will wait endlessly.

b. For both sides, the function arrive will be:

void arrive(int direction) {
    down(&waiting[direction]);
    down(&mutex3);
    down(&mutex);
    count[direction]++;
    if (count[direction] == 1) {
        up(&mutex);
        down(&busy);
    } else
        up(&mutex);
    up(&mutex3);
    up(&waiting[direction]);
}

The leave function remains the same.

Tirgul 9 Scheduling

Question 2 (From 2000 quiz, q.2)

Note: FCFS is by definition non-preemptive, so scheduling decisions are taken only when a process finishes or goes out to I/O. In addition, note that the I/O is performed on the same disk, so two processes cannot perform I/O in parallel; this happens at the ninth second.


Synchronization

Question 3 (From 2003 exam moed a, q.1)


Answer 3

a. A correct solution is one in which there is only one W (writer) in the system and no R (reader), or there are several R's and no W.

b. The solution is correct. Proof: there is only one W inside, because if a W is inside then all the other W's are blocked on the semaphore db. If a W is inside, it has necessarily taken rdb down, so every R that arrives after it gets stuck on rdb; in addition, every R that arrived before it is necessarily in the CS or past it, since the W enters the CS only after it acquires the db key, i.e. only after all the readers that arrived before it have finished, and all the readers that arrive after it cannot enter because they are stuck on rdb. If an R is inside, then there is no W inside, because the R took db down; several R's can enter, because they bypass db and enter.
Yes, starvation of R is possible if there is an unending stream of W's. Scenario: the first W grabs rdb; at that moment all the R's stop on rdb; from now on all the W's bypass rdb and go straight to the queue of db, so the R's starve forever.

c.
1. Both possibilities exist: when an R leaves and it is the last one inside and there is a W waiting on db, it is true. When an R leaves and it is not the last one, it is not true, because it will not perform up on db.
2. Both possibilities exist: when a W leaves and it is not the last one, a W will enter, since all the R's necessarily wait on rdb (wc >= 2), so rdb will not be released while db will be released, and therefore a W enters and not an R. When a W leaves and it is the last one, both rdb and db are released, so an R can enter.
3. True. When there is a W inside, all the R's are stuck on rdb; if there is another W then wc >= 2, so rdb will not be released while db will be released, and therefore a W enters and not an R.

d. The solution remains correct. In fact, it gives priority to a new W over R's that are already in the system. However, there is also a scenario in which the system gets stuck:
W1 enters, does not touch rdb.
W2 enters, takes rdb down.
W1 leaves, wc = 1, does not touch rdb.
W3 enters, wc = 2, stops on rdb.
W2 does not touch rdb, gets stuck on mutex1.
W3 is stuck, so every R after it gets stuck as well.

e. The solution is now wrong, because an R can enter even if there is a W inside, since it did not take the db semaphore down.

Tirgul 10 Memory Management

Question 1

Consider a paged memory system with a two-level page table. If a memory reference takes 20 nanoseconds (ns), how long does a paged memory reference take? Assume that the second-level page table is always in memory, and:
a) There is no TLB, and the needed page is in main memory.
b) There is a TLB whose access takes 0.05 ns, the needed page is in main memory, and
   i) the TLB does not contain information about this page;
   ii) the TLB contains information about this page.

Answer 1

a) We need 2 accesses to memory: one to the second-level page table, in order to get the physical address of the needed page, and one to the page itself after getting its physical address. So it will take us: 2 x 20 ns = 40 ns.
b) i) First we check the TLB; since the TLB does not contain information about this page, we still need the 2 accesses as in a), so it will take us: 0.05 + 2 x 20 = 40.05 ns.
   ii) First we check the TLB; since the TLB contains information about this page, we can turn directly to the needed page, so it will take us: 0.05 + 20 = 20.05 ns.

Question 2

Consider a paged virtual address space composed of 32 pages of 2 KB each, which is mapped into a 1 MB physical memory space.
a) What is the format of the logical address, i.e., which bits are the offset bits and which are the page-number bits? Explain.
b) What are the length and width of the page table? Explain.

Answer 2

a) VA: each page's size is 2 KB = 2^11 bytes ==> we need 11 bits in order to represent the offset and be able to access each byte in the page. We have 32 pages ==> we need 5 bits to represent all the page numbers and be able to access each page. So the format of the logical (16-bit) address is:

5 bits - page number | 11 bits - offset

b) Page table length = 32 rows, since each table can have at most 32 pages.
Page table width: PA: 1 MB of addresses = 2^20 bytes ==> we need 20 bits to represent all the addresses of the physical memory ==> since 11 bits are needed for the offset, as we saw, the other 9 bits represent the frame number. So we need 9 bits for the PA, plus possible protection bits: 1 modified bit, 1 invalid bit.
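As a quick illustration (not part of the original answer), the split of such a 16-bit logical address can be done with a shift and a mask; the constants below follow the 2 KB page size from the question, and the example address is arbitrary.

public class AddressSplit {
    static final int OFFSET_BITS = 11;                        // 2 KB page = 2^11 bytes
    static final int OFFSET_MASK = (1 << OFFSET_BITS) - 1;    // 0x7FF

    public static void main(String[] args) {
        int logicalAddress = 0x5ABC;                          // an arbitrary 16-bit example address
        int pageNumber = logicalAddress >> OFFSET_BITS;       // top 5 bits
        int offset = logicalAddress & OFFSET_MASK;            // low 11 bits
        System.out.printf("page = %d, offset = %d%n", pageNumber, offset);
        // 0x5ABC = 0101 1010 1011 1100b -> page 01011b = 11, offset 010 1011 1100b = 700
    }
}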

Question 3

A computer has an address space of 32 bits and a page size of 2 KB.
A. What is the maximal size of the page table, assuming each entry is 4 bytes? What is the maximal size of a program? Does it depend on the page size?
B. Assume that the page table has two levels and that we use 8 bits for the first level. What is the maximal size of a second-level table? Can you now run programs that demand more memory than in A?

Answer 3

A. We have a total of 2^32 addresses, so the virtual memory's size is 2^32 bytes. The size of each page is 2^11 bytes, so we have a total of 2^32 / 2^11 = 2^21 pages. We need an entry for each page and the size of each entry is 4 bytes, so we get 4 x 2^21 bytes = 2^23 bytes = 8 MB.
The maximal program size is 4 GB (the virtual memory's size), independent of the page size.
B. Since we use 8 bits for the first-level table, each second-level table covers 32 - 8 - 11 = 13 bits, i.e. it has 2^13 entries, so the maximal size of a second-level table is 4 x 2^13 bytes = 2^15 bytes = 32 KB.
The size of the virtual memory stays the same, so we can't run bigger programs.

Question 4

Program A runs the following code:

int i, j, a[100][100];
for (i = 0; i < 100; i++) {
    for (j = 0; j < 100; j++) {
        a[i][j] = 0;
    }
}

Program B runs the following code:

int i, j, a[100][100];
for (j = 0; j < 100; j++) {
    for (i = 0; i < 100; i++) {
        a[i][j] = 0;
    }
}

Assume that the array a is stored consecutively in memory in the order a[0][0], a[0][1], ... The virtual memory has a page size of 200 words. The program code occupies addresses 0-199 of the virtual memory (page 0), and a[0][0] is at virtual address 200. We run both programs on a machine with a physical memory of 3 page frames, where the code of the program is in the first frame and the other two are empty. If the page replacement algorithm is LRU, how many page faults will there be in each of the programs? Explain.

Answer 4

Array a is stored as a[0][0], a[0][1], ... in virtual pages 1-50 (100 x 100 words = 10,000 words = 50 pages of 200 words).
The reference string of program A will be: 0,1,0,1,...,0,2,0,2,...,0,50,...,0,50 - the code page 0 interleaved with each data page, and each data page is touched 200 times in a row. With LRU, page 0 stays in memory all the time and each data page faults only on its first access, so we get a total of 50 page faults.
The reference string of program B will be: 0,1,0,2,...,0,50,0,1,0,2,...,0,50,... - the data pages are visited in a cycle. Here too, due to LRU, page 0 stays in memory all the time, but each column of the array touches 50 data pages, and by the time a data page is needed again (in the next column) it has already been evicted. So there are 50 faults per column and 100 columns, for a total of 5000 page faults.

Question 5

Consider the following page reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
How many page faults would occur for each of the following algorithms, assuming that the memory size is 3 frames? Remember that all frames are initially empty, so your first unique pages will all cost one fault each.
LRU
FIFO
Optimal

Answer 5

Algorithm FIFO: 15 page faults
Algorithm Optimal: 9 page faults
Algorithm LRU: 12 page faults
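These counts are easy to double-check with a small simulator. The following Java sketch (added here as an illustration) counts page faults for FIFO and LRU with 3 frames on the reference string above.

import java.util.*;

public class PageFaultCount {
    static int fifoFaults(int[] refs, int frames) {
        Deque<Integer> queue = new ArrayDeque<>();     // eviction order: head is the oldest page
        Set<Integer> inMemory = new HashSet<>();
        int faults = 0;
        for (int page : refs) {
            if (inMemory.contains(page)) continue;     // hit
            faults++;
            if (inMemory.size() == frames) inMemory.remove(queue.pollFirst());
            queue.addLast(page);
            inMemory.add(page);
        }
        return faults;
    }

    static int lruFaults(int[] refs, int frames) {
        // LinkedHashSet keeps insertion order; we re-insert on every access so the
        // first element is always the least recently used page.
        LinkedHashSet<Integer> memory = new LinkedHashSet<>();
        int faults = 0;
        for (int page : refs) {
            if (memory.remove(page)) {                 // hit: refresh its position
                memory.add(page);
                continue;
            }
            faults++;
            if (memory.size() == frames) {
                Integer lru = memory.iterator().next();
                memory.remove(lru);
            }
            memory.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        System.out.println("FIFO: " + fifoFaults(refs, 3));  // 15
        System.out.println("LRU:  " + lruFaults(refs, 3));   // 12
    }
}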

Tirgul 11 Memory Management (Cont.)

Question 1 (from exam 2001 moed a, q.3)

b. For the FIFO algorithm and a physical memory of 4 frames, compute the distance string and, using it, the number of page faults.

Answer 1


Question 2 (from exam 2005 moed a, q.3)

Answer 2

The run of the FIFO-second-chance algorithm is described as follows: (A,B,C) means pages A, B and C are in memory now. The first to be taken out of the queue is always the first in the tuple (A) and the last to be taken out of the queue is the last in the tuple (C). In the original diagram a page whose reference bit is on is marked in bold; e.g. (7,12,4) means pages 7, 12 and 4 are in memory, the first to be taken from the queue is 7, and the reference bit of 12 is the only one that is on.

A)
Ref String - Memory Image - Page Fault
P12 (p12,-,-,-,-) PF
P11 (p12,p11,-,-,-) PF
P11 (p12,p11,-,-,-)
P12 (p12,p11,-,-,-)
P11 (p12,p11,-,-,-)
P11 (p12,p11,-,-,-)
P13 (p12,p11,p13,-,-) PF
P12 (p12,p11,p13,-,-)
P11 (p12,p11,p13,-,-)
P12 (p12,p11,p13,-,-)
P21 (p12,p11,p13,p21,-) PF
P22 (p12,p11,p13,p21,p22) PF
P23 (p11,p13,p21,p22,p23) PF
P21 (p11,p13,p21,p22,p23)
P22 (p11,p13,p21,p22,p23)
P21 (p11,p13,p21,p22,p23)
P22 (p11,p13,p21,p22,p23)
P21 (p11,p13,p21,p22,p23)
P12 (p13,p21,p22,p23,p12) PF
P12 (p13,p21,p22,p23,p12)
P11 (p21,p22,p23,p12,p11) PF
P12 (p21,p22,p23,p12,p11)
P11 (p21,p22,p23,p12,p11)
P22 (p21,p22,p23,p12,p11)
P14 (p22,p23,p12,p11,p14) PF
P12 (p22,p23,p12,p11,p14)
P11 (p22,p23,p12,p11,p14)
P11 (p22,p23,p12,p11,p14)
P12 (p22,p23,p12,p11,p14)
P21 (p23,p12,p11,p14,p21) PF
P22 (p12,p11,p14,p21,p22) PF
P24 (p11,p14,p21,p22,p24) PF
P22 (p11,p14,p21,p22,p24)
P21 (p11,p14,p21,p22,p24)
P22 (p11,p14,p21,p22,p24)
P21 (p11,p14,p21,p22,p24)

We can see that both processes usually refer to pages 1 and 2 and only rarely to other pages (3 and 4). Since FIFO-second-chance is a global algorithm, we get for the above reference string that pages from the working set of a process are sometimes paged out. If you look at the string closely, you can see that whenever page 3 or 4 (of either process) is referenced, the last two pages used by each process are its pages 1 and 2. So with a window of size 2, whenever page 3 or 4 is requested, pages 1 and 2 of both processes will be in the working set and will not be paged out by WSClock.

B) If you look at the string closely, you can see that the working set of each process is at least of size 2. The total memory size is 5. The processes run in sequence: first process 1, then 2, then 3, then again 1, 2, 3, and so on. So when it is the turn of some process to run, two other processes have run before it, each needing at least 2 pages, so the memory contains at least 4 pages that do not belong to this process (from its last run). This is not enough for this process to run, and it needs to page its other pages back in. This causes the next process not to have its pages in memory, and so on. The total sum of the working sets is bigger than the size of the memory. This situation is called thrashing, and it is solved by the OS by swapping one of the processes out to the backing store (disk). The scheduler then has only two processes to schedule and runs only them, and they can run efficiently. The swapper (the scheduling algorithm responsible for swapping processes in and out of the disk) will have to rotate the swapped-out process in order to give every process a chance to run.

Notes: you cannot write "LRU" as a solution, since the OS cannot implement it exactly. Writing LRU instead of "an approximation of LRU" is a mistake; LRU is not the answer for either question. The idea in question 1 is to keep the working sets of both processes in memory, and LRU will not do that. A solution for the second question in which the OS "custom builds" a page replacement algorithm for this reference string in advance is also wrong: the OS cannot know which pages will come next, so writing "it will keep pages 1, 2, 3 locked in memory" is very problematic. How did the OS know to keep these pages? What you would be doing is looking AHEAD at what the reference string is going to be and then custom-building the algorithm. Now imagine p1 was actually called p5 and p2 was actually called p1: your algorithm says to keep the pages of p1 in memory. This is not only unfair, it is also a poorly designed system. A solution like "let's kill multiprocessing and run only one process at a time" is similarly bad.

Question 3

Consider the following virtual page reference string: 0, 1, 2, 3, 0, 0, 1, 2, 3. Which page references will cause a page fault when a basic clock replacement algorithm is used? Assume that there are 3 page frames and that the memory is initially empty. Show all page faults (including frame loading).

Answer

Reference:   0     1     2     3     0     0     1     2     3
Frame 0:    *0     0     0    *3     3     3     3    *2     2
Frame 1:     -    *1     1     1    *0     0     0     0    *3
Frame 2:     -     -    *2     2     2    *2    *1     1     1
            p.f.  p.f.  p.f.  p.f.  p.f.        p.f.  p.f.  p.f.

* marks the placement of the clock's hand before the request; the page numbers shown are after the request has been fulfilled. Only the second reference to page 0 is a hit, so we will get a total of 8 page faults.
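A short Java sketch of the basic clock (second-chance) algorithm, added here for illustration; running it on the string above should report 8 faults.

public class ClockSim {
    public static void main(String[] args) {
        int[] refs = {0, 1, 2, 3, 0, 0, 1, 2, 3};
        int frames = 3;
        int[] page = new int[frames];
        boolean[] refBit = new boolean[frames];
        boolean[] used = new boolean[frames];
        int hand = 0, faults = 0;

        for (int r : refs) {
            boolean hit = false;
            for (int f = 0; f < frames; f++) {
                if (used[f] && page[f] == r) {      // page already resident
                    refBit[f] = true;
                    hit = true;
                    break;
                }
            }
            if (hit) continue;
            faults++;
            while (used[hand] && refBit[hand]) {    // second chance: clear the bit and advance
                refBit[hand] = false;
                hand = (hand + 1) % frames;
            }
            page[hand] = r;                         // load (or replace) at the hand position
            used[hand] = true;
            refBit[hand] = true;
            hand = (hand + 1) % frames;
        }
        System.out.println("Page faults: " + faults);   // 8 for this reference string
    }
}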

Question 4 (from exam 2001 moed c)

Fill in the attached table.

Answer 4

The page numbers shown are after the request has been fulfilled. The marker indicates the position of the clock hand before the page request for the first 8 references, and the position of the hand after the page request for the last 4 references.

Tirgul 12 File System

I-Nodes

Question 1

What is the number of disk accesses when a user executes the command more /usr/tmp/a.txt?
Assumptions:
- The size of 'a.txt' is 1 block.
- The i-node of the root directory is not in memory.
- The entries 'usr', 'tmp' and 'a.txt' are all located in the first block of their directories.

Answer 1

Accessing each directory requires at least 2 disk accesses: reading the i-node and reading the first block. In our case the entry we are looking for is always in the first block, so we need exactly 2 disk accesses per directory. Since the root directory's i-node is located on the disk, we need 6 disk accesses (3 directories: /, usr and tmp) until we reach a.txt's i-node index. Since "more" displays the file's content, for a.txt we need its i-node plus all the blocks of the file (1 block, according to the assumption).
Total disk accesses: 6 + 1 + 1 = 8.

Question 2

The Ofer2000 operating system, based on UNIX, provides a system call rename(char *old, char *new) that changes a file's name from 'old' to 'new'. What is the difference between using this call and copying 'old' to a new file 'new' followed by deleting 'old'? Answer in terms of disk access and allocation.

Answer 2

rename - simply changes the file's name in the entry of its directory.
copy - allocates a new i-node and new blocks for the new file, and copies the contents of the old file's blocks to the new ones.
delete - releases the i-node and the blocks of the old file.
copy + delete is a much more complicated operation for the operating system; note that you would not even be able to execute it if you did not have enough free blocks or i-nodes left on your disk.

Question 3

Write an implementation (pseudo code) of the system call delete(i-node node) that deletes the file related to node.
Assumptions:
- node is related to a file, and delete is not recursive.
- The i-node has 10 direct block entries, 1 single indirect entry and 1 double indirect entry.
You may use the system calls: read_block(block b), which reads block b from the disk, free_block(block b) and free_i-node(i-node node).

Answer 3

delete(i-node node) {
    for each block b in node.direct do
        free_block(b);

    single <-- read_block(node.single_indirect)
    for each entry e in single do
        free_block(e);
    free_block(single);

    double <-- read_block(node.double_indirect)
    for each entry e in double do {
        single <-- read_block(e)
        for each entry ee in single do
            free_block(ee);
        free_block(single);
    }
    free_block(double);

    free_i-node(node);
}


More information

Fall 2015 COMP Operating Systems. Lab 06

Fall 2015 COMP Operating Systems. Lab 06 Fall 2015 COMP 3511 Operating Systems Lab 06 Outline Monitor Deadlocks Logical vs. Physical Address Space Segmentation Example of segmentation scheme Paging Example of paging scheme Paging-Segmentation

More information

Process Synchronization

Process Synchronization Process Synchronization Part II, Modified by M.Rebaudengo - 2013 Silberschatz, Galvin and Gagne 2009 Classical Problems of Synchronization Consumer/Producer with Bounded-Buffer Problem s and s Problem

More information

! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA

! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics! Why is synchronization needed?! Synchronization Language/Definitions:» What are race

More information

FCM 710: Architecture of Secure Operating Systems

FCM 710: Architecture of Secure Operating Systems FCM 710: Architecture of Secure Operating Systems Practice Exam, Spring 2010 Email your answer to ssengupta@jjay.cuny.edu March 16, 2010 Instructor: Shamik Sengupta Multiple-Choice 1. operating systems

More information

Roadmap. Tevfik Ko!ar. CSC Operating Systems Fall Lecture - XI Deadlocks - II. Louisiana State University

Roadmap. Tevfik Ko!ar. CSC Operating Systems Fall Lecture - XI Deadlocks - II. Louisiana State University CSC 4103 - Operating Systems Fall 2009 Lecture - XI Deadlocks - II Tevfik Ko!ar Louisiana State University September 29 th, 2009 1 Roadmap Classic Problems of Synchronization Bounded Buffer Readers-Writers

More information

Chapter 6: Synchronization. Chapter 6: Synchronization. 6.1 Background. Part Three - Process Coordination. Consumer. Producer. 6.

Chapter 6: Synchronization. Chapter 6: Synchronization. 6.1 Background. Part Three - Process Coordination. Consumer. Producer. 6. Part Three - Process Coordination Chapter 6: Synchronization 6.1 Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure

More information

Roadmap. Bounded-Buffer Problem. Classical Problems of Synchronization. Bounded Buffer 1 Semaphore Soln. Bounded Buffer 1 Semaphore Soln. Tevfik Ko!

Roadmap. Bounded-Buffer Problem. Classical Problems of Synchronization. Bounded Buffer 1 Semaphore Soln. Bounded Buffer 1 Semaphore Soln. Tevfik Ko! CSC 4103 - Operating Systems Fall 2009 Lecture - XI Deadlocks - II Roadmap Classic Problems of Synchronization Bounded Buffer Readers-Writers Dining Philosophers Sleeping Barber Deadlock Prevention Tevfik

More information

Operating Systems. User OS. Kernel & Device Drivers. Interface Programs. Interprocess Communication (IPC)

Operating Systems. User OS. Kernel & Device Drivers. Interface Programs. Interprocess Communication (IPC) Operating Systems User OS Kernel & Device Drivers Interface Programs Interprocess Communication (IPC) Brian Mitchell (bmitchel@mcs.drexel.edu) - Operating Systems 1 Interprocess Communication Shared Memory

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1018 L11 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel feedback queue:

More information

Operating Systems Comprehensive Exam. Spring Student ID # 2/17/2011

Operating Systems Comprehensive Exam. Spring Student ID # 2/17/2011 Operating Systems Comprehensive Exam Spring 2011 Student ID # 2/17/2011 You must complete all of Section I You must complete two of the problems in Section II If you need more space to answer a question,

More information

Module 6: Process Synchronization

Module 6: Process Synchronization Module 6: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris

More information

Synchronization: semaphores and some more stuff. Operating Systems, Spring 2018, I. Dinur, D. Hendler and R. Iakobashvili

Synchronization: semaphores and some more stuff. Operating Systems, Spring 2018, I. Dinur, D. Hendler and R. Iakobashvili Synchronization: semaphores and some more stuff 1 What's wrong with busy waiting? The mutual exclusion algorithms we saw used busy-waiting. What s wrong with that? Doesn't make sense for uni-processor

More information

( D ) 4. Which is not able to solve the race condition? (A) Test and Set Lock (B) Semaphore (C) Monitor (D) Shared memory

( D ) 4. Which is not able to solve the race condition? (A) Test and Set Lock (B) Semaphore (C) Monitor (D) Shared memory CS 540 - Operating Systems - Final Exam - Name: Date: Wenesday, May 12, 2004 Part 1: (78 points - 3 points for each problem) ( C ) 1. In UNIX a utility which reads commands from a terminal is called: (A)

More information

Process Management And Synchronization

Process Management And Synchronization Process Management And Synchronization In a single processor multiprogramming system the processor switches between the various jobs until to finish the execution of all jobs. These jobs will share the

More information

Chapter 6: Process [& Thread] Synchronization. CSCI [4 6] 730 Operating Systems. Why does cooperation require synchronization?

Chapter 6: Process [& Thread] Synchronization. CSCI [4 6] 730 Operating Systems. Why does cooperation require synchronization? Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics Why is synchronization needed? Synchronization Language/Definitions:» What are race conditions?»

More information

CS370 Operating Systems Midterm Review. Yashwant K Malaiya Spring 2019

CS370 Operating Systems Midterm Review. Yashwant K Malaiya Spring 2019 CS370 Operating Systems Midterm Review Yashwant K Malaiya Spring 2019 1 1 Computer System Structures Computer System Operation Stack for calling functions (subroutines) I/O Structure: polling, interrupts,

More information

Department of Computer applications. [Part I: Medium Answer Type Questions]

Department of Computer applications. [Part I: Medium Answer Type Questions] Department of Computer applications BBDNITM, Lucknow MCA 311: OPERATING SYSTEM [Part I: Medium Answer Type Questions] UNIT 1 Q1. What do you mean by an Operating System? What are the main functions of

More information

CS162, Spring 2004 Discussion #6 Amir Kamil UC Berkeley 2/26/04

CS162, Spring 2004 Discussion #6 Amir Kamil UC Berkeley 2/26/04 CS162, Spring 2004 Discussion #6 Amir Kamil UC Berkeley 2/26/04 Topics: Deadlock, Scheduling 1 Announcements Office hours today 3:30-4:30 in 611 Soda (6th floor alcove) Project 1 code due Thursday, March

More information

Last Class: Synchronization Problems!

Last Class: Synchronization Problems! Last Class: Synchronization Problems! Reader Writer Multiple readers, single writer In practice, use read-write locks Dining Philosophers Need to hold multiple resources to perform task Lecture 11, page

More information

QUESTION BANK. UNIT II: PROCESS SCHEDULING AND SYNCHRONIZATION PART A (2 Marks)

QUESTION BANK. UNIT II: PROCESS SCHEDULING AND SYNCHRONIZATION PART A (2 Marks) QUESTION BANK DEPARTMENT: EEE SEMESTER VII SUBJECT CODE: CS2411 SUBJECT NAME: OS UNIT II: PROCESS SCHEDULING AND SYNCHRONIZATION PART A (2 Marks) 1. What is deadlock? (AUC NOV2010) A deadlock is a situation

More information

Lecture 9: Midterm Review

Lecture 9: Midterm Review Project 1 Due at Midnight Lecture 9: Midterm Review CSE 120: Principles of Operating Systems Alex C. Snoeren Midterm Everything we ve covered is fair game Readings, lectures, homework, and Nachos Yes,

More information

COMP 3430 Robert Guderian

COMP 3430 Robert Guderian Operating Systems COMP 3430 Robert Guderian file:///users/robg/dropbox/teaching/3430-2018/slides/06_concurrency/index.html?print-pdf#/ 1/76 1 Concurrency file:///users/robg/dropbox/teaching/3430-2018/slides/06_concurrency/index.html?print-pdf#/

More information

Midterm Exam. October 20th, Thursday NSC

Midterm Exam. October 20th, Thursday NSC CSE 421/521 - Operating Systems Fall 2011 Lecture - XIV Midterm Review Tevfik Koşar University at Buffalo October 18 th, 2011 1 Midterm Exam October 20th, Thursday 9:30am-10:50am @215 NSC Chapters included

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

Homework Assignment #5

Homework Assignment #5 Homework Assignment #5 Question 1: Scheduling a) Which of the following scheduling algorithms could result in starvation? For those algorithms that could result in starvation, describe a situation in which

More information

5 Classical IPC Problems

5 Classical IPC Problems OPERATING SYSTEMS CLASSICAL IPC PROBLEMS 2 5 Classical IPC Problems The operating systems literature is full of interesting problems that have been widely discussed and analyzed using a variety of synchronization

More information

Midterm Exam October 15, 2012 CS162 Operating Systems

Midterm Exam October 15, 2012 CS162 Operating Systems CS 62 Fall 202 Midterm Exam October 5, 202 University of California, Berkeley College of Engineering Computer Science Division EECS Fall 202 Ion Stoica Midterm Exam October 5, 202 CS62 Operating Systems

More information

Background. Module 6: Process Synchronization. Bounded-Buffer (Cont.) Bounded-Buffer. Background

Background. Module 6: Process Synchronization. Bounded-Buffer (Cont.) Bounded-Buffer. Background Module 6: Process Synchronization Background Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization

More information

CMPS 111 Spring 2003 Midterm Exam May 8, Name: ID:

CMPS 111 Spring 2003 Midterm Exam May 8, Name: ID: CMPS 111 Spring 2003 Midterm Exam May 8, 2003 Name: ID: This is a closed note, closed book exam. There are 20 multiple choice questions and 5 short answer questions. Plan your time accordingly. Part I:

More information

Process Coordination

Process Coordination Process Coordination Why is it needed? Processes may need to share data More than one process reading/writing the same data (a shared file, a database record, ) Output of one process being used by another

More information

CSE 153 Design of Operating Systems

CSE 153 Design of Operating Systems CSE 153 Design of Operating Systems Winter 2018 Midterm Review Midterm in class on Monday Covers material through scheduling and deadlock Based upon lecture material and modules of the book indicated on

More information

מצליחה. 1. int fork-bomb() 2. { 3. fork(); 4. fork() && fork() fork(); 5. fork(); printf("bla\n"); 8. return 0; 9. }

מצליחה. 1. int fork-bomb() 2. { 3. fork(); 4. fork() && fork() fork(); 5. fork(); printf(bla\n); 8. return 0; 9. } שאלה : (4 נקודות) א. ב. ג. (5 נקודות) הגדירו את המונח race-condition במדוייק לא להשמיט פרטים. ספקו דוגמא. (5 נקודות) מהו? Monitor נא לספק הגדרה מלאה. ( נקודות) ( נקודות) ציינו כמה תהליכים יווצרו בקוד הבא

More information

CS630 Operating System Design, Second Exam, Fall 2014

CS630 Operating System Design, Second Exam, Fall 2014 CS630 Operating System Design, Second Exam, Fall 2014 Problem 1. (25 Points) Assume that a process executes the following pseudo codes: #5 #6 #7 main (int argc, char *argv[ ]) { int i, keyin; /* the last

More information

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s)

CPU Scheduling. The scheduling problem: When do we make decision? - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) 1/32 CPU Scheduling The scheduling problem: - Have K jobs ready to run - Have N 1 CPUs - Which jobs to assign to which CPU(s) When do we make decision? 2/32 CPU Scheduling Scheduling decisions may take

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1019 L12 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Critical section: shared

More information

The Deadlock Lecture

The Deadlock Lecture Concurrent systems Lecture 4: Deadlock, Livelock, and Priority Inversion DrRobert N. M. Watson The Deadlock Lecture 1 Reminder from last time Multi-Reader Single-Writer (MRSW) locks Alternatives to semaphores/locks:

More information

Outline. Monitors. Barrier synchronization The sleeping barber problem Readers and Writers One-way tunnel. o Monitors in Java

Outline. Monitors. Barrier synchronization The sleeping barber problem Readers and Writers One-way tunnel. o Monitors in Java Outline Monitors o Monitors in Java Barrier synchronization The sleeping barber problem Readers and Writers One-way tunnel 1 Monitors - higher-level synchronization (Hoare, Hansen, 1974-5) Semaphores and

More information

Roadmap. Tevfik Koşar. CSE 421/521 - Operating Systems Fall Lecture - X Deadlocks - I. University at Buffalo. Synchronization structures

Roadmap. Tevfik Koşar. CSE 421/521 - Operating Systems Fall Lecture - X Deadlocks - I. University at Buffalo. Synchronization structures CSE 421/521 - Operating Systems Fall 2012 Lecture - X Deadlocks - I Tevfik Koşar University at Buffalo October 2nd, 2012 1 Roadmap Synchronization structures Problems with Semaphores Monitors Condition

More information

Roadmap. Problems with Semaphores. Semaphores. Monitors. Monitor - Example. Tevfik Koşar. CSE 421/521 - Operating Systems Fall 2012

Roadmap. Problems with Semaphores. Semaphores. Monitors. Monitor - Example. Tevfik Koşar. CSE 421/521 - Operating Systems Fall 2012 CSE 421/521 - Operating Systems Fall 2012 Lecture - X Deadlocks - I Tevfik Koşar Synchronization structures Problems with Semaphores Monitors Condition Variables Roadmap The Deadlock Problem Characterization

More information

Concurrency and Synchronisation. Leonid Ryzhyk

Concurrency and Synchronisation. Leonid Ryzhyk Concurrency and Synchronisation Leonid Ryzhyk Textbook Sections 2.3 & 2.5 2 Concurrency in operating systems Inter-process communication web server SQL request DB Intra-process communication worker thread

More information

The Deadlock Problem (1)

The Deadlock Problem (1) Deadlocks The Deadlock Problem (1) A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in the set. Example System has 2 disk drives. P 1 and P 2

More information

(MCQZ-CS604 Operating Systems)

(MCQZ-CS604 Operating Systems) command to resume the execution of a suspended job in the foreground fg (Page 68) bg jobs kill commands in Linux is used to copy file is cp (Page 30) mv mkdir The process id returned to the child process

More information

Introduction to OS Synchronization MOS 2.3

Introduction to OS Synchronization MOS 2.3 Introduction to OS Synchronization MOS 2.3 Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Introduction to OS 1 Challenge How can we help processes synchronize with each other? E.g., how

More information

Lecture 2 Process Management

Lecture 2 Process Management Lecture 2 Process Management Process Concept An operating system executes a variety of programs: Batch system jobs Time-shared systems user programs or tasks The terms job and process may be interchangeable

More information

Deadlock and Monitors. CS439: Principles of Computer Systems February 7, 2018

Deadlock and Monitors. CS439: Principles of Computer Systems February 7, 2018 Deadlock and Monitors CS439: Principles of Computer Systems February 7, 2018 Last Time Terminology Safety and liveness Atomic Instructions, Synchronization, Mutual Exclusion, Critical Sections Synchronization

More information

CS 153 Design of Operating Systems Winter 2016

CS 153 Design of Operating Systems Winter 2016 CS 153 Design of Operating Systems Winter 2016 Lecture 12: Scheduling & Deadlock Priority Scheduling Priority Scheduling Choose next job based on priority» Airline checkin for first class passengers Can

More information

Concurrency: Deadlock and Starvation. Chapter 6

Concurrency: Deadlock and Starvation. Chapter 6 Concurrency: Deadlock and Starvation Chapter 6 Deadlock Permanent blocking of a set of processes that either compete for system resources or communicate with each other Involve conflicting needs for resources

More information

Operating Systems Structure

Operating Systems Structure Operating Systems Structure Monolithic systems basic structure: A main program that invokes the requested service procedure. A set of service procedures that carry out the system calls. A set of utility

More information

Roadmap. Readers-Writers Problem. Readers-Writers Problem. Readers-Writers Problem (Cont.) Dining Philosophers Problem.

Roadmap. Readers-Writers Problem. Readers-Writers Problem. Readers-Writers Problem (Cont.) Dining Philosophers Problem. CSE 421/521 - Operating Systems Fall 2011 Lecture - X Process Synchronization & Deadlocks Roadmap Classic Problems of Synchronization Readers and Writers Problem Dining-Philosophers Problem Sleeping Barber

More information

DEADLOCKS M O D E R N O P E R A T I N G S Y S T E M S C H A P T E R 6 S P R I N G

DEADLOCKS M O D E R N O P E R A T I N G S Y S T E M S C H A P T E R 6 S P R I N G DEADLOCKS M O D E R N O P E R A T I N G S Y S T E M S C H A P T E R 6 S P R I N G 2 0 1 8 NON-RESOURCE DEADLOCKS Possible for two processes to deadlock each is waiting for the other to do some task Can

More information

CHAPTER NO - 1 : Introduction:

CHAPTER NO - 1 : Introduction: Sr. No L.J. Institute of Engineering & Technology Semester: IV (26) Subject Name: Operating System Subject Code:21402 Faculties: Prof. Saurin Dave CHAPTER NO - 1 : Introduction: TOPIC:1 Basics of Operating

More information

Operating Systems. OPS Processes

Operating Systems. OPS Processes Handout Introduction Operating Systems OPS Processes These notes were originally written using (Tanenbaum, 1992) but later editions (Tanenbaum, 2001; 2008) contain the same information. Introduction to

More information

SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T A N D S P R I N G 2018

SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T A N D S P R I N G 2018 SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T 2. 3. 8 A N D 2. 3. 1 0 S P R I N G 2018 INTER-PROCESS COMMUNICATION 1. How a process pass information to another process

More information

Chapter 6: Process Synchronization

Chapter 6: Process Synchronization Module 6: Process Synchronization Chapter 6: Process Synchronization Background! The Critical-Section Problem! Peterson s Solution! Synchronization Hardware! Semaphores! Classic Problems of Synchronization!

More information

Midterm Exam Amy Murphy 19 March 2003

Midterm Exam Amy Murphy 19 March 2003 University of Rochester Midterm Exam Amy Murphy 19 March 2003 Computer Systems (CSC2/456) Read before beginning: Please write clearly. Illegible answers cannot be graded. Be sure to identify all of your

More information

Sections 01 (11:30), 02 (16:00), 03 (8:30) Ashraf Aboulnaga & Borzoo Bonakdarpour

Sections 01 (11:30), 02 (16:00), 03 (8:30) Ashraf Aboulnaga & Borzoo Bonakdarpour Course CS350 - Operating Systems Sections 01 (11:30), 02 (16:00), 03 (8:30) Instructor Ashraf Aboulnaga & Borzoo Bonakdarpour Date of Exam October 25, 2011 Time Period 19:00-21:00 Duration of Exam Number

More information

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01 1 UNIT 2 Basic Concepts of CPU Scheduling UNIT -02/Lecture 01 Process Concept An operating system executes a variety of programs: **Batch system jobs **Time-shared systems user programs or tasks **Textbook

More information

The Dining Philosophers Problem CMSC 330: Organization of Programming Languages

The Dining Philosophers Problem CMSC 330: Organization of Programming Languages The Dining Philosophers Problem CMSC 0: Organization of Programming Languages Threads Classic Concurrency Problems Philosophers either eat or think They must have two forks to eat Can only use forks on

More information

CPS 310 second midterm exam, 11/14/2014

CPS 310 second midterm exam, 11/14/2014 CPS 310 second midterm exam, 11/14/2014 Your name please: Part 1. Sticking points Consider the Java code snippet below. Is it a legal use of Java synchronization? What happens if two threads A and B call

More information