A3 Answer Key
Susanna Perkins
A3 Answer Key, Page 1

Page 104, Problem 2.4 (Tuesday, November 02)

A "system call" occurs when a process needs to invoke a subroutine that is located in the kernel rather than in user space. Under the principle of user/kernel separation, user subroutines can only access data within the user process. Thus system calls are required to manipulate data that is maintained centrally in the kernel, which includes all data that is shared among processes. Looking deeper, there are several things that must be located in kernel space:
a. Device state information, because otherwise there is a possibility of state skew when multiple processes access the same device.
b. Process and memory management information, such as the contents of all PCBs.
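As a concrete illustration (my own minimal POSIX sketch, not part of the original answer key): even reading the process ID requires crossing the user/kernel boundary, because the PID is kernel-maintained state recorded in the PCB.

```c
#include <unistd.h>      /* getpid(): thin library wrapper over a system call */
#include <sys/types.h>

/* The process ID lives in kernel space (in the process table / PCB),
   so user code can only obtain it by trapping into the kernel. */
pid_t current_pid(void) {
    return getpid();
}
```

The wrapper looks like an ordinary function call, but underneath it executes a trap instruction that switches the CPU to kernel mode.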
Page 150, Problem 3.4 (Tuesday, November 02). Also describe exactly why your strategy is "intermediate".

First, look at Figure 3.9b, and note that in the book's model, the ready/suspend state is a state in which processes are swapped out of memory. One very simple intermediate strategy is to move a high-priority ready/suspend process to the ready state at the moment that an existing lower-priority process in the ready state blocks, possibly moving the lower-priority process into the blocked/suspend state. This allows lower-priority processes to run until a reasonable stopping point before suspending them, but still has the property that low-priority processes "starve" as long as there is not enough space for them in the ready queue. There are many solutions with similar behavior, and I would accept this answer as correct. But what, pray tell, does "intermediate" mean? Intermediate between what and what? This takes a
little thought. At one extreme, processes in the ready queue stay there until there is a reason to leave. This minimizes swapping but starves higher-priority processes in the ready/suspend queue. At the other extreme, high-priority processes in the ready/suspend queue immediately bump lower-priority processes from the ready queue as needed. This maximizes high-priority throughput but starves low-priority processes. So another way of looking at "intermediate" is that an intermediate solution avoids starving either kind of process (high or low priority) while still keeping swapping down to reasonable levels. There are many, many ways of accomplishing this. For example, consider a scheme in which each process's priority determines how frequently it should be placed in the ready state. If each process keeps track of how much time it has spent in the ready state, and we try to make that time proportional to priority, then we choose a process to make ready by choosing the process with the minimum ratio of time/priority. This ensures that every process runs sometime, but that higher-priority processes stay in the runnable state more than lower-priority processes.
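That selection rule can be sketched in a few lines of C (an illustrative fragment of my own; the array names and the cross-multiplied comparison are assumptions, not from the answer key):

```c
#include <stddef.h>

/* Choose the next process to admit to the ready state: the one whose
   accumulated ready-time is smallest relative to its priority. */
int pick_next(const double ready_time[], const int priority[], size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        /* compare ready_time[i]/priority[i] < ready_time[best]/priority[best]
           by cross-multiplying, which avoids division by zero for idle processes */
        if (ready_time[i] * priority[best] < ready_time[best] * priority[i])
            best = i;
    return (int)best;
}
```

A process with high priority and little accumulated ready-time has the smallest ratio, so it is admitted first, yet every process's ratio eventually becomes the minimum, so none starves.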
Page 152, Problem 3.9 (Tuesday, November 02)

The key to this problem is to distinguish between:
a. What the processor does in response to an interrupt.
b. What the operating system does in response to an interrupt.

In general, there is no concept of a "process control block" for a processor; the register values of the CPU are only part of the PCB. If the register values are stored in dedicated memory locations upon receipt of an interrupt, then among these values is the program counter location to which to return after the interrupt. If this value is destroyed, the operating system cannot restore the state of the program. The operating system's job is to take this data (stored by the processor) and incorporate it into a reasonable PCB of its own design, so that it can then handle the interrupt. It can do this any way that it pleases, once the data stored in the dedicated locations
has been incorporated into a reasonable idea of process context. Now the key to the problem is that the operating system has no control over the frequency or pattern of interrupt arrivals. These are generated by external hardware over which the operating system has only indirect control. If, e.g., two interrupts of the same kind occur too quickly for the operating system to copy the data from the first into the PCB for the process, the data for the first interrupt is lost. It follows, therefore, that interrupts of the same type must be blocked during processing of one such interrupt, or a race condition occurs between the operating system and frequent interrupts that can result in partial or total loss of the context from before the interrupt. The proposed behavior will not cause serious problems if the hardware blocks interrupts of the same type while interrupt processing is in progress. This means that interrupts of the same kind cannot stack or occur simultaneously. Unless the hardware also queues these interrupts somehow, they are lost. So it becomes critical to design interrupt response software for quick response and return from interrupts.
This, however, is inconvenient in general from a systems engineering perspective. It means that interrupts that must occur concurrently must be different interrupts according to whatever definition of similarity is used. Since there is a finite and limited number of distinct interrupt types available, this makes it difficult to design software that waits for many external events concurrently. Also, as an ongoing problem, service routines must be very short to avoid potentially losing interrupts during interrupt service. This means that interrupt service must be segmented into a producer-consumer architecture with a fast producer (the actual interrupt service routine) and a slower consumer (the software that does what the ISR requests).
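A minimal sketch of that split (my own illustration, not from the answer key): the ISR does only a cheap enqueue into a ring buffer, and the slower consumer drains it later in task context. The names and the fixed buffer size are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 64   /* fixed-size ring; capacity is RING_SIZE - 1 bytes */

static volatile uint8_t ring[RING_SIZE];
static volatile unsigned head = 0, tail = 0;

/* Fast producer: the part the ISR itself would run. */
bool isr_push(uint8_t byte) {
    unsigned next = (head + 1) % RING_SIZE;
    if (next == tail)
        return false;      /* buffer full: the event is lost, so keep ISRs short */
    ring[head] = byte;
    head = next;
    return true;
}

/* Slower consumer: ordinary task code drains the ring later. */
bool task_pop(uint8_t *out) {
    if (tail == head)
        return false;      /* nothing pending */
    *out = ring[tail];
    tail = (tail + 1) % RING_SIZE;
    return true;
}
```

With one producer and one consumer, each side updates only its own index, so no lock is needed between the ISR and the task (on real hardware the indices would also need to be atomic with appropriate barriers).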
Page 199, Problem 4.6 (Tuesday, November 02)

Clearly, an ASCB "acts like" a process control block while a TCB "acts like" a thread control block. The overall effect is that the ASCB/TCB scheme acts like a user-level thread scheme, so the benefits are similar:
a. One groups all tasks with the same priority in a single ASCB; accordingly, scheduling among the TCBs of one ASCB is particularly simple: round-robin scheduling suffices and is optimal.
b. Switching contexts between tasks under one ASCB is lightweight compared to switching between ASCBs; it can be done without switching to kernel mode. Accordingly, scheduling within an ASCB is more efficient.
Page 249, Problem 5.2 (Tuesday, November 02)

Orderings include:

ABCDE ABDCE ABDEC ADBCE ADBEC ADEBC
DABCE DABEC DAEBC DEABC

Algorithm: place D in all places it can occur, then place E after it. I started with the prototypical:

ABCDE ABDCE ADBCE DABCE

and then looked at where E can go.
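Equivalently, the ten orderings are exactly the interleavings of the subsequence ABC with the subsequence DE (C(5,2) = 10 of them). A small brute-force enumeration (my own sketch, not part of the answer key) confirms the count:

```c
#include <string.h>

static char results[32][8];   /* enough room for all C(5,2) = 10 orderings */
static int nresults;

/* Emit every merge of x and y that preserves the order within each. */
static void interleave(const char *x, const char *y, char *buf, int pos) {
    if (!*x && !*y) {
        buf[pos] = '\0';
        strcpy(results[nresults++], buf);
        return;
    }
    if (*x) { buf[pos] = *x; interleave(x + 1, y, buf, pos + 1); }
    if (*y) { buf[pos] = *y; interleave(x, y + 1, buf, pos + 1); }
}

int count_orderings(void) {
    char buf[8];
    nresults = 0;
    interleave("ABC", "DE", buf, 0);
    return nresults;
}
```

Because the recursion always tries the ABC branch first, the results come out in the same order the answer lists them, from ABCDE to DEABC.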
Page 251, Problem 5.6 (Tuesday, November 02)

a. This is complicated. Walking into the bakery, a customer gets two numbers: a slot number and a ticket number. Ticket numbers may be equal, in which case the slot number arbitrarily determines who is served first. First the customer takes a ticket that is one greater than the current maximum ticket (but possibly the same as another customer's ticket, if that customer arrived at exactly the same time). Then the customer looks around at the other customers, considering them in slot order. For each slot j, the customer first waits for the customer in slot j to finish any ticket choice in progress; after this, the slot-j customer either has chosen or has not chosen, but is not in the "choosing" state. At this point, if customer j is before customer i (the current one), customer i waits for customer j to be served. After all customers before the current one have been served, it is the
current customer's turn, after which she relinquishes her ticket. This is roughly what happens in a real bakery: everyone takes a ticket and then waits until all tickets before theirs are done; then it's their turn.

b. The question is whether any group of customers can be waiting for each other at the same time. Every customer being considered has already taken a number, because there is no dependency between processes in the choosing phase; choosing cannot deadlock, so no deadlock can occur within the first while loop. Deadlock, if it occurs, must therefore happen when all involved processes are within the second while loop in the code. To see why this cannot happen, note that among any group of processes that might deadlock, there is always a highest-priority process j for which the pair (number[j], j) is minimum within the group. For this process the while condition is false, so it never waits. Thus that process will pass every iteration of the for loop.
Since that process has the lowest number (highest priority), its for loop will complete and it will be allowed to execute. Part of the reason for this is that any process newly entering the scheme is guaranteed to draw a pair (number[j], j) greater than that of any current process.

c. By the exact same argument, all processes with lower priority (higher ticket) will block at some while loop before the end of the for loop. Thus the algorithm enforces mutual exclusion, and we are done.
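The argument can be exercised directly. Below is a runnable sketch of Lamport's bakery algorithm (my own code, not the book's listing); it assumes sequentially consistent memory, which C11 `seq_cst` atomics provide, and the thread count and iteration count are arbitrary choices.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NTHREADS 4
#define ITERS 500

static atomic_int choosing[NTHREADS];
static atomic_int number[NTHREADS];
static long counter = 0;           /* protected by the bakery lock */

static void bakery_lock(int i) {
    choosing[i] = 1;
    int max = 0;
    for (int j = 0; j < NTHREADS; j++)
        if (number[j] > max) max = number[j];
    number[i] = max + 1;           /* may tie with a simultaneous chooser */
    choosing[i] = 0;
    for (int j = 0; j < NTHREADS; j++) {
        while (choosing[j])        /* wait out j's ticket draw */
            ;
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)))
            ;                      /* (number[j], j) is smaller: j goes first */
    }
}

static void bakery_unlock(int i) {
    number[i] = 0;                 /* relinquish the ticket */
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int k = 0; k < ITERS; k++) {
        bakery_lock(id);
        counter++;                 /* critical section */
        bakery_unlock(id);
    }
    return NULL;
}

long run_demo(void) {
    pthread_t t[NTHREADS];
    counter = 0;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

If mutual exclusion held, no increment of `counter` is lost; the lost-update count would expose any violation.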
Page 252, Problem 5.9 (Tuesday, November 02)

Suppose two processes 1 and 2 call semwait(s) when s is initially 0, and process 1 has reached the point between semsignalb(mutex) and semwaitb(delay) when process 2 is scheduled. This allows process 2 to proceed to the same semwaitb(delay). There is no guarantee of the order in which the two processes reach semwaitb(delay): it is conceivable that process 1 gets swapped out and process 2 gets there first. If that happens, process 2 returns from semwait before process 1, violating the condition that waiters are queued. So we don't have a true semwait; the semaphore this implements is a "weak semaphore" in the nomenclature of the book. To solve this problem, switch the order of semsignalb(mutex) and semwaitb(delay). Then in the same situation, process 1 first waits for delay while process 2 waits for mutex. When the delay is released, process 2 proceeds to wait for its own delay, and so on. The effect is that the waits are guaranteed to be serialized and queued properly, regardless of the number of waiters.
Page 254, Problem (Tuesday, November 02)

a. Implementing message passing using semaphores. For a very simple message-passing paradigm, try:

char mailbox[length];
semaphore data = 0;

void send(char *message, int slot) {
    strcpy(mailbox, message);
    semsignal(data);
}

void recv(char message[], int slot) {
    semwait(data);
    strcpy(message, mailbox);
}

This doesn't handle the case where one process is writing into the buffer while another is trying to read it. For that, I utilize the readers-writers lock paradigm (page 243):

char mailbox[length];
semaphore data = 0;   // count of messages available
semaphore lock = 1;   // 1: lock available
int readcount = 0;    // reads in progress

void send(char *message, int slot) {
    semwait(lock);
    strcpy(mailbox, message);
    semsignal(lock);
    semsignal(data);
}

void recv(char message[], int slot) {
    semwait(data);
    semwait(lock);
    strcpy(message, mailbox);
    semsignal(lock);
}

b. Implementing semaphores using message passing. A message send-receive pair is a synchronization event between the sending and the receiving process, and we want whatever operation we perform to be atomic over a set of cooperating processes. The easiest way to accomplish this is to use a shared mailbox as the semaphore. To do so, however, we must assure that updates to the mailbox are atomic. For this to make sense, we also have to define the semantics of our send and receive. I am using blocking send and receive for simplicity: send blocks if the prior message hasn't been received, and recvs are atomic and exclusive. Other assumptions for send and recv lead to differing solutions that are equally valid.

// wait for semaphore using a message mailbox
// this only works if send/recv are mutually
// exclusive (like the ones I just wrote above!)
// and enforce blocking send.
void wait(semaphore s) {
    recv(value, s.mailbox);
    if (value == 0) {
        send(value, s.mailbox);
        return;
    } else {
        value--;
        send(value, s.mailbox);
        while (1) {
            recv(value, s.mailbox);
            send(value, s.mailbox);
            if (value == 0) return;
            sleep;
        }
    }
}

void signal(semaphore s) {
    recv(value, s.mailbox);
    value++;
    send(value, s.mailbox);
}

Many, many other solutions are possible and equally valid.
Page 296, Problem 6.6 (Tuesday, November 02)

We know that deadlock will not occur in a steady state, because of our assurances about process behavior; so we need to look at extremal conditions. We know that I and O are well-behaved. What about P? All we know is that P will output "a finite amount of data" for each input that it consumes, and that P decides for itself how much input to consume. So suppose that P always reads 10 blocks and outputs 20 blocks. If P is slower than I, it is conceivable that

Max = 100, Input = 95, Output = 0.

In this case, P can consume its 10 blocks, leaving only 15 blocks free, and will not be able to write its 20 blocks of output. Making this process deadlock-free requires more conditions on the behavior of P. In general, a situation is deadlock-free if one can plug in any P and not get a deadlock. If there exists some P that causes deadlock, the situation admits deadlock. This is a subtlety of reasoning that eluded some people...
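The arithmetic reduces to a one-line condition (an illustrative helper of my own, not from the answer key): after P consumes its input, the free space is what was free before plus the blocks just released, and P is stuck if its output burst exceeds that.

```c
#include <stdbool.h>

/* With `max` total buffer blocks and `input` blocks already buffered,
   does a P that reads `reads` blocks and then writes `writes` blocks stall? */
bool p_deadlocks(int max, int input, int reads, int writes) {
    int free_after_read = (max - input) + reads;   /* 5 + 10 = 15 in the example */
    return writes > free_after_read;
}
```

For the example above, 20 > (100 - 95) + 10 = 15, so this P deadlocks; a P that wrote at most 15 blocks would squeak through.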
Page 297, Problem (Tuesday, November 02)

Choices:
a. Banker's algorithm.
b. Detect deadlock and kill thread.
c. Reserve all resources in advance.
d. Restart thread if thread needs to wait.
e. Order locks on resources.
f. Detect deadlock and roll back thread's actions.

a. Which approach provides the greatest concurrency?
Least concurrency: reserve all resources in advance.
Intermediate: restart thread if it needs to wait.
Almost best: banker's algorithm.
Next: roll back thread, because deadlock is really infrequent.
Most concurrency: detect deadlock and kill thread.

b. Which approach is most efficient (requires the least processor overhead)?
Least efficient: restart thread.
Next: roll back thread.
Intermediate: order locks.
Almost best: banker's algorithm.
Most efficient: reserve all resources in advance.

In general, it was rather obvious which was "most" and which was "least", but the intermediate rankings require some assumptions about the frequency of deadlock. If deadlock is relatively infrequent, one ordering works, while if it is relatively frequent (due to a brain-dead programmer!) another is in order. In particular, the overhead of rollback is high if the frequency of deadlock is high.
Page 151, Problem 3.6 (Thursday, November 04)

Advantages of four modes rather than two:
a. A finer-grained protection scheme: one can determine more accurately what should be protected in each mode. The supervisor should not have access to the whole kernel, just the things needed to process user commands.
b. Modularity of OS code: this enforces a strict lack of access to inappropriate data. In theory, any instruction in the kernel can access anything whatsoever; the separation of supervisor and executive, e.g., means that code for responding to user commands cannot do I/O without calling upon the executive. This means that programmers cannot take "shortcuts" that might lead to poor behavior later.

Disadvantages:
a. There is a performance penalty each time one has to change modes.
b. OS code must be mode-aware, leading to more complex system calls.

More than four modes? There is a tradeoff between the complexity and overhead involved in switching modes and the utility of separating code into differing code spaces for purposes of memory protection and security. In the VMS scheme, multi-level mode switches require passing through intermediate modes (see Problem 3.7), which takes a lot of overhead. One might envision, e.g., one mode per kind of I/O device, one mode per kind of system call, etc. There is a law of diminishing returns, however: each additional mode introduces both coding overhead and overhead in running the OS code.
Page 151, Problem 3.7 (Thursday, November 04)

The "need to know" principle is that at a particular privilege level, each instruction has access to exactly the information that it "needs to know" and no other information. In VMS, the problem is that in principle the kernel should not do I/O directly but should utilize the executive. Because of the ring structure, however, the kernel could do I/O if it pleased: it has access to data and control that it should not use. Likewise, the executive should in principle avoid doing anything with a user process directly, leaving that to the supervisor. The problem with a ring structure is that there is nothing to stop the executive from usurping the role of the supervisor. The solution is simple: programming discipline replaces protection modes. In any complex system, we agree not to do certain things in order to assure correct operation. Although it is possible for privileged code to perform tasks outside the realm of its protection mode, the operating system is the only entity with the privilege to do so, and it simply agrees not to. "Doctor, it hurts if I do this." "Then don't do that."
Page 199, Problem 4.1 (Thursday, November 04)

A context switch between threads involves storing and reloading TCBs, but does not involve a change in the memory map. So it requires less work than a switch between processes.
Page 199, Problem 4.5 (Thursday, November 04)

This is a really tricky question involving the meaning of what a "thread" is. If the thread is a user-level thread, then of course the thread implicitly exits, because it is implemented as part of the process itself. If the thread is kernel-level, then the operating system determines what happens. It is normal for exit of the parent thread to automatically kill the child threads, because the threads usually depend upon their parent's PCB for context. This, however, is not absolutely necessary. What happens depends upon whether the thread implementation is symmetric, i.e., whether a thread can assume the role of master if the master process dies. This can be arranged by locating a control block for the process/thread group outside the process control block. In such an arrangement there is little distinction between processes and threads, and we refer to both as "coroutines". In that implementation, threads can outlive the parent process, because the control block is not deallocated at process death. This arrangement is very unusual; normally the concept of a thread is subordinate to a given process, and the thread dies if the process dies.
Page 248, Problem 5.1 (Thursday, November 04)

a. Taking a hint from the problem description:

main() {
    read();
    massage();
    write();
}

void read() {
    FILE *f = fopen("/dev/reader", "r");
    FILE *g = fopen("/tmp/t1", "w");
    char buffer[80];
    while (!feof(f)) {
        fread(buffer, 1, 80, f);
        fwrite(buffer, 1, 80, g);
    }
    fclose(f);
    fclose(g);
}

void massage() {
    char oldc = '\0';
    int gotchar = 0;            /* is oldc holding an unwritten character? */
    FILE *f = fopen("/tmp/t1", "r");
    FILE *g = fopen("/tmp/t2", "w");
    char buffer[80];
    while (!feof(f)) {
        fread(buffer, 1, 1, f);
        char newc = buffer[0];
        if (gotchar == 1 && newc == '*' && oldc == '*') {
            newc = '^';
            fwrite(&newc, 1, 1, g);
            gotchar = 0;
        } else {
            if (gotchar == 1) fwrite(&oldc, 1, 1, g);
            oldc = newc;
            gotchar = 1;
        }
    }
    if (gotchar == 1) fwrite(&oldc, 1, 1, g);   /* flush the final character */
    fclose(f);
    fclose(g);
}

void write() {
    FILE *f = fopen("/tmp/t2", "r");
    FILE *g = fopen("/dev/punch", "w");
    char buffer[125];
    while (!feof(f)) {
        int size = fread(buffer, 1, 125, f);
        for (; size < 125; size++)
            buffer[size] = 0;    /* pad the final card with zeros */
        fwrite(buffer, 1, 125, g);
    }
    fclose(f);
    fclose(g);
}

b. The idea is that the reader pushes something into the buffer inbuf, then jumps to the program squash. Squash reads through the buffer character by character, turning "**" into '^' and putting the result into outbuf; when the buffer becomes full, it is printed and emptied.

c. // change read:
void read() {
    while (true) {
        if (READCARD(inbuf)) {
            // stuff in original solution
        } else {
            OUTPUT(outbuf);      // may have trailing spaces!
            exit;
        }
    }
}

d. This is relatively trivial. First, make a semaphore for each process. Then replace each "resume X" inside process Y with "signal(x); wait(y)".
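The '**'-to-'^' transformation above is easiest to check in isolation. Here is a pure-function version of the massage logic (my own sketch; the name `squash` echoes part b, but the signature is an assumption):

```c
#include <stddef.h>
#include <string.h>

/* Copy `in` to `out`, replacing each pair of consecutive asterisks
   with a single '^'. Returns the output length; `out` needs room
   for strlen(in) + 1 bytes. */
size_t squash(const char *in, char *out) {
    size_t n = 0;
    int have_prev = 0;          /* is prev holding an unwritten character? */
    char prev = '\0';
    for (; *in; in++) {
        if (have_prev && prev == '*' && *in == '*') {
            out[n++] = '^';     /* consume the pair */
            have_prev = 0;
        } else {
            if (have_prev) out[n++] = prev;
            prev = *in;
            have_prev = 1;
        }
    }
    if (have_prev) out[n++] = prev;   /* flush the final character */
    out[n] = '\0';
    return n;
}
```

Note the edge case the streaming version must also handle: in "***" only the first pair collapses, giving "^*".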
Page 297, Problem 6.12 (Thursday, November 04)

An astute student points out that my original solution was wrong: I did not account for circular processing. In the pipeline, process n-1 is a re-initializer; it writes out the result and re-initializes the pipeline with new data. My previous solution didn't do that.

The key is to establish a processing lock for each process. Each process has a particular cell of the buffer that it is working on now. We need the global variables:

T buffer[size];                // buffer of data
int count[procs] = {0};        // which cell each process works on now (all start at 0)
semaphore sem[procs] = {1};    // sem[0] = 1, the rest 0: process 0 is ready to go

// We presume that every cell is initialized with data to process,
// somehow, and that it's time to call process 0 on every cell.
// Then each process has the form:

void process(int i) {          // i = process number
    while (true) {
        semwait(sem[i]);
        buffer[count[i]] = some_function(i, buffer[count[i]]);
        count[i] = (count[i] + 1) % size;
        semsignal(sem[(i + 1) % procs]);
    }
}

b. The key is that process i gets a signal when it has received input from process i-1, and sends a signal to
process i+1 when it completes. The sequence is started with process 0. At any one time, the processes are therefore working on different cells. Since each process needs lock rights on exactly one resource at a time, the dining philosophers problem does not apply. Further, the result of each processing step is to wake up the next process in the pipeline.
Page 297, Problem 6.13 (Thursday, November 04)

a. First, with three processes and four resource units, consider all possible allocations. If one process has two units allocated to it, then it can proceed and there is no deadlock, because once it completes, both other processes can proceed. If, on the other hand, each process holds one unit, then there is one unit left over, and some process can claim it and proceed.

b. This is considerably harder. Consider an allocation where there are N processes, M resource units, and the combined limits are less than M + N. If M = 1, then the combined limits are less than N + 1, hence at most N, so the limits are all one and the theorem is true. So let us proceed by induction on M: presume the theorem true for a given M and consider the case of M + 1. The key is that only one allocation can be done at a time, so the act of allocating K units can be split into allocating K - 1 units and then allocating one more. Choose a process P that needs a maximum number of units. If P needed one less unit, the case for M would apply, and the situation would not deadlock even with only M units available. Thus we can add the extra unit to both P's limit and the pool; doing so does not change P's ability to complete, so the case for M + 1 is true as well.
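Part (a) is small enough to verify exhaustively. The sketch below (my own check, not from the answer key) enumerates every allocation state for three processes, four units, and a maximum claim of two each (claims sum to 6 < 4 + 3, matching the theorem's hypothesis), and looks for a state in which no process can ever reach its maximum claim:

```c
/* Problem 6.13a brute force: 3 processes, 4 units, max claim 2 each.
   Returns 1 if some reachable allocation leaves every process unable
   to acquire its full claim, i.e. a potential deadlock. */
int deadlock_possible(void) {
    for (int h1 = 0; h1 <= 2; h1++)
        for (int h2 = 0; h2 <= 2; h2++)
            for (int h3 = 0; h3 <= 2; h3++) {
                int held = h1 + h2 + h3;
                if (held > 4) continue;        /* not a legal allocation */
                int avail = 4 - held;
                /* safe as long as some process can reach its claim of 2,
                   finish, and release its units for the others */
                if (h1 + avail < 2 && h2 + avail < 2 && h3 + avail < 2)
                    return 1;                  /* stuck state found */
            }
    return 0;
}
```

The search finds no stuck state, agreeing with the case analysis above: either some process already holds two units, or enough units are free for one to finish.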
More informationThreads. Concurrency. What it is. Lecture Notes Week 2. Figure 1: Multi-Threading. Figure 2: Multi-Threading
Threads Figure 1: Multi-Threading Figure 2: Multi-Threading Concurrency What it is 1. Two or more threads of control access a shared resource. Scheduler operation must be taken into account fetch-decode-execute-check
More informationCS370 Operating Systems Midterm Review
CS370 Operating Systems Midterm Review Yashwant K Malaiya Fall 2015 Slides based on Text by Silberschatz, Galvin, Gagne 1 1 What is an Operating System? An OS is a program that acts an intermediary between
More informationScheduling. Monday, November 22, 2004
Scheduling Page 1 Scheduling Monday, November 22, 2004 11:22 AM The scheduling problem (Chapter 9) Decide which processes are allowed to run when. Optimize throughput, response time, etc. Subject to constraints
More informationLearning Outcomes. Concurrency and Synchronisation. Textbook. Concurrency Example. Inter- Thread and Process Communication. Sections & 2.
Learning Outcomes Concurrency and Synchronisation Understand concurrency is an issue in operating systems and multithreaded applications Know the concept of a critical region. Understand how mutual exclusion
More informationConcurrency: Deadlock and Starvation. Chapter 6
Concurrency: Deadlock and Starvation Chapter 6 Deadlock Permanent blocking of a set of processes that either compete for system resources or communicate with each other Involve conflicting needs for resources
More informationChapter 3 Processes. Process Concept. Process Concept. Process Concept (Cont.) Process Concept (Cont.) Process Concept (Cont.)
Process Concept Chapter 3 Processes Computers can do several activities at a time Executing user programs, reading from disks writing to a printer, etc. In multiprogramming: CPU switches from program to
More information! Why is synchronization needed? ! Synchronization Language/Definitions: ! How are locks implemented? Maria Hybinette, UGA
Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics! Why is synchronization needed?! Synchronization Language/Definitions:» What are race
More informationChapter 2 Processes and Threads. Interprocess Communication Race Conditions
Chapter 2 Processes and Threads [ ] 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling 85 Interprocess Communication Race Conditions Two processes want to access shared memory at
More informationOperating Systems. Operating Systems Summer 2017 Sina Meraji U of T
Operating Systems Operating Systems Summer 2017 Sina Meraji U of T More Special Instructions Swap (or Exchange) instruction Operates on two words atomically Can also be used to solve critical section problem
More informationDr. D. M. Akbar Hussain DE5 Department of Electronic Systems
Concurrency 1 Concurrency Execution of multiple processes. Multi-programming: Management of multiple processes within a uni- processor system, every system has this support, whether big, small or complex.
More informationProcess! Process Creation / Termination! Process Transitions in" the Two-State Process Model! A Two-State Process Model!
Process! Process Creation / Termination!!! A process (sometimes called a task, or a job) is a program in execution"!! Process is not the same as program "!! We distinguish between a passive program stored
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1018 L11 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel feedback queue:
More informationOperating Systems EDA092, DIT 400 Exam
Chalmers University of Technology and Gothenburg University Operating Systems EDA092, DIT 400 Exam 2015-04-14 Date, Time, Place: Tuesday 2015/04/14, 14:00 18:00, Väg och vatten -salar Course Responsible:
More informationChapter 2 Processes and Threads
MODERN OPERATING SYSTEMS Third Edition ANDREW S. TANENBAUM Chapter 2 Processes and Threads The Process Model Figure 2-1. (a) Multiprogramming of four programs. (b) Conceptual model of four independent,
More informationChapter 5 Concurrency: Mutual Exclusion and Synchronization
Operating Systems: Internals and Design Principles Chapter 5 Concurrency: Mutual Exclusion and Synchronization Seventh Edition By William Stallings Designing correct routines for controlling concurrent
More informationRunning. Time out. Event wait. Schedule. Ready. Blocked. Event occurs
Processes ffl Process: an abstraction of a running program. ffl All runnable software is organized into a number of sequential processes. ffl Each process has its own flow of control(i.e. program counter,
More informationMain Points of the Computer Organization and System Software Module
Main Points of the Computer Organization and System Software Module You can find below the topics we have covered during the COSS module. Reading the relevant parts of the textbooks is essential for a
More informationProcess Synchronization
CSC 4103 - Operating Systems Spring 2007 Lecture - VI Process Synchronization Tevfik Koşar Louisiana State University February 6 th, 2007 1 Roadmap Process Synchronization The Critical-Section Problem
More informationDept. of CSE, York Univ. 1
EECS 3221.3 Operating System Fundamentals No.5 Process Synchronization(1) Prof. Hui Jiang Dept of Electrical Engineering and Computer Science, York University Background: cooperating processes with shared
More informationCS370 Operating Systems Midterm Review. Yashwant K Malaiya Spring 2019
CS370 Operating Systems Midterm Review Yashwant K Malaiya Spring 2019 1 1 Computer System Structures Computer System Operation Stack for calling functions (subroutines) I/O Structure: polling, interrupts,
More informationChapter 6: Process [& Thread] Synchronization. CSCI [4 6] 730 Operating Systems. Why does cooperation require synchronization?
Chapter 6: Process [& Thread] Synchronization CSCI [4 6] 730 Operating Systems Synchronization Part 1 : The Basics Why is synchronization needed? Synchronization Language/Definitions:» What are race conditions?»
More informationConcurrent Processes Rab Nawaz Jadoon
Concurrent Processes Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS Lahore Pakistan Operating System Concepts Concurrent Processes If more than one threads
More informationIntroduction to OS Synchronization MOS 2.3
Introduction to OS Synchronization MOS 2.3 Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Introduction to OS 1 Challenge How can we help processes synchronize with each other? E.g., how
More informationLecture 4: Process Management
Lecture 4: Process Management (Chapters 2-3) Process: execution context of running program. A process does not equal a program! Process is an instance of a program Many copies of same program can be running
More informationRecall from deadlock lecture. Tuesday, October 18, 2011
Recall from deadlock lecture Tuesday, October 18, 2011 1:17 PM Basic assumptions of deadlock theory: If a process gets the resources it requests, it completes, exits, and releases resources. There are
More informationRoadmap. Tevfik Ko!ar. CSC Operating Systems Fall Lecture - III Processes. Louisiana State University. Processes. September 1 st, 2009
CSC 4103 - Operating Systems Fall 2009 Lecture - III Processes Tevfik Ko!ar Louisiana State University September 1 st, 2009 1 Roadmap Processes Basic Concepts Process Creation Process Termination Context
More informationProcess Synchronization
Chapter 7 Process Synchronization 1 Chapter s Content Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors 2 Background
More informationPart V. Process Management. Sadeghi, Cubaleska RUB Course Operating System Security Memory Management and Protection
Part V Process Management Sadeghi, Cubaleska RUB 2008-09 Course Operating System Security Memory Management and Protection Roadmap of Chapter 5 Notion of Process and Thread Data Structures Used to Manage
More informationLast Class: Synchronization Problems. Need to hold multiple resources to perform task. CS377: Operating Systems. Real-world Examples
Last Class: Synchronization Problems Reader Writer Multiple readers, single writer In practice, use read-write locks Dining Philosophers Need to hold multiple resources to perform task Lecture 10, page
More informationOperating Systems Comprehensive Exam. Spring Student ID # 3/16/2006
Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select
More informationAnnouncements. Reading. Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) CMSC 412 S14 (lect 5)
Announcements Reading Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) 1 Relationship between Kernel mod and User Mode User Process Kernel System Calls User Process
More informationProcess Coordination
Process Coordination Why is it needed? Processes may need to share data More than one process reading/writing the same data (a shared file, a database record, ) Output of one process being used by another
More informationCMPS 111 Spring 2003 Midterm Exam May 8, Name: ID:
CMPS 111 Spring 2003 Midterm Exam May 8, 2003 Name: ID: This is a closed note, closed book exam. There are 20 multiple choice questions and 5 short answer questions. Plan your time accordingly. Part I:
More informationChapter 3: Processes. Operating System Concepts 8 th Edition,
Chapter 3: Processes, Silberschatz, Galvin and Gagne 2009 Chapter 3: Processes Process Concept Process Scheduling Operations on Processes Interprocess Communication 3.2 Silberschatz, Galvin and Gagne 2009
More informationThe Big Picture So Far. Chapter 4: Processes
The Big Picture So Far HW Abstraction Processor Memory IO devices File system Distributed systems Example OS Services Process management, protection, synchronization Memory Protection, management, VM Interrupt
More informationProcess Synchronization: Semaphores. CSSE 332 Operating Systems Rose-Hulman Institute of Technology
Process Synchronization: Semaphores CSSE 332 Operating Systems Rose-Hulman Institute of Technology Critical-section problem solution 1. Mutual Exclusion - If process Pi is executing in its critical section,
More informationCSL373: Lecture 5 Deadlocks (no process runnable) + Scheduling (> 1 process runnable)
CSL373: Lecture 5 Deadlocks (no process runnable) + Scheduling (> 1 process runnable) Past & Present Have looked at two constraints: Mutual exclusion constraint between two events is a requirement that
More informationChapter 7: Process Synchronization. Background
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris
More informationProcess Management And Synchronization
Process Management And Synchronization In a single processor multiprogramming system the processor switches between the various jobs until to finish the execution of all jobs. These jobs will share the
More informationReal-Time Programming
Real-Time Programming Week 7: Real-Time Operating Systems Instructors Tony Montiel & Ken Arnold rtp@hte.com 4/1/2003 Co Montiel 1 Objectives o Introduction to RTOS o Event Driven Systems o Synchronization
More informationLecture Topics. Announcements. Today: Concurrency (Stallings, chapter , 5.7) Next: Exam #1. Self-Study Exercise #5. Project #3 (due 9/28)
Lecture Topics Today: Concurrency (Stallings, chapter 5.1-5.4, 5.7) Next: Exam #1 1 Announcements Self-Study Exercise #5 Project #3 (due 9/28) Project #4 (due 10/12) 2 Exam #1 Tuesday, 10/3 during lecture
More informationDealing with Issues for Interprocess Communication
Dealing with Issues for Interprocess Communication Ref Section 2.3 Tanenbaum 7.1 Overview Processes frequently need to communicate with other processes. In a shell pipe the o/p of one process is passed
More informationProcess- Concept &Process Scheduling OPERATING SYSTEMS
OPERATING SYSTEMS Prescribed Text Book Operating System Principles, Seventh Edition By Abraham Silberschatz, Peter Baer Galvin and Greg Gagne PROCESS MANAGEMENT Current day computer systems allow multiple
More informationUNIT -3 PROCESS AND OPERATING SYSTEMS 2marks 1. Define Process? Process is a computational unit that processes on a CPU under the control of a scheduling kernel of an OS. It has a process structure, called
More informationPOSIX / System Programming
POSIX / System Programming ECE 650 Methods and Tools for Software Eng. Guest lecture 2017 10 06 Carlos Moreno cmoreno@uwaterloo.ca E5-4111 2 Outline During today's lecture, we'll look at: Some of POSIX
More informationConcurrency, Mutual Exclusion and Synchronization C H A P T E R 5
Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Multiple Processes OS design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed processing
More informationChapter 3: Process Concept
Chapter 3: Process Concept Chapter 3: Process Concept Process Concept Process Scheduling Operations on Processes Inter-Process Communication (IPC) Communication in Client-Server Systems Objectives 3.2
More informationChapter 3: Process Concept
Chapter 3: Process Concept Chapter 3: Process Concept Process Concept Process Scheduling Operations on Processes Inter-Process Communication (IPC) Communication in Client-Server Systems Objectives 3.2
More informationCourse: Operating Systems Instructor: M Umair. M Umair
Course: Operating Systems Instructor: M Umair Process The Process A process is a program in execution. A program is a passive entity, such as a file containing a list of instructions stored on disk (often
More informationSubject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering)
A. Multiple Choice Questions (60 questions) Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) Unit-I 1. What is operating system? a) collection of programs that manages hardware
More informationOPERATING SYSTEMS. UNIT II Sections A, B & D. An operating system executes a variety of programs:
OPERATING SYSTEMS UNIT II Sections A, B & D PREPARED BY ANIL KUMAR PRATHIPATI, ASST. PROF., DEPARTMENT OF CSE. PROCESS CONCEPT An operating system executes a variety of programs: Batch system jobs Time-shared
More informationParallel Programming Languages COMP360
Parallel Programming Languages COMP360 The way the processor industry is going, is to add more and more cores, but nobody knows how to program those things. I mean, two, yeah; four, not really; eight,
More informationDeadlock and Monitors. CS439: Principles of Computer Systems September 24, 2018
Deadlock and Monitors CS439: Principles of Computer Systems September 24, 2018 Bringing It All Together Processes Abstraction for protection Define address space Threads Share (and communicate) through
More informationCS-537: Midterm Exam (Spring 2001)
CS-537: Midterm Exam (Spring 2001) Please Read All Questions Carefully! There are seven (7) total numbered pages Name: 1 Grading Page Points Total Possible Part I: Short Answers (12 5) 60 Part II: Long
More informationby Marina Cholakyan, Hyduke Noshadi, Sepehr Sahba and Young Cha
CS 111 Scribe Notes for 4/11/05 by Marina Cholakyan, Hyduke Noshadi, Sepehr Sahba and Young Cha Processes What is a process? A process is a running instance of a program. The Web browser you're using to
More informationCSE 153 Design of Operating Systems
CSE 153 Design of Operating Systems Winter 2018 Midterm Review Midterm in class on Monday Covers material through scheduling and deadlock Based upon lecture material and modules of the book indicated on
More informationCS3502 OPERATING SYSTEMS
CS3502 OPERATING SYSTEMS Spring 2018 Synchronization Chapter 6 Synchronization The coordination of the activities of the processes Processes interfere with each other Processes compete for resources Processes
More informationChapter 7: Process Synchronization!
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Monitors 7.1 Background Concurrent access to shared
More informationLecture 9: Midterm Review
Project 1 Due at Midnight Lecture 9: Midterm Review CSE 120: Principles of Operating Systems Alex C. Snoeren Midterm Everything we ve covered is fair game Readings, lectures, homework, and Nachos Yes,
More informationArdOS The Arduino Operating System Reference Guide Contents
ArdOS The Arduino Operating System Reference Guide Contents 1. Introduction... 2 2. Error Handling... 2 3. Initialization and Startup... 2 3.1 Initializing and Starting ArdOS... 2 4. Task Creation... 3
More informationCoupling Thursday, October 21, :23 PM
Coupling Page 1 Coupling Thursday, October 21, 2004 3:23 PM Two kinds of multiple-processor systems Tightly-coupled Can share efficient semaphores. Usually involve some form of shared memory. Loosely-coupled
More informationCSC 4320 Test 1 Spring 2017
CSC 4320 Test 1 Spring 2017 Name 1. What are the three main purposes of an operating system? 2. Which of the following instructions should be privileged? a. Set value of timer. b. Read the clock. c. Clear
More informationSTUDENT NAME: STUDENT ID: Problem 1 Problem 2 Problem 3 Problem 4 Problem 5 Total
University of Minnesota Department of Computer Science & Engineering CSci 5103 - Fall 2018 (Instructor: Tripathi) Midterm Exam 1 Date: October 18, 2018 (1:00 2:15 pm) (Time: 75 minutes) Total Points 100
More informationInterprocess Communication and Synchronization
Chapter 2 (Second Part) Interprocess Communication and Synchronization Slide Credits: Jonathan Walpole Andrew Tanenbaum 1 Outline Race Conditions Mutual Exclusion and Critical Regions Mutex s Test-And-Set
More information