Concurrency Principles
- Grace Waters
- 5 years ago
Concurrency Principles

Where is the problem?

- Multi-programming: management of multiple processes within a uniprocessor system. Every system, whether big, small or complex, has this support; the processes are interleaved in time.
- Distributed processing: management of multiple processes executing on a number of distributed computer systems, for example clusters. (No shared memory.)
- Multi-processing: management of multiple processes within a multi-processor system, e.g. servers and workstations. (Shared memory.)

What is common among these categories? Concurrency: the execution of multiple processes, no matter whether on one or more processing elements. In the single-processor case, multiple processes are interleaved in time to provide the illusion of simultaneous execution. Although this is not really parallel processing, the technique brings benefits, apart from the overhead involved in switching between processes.

Issues:

- It is basically not possible to predict the relative speed of execution of processes.
- Optimal allocation of resources is not possible for the OS.
- Sharing of global variables is hazardous.

Pseudo-code for demonstration: suppose we have two or more applications reading input from the keyboard and putting the result on the screen. It would make sense to have the same procedure for all these applications, loaded into a global address space:

    function_test (string charact)
        readin (input_variable, keyboard);
        charact = input_variable;
        readout (charact, display);
        exit;

Process P1 calls function_test, has just read the input (X), so charact = X, and is then interrupted. Process P2 calls function_test and is allowed to run to the end with input Y, so charact = Y. When P1 resumes, what will it display?
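The lost-update scenario above can be replayed deterministically. The sketch below (Python standing in for the pseudocode; the class and names are illustrative, not part of the notes) simulates the interruption by simply calling the shared procedure's steps in the problematic order:

```python
# A deterministic replay of the interleaving described above: P1 reads "X"
# into the shared variable, is "interrupted", P2 runs function_test to
# completion with "Y", then P1 resumes at its readout step. The explicit
# call ordering is an illustration; a real OS interleaves unpredictably.

class SharedProcedure:
    def __init__(self):
        self.charact = None          # shared global, as in the pseudocode

    def readin(self, value):
        self.charact = value         # step 1: copy input into the shared variable

    def readout(self):
        return self.charact          # step 2: "display" the shared variable

proc = SharedProcedure()
proc.readin("X")            # P1 reads X ... and is interrupted here
proc.readin("Y")            # P2 runs to completion with input Y
resumed = proc.readout()    # P1 resumes and displays the shared value
print(resumed)              # "Y": P1's input X has been lost
```

The first character is overwritten because both "processes" share one variable, which is exactly the motivation for mutual exclusion in the next section.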
So the first charact (X) is lost. What can we do to prevent such a problem? Mutual exclusion. The example above is for a uniprocessor system, but a similar problem can occur on a multiprocessor platform while sharing resources, for example when two or more processes running on different machines access the same location (variable). The problem is with that particular region of code which both processes want to enter.

How to Achieve Mutual Exclusion

Conditions for mutual exclusion:

- Only one process at a time is allowed into its critical section.
- A process that halts in its non-critical section must not affect or interfere with the others.
- A process requiring access to its critical section must not be delayed indefinitely.
- If no process is in the critical section, any process requesting entry must be allowed in without delay.
- No assumptions may be made about relative process speeds or the number of CPUs.
- A process may remain in its critical section for a finite time only.

Operating System Concerns

Issues the OS has to consider for concurrency:

- The OS must keep track of all processes (normally done through PCBs).
- The OS must allocate and de-allocate resources for the processes, including processor time, memory, files and I/O devices.
- Resources must be protected against unintended interference from other processes.
- The result of a process must be independent of the speed at which it executes relative to other concurrent processes.

The last point deserves more detail. Process interaction can be classified by degree of awareness:

- Unaware of others (competition): the result of each process is independent of the others, but timing may be affected. Potential problems: mutual exclusion, deadlock, starvation.
- Indirectly aware of others (cooperation by sharing): results are not independent of the others and timing may be affected. Potential problems: mutual exclusion, deadlock, starvation, data coherence.
- Directly aware of others (cooperation by communication): results are not independent of the others and timing may be affected. Potential problems: deadlock, starvation.

Fall 27
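The conditions above are what any mutual-exclusion mechanism must deliver. As an illustrative sketch (Python standing in for the pseudocode; `threading.Lock` stands in for whatever mechanism the OS or hardware provides), a lock guarding a shared counter satisfies the first condition, so no updates are lost:

```python
import threading

counter = 0                      # shared variable updated in the critical section
lock = threading.Lock()          # at most one thread inside the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # entry protocol: wait until the section is free
            counter += 1         # critical section: read-modify-write is now safe
        # exit protocol: the lock is released automatically by 'with'

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000: no interleaving can lose an update
```

Without the lock, two threads can read the same old value of counter and one increment is lost, which is precisely the function_test problem in miniature.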
Mutual Exclusion Has a Cost

Deadlock: suppose two processes P1 and P2 each need two resources, R1 and R2, to accomplish a task, and the OS has allocated R1 to P1 and R2 to P2. What happens next? Each process waits, forever, for the resource held by the other: deadlock.

Starvation: suppose three processes P1, P2 and P3 repeatedly require access to a resource R. If the OS keeps granting R alternately to P1 and P3, then P2 is starved, even though no deadlock occurs.

Data Coherence: in addition to deadlock and starvation, another requirement may also be necessary. Suppose an application needs a condition that must always hold, say a = b. In other words, any process updating a must also update b, and vice versa, e.g.:

    P1: a = a - 1; b = b - 1;
    P2: b = b * 2; a = a * 2;

This is fine if each pair of statements executes as a unit. But consider the following interleaving, with mutual exclusion imposed only on the individual statements; say a = b = 2:

    P1: a = a - 1;    /* a = 1 */
    P2: b = b * 2;    /* b = 4 */
    P1: b = b - 1;    /* b = 3 */
    P2: a = a * 2;    /* a = 2 */

which clearly demonstrates that a = b no longer holds. Mutual exclusion must cover the whole sequence of statements that preserves the invariant.

Dekker's Algorithm

First attempt (a shared turn variable enforcing strict alternation): mutual exclusion is guaranteed, but there is a strict alternation rule. Consequences: the execution pace is dictated by the slower of the two processes, and if one process is lost the other is blocked forever.

Modification 1 (each process checks the other's flag before setting its own): the pace is still dictated by the slower of the two, and now there is no guarantee of mutual exclusion.
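The data-coherence failure above disappears when the lock covers the whole pair of updates rather than each statement individually. A sketch (Python, illustrative names; one lock protects the entire invariant-preserving sequence):

```python
import threading

# Shared state with the invariant a == b, as in the notes' example.
a = b = 2
invariant_lock = threading.Lock()   # one lock covering BOTH updates

def p1():
    global a, b
    with invariant_lock:            # the whole pair is one critical section
        a = a - 1
        b = b - 1

def p2():
    global a, b
    with invariant_lock:
        b = b * 2
        a = a * 2

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(a, b)   # (2, 2) or (3, 3) depending on order, but always a == b
```

The final values depend on which process runs first, but the invariant a = b holds in either case, which is exactly what the per-statement locking failed to guarantee.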
Modifications 2 and 3, and the Correct Solution

As we have observed, knowing the state of the other process is not enough to make the problem workable; basically one needs to order the way the processes proceed.

- Modification 2 (set your own flag to true, then check the other's): pace dictated by the slower of the two; mutual exclusion is guaranteed; but deadlock occurs if both have set their flag to true before checking the other's flag status.
- Modification 3 (set your flag, and temporarily clear it if the other's flag is set): pace dictated by the slower of the two; mutual exclusion is guaranteed; deadlock? The processes can keep politely deferring to each other indefinitely. Almost there.
- Correct solution (Dekker: flags plus a turn variable to break ties): pace dictated by the slower of the two; mutual exclusion is guaranteed; no deadlock.

Peterson's Solution

Peterson's solution is similar but more elegant, as the text says.

Hardware Mutual Exclusion

Mutual exclusion can also be implemented in hardware, using either interrupt handling or special machine instructions.

Interrupt disabling: forcing interrupts off just before entering the critical region ensures mutual exclusion on a uniprocessor, but system performance is potentially degraded, and there is no support for multi-processor architectures.

Machine instructions: these instructions atomically perform two actions in one cycle, for example a read and a write. Because they cannot be interrupted between the two actions, they can be used to build mutual exclusion.

Merits and demerits of the machine-instruction approach:

- Easy to implement.
- Supports any number of processors.
- Supports multiple critical sections (by using different variables).
- Busy waiting (performance degradation).
- Starvation is possible.
- Deadlock is possible.

Test-and-Set instruction:

    boolean testset (int a)
        if (a == 0)
            a = 1;
            return true;
        else
            return false;

Exchange instruction: exchanges the contents of a register with a memory location; during the exchange, access to that memory location is blocked.

    void exchange (int register, int memory)
        int temp;
        temp = memory;
        memory = register;
        register = temp;
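Peterson's algorithm is not spelled out in the notes, so here is a sketch of the two-process version for reference. It is written in Python purely for illustration: CPython's interpreter lock makes the interleaving effectively sequentially consistent, which stands in for the memory barriers a real multiprocessor implementation would need.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # which process defers to the other on a tie
counter = 0             # shared variable protected by the algorithm
N = 2000

def peterson_worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True                     # announce intent to enter
        turn = other                       # politely give the other priority
        while flag[other] and turn == other:
            pass                           # busy wait ("be happy and do nothing")
        counter += 1                       # critical section
        flag[i] = False                    # exit protocol

t0 = threading.Thread(target=peterson_worker, args=(0,))
t1 = threading.Thread(target=peterson_worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)   # 4000: both flags and turn together preserve mutual exclusion
```

Unlike Dekker's modifications 2 and 3, the single turn assignment breaks the tie: if both processes set their flags simultaneously, whichever wrote turn last is the one that waits.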
Mutual Exclusion using testset

n processes share an integer variable bolt.

    void Function_me (process i)
        while (true)
            while (!testset (bolt))
                ;                    /* be happy and do nothing */
            /* critical section */
            bolt = 0;
            /* do remaining part */

Initially bolt is set to 0. Any process finding it zero enters its critical section; obviously all the others go into the "be happy" busy-wait loop. Once that process leaves its critical section and resets bolt to 0, the next process to find bolt zero immediately enters its own critical section, and so on.

Mutual Exclusion using exchange

n processes share the integer variable bolt; each also has a local integer key.

    void Function_me (process i)
        int key;                        /* local variable */
        while (true)
            key = 1;
            while (key != 0)
                exchange (key, bolt);   /* be happy until you get bolt == 0 */
            /* critical section */
            exchange (key, bolt);
            /* do remaining part */

Initially bolt is set to 0. Any process finding it zero enters the critical section, but this time through the exchange of the local variable key with bolt. After completing its job, the process sets bolt back to 0 through the same exchange instruction.

Semaphores

The aim is to design an OS as sequential processes with a reliable mechanism supporting cooperation. Dijkstra proposed a solution based on the fundamental principle of cooperation through signals, so that a process can be forced to stop at a required place and is later restarted when instructed through a signal. These signals are called semaphores: to transmit a signal the process executes signal(s), and to receive a signal it executes wait(s). If the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.

Properties:

- A semaphore is initialized to a non-negative value.
- The wait operation decrements the semaphore value; if the value goes negative, the calling process is blocked.
- The signal operation increments the semaphore value; if the resulting value is not positive, a process is unblocked from the suspended list.
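The testset spin loop above can be sketched as follows. Python has no atomic test-and-set instruction, so the atomicity of testset is itself simulated with a small internal lock; on real hardware it would be a single uninterruptible instruction (this simulation is the assumption, the spin-loop structure is from the notes):

```python
import threading

class TestSet:
    """Simulates an atomic test-and-set on the shared variable 'bolt'."""
    def __init__(self):
        self.bolt = 0
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def testset(self):
        with self._atomic:                # read and write in "one cycle"
            if self.bolt == 0:
                self.bolt = 1
                return True
            return False

    def release(self):
        self.bolt = 0                     # leave the critical section

ts = TestSet()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while not ts.testset():
            pass                          # be happy and do nothing (busy wait)
        counter += 1                      # critical section
        ts.release()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                            # 3000
```

Note how the busy waiting the notes warn about shows up directly: a losing thread burns CPU in the while loop until bolt is released.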
- Signal and wait are atomic (cannot be interrupted).

Semaphore Primitives

A (strong) semaphore can be declared as a simple structure with an integer count and another variable/structure for a queue of processes.

    wait (s)
        count = count - 1;
        if (count < 0)
            put the process in queue;
            block the process;

    signal (s)
        count = count + 1;
        if (count <= 0)
            get a process from the queue;
            unblock it and put it in the ready queue;

Binary Semaphore Primitives

    waitB (s)
        if (count == 1)
            count = 0;
        else
            put the process in queue;
            block the process;

    signalB (s)
        if (queue is empty)
            count = 1;
        else
            get a process from the queue;
            put it in the ready queue;

Example trace (count initialized to 1): processes A, B, C and D each execute wait(s). A finds count = 1, decrements it to 0 and enters the critical section. B, C and D decrement count to -1, -2 and -3 respectively and are placed in the queue. Each subsequent signal(s) increments count and moves the process at the head of the queue to the ready queue.

The order of removal from the queue is not defined (FIFO, for instance); the only requirement is that no process should wait indefinitely in the queue.
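The wait/signal pseudocode above can be realized directly. In this sketch (Python, illustrative; an internal lock makes wait/signal atomic, and a per-waiter event stands in for "block the process"), the count goes negative exactly as in the trace, with -count equal to the number of blocked threads:

```python
import threading
from collections import deque

class StrongSemaphore:
    """Counting semaphore following the notes' wait/signal pseudocode."""
    def __init__(self, value):
        self.count = value
        self._lock = threading.Lock()          # makes wait/signal atomic
        self._queue = deque()                  # FIFO of blocked "processes"

    def wait(self):
        self._lock.acquire()
        self.count -= 1
        if self.count < 0:
            ev = threading.Event()             # "block the process"
            self._queue.append(ev)
            self._lock.release()
            ev.wait()                          # suspended until signalled
        else:
            self._lock.release()

    def signal(self):
        with self._lock:
            self.count += 1
            if self.count <= 0:
                self._queue.popleft().set()    # unblock the longest waiter

s = StrongSemaphore(1)
shared = 0

def worker():
    global shared
    for _ in range(1000):
        s.wait()
        shared += 1        # critical section
        s.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared, s.count)     # 4000 1
```

The FIFO deque makes this a strong semaphore: waiters are released in arrival order, so no process waits indefinitely.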
Mutual Exclusion using Semaphores

The following pseudo-ops can be used for the mutual exclusion problem. There can be n processes; each process executes wait before entering its critical section. A positive semaphore value means the process can enter; if the value becomes negative, the process is suspended.

    semaphore s = 1;
    /* n = number of processes */

    void Process (i)
        loop
            wait (s);
            /* critical section */
            signal (s);
            /* remaining execution */

As the semaphore is initialized to 1, the first process executing wait enters its critical section and sets the value of s to 0. Any other process attempting to enter decrements s to a negative value and is suspended. Eventually the process that entered its critical section departs, incrementing the value of s and releasing one of the suspended processes.

A possible sequence of events for three processes A, B and C achieving mutual exclusion with a semaphore lock: A executes wait(lock) and enters its critical section; B and C execute wait(lock) and are blocked; when A executes signal(lock), B enters its critical section; when B executes signal(lock), C enters.

Producer-Consumer Problem (infinite buffer, binary semaphores)

The shared counter n records the number of items in the buffer; s provides mutual exclusion and delay makes the consumer wait while the buffer is empty (the consumer executes waitB(delay) once, the first time only, before its loop).

    /* producer */
    loop
        produce;
        waitB (s);
        insert;
        n = n + 1;
        if (n == 1)
            signalB (delay);
        signalB (s);

    /* consumer */
    waitB (delay);       /* first time only */
    loop
        waitB (s);
        take;
        n = n - 1;
        signalB (s);
        consume;
        if (n == 0)
            waitB (delay);

Correct solution?
Producer-Consumer, Possible Scenario

Not quite. The consumer tests n == 0 after leaving its critical section, so the producer may already have incremented n and signalled delay in between; in one interleaving the consumer ends up taking from an empty buffer. The corrected binary-semaphore solution copies n to a local variable m inside the critical section and tests m outside it:

    /* corrected consumer */
    waitB (delay);       /* first time only */
    loop
        waitB (s);
        take;
        n = n - 1;
        m = n;           /* local copy taken inside the critical section */
        signalB (s);
        consume;
        if (m == 0)
            waitB (delay);

Producer-Consumer Problem with a Bounded Buffer (BB)

Previously an infinite buffer was considered for this problem. In realistic situations one has a bounded buffer (BB), something like a circular buffer of n slots with indices in and out; the slots between out and in are the filled slots. It can be implemented in the following way:

    /* producer */
    loop
        produce;
        while (((in + 1) % n) == out)
            ;                    /* do nothing, remain here: buffer full */
        put (buffer[in]);
        in = (in + 1) % n;

    /* consumer */
    loop
        while (in == out)
            ;                    /* do nothing, remain here: buffer empty */
        take (buffer[out]);
        out = (out + 1) % n;
        consume;
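The bounded buffer can also be driven by counting semaphores instead of busy waiting, which is the usual refinement of the scheme above. A sketch (Python; `threading.Semaphore` stands in for the semaphore primitives, and the variable names mirror the notes' circular buffer):

```python
import threading

N = 5                                   # buffer capacity
buffer = [None] * N
in_idx = out_idx = 0
mutex = threading.Semaphore(1)          # protects buffer and indices
empty = threading.Semaphore(N)          # counts free slots
full = threading.Semaphore(0)           # counts filled slots
consumed = []

def producer(items):
    global in_idx
    for item in items:
        empty.acquire()                 # wait(empty): block if the buffer is full
        mutex.acquire()
        buffer[in_idx] = item
        in_idx = (in_idx + 1) % N       # circular buffer, as in the notes
        mutex.release()
        full.release()                  # signal(full)

def consumer(count):
    global out_idx
    for _ in range(count):
        full.acquire()                  # wait(full): block if the buffer is empty
        mutex.acquire()
        item = buffer[out_idx]
        out_idx = (out_idx + 1) % N
        mutex.release()
        empty.release()                 # signal(empty)
        consumed.append(item)

items = list(range(100))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start()
p.join(); c.join()
print(consumed == items)                # True: order and count preserved
```

Blocking on empty/full replaces both spin loops, and the tricky local-copy fix is no longer needed because the counting is done inside the semaphores themselves.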
Barbershop Problem

A classical problem, similar to a real operating system in the way access to different resources is coordinated.

Problem definition:

- Three barber chairs and three barbers.
- A waiting area with a sofa seating four customers and additional space for customers to stand and wait.
- A fire code limits the total number of customers (seated plus standing); a customer cannot enter if the shop is packed to capacity.
- When a barber is free, the customer who has been waiting longest on the sofa is served, and the standing customer who has been waiting longest takes the vacated seat on the sofa.
- Once a barber finishes a haircut, any barber can take the payment; as there is only one cash register, payment is accepted from one customer at a time.
- A barber's time is divided among haircuts, accepting payment and waiting for customers (sleeping).

Semaphores used in the problem (initialization: max_capacity = 20, sofa = 4, barber_chair = 3, coord = 3, cust_ready = 0, finished = 0, leave_b_chair = 0, payment = 0, receipt = 0):

    semaphore       wait                                           signal
    max_capacity    customer waits for room to enter the shop      exiting customer signals a waiting customer to enter
    sofa            customer waits for a seat on the sofa          customer leaving the sofa signals waiting standing customers
    barber_chair    customer waits for an empty barber chair       barber signals when his chair is empty
    cust_ready      barber waits until a customer is in a chair    customer signals that he is in the barber chair
    finished        customer waits until his haircut is finished   barber signals once he has finished the haircut
    leave_b_chair   barber waits until the customer gets up        customer signals the barber when he gets up
    payment         cashier waits for a customer to pay            customer signals the cashier that he has paid
    receipt         customer waits for a receipt for the payment   cashier signals acceptance of the payment
    coord           wait for a barber resource to be free to       signal that a barber resource is free
                    perform either haircut or cashiering

Unfair Barbershop (the slide's interleaved code columns, reassembled):

    void customer ( )
        wait (max_capacity);
        enter_shop ( );
        wait (sofa);
        sit_on_sofa ( );
        wait (barber_chair);
        get_up_from_sofa ( );
        signal (sofa);
        sit_in_barber_chair ( );
        signal (cust_ready);
        wait (finished);
        leave_barber_chair ( );
        signal (leave_b_chair);
        pay ( );
        signal (payment);
        wait (receipt);
        exit_shop ( );
        signal (max_capacity);

    void barber ( )
        while (true)
            wait (cust_ready);
            wait (coord);
            cut_hair ( );
            signal (coord);
            signal (finished);
            wait (leave_b_chair);
            signal (barber_chair);

    void cashier ( )
        while (true)
            wait (payment);
            wait (coord);
            accept_pay ( );
            signal (coord);
            signal (receipt);

    void main ( )
        parbegin (customer, ..., customer, barber, barber, barber, cashier);

Fair Barbershop

Things can be improved in terms of timing by using some more semaphores. Each customer takes a number on entry (custnr); the barber serves customers through a queue of these numbers and signals the element finished[custnr] of a semaphore array, so the right customer wakes up. Additional variables: an integer count and semaphores mutex1 = 1 (protecting count) and mutex2 = 1 (protecting the queue); the other semaphores are initialized as before.

    void customer ( )
        int custnr;
        wait (max_capacity);
        enter_shop ( );
        wait (mutex1);
        count = count + 1;
        custnr = count;
        signal (mutex1);
        wait (sofa);
        sit_on_sofa ( );
        wait (barber_chair);
        get_up_from_sofa ( );
        signal (sofa);
        sit_in_barber_chair ( );
        wait (mutex2);
        enqueue (custnr);
        signal (cust_ready);
        signal (mutex2);
        wait (finished[custnr]);
        leave_barber_chair ( );
        signal (leave_b_chair);
        pay ( );
        signal (payment);
        wait (receipt);
        exit_shop ( );
        signal (max_capacity);

    void barber ( )
        int b_cust;
        while (true)
            wait (cust_ready);
            wait (mutex2);
            dequeue (b_cust);
            signal (mutex2);
            wait (coord);
            cut_hair ( );
            signal (coord);
            signal (finished[b_cust]);
            wait (leave_b_chair);
            signal (barber_chair);

    void cashier ( )
        while (true)
            wait (payment);
            wait (coord);
            accept_pay ( );
            signal (coord);
            signal (receipt);

Monitors

Semaphores exhibit a flexible and powerful mechanism for enforcing mutual exclusion, but they can produce incorrect programs because the wait and signal operations are scattered throughout the program. Monitors are another option: a programming-language construct providing functionality equivalent to semaphores, but easier to implement and control. A monitor is a software module consisting of one or more procedures, an initialization sequence and local data.
Characteristics of a Monitor with Signal

- Local data is accessible only to the monitor's procedures, not to any external procedure.
- A process enters the monitor by invoking one of its procedures.
- Only one process may be executing in the monitor at any given time; any other process that has invoked the monitor is suspended until the monitor becomes available. It is basically this characteristic which makes mutual exclusion possible.
- Because of the previous condition, the data variables in the monitor are accessible by only one process at a time; shared variables can therefore be protected by placing them in the monitor.
- Synchronization for concurrent processing is also provided in the monitor, through condition variables and the cwait(c) and csignal(c) operations.

Structure of a monitor: a queue of processes waiting to enter through one of its procedures; the monitor's local data and condition variables; one waiting area per condition variable c1 .. cn, holding processes blocked in cwait(ci) until some process issues csignal(ci); an urgent queue for processes suspended inside the monitor after issuing a csignal; the procedures themselves; the initialization code, executed once; and the exit.
Producer-Consumer Solution using a Monitor

    monitor boundedbuffer
        char buffer [n];
        int nextin, nextout;
        int count;
        cond notfull, notempty;

        void append (char x)
            if (count == n)
                cwait (notfull);
            buffer[nextin] = x;
            nextin = (nextin + 1) % n;
            count++;
            csignal (notempty);

        void take (char x)
            if (count == 0)
                cwait (notempty);
            x = buffer[nextout];
            nextout = (nextout + 1) % n;
            count--;
            csignal (notfull);

        /* initialization */
        nextin = 0; nextout = 0; count = 0;

    void producer ( )                    void consumer ( )
        char x;                              char x;
        while (true)                         while (true)
            produce (x);                         take (x);
            append (x);                          consume (x);

Modified Monitor

The original version requires that, if there is at least one process in a condition queue, a process from that queue runs immediately when another process issues a csignal for that condition. Therefore the csignal-issuing process must either immediately exit the monitor or be suspended inside it. There are two limitations with this approach:

- If the csignal-issuing process has not finished with the monitor, two additional process context switches are needed (one to suspend it, another to resume it).
- Perfectly reliable process scheduling is a must: when a process issues csignal, a process from the condition queue must be activated before the scheduler has any chance of allowing another process to enter the monitor.

The improved development uses a cnotify primitive instead of csignal: when a process issues cnotify(x), it notifies the x condition queue and continues executing. The benefit is that the process at the front of the queue will resume at some convenient time when the monitor is available. On the other hand, there is no guarantee that another process will not enter the monitor before the waiting process; for that reason a while statement is used instead of the if statement, which re-checks the condition and thereby resolves the problem. It costs one extra evaluation of the condition, but it saves two process switches and places no constraints on when a process must run after a notify.
The improved version is less prone to errors, and it also looks like a more modular approach to program construction (Hoare vs. Lampson/Redell).

    void append (char x)
        while (count == n)        /* instead of an if statement */
            cwait (notfull);
        buffer[nextin] = x;
        nextin = (nextin + 1) % n;
        count++;
        cnotify (notempty);

    void take (char x)
        while (count == 0)        /* instead of an if statement */
            cwait (notempty);
        x = buffer[nextout];
        nextout = (nextout + 1) % n;
        count--;
        cnotify (notfull);

Message Passing

For any two processes to interact, two things are essential: synchronization (for mutual exclusion) and communication (for the exchange of information). Message passing can provide both. An added advantage: it works for distributed systems, shared-memory multiprocessors and uniprocessors alike. It comes in many flavors, but essentially has this minimum set of operations:

    send (destination, message)
    receive (source, message)

Design Issues in Message Passing

- Synchronization: send may be blocking or non-blocking; receive may be blocking, non-blocking, or test for arrival.
- Addressing: direct (explicit send; explicit or implicit receive) or indirect (association static or dynamic; ownership).
- Format: content; length (fixed or variable).
- Queuing discipline: FIFO (the fairest of all) or priority.
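The cnotify-with-while discipline above maps directly onto condition variables in modern libraries. A sketch (Python's `threading.Condition`, whose notify is a hint and whose waiters re-check in a while loop, exactly the Lampson/Redell behavior; the class is illustrative):

```python
import threading

class BoundedBufferMonitor:
    """Producer-consumer monitor in the Lampson/Redell (cnotify) style."""
    def __init__(self, n):
        self.buffer = [None] * n
        self.n = n
        self.count = 0
        self.nextin = self.nextout = 0
        self.lock = threading.Lock()               # one process in the monitor
        self.notfull = threading.Condition(self.lock)
        self.notempty = threading.Condition(self.lock)

    def append(self, x):
        with self.lock:
            while self.count == self.n:            # while, not if: re-check
                self.notfull.wait()                # cwait(notfull)
            self.buffer[self.nextin] = x
            self.nextin = (self.nextin + 1) % self.n
            self.count += 1
            self.notempty.notify()                 # cnotify(notempty)

    def take(self):
        with self.lock:
            while self.count == 0:                 # while, not if: re-check
                self.notempty.wait()               # cwait(notempty)
            x = self.buffer[self.nextout]
            self.nextout = (self.nextout + 1) % self.n
            self.count -= 1
            self.notfull.notify()                  # cnotify(notfull)
            return x

mon = BoundedBufferMonitor(4)
result = []
prod = threading.Thread(target=lambda: [mon.append(i) for i in range(50)])
cons = threading.Thread(target=lambda: result.extend(mon.take() for _ in range(50)))
prod.start(); cons.start()
prod.join(); cons.join()
print(result == list(range(50)))                   # True
```

Because notify only hints, a woken consumer may find the buffer empty again; the while loop absorbs that, which is precisely why the improved monitor trades one extra condition check for two saved context switches.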
Synchronization

The receiver cannot receive a message until it has been sent. The possible disciplines for send and receive are:

- Blocking send, blocking receive: both sender and receiver are blocked until the message is delivered. This tight synchronization scheme is called a rendezvous.
- Non-blocking send, blocking receive: a useful combination, giving the sender the liberty to send messages to different destinations as quickly as needed; the blocking receive is useful too, as for a server process waiting to provide a service to other processes.
- Non-blocking send, non-blocking receive.

A non-blocking send is the most natural option for many systems. Consider the simple example of a print request: it would be foolish to keep waiting until the print job is finished. The only concern is that, in the event of an error, the repeated generation of messages could consume system resources with no productivity.

For the receive primitive, blocking is the natural option, because the requested information has to be received before processing can proceed. However, if the message is lost, or the sending process fails before sending it, the receiver may be blocked indefinitely. The alternative is a non-blocking receive, but then one has to ensure that messages are not lost; the receiver can test for arrival, checking for waiting messages before resuming its processing.

Direct Addressing

The send primitive includes a specific identifier of the destination process. Receive has two options:

- Explicit receive: the receive must designate a sending process; the receiving process must know ahead of time that a message is expected (which may be fine for cooperating processes). Otherwise it is not possible to anticipate incoming messages from different sources.
- Implicit receive: the source parameter of the receive primitive contains a returned value, filled in once the receive operation is accomplished.
Indirect Addressing

Messages are not sent directly from sender to receiver; instead they are sent to a shared data structure made of queues (mailboxes) that hold the data for some time. Sender and receiver are thereby decoupled, which provides more flexibility in the use of messages. A number of associations are possible: one-to-one (which could be a private link between two processes), one-to-many (broadcasting), many-to-one (client/server) or many-to-many. The association of processes with mailboxes can be static or dynamic: one-to-one links are typically static, while many-to-one relationships can be dynamic, using ports created and released with connect and disconnect. Generally a port is created and owned by the receiving process, so when the process is destroyed the port goes with it. For a general mailbox, the operating system can provide a mailbox service; the mailbox is then owned by the operating system, and an explicit command is required to destroy it.

Message Format

The format depends on the objectives of the messaging facility and on the configuration (single machine or distributed system). Fixed-length messages minimize processing and storage overhead; variable-length messages are more flexible. A message has two parts, a header and a body. The header may contain a type field for distinguishing among various kinds of content, and control information, for example a pointer field to create linked lists of messages, sequence numbers to keep a record of the number and order of the messages, and possibly a priority field. The body is the actual message content.

Typical message format:

    Header:  message type | destination ID | source ID | message length | control information
    Body:    message contents

Queuing

The simplest and fairest discipline is FIFO; obviously it will not work in situations where some messages carry an urgent priority. A message can therefore have a priority field, based on its type, so that urgent messages are dealt with accordingly.
Alternatively, the receiver can be allowed to inspect the message queue and decide which message to take next.

Mutual Exclusion using Message Passing

With a non-blocking send and a blocking receive:

- The processes share a mailbox mutex.
- The mailbox is initialized to contain a single message with null content.
- A process wishing to enter its critical section first attempts to receive a message.
- If the mailbox is empty, the process is blocked.
- Once a process has received the message, it enters its critical section, and on leaving it sends the message back to the mailbox.
- The message functions as a token passed among the processes.

    const int n;           /* number of processes */

    void P (int i)
        message msg;
        while (true)
            receive (mutex, msg);
            /* critical section */
            send (mutex, msg);
            /* remainder */
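The token scheme above can be sketched with a queue standing in for the mailbox (Python's `queue.Queue` gives the non-blocking send / blocking receive semantics the notes assume; the names are illustrative):

```python
import threading
import queue

# The mailbox is a queue holding one null token; receive (get) blocks
# while the mailbox is empty, so the token enforces mutual exclusion.
mutex_box = queue.Queue()
mutex_box.put(None)                 # the single message with null content
counter = 0

def process(iterations):
    global counter
    for _ in range(iterations):
        msg = mutex_box.get()       # receive (mutex, msg): blocks if empty
        counter += 1                # critical section
        mutex_box.put(msg)          # send (mutex, msg): pass the token on

threads = [threading.Thread(target=process, args=(2000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # 6000
```

Exactly one thread holds the token at any instant, so the counter sees no lost updates; this is the message-passing analogue of a semaphore initialized to 1.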
Producer-Consumer Solution using Message Passing

    const int capacity = /* buffering capacity */;
    const message null = /* empty message */;
    int i;

    void producer ( )
        message pmsg;
        while (true)
            receive (mayproduce, pmsg);
            pmsg = produce ( );
            send (mayconsume, pmsg);

    void consumer ( )
        message cmsg;
        while (true)
            receive (mayconsume, cmsg);
            consume (cmsg);
            send (mayproduce, null);

    void main ( )
        create_mailbox (mayproduce);
        create_mailbox (mayconsume);
        for (i = 1; i <= capacity; i++)
            send (mayproduce, null);
        parbegin (producer, consumer);

Description

What happens if more than one process performs a receive operation on the same mailbox concurrently?

- If the mailbox holds a message, it is delivered to only one process; the others remain blocked.
- If the mailbox queue is empty, all of them are blocked.

Generally this is true for all message-passing facilities. In the example above, two mailboxes are created. The producer sends data to the mayconsume mailbox, which serves as the buffer; the buffer is organized as a queue, and as long as there is at least one message the consumer can consume. Initially mayproduce is filled with a number of null messages equal to the value of the capacity variable; the number of messages in mayproduce shrinks with each production and grows with each consumption.

Reader-Writer Problem

Problem definition: data (a file, a memory location or a bank of registers) is to be shared among processes. A number of reader and writer processes are present, with the following constraints:

- There is no restriction on how many readers may read the data simultaneously.
- Only one writer can write at one time.
- During writing, no reader is allowed to read.

One must understand that no writer is allowed to read and no reader can write; this is a restricted case. If we consider the general case, in which both are allowed to do both, then any portion of the shared area must be considered a critical section and a general mutual exclusion solution employed. That general solution is very slow, whereas the restricted case admits typically very efficient solutions.
Consider for example a library catalog: ordinary users read the catalog to locate a book, and only the librarian can update or change the data. Under the general treatment, every access to the catalog would be a critical section and only one user could read the data at a time, which would be very slow indeed.

Reader-Writer using Semaphores: Readers have Priority

wsem enforces mutual exclusion for writers and for the reader group as a whole: when there is no reader, the first process that wants to read must wait on wsem, but subsequent readers do not have to wait. readcount keeps track of the number of readers, and the semaphore x ensures that readcount is updated properly. In this scheme, writers can be starved for as long as at least one reader is present.

    int readcount = 0;
    semaphore x = 1, wsem = 1;

    void reader ( )
        while (true)
            wait (x);
            readcount++;
            if (readcount == 1)
                wait (wsem);         /* the first reader locks out writers */
            signal (x);
            readunit ( );
            wait (x);
            readcount--;
            if (readcount == 0)
                signal (wsem);       /* the last reader lets writers in */
            signal (x);

    void writer ( )
        while (true)
            wait (wsem);
            writeunit ( );
            signal (wsem);

Reader-Writer using Semaphores: Writers have Priority

The modified version guarantees that no new reader can access the data once a writer has shown the desire to access it; obviously a few more semaphores and variables are required. The semaphore rsem is used to inhibit readers once a writer wishes to access the data; writecount controls the setting of rsem, and y is used to assure that writecount is updated properly. Only one reader is allowed to queue on rsem; all other readers queue on z, so that a writer does not have to jump over a whole queue of readers.

    int readcount = 0, writecount = 0;
    semaphore x = 1, y = 1, z = 1, wsem = 1, rsem = 1;

    void reader ( )
        while (true)
            wait (z);
            wait (rsem);
            wait (x);
            readcount++;
            if (readcount == 1)
                wait (wsem);
            signal (x);
            signal (rsem);
            signal (z);
            readunit ( );
            wait (x);
            readcount--;
            if (readcount == 0)
                signal (wsem);
            signal (x);

    void writer ( )
        while (true)
            wait (y);
            writecount++;
            if (writecount == 1)
                wait (rsem);         /* the first writer locks out readers */
            signal (y);
            wait (wsem);
            writeunit ( );
            signal (wsem);
            wait (y);
            writecount--;
            if (writecount == 0)
                signal (rsem);
            signal (y);

State of the Queues

- Readers only in the system: wsem set; no queues.
- Writers only in the system: wsem and rsem set; writers queue on wsem.
- Both readers and writers, with read arriving first: wsem set by a reader, rsem set by a writer; all writers queue on wsem, one reader queues on rsem, all other readers queue on z.
- Both readers and writers, with write arriving first: wsem set by a writer, rsem set by a reader; all writers queue on wsem, one reader queues on rsem, all other readers queue on z.
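The readers-priority pseudocode transcribes almost line for line into Python (illustrative sketch; `threading.Semaphore` stands in for the semaphore primitives, and the read/write units are reduced to simple list and counter operations):

```python
import threading

# Readers-have-priority solution: x protects readcount; the first reader
# acquires wsem to lock out writers, and the last reader releases it.
readcount = 0
x = threading.Semaphore(1)      # protects readcount
wsem = threading.Semaphore(1)   # exclusive access for writers
data = 0
reads = []

def reader():
    global readcount
    x.acquire()
    readcount += 1
    if readcount == 1:
        wsem.acquire()          # first reader locks out writers
    x.release()
    reads.append(data)          # readunit: many readers may be here at once
    x.acquire()
    readcount -= 1
    if readcount == 0:
        wsem.release()          # last reader lets writers in
    x.release()

def writer():
    global data
    wsem.acquire()
    data += 1                   # writeunit: exclusive access
    wsem.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)                     # 5: every write applied exactly once
```

Each reader observes some intermediate value of data (depending on how many writes have completed), but no write is ever lost or torn, which is all the problem requires.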
Reader-Writer using Message Passing

    void reader (int i)
        message rmsg;
        while (true)
            rmsg = i;
            send (readrequest, rmsg);
            receive (mbox[i], rmsg);
            readunit ( );
            rmsg = i;
            send (finished, rmsg);

    void writer (int i)
        message rmsg;
        while (true)
            rmsg = i;
            send (writerequest, rmsg);
            receive (mbox[i], rmsg);
            writeunit ( );
            rmsg = i;
            send (finished, rmsg);

    void controller ( )
        while (true)
            if (count > 0)
                if (!empty (finished))
                    receive (finished, msg);
                    count++;
                else if (!empty (writerequest))
                    receive (writerequest, msg);
                    writer_id = msg.id;
                    count = count - 100;
                else if (!empty (readrequest))
                    receive (readrequest, msg);
                    count--;
                    send (msg.id, "OK");
            if (count == 0)
                send (writer_id, "OK");
                receive (finished, msg);
                count = 100;
            while (count < 0)
                receive (finished, msg);
                count++;

Discussion

- The controller process is the one that actually accesses the shared data; any process wishing to access the data sends a request to the controller. An "OK" reply indicates that access is granted, and a finished message indicates completion.
- The controller has one mailbox for each type of message: readrequest, writerequest and finished.
- The controller gives priority to request messages from write processes.
- Mutual exclusion is enforced through the count variable, initialized to the maximum number of simultaneous readers allowed (100 in the version above; the slide's constant was lost in transcription):
  - count > 0: no writer is waiting; there may or may not be active readers.
  - count = 0: only a write request is outstanding; the writer proceeds until its finished message arrives.
  - count < 0: a writer has issued a request and is waiting for all active readers to clear; only finished messages are serviced until then.
Dr. D. M. Akbar Hussain DE5 Department of Electronic Systems
More informationCHAPTER 6: PROCESS SYNCHRONIZATION
CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background
More informationChapter 6: Process Synchronization
Chapter 6: Process Synchronization Chapter 6: Synchronization 6.1 Background 6.2 The Critical-Section Problem 6.3 Peterson s Solution 6.4 Synchronization Hardware 6.5 Mutex Locks 6.6 Semaphores 6.7 Classic
More informationLecture Topics. Announcements. Today: Concurrency (Stallings, chapter , 5.7) Next: Exam #1. Self-Study Exercise #5. Project #3 (due 9/28)
Lecture Topics Today: Concurrency (Stallings, chapter 5.1-5.4, 5.7) Next: Exam #1 1 Announcements Self-Study Exercise #5 Project #3 (due 9/28) Project #4 (due 10/12) 2 Exam #1 Tuesday, 10/3 during lecture
More informationConcurrency. Chapter 5
Concurrency 1 Chapter 5 2 Concurrency Is a fundamental concept in operating system design Processes execute interleaved in time on a single processor Creates the illusion of simultaneous execution Benefits
More informationInterprocess Communication By: Kaushik Vaghani
Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the
More informationConcurrency(I) Chapter 5
Chapter 5 Concurrency(I) The central themes of OS are all concerned with the management of processes and threads: such as multiprogramming, multiprocessing, and distributed processing. The concept of concurrency
More informationOperating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy
Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference
More informationProcess Synchronization
CS307 Process Synchronization Fan Wu Department of Computer Science and Engineering Shanghai Jiao Tong University Spring 2018 Background Concurrent access to shared data may result in data inconsistency
More informationOperating Systems. Lecture 4 - Concurrency and Synchronization. Master of Computer Science PUF - Hồ Chí Minh 2016/2017
Operating Systems Lecture 4 - Concurrency and Synchronization Adrien Krähenbühl Master of Computer Science PUF - Hồ Chí Minh 2016/2017 Mutual exclusion Hardware solutions Semaphores IPC: Message passing
More informationChapter 6: Synchronization. Operating System Concepts 8 th Edition,
Chapter 6: Synchronization, Silberschatz, Galvin and Gagne 2009 Outline Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization
More informationChapter 6: Synchronization. Chapter 6: Synchronization. 6.1 Background. Part Three - Process Coordination. Consumer. Producer. 6.
Part Three - Process Coordination Chapter 6: Synchronization 6.1 Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure
More informationChapter 6: Process Synchronization
Chapter 6: Process Synchronization Objectives Introduce Concept of Critical-Section Problem Hardware and Software Solutions of Critical-Section Problem Concept of Atomic Transaction Operating Systems CS
More informationDealing with Issues for Interprocess Communication
Dealing with Issues for Interprocess Communication Ref Section 2.3 Tanenbaum 7.1 Overview Processes frequently need to communicate with other processes. In a shell pipe the o/p of one process is passed
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,
More informationProcess Synchronization
Process Synchronization Chapter 6 2015 Prof. Amr El-Kadi Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly
More informationSemaphores. To avoid busy waiting: when a process has to wait, it will be put in a blocked queue of processes waiting for the same event
Semaphores Synchronization tool (provided by the OS) that do not require busy waiting A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually
More informationSemaphores. Semaphores. Semaphore s operations. Semaphores: observations
Semaphores Synchronization tool (provided by the OS) that do not require busy waiting A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually
More informationProcess Synchronization
CSC 4103 - Operating Systems Spring 2007 Lecture - VI Process Synchronization Tevfik Koşar Louisiana State University February 6 th, 2007 1 Roadmap Process Synchronization The Critical-Section Problem
More informationThe concept of concurrency is fundamental to all these areas.
Chapter 5 Concurrency(I) The central themes of OS are all concerned with the management of processes and threads: such as multiprogramming, multiprocessing, and distributed processing. The concept of concurrency
More informationMultitasking / Multithreading system Supports multiple tasks
Tasks and Intertask Communication Introduction Multitasking / Multithreading system Supports multiple tasks As we ve noted Important job in multitasking system Exchanging data between tasks Synchronizing
More informationConcurrency: Mutual Exclusion and Synchronization
Concurrency: Mutual Exclusion and Synchronization 1 Needs of Processes Allocation of processor time Allocation and sharing resources Communication among processes Synchronization of multiple processes
More informationSynchronization Principles
Synchronization Principles Gordon College Stephen Brinton The Problem with Concurrency Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms
More informationProcess Synchronization: Semaphores. CSSE 332 Operating Systems Rose-Hulman Institute of Technology
Process Synchronization: Semaphores CSSE 332 Operating Systems Rose-Hulman Institute of Technology Critical-section problem solution 1. Mutual Exclusion - If process Pi is executing in its critical section,
More informationIntroduction to Operating Systems
Introduction to Operating Systems Lecture 4: Process Synchronization MING GAO SE@ecnu (for course related communications) mgao@sei.ecnu.edu.cn Mar. 18, 2015 Outline 1 The synchronization problem 2 A roadmap
More informationChapters 5 and 6 Concurrency
Operating Systems: Internals and Design Principles, 6/E William Stallings Chapters 5 and 6 Concurrency Patricia Roy Manatee Community College, Venice, FL 2008, Prentice Hall Concurrency When several processes/threads
More informationOther Interprocess communication (Chapter 2.3.8, Tanenbaum)
Other Interprocess communication (Chapter 2.3.8, Tanenbaum) IPC Introduction Cooperating processes need to exchange information, as well as synchronize with each other, to perform their collective task(s).
More informationClassic Problems of Synchronization
Classic Problems of Synchronization Bounded-Buffer Problem s-s Problem Dining Philosophers Problem Monitors 2/21/12 CSE325 - Synchronization 1 s-s Problem s s 2/21/12 CSE325 - Synchronization 2 Problem
More informationChapter 6: Process Synchronization. Module 6: Process Synchronization
Chapter 6: Process Synchronization Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization
More information4.5 Cigarette smokers problem
4.5 Cigarette smokers problem The cigarette smokers problem problem was originally presented by Suhas Patil [8], who claimed that it cannot be solved with semaphores. That claim comes with some qualifications,
More informationChapter 5: Process Synchronization. Operating System Concepts Essentials 2 nd Edition
Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks
More informationCS3502 OPERATING SYSTEMS
CS3502 OPERATING SYSTEMS Spring 2018 Synchronization Chapter 6 Synchronization The coordination of the activities of the processes Processes interfere with each other Processes compete for resources Processes
More informationBackground. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling.
Background The Critical-Section Problem Background Race Conditions Solution Criteria to Critical-Section Problem Peterson s (Software) Solution Concurrent access to shared data may result in data inconsistency
More informationIntroduction. Interprocess communication. Terminology. Shared Memory versus Message Passing
Introduction Interprocess communication Cooperating processes need to exchange information, as well as synchronize with each other, to perform their collective task(s). The primitives discussed earlier
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1018 L11 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel feedback queue:
More informationIntroduction to OS Synchronization MOS 2.3
Introduction to OS Synchronization MOS 2.3 Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Introduction to OS 1 Challenge How can we help processes synchronize with each other? E.g., how
More informationConcurrent Processes Rab Nawaz Jadoon
Concurrent Processes Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS Lahore Pakistan Operating System Concepts Concurrent Processes If more than one threads
More informationLesson 6: Process Synchronization
Lesson 6: Process Synchronization Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks Semaphores Classic Problems of Synchronization
More informationPerformance Throughput Utilization of system resources
Concurrency 1. Why concurrent programming?... 2 2. Evolution... 2 3. Definitions... 3 4. Concurrent languages... 5 5. Problems with concurrency... 6 6. Process Interactions... 7 7. Low-level Concurrency
More informationCONCURRENCY:MUTUAL EXCLUSION AND SYNCHRONIZATION
M05_STAL6329_06_SE_C05.QXD 2/21/08 9:25 PM Page 205 CHAPTER CONCURRENCY:MUTUAL EXCLUSION AND SYNCHRONIZATION 5.1 Principles of Concurrency A Simple Example Race Condition Operating System Concerns Process
More informationWhat is the Race Condition? And what is its solution? What is a critical section? And what is the critical section problem?
What is the Race Condition? And what is its solution? Race Condition: Where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular
More informationMaximum CPU utilization obtained with multiprogramming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait
Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Thread Scheduling Operating Systems Examples Java Thread Scheduling Algorithm Evaluation CPU
More informationProcess Co-ordination OPERATING SYSTEMS
OPERATING SYSTEMS Prescribed Text Book Operating System Principles, Seventh Edition By Abraham Silberschatz, Peter Baer Galvin and Greg Gagne 1 PROCESS - CONCEPT Processes executing concurrently in the
More informationIV. Process Synchronisation
IV. Process Synchronisation Operating Systems Stefan Klinger Database & Information Systems Group University of Konstanz Summer Term 2009 Background Multiprogramming Multiple processes are executed asynchronously.
More informationCSC Operating Systems Spring Lecture - XII Midterm Review. Tevfik Ko!ar. Louisiana State University. March 4 th, 2008.
CSC 4103 - Operating Systems Spring 2008 Lecture - XII Midterm Review Tevfik Ko!ar Louisiana State University March 4 th, 2008 1 I/O Structure After I/O starts, control returns to user program only upon
More informationModule 6: Process Synchronization
Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization Examples Atomic
More informationMidterm on next week Tuesday May 4. CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9
CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9 Bruce Char and Vera Zaychik. All rights reserved by the author. Permission is given to students enrolled in CS361 Fall 2004 to reproduce
More informationProcess Coordination
Process Coordination Why is it needed? Processes may need to share data More than one process reading/writing the same data (a shared file, a database record, ) Output of one process being used by another
More informationModule 6: Process Synchronization. Operating System Concepts with Java 8 th Edition
Module 6: Process Synchronization 6.1 Silberschatz, Galvin and Gagne 2009 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores
More informationChapter 7: Process Synchronization. Background. Illustration
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris
More informationChapter 5: Process Synchronization. Operating System Concepts 9 th Edition
Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks
More informationConcurrency: Mutual Exclusion and
Concurrency: Mutual Exclusion and Synchronization 1 Needs of Processes Allocation of processor time Allocation and sharing resources Communication among processes Synchronization of multiple processes
More informationChapter 7: Process Synchronization!
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Monitors 7.1 Background Concurrent access to shared
More informationRoadmap. Tevfik Ko!ar. CSC Operating Systems Fall Lecture - XI Deadlocks - II. Louisiana State University
CSC 4103 - Operating Systems Fall 2009 Lecture - XI Deadlocks - II Tevfik Ko!ar Louisiana State University September 29 th, 2009 1 Roadmap Classic Problems of Synchronization Bounded Buffer Readers-Writers
More informationConcurrency. On multiprocessors, several threads can execute simultaneously, one on each processor.
Synchronization 1 Concurrency On multiprocessors, several threads can execute simultaneously, one on each processor. On uniprocessors, only one thread executes at a time. However, because of preemption
More informationInterprocess Communication and Synchronization
Chapter 2 (Second Part) Interprocess Communication and Synchronization Slide Credits: Jonathan Walpole Andrew Tanenbaum 1 Outline Race Conditions Mutual Exclusion and Critical Regions Mutex s Test-And-Set
More informationOperating Systems. Operating Systems Summer 2017 Sina Meraji U of T
Operating Systems Operating Systems Summer 2017 Sina Meraji U of T More Special Instructions Swap (or Exchange) instruction Operates on two words atomically Can also be used to solve critical section problem
More informationRoadmap. Bounded-Buffer Problem. Classical Problems of Synchronization. Bounded Buffer 1 Semaphore Soln. Bounded Buffer 1 Semaphore Soln. Tevfik Ko!
CSC 4103 - Operating Systems Fall 2009 Lecture - XI Deadlocks - II Roadmap Classic Problems of Synchronization Bounded Buffer Readers-Writers Dining Philosophers Sleeping Barber Deadlock Prevention Tevfik
More informationProcess Synchronization
Chapter 7 Process Synchronization 1 Chapter s Content Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors 2 Background
More informationConcurrency: Mutual Exclusion and Synchronization. Concurrency
Concurrency: Mutual Exclusion and Synchronization Chapter 5 1 Concurrency Multiple applications Structured applications Operating system structure 2 1 Concurrency 3 Difficulties of Concurrency Sharing
More informationAdministrivia. Assignments 0 & 1 Class contact for the next two weeks Next week. Following week
Administrivia Assignments 0 & 1 Class contact for the next two weeks Next week midterm exam project review & discussion (Chris Chambers) Following week Memory Management (Wuchi Feng) 1 CSE 513 Introduction
More informationCS420: Operating Systems. Process Synchronization
Process Synchronization James Moscola Department of Engineering & Computer Science York College of Pennsylvania Based on Operating System Concepts, 9th Edition by Silberschatz, Galvin, Gagne Background
More informationProcess Synchronization
Process Synchronization Concurrent access to shared data may result in data inconsistency Multiple threads in a single process Maintaining data consistency requires mechanisms to ensure the orderly execution
More informationModels of concurrency & synchronization algorithms
Models of concurrency & synchronization algorithms Lecture 3 of TDA383/DIT390 (Concurrent Programming) Carlo A. Furia Chalmers University of Technology University of Gothenburg SP3 2016/2017 Today s menu
More informationDept. of CSE, York Univ. 1
EECS 3221.3 Operating System Fundamentals No.5 Process Synchronization(1) Prof. Hui Jiang Dept of Electrical Engineering and Computer Science, York University Background: cooperating processes with shared
More informationSynchronization. Before We Begin. Synchronization. Credit/Debit Problem: Race Condition. CSE 120: Principles of Operating Systems.
CSE 120: Principles of Operating Systems Lecture 4 Synchronization January 23, 2006 Prof. Joe Pasquale Department of Computer Science and Engineering University of California, San Diego Before We Begin
More informationRoadmap. Readers-Writers Problem. Readers-Writers Problem. Readers-Writers Problem (Cont.) Dining Philosophers Problem.
CSE 421/521 - Operating Systems Fall 2011 Lecture - X Process Synchronization & Deadlocks Roadmap Classic Problems of Synchronization Readers and Writers Problem Dining-Philosophers Problem Sleeping Barber
More informationProcess Synchronization. CISC3595, Spring 2015 Dr. Zhang
Process Synchronization CISC3595, Spring 2015 Dr. Zhang 1 Concurrency OS supports multi-programming In single-processor system, processes are interleaved in time In multiple-process system, processes execution
More informationRemaining Contemplation Questions
Process Synchronisation Remaining Contemplation Questions 1. The first known correct software solution to the critical-section problem for two processes was developed by Dekker. The two processes, P0 and
More informationSynchronization. CSE 2431: Introduction to Operating Systems Reading: Chapter 5, [OSC] (except Section 5.10)
Synchronization CSE 2431: Introduction to Operating Systems Reading: Chapter 5, [OSC] (except Section 5.10) 1 Outline Critical region and mutual exclusion Mutual exclusion using busy waiting Sleep and
More informationConcurrency: Deadlock and Starvation
Concurrency: Deadlock and Starvation Chapter 6 E&CE 354: Processes 1 Deadlock Deadlock = situation in which every process from a set is permanently blocked, i.e. cannot proceed with execution Common cause:
More informationSYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T A N D S P R I N G 2018
SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T 2. 3. 8 A N D 2. 3. 1 0 S P R I N G 2018 INTER-PROCESS COMMUNICATION 1. How a process pass information to another process
More informationChapter 5: Process Synchronization. Operating System Concepts 9 th Edition
Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks
More informationUNIX Input/Output Buffering
UNIX Input/Output Buffering When a C/C++ program begins execution, the operating system environment is responsible for opening three files and providing file pointers to them: stdout standard output stderr
More informationChapter 7: Process Synchronization. Background
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 12 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ 2 Mutex vs Semaphore Mutex is binary,
More informationLecture 3: Synchronization & Deadlocks
Lecture 3: Synchronization & Deadlocks Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating
More informationMutual Exclusion and Synchronization
Mutual Exclusion and Synchronization Concurrency Defined Single processor multiprogramming system Interleaving of processes Multiprocessor systems Processes run in parallel on different processors Interleaving
More informationProcess Synchronization
TDDI04 Concurrent Programming, Operating Systems, and Real-time Operating Systems Process Synchronization [SGG7] Chapter 6 Copyright Notice: The lecture notes are mainly based on Silberschatz s, Galvin
More informationChapter 6: Process Synchronization. Operating System Concepts 8 th Edition,
Chapter 6: Process Synchronization, Silberschatz, Galvin and Gagne 2009 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores
More informationTasks. Task Implementation and management
Tasks Task Implementation and management Tasks Vocab Absolute time - real world time Relative time - time referenced to some event Interval - any slice of time characterized by start & end times Duration
More informationOperating Systems ECE344
Operating Systems ECE344 Ding Yuan Announcement & Reminder Lab 0 mark posted on Piazza Great job! One problem: compilation error I fixed some for you this time, but won t do it next time Make sure you
More informationEnforcing Mutual Exclusion Using Monitors
Enforcing Mutual Exclusion Using Monitors Mutual Exclusion Requirements Mutually exclusive access to critical section Progress. If no process is executing in its critical section and there exist some processes
More informationProcess Synchronization(2)
CSE 3221.3 Operating System Fundamentals No.6 Process Synchronization(2) Prof. Hui Jiang Dept of Computer Science and Engineering York University Semaphores Problems with the software solutions. Not easy
More informationCS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 8: Semaphores, Monitors, & Condition Variables
CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2004 Lecture 8: Semaphores, Monitors, & Condition Variables 8.0 Main Points: Definition of semaphores Example of use
More informationConcurrency Control. Synchronization. Brief Preview of Scheduling. Motivating Example. Motivating Example (Cont d) Interleaved Schedules
Brief Preview of Scheduling Concurrency Control Nan Niu (nn@cs.toronto.edu) CSC309 -- Summer 2008 Multiple threads ready to run Some mechanism for switching between them Context switches Some policy for
More informationOperating Systems Antonio Vivace revision 4 Licensed under GPLv3
Operating Systems Antonio Vivace - 2016 revision 4 Licensed under GPLv3 Process Synchronization Background A cooperating process can share directly a logical address space (code, data) or share data through
More informationCS 333 Introduction to Operating Systems. Class 6 Monitors and Message Passing. Jonathan Walpole Computer Science Portland State University
CS 333 Introduction to Operating Systems Class 6 Monitors and Message Passing Jonathan Walpole Computer Science Portland State University 1 But first Continuation of Class 5 Classical Synchronization Problems
More information