Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy


Operating Systems
Designed and presented by Dr. Ayman Elshenawy Elsefy, Dept. of Systems & Computer Eng., Al-Azhar University.
Website: eaymanelshenawy.wordpress.com
Email: eaymanelshenawy@yahoo.com
Reference: Operating System Concepts, Abraham Silberschatz.

Chapter 6: Process Synchronization. Background, The Critical-Section Problem, Peterson's Solution, Synchronization Hardware, Semaphores, Classic Problems of Synchronization, Monitors, Synchronization Examples, Atomic Transactions.

Background
Independent process: cannot affect or be affected by the other processes executing in the system; it does not share data with any other process. Cooperating process: can affect or be affected by the other processes executing in the system; it shares data with other processes. Cooperating processes require an inter-process communication (IPC) mechanism to exchange information; the two IPC models are shared memory and message passing. Concurrent access to shared data may result in data inconsistency (as in the producer-consumer problem). The solution is a mechanism that ensures the orderly execution of cooperating processes. A solution to the producer-consumer problem that fills all the buffers keeps an integer count of the number of full buffers (initially count = 0): it is incremented by the producer after it produces an item and decremented by the consumer after it consumes one.

Producer-Consumer
The producer and consumer routines are each correct separately, but they may not function correctly when executed concurrently. Let count = 5 and let the producer and consumer processes execute the statements ++count and --count concurrently: count may end up 4, 5, or 6, while the only correct result is count == 5.

In machine language, counter++ is implemented as:
register1 = counter
register1 = register1 + 1
counter = register1
and counter-- as:
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with counter = 5 initially:
S0: producer executes register1 = counter {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = counter {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes counter = register1 {counter = 6}
S5: consumer executes counter = register2 {counter = 4}
We arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. If S4 and S5 are reversed, the result is counter = 6, which is equally wrong.
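The interleaving above can be replayed deterministically in Java by modeling each process's register as a local variable; the class and method names here are illustrative, not from the slides.

```java
// Deterministic replay of the S0-S5 interleaving from the slide.
// Each "register" is a local variable of one (simulated) process.
public class LostUpdate {
    static int counter = 5;

    public static int replay() {
        int register1 = counter;      // S0: producer loads counter (5)
        register1 = register1 + 1;    // S1: producer increments its copy (6)
        int register2 = counter;      // S2: consumer loads counter (still 5)
        register2 = register2 - 1;    // S3: consumer decrements its copy (4)
        counter = register1;          // S4: producer stores back (counter = 6)
        counter = register2;          // S5: consumer stores back (counter = 4)
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(replay()); // prints 4, not the correct 5
    }
}
```

The consumer's stale copy overwrites the producer's update: one of the two operations is lost.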

Race condition: several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place. We must ensure that only one process at a time can manipulate the variable counter; the processes must be synchronized in some way. Such situations occur frequently: different parts of the system manipulate shared resources, and with the growth of multicore systems and multithreaded applications, any changes resulting from such activities must not interfere with one another.

Critical-Section Problem
Consider a system of N processes {P0, P1, ..., PN-1}. Each process has a segment of code, called a critical section, in which it changes common variables, updates a table, writes a file, and so on. What is required? When one process is executing in its critical section, no other process is allowed to execute in its critical section. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section by executing a section of code called the entry section; the critical section may be followed by an exit section, and the remaining code is the remainder section.

Critical-Section Problem
A solution to the critical-section problem must satisfy the following requirements: 1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely. 3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.


Critical-Section Problem
Is the operating system itself free from such race conditions? No: kernel code is subject to several possible race conditions. Example 1: a kernel data structure maintains a list of all open files in the system. This list must be modified when a file is opened or closed (adding the file to the list or removing it from the list); if two processes were to open files simultaneously, the separate updates to this list could result in a race condition. Example 2: structures for maintaining memory allocation, process lists, and interrupt handling. Two general approaches are used to handle critical sections in operating systems: A preemptive kernel allows a process to be preempted while it is running in kernel mode, and must be carefully designed to ensure that shared kernel data are free from race conditions. A non-preemptive kernel does not allow a process running in kernel mode to be preempted (only one process is active in the kernel at a time), so there are no race conditions on kernel data structures.

Peterson's Solution
A two-process solution for processes Pi and Pj. The two processes share two variables:
int turn: if turn == i, process Pi is allowed to execute in its critical section.
boolean flag[2]: flag[i] == true implies that process Pi is ready to enter its critical section.
Pi enters its critical section only if either flag[j] == false or turn == i. If both processes were executing in their critical sections at the same time, we would have flag[0] == flag[1] == true; but the value of turn can be either 0 or 1, not both, so at most one process can be inside.

Peterson's Solution
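The algorithm can be sketched in Java as below. The volatile fields are an assumption required by the Java memory model (plain fields would not give the ordering the algorithm relies on), and the class and method names are illustrative, not from the slides.

```java
// Peterson's two-process mutual-exclusion algorithm, sketched in Java.
public class Peterson {
    private volatile boolean flag0 = false, flag1 = false; // flag[i]: Pi is ready
    private volatile int turn = 0;                         // whose turn it is
    private int counter = 0;                               // shared data under the lock

    public void enter(int i) {
        int j = 1 - i;
        if (i == 0) flag0 = true; else flag1 = true;       // I am ready
        turn = j;                                          // yield the turn to the other
        // spin while the other process is ready AND it is its turn
        while ((j == 0 ? flag0 : flag1) && turn == j) { /* busy wait */ }
    }

    public void exit(int i) {
        if (i == 0) flag0 = false; else flag1 = false;     // no longer ready
    }

    // Two threads each increment the shared counter under the lock.
    public int race(int perThread) {
        Thread t0 = new Thread(() -> { for (int k = 0; k < perThread; k++) { enter(0); counter++; exit(0); } });
        Thread t1 = new Thread(() -> { for (int k = 0; k < perThread; k++) { enter(1); counter++; exit(1); } });
        t0.start(); t1.start();
        try { t0.join(); t1.join(); } catch (InterruptedException e) { return -1; }
        return counter; // equals 2 * perThread only if mutual exclusion held
    }
}
```

If mutual exclusion were violated, some increments would be lost and the final count would fall short.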

Synchronization Hardware
Many systems provide hardware support for critical-section code; hardware features can make the programming task easier and improve system efficiency. On a uniprocessor we could simply disable interrupts, so that the currently running code executes without preemption. This is generally too inefficient on multiprocessor systems and not broadly scalable: disabling interrupts on a multiprocessor can be time consuming, and it is unwise to give the user the power to turn interrupts on and off (one might turn them off and forget to turn them back on). Modern machines instead provide special atomic (non-interruptible) hardware instructions that either test a memory word and set its value, or swap the contents of two memory words.

TestAndSet Instruction
The important characteristic of this instruction is that it is executed atomically: if two test-and-set instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.
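A lock built on test-and-set can be sketched in Java using AtomicBoolean.getAndSet as a stand-in for the hardware instruction; the class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Spinlock sketch: getAndSet(true) atomically reads the old value and sets
// the lock, exactly the test-and-set pattern from the slide.
public class TasLock {
    private final AtomicBoolean lock = new AtomicBoolean(false);
    private int counter = 0; // shared data protected by the lock

    public void acquire() {
        // keep setting the lock until we observe it was previously free
        while (lock.getAndSet(true)) { /* busy wait */ }
    }

    public void release() {
        lock.set(false);
    }

    // Several threads increment the shared counter under the lock.
    public int race(int threads, int perThread) {
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int k = 0; k < perThread; k++) {
                    acquire();
                    counter++;          // critical section
                    release();
                }
            });
            ts[t].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { return -1; }
        }
        return counter; // threads * perThread if mutual exclusion held
    }
}
```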

Solution to the Critical-Section Problem Using Mutex Locks
A software approach; mutex is short for mutual exclusion. A mutex lock is used to protect critical sections and thus prevent race conditions: a process must acquire the lock before entering a critical section, and it releases the lock when it exits the critical section. Disadvantage: this approach requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(); busy waiting wastes CPU cycles that some other process might be able to use productively.

Swap Instruction
Definition:
void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
A shared Boolean variable lock is initialized to FALSE, and each process has a local Boolean variable key. Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Semaphore
The hardware-based solutions to the critical-section problem are complicated for application programmers to use, so a synchronization tool called a semaphore can be used instead. A semaphore S contains an integer variable that is initialized and then accessed only through two standard operations: acquire() and release(), sometimes termed P and V (to test and to increment). Modifications to the integer value of the semaphore in the acquire() and release() operations must be executed indivisibly: only one thread can modify the semaphore at a time. Operating systems often distinguish between counting and binary semaphores. Counting semaphore: the value can range over an unrestricted domain. Binary semaphore: the value can range only between 0 and 1 (known as a mutex lock in some operating systems).

Semaphore Implementation Using Java
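The Java code on this slide did not survive transcription; a minimal busy-waiting sketch in the same spirit, with illustrative names, might look like this:

```java
// Busy-waiting semaphore sketch: acquire() spins until a permit is free.
// synchronized blocks make the test-and-decrement indivisible.
public class SpinSemaphore {
    private int value;

    public SpinSemaphore(int initial) { value = initial; }

    public void acquire() {
        while (true) {
            synchronized (this) {
                if (value > 0) { value--; return; } // got a permit
            }
            // no permit available: spin and retry (busy waiting)
        }
    }

    public synchronized void release() {
        value++;
    }

    public synchronized int available() { return value; }
}
```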

Semaphore Usage
1. Controlling access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each thread that wishes to use a resource performs an acquire(); when a thread releases a resource, it performs a release(). When the count reaches 0, all resources are in use, and any thread that wishes to use a resource blocks until the count becomes greater than 0.
2. Solving various synchronization problems. Consider two concurrently running processes, P1 executing statement S1 and P2 executing statement S2, where we require that S2 be executed only after S1 has completed. Let P1 and P2 share a common semaphore synch, initialized to 0: P2 invokes synch.acquire() before S2, and P1 invokes synch.release() after S1. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked synch.release(), which is after statement S1 has been executed.
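The second usage can be shown directly with java.util.concurrent.Semaphore; the log entries "S1" and "S2" stand in for the two statements, and the class name is illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Semaphore;

// Enforcing "S2 only after S1" with a semaphore initialized to 0.
public class OrderWithSemaphore {
    public static List<String> run() {
        Semaphore synch = new Semaphore(0); // no permits yet: S2 must wait
        List<String> log = Collections.synchronizedList(new ArrayList<>());

        Thread p2 = new Thread(() -> {
            synch.acquireUninterruptibly();  // blocks until P1 releases
            log.add("S2");                   // S2 runs only after the signal
        });
        p2.start();

        log.add("S1");                       // P1 executes S1 ...
        synch.release();                     // ... then signals P2
        try { p2.join(); } catch (InterruptedException ignored) { }
        return log;
    }
}
```

No matter how the threads are scheduled, the log always reads S1 before S2.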

Semaphore Implementation
Problem: this definition requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code (a clear problem in a multiprogramming system); busy waiting wastes CPU cycles that some other process might be able to use productively. Solution: modify the acquire() and release() operations so that, instead of looping, the process blocks itself. The block operation places the process into a waiting queue associated with the semaphore and changes its state from running to waiting; the CPU scheduler can then select another process to execute. The blocked process is restarted when some other process executes a release() operation, via a wakeup() operation that changes the process from the waiting state to the ready state.
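A blocking version can be sketched in Java by letting the JVM's built-in monitor queue play the role of the semaphore's waiting queue: wait() is the block operation and notify() is the wakeup. Names are illustrative.

```java
// Blocking semaphore sketch: no busy waiting. A thread that finds no permit
// calls wait(), joining the monitor's waiting queue; release() wakes one up.
public class BlockingSemaphore {
    private int value;

    public BlockingSemaphore(int initial) { value = initial; }

    public synchronized void acquire() {
        while (value == 0) {
            try { wait(); }                  // block: running -> waiting
            catch (InterruptedException ignored) { }
        }
        value--;
    }

    public synchronized void release() {
        value++;
        notify();                            // wakeup: waiting -> ready
    }
}
```

The while loop (rather than if) rechecks the permit count after each wakeup, which is required because another thread may grab the permit first.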

Semaphore Solution to the Busy-Waiting Problem: Implementation Using Java

Deadlock and Starvation
The semaphore implementation with a waiting queue may result in deadlock. Consider a system of two processes, P0 and P1, each accessing two semaphores, S and Q, both set to the value 1. Scenario: P0 executes S.acquire() and P1 executes Q.acquire(), so S and Q both drop to 0. P0 then executes Q.acquire() and must wait until P1 executes Q.release(); meanwhile P1 executes S.acquire() and must wait until P0 executes S.release(). Neither release can ever execute, so P0 and P1 are deadlocked. Deadlock: two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. Indefinite blocking, or starvation: a process waits indefinitely within the semaphore, which can occur, for example, if we add and remove processes from the list associated with a semaphore in LIFO order.

Priority Inversion
Suppose a higher-priority process needs to modify kernel data that are currently being accessed by a lower-priority process. Since kernel data are typically protected with a lock, the higher-priority process must wait for the lower-priority one to finish. The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority. Example: assume three processes with priorities L < M < H, and process H requires resource R, which is locked by process L. H waits for L to finish using R, but process M becomes runnable and preempts L. Indirectly, a process with a lower priority (M) has affected how long H must wait for L to release R. Solution: the priority-inheritance protocol, under which the process accessing the resource temporarily inherits the higher priority until it releases the resource.

Classical Problems of Synchronization
Classical problems used to test newly proposed synchronization schemes: the Bounded-Buffer Problem, the Readers-Writers Problem, and the Dining-Philosophers Problem.

Bounded-Buffer Problem
N buffers, each of which can hold one item. Semaphore mutex initialized to the value 1 (protects access to the buffer), semaphore full initialized to the value 0 (counts full slots), semaphore empty initialized to the value N (counts empty slots).

Bounded-Buffer Problem
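The producer and consumer routines from this slide can be sketched in Java with the three semaphores described above; the buffer type and names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Bounded buffer with mutex = 1, full = 0, empty = N, as on the slide.
public class BoundedBuffer {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final Semaphore mutex = new Semaphore(1); // guards the buffer
    private final Semaphore full  = new Semaphore(0); // number of full slots
    private final Semaphore empty;                    // number of empty slots

    public BoundedBuffer(int n) { empty = new Semaphore(n); }

    public void produce(int item) {
        empty.acquireUninterruptibly();   // wait for an empty slot
        mutex.acquireUninterruptibly();
        buffer.addLast(item);             // add the item to the buffer
        mutex.release();
        full.release();                   // one more full slot
    }

    public int consume() {
        full.acquireUninterruptibly();    // wait for a full slot
        mutex.acquireUninterruptibly();
        int item = buffer.removeFirst();  // remove an item from the buffer
        mutex.release();
        empty.release();                  // one more empty slot
        return item;
    }
}
```

The producer blocks when all N slots are full and the consumer blocks when the buffer is empty, with no busy waiting.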

Readers-Writers Problem
A database is to be shared among several concurrent processes, some of which only read the database (readers) while others update it (writers, who both read and write). If two readers access the shared data simultaneously, no adverse effects result; but if a writer and some other process access the database simultaneously, a problem may occur, so writers must have exclusive access to the shared database. In the first variant of the problem, no reader should wait for other readers to finish simply because a writer is waiting; in the second, once a writer is ready, that writer performs its write as soon as possible.

Readers-Writers Problem
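The first readers-writers solution on this slide can be sketched as follows: rwMutex gives writers exclusive access, mutex protects the shared readCount, and only the first reader in and the last reader out touch rwMutex. Names follow common textbook usage but are assumptions here.

```java
import java.util.concurrent.Semaphore;

// First readers-writers solution: readers never wait for a waiting writer.
public class ReadersWriters {
    private final Semaphore rwMutex = new Semaphore(1); // writers' exclusive access
    private final Semaphore mutex   = new Semaphore(1); // protects readCount
    private int readCount = 0;

    public void startRead() {
        mutex.acquireUninterruptibly();
        readCount++;
        if (readCount == 1) rwMutex.acquireUninterruptibly(); // first reader locks out writers
        mutex.release();
    }

    public void endRead() {
        mutex.acquireUninterruptibly();
        readCount--;
        if (readCount == 0) rwMutex.release();                // last reader lets writers in
        mutex.release();
    }

    public void startWrite() { rwMutex.acquireUninterruptibly(); }

    public void endWrite()   { rwMutex.release(); }

    public int readers()     { return readCount; }
}
```

Any number of readers can overlap, but a writer holds rwMutex alone; this variant can starve writers if readers keep arriving.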

Dining-Philosophers Problem
Five philosophers spend their lives thinking and eating. They share a circular table with five chairs, each belonging to one philosopher; in the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with the others. When she gets hungry, she tries to pick up the two chopsticks that are closest to her. A philosopher may pick up only one chopstick at a time, and she cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing them; when she is finished eating, she puts down both of her chopsticks and starts thinking again. The problem is a classic representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

Dining-Philosophers Problem: Solution
Represent each chopstick with a semaphore. A philosopher grabs a chopstick by executing an acquire() operation and releases it by executing a release() operation. This simple solution is not acceptable: it has the possibility of creating a deadlock.

Dining-Philosophers Problem: Solution
Suppose all five philosophers become hungry simultaneously and each grabs her left chopstick. All the elements of chopstick will now be equal to 0, and when each philosopher tries to grab her right chopstick, she will be delayed forever. Possible solutions place restrictions on the philosophers: 1. Allow at most four philosophers to be sitting simultaneously at the table. 2. Allow a philosopher to pick up her chopsticks only if both chopsticks are available (note that she must pick them up in a critical section). 3. Use an asymmetric solution: an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick first and then her left.
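Restriction 3, the asymmetric solution, can be sketched in Java with one semaphore per chopstick; breaking the symmetry of acquisition order prevents the circular wait, so all philosophers eventually finish. Class and variable names are illustrative.

```java
import java.util.concurrent.Semaphore;

// Asymmetric dining philosophers: odd-numbered philosophers take left then
// right, even-numbered take right then left, so no circular wait can form.
public class DiningPhilosophers {
    public static int run(int meals) {
        final int N = 5;
        final Semaphore[] chopstick = new Semaphore[N];
        for (int i = 0; i < N; i++) chopstick[i] = new Semaphore(1);
        final int[] eaten = new int[N]; // each philosopher writes only her own slot

        Thread[] ph = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            ph[i] = new Thread(() -> {
                Semaphore left   = chopstick[id];
                Semaphore right  = chopstick[(id + 1) % N];
                Semaphore first  = (id % 2 == 1) ? left : right;  // the asymmetry
                Semaphore second = (id % 2 == 1) ? right : left;
                for (int m = 0; m < meals; m++) {
                    first.acquireUninterruptibly();
                    second.acquireUninterruptibly();
                    eaten[id]++;                                  // eat
                    second.release();
                    first.release();                              // back to thinking
                }
            });
            ph[i].start();
        }
        int total = 0;
        for (int i = 0; i < N; i++) {
            try { ph[i].join(); } catch (InterruptedException ignored) { }
            total += eaten[i];
        }
        return total; // 5 * meals if no philosopher was delayed forever
    }
}
```

If the symmetric version were used instead, this run could hang exactly in the all-grab-left scenario described above.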

End of Chapter 6