Concurrency pros and cons. Concurrent Programming Problems. Linked list example. Mutual Exclusion. Concurrency is good for users


Concurrency pros and cons
Concurrent Programming Problems
OS Spring 2011

Concurrency is good for users
One of the reasons for multiprogramming: working on the same problem, simultaneous execution of programs, background execution.

Concurrency is a pain in the neck for the system
Access to shared data structures; danger of deadlock due to resource contention.

Linked list example
    insert_after(a, b):  b->next = a->next;  a->next = b
    remove_next(a):      tmp = a->next;  a->next = a->next->next;  free(tmp)

Mutual Exclusion
The OS is an instance of a concurrent program: multiple activities may take place at the same time. Concurrent execution of operations involving multiple steps is problematic. Example: updating a linked list. If insert_after(a, b) and remove_next(a) interleave on the same node, one pointer update can overwrite the other, losing the inserted node or leaving a pointer into freed memory. Concurrent access to a shared data structure must therefore be mutually exclusive.

Atomic Operations
A generic solution is to ensure atomic execution of operations: all the steps are perceived as executed at a single point in time. Either insert_after(a, b) happens entirely before remove_next(a), or remove_next(a) happens entirely before insert_after(a, b); the steps never interleave.

The Critical Section Problem
n processes P_0, ..., P_{n-1}. No assumptions on relative process speeds, no synchronized clocks, etc.; this models the inherent non-determinism of process scheduling. No assumptions on process activity when executing within the critical and remainder sections. The problem: implement a general mechanism for entering and leaving a critical section.

Success Criteria
1. Mutual exclusion: only one process is in the critical section at a time.
2. Progress: there is no deadlock: some process will eventually get into the critical section.
3. Fairness: there is no starvation: no process will wait indefinitely while other processes continuously enter the critical section.
4. Generality: it works for N processes.

Peterson's Algorithm
There are two shared arrays, initialized to 0:
    q[n]: q[i] is the stage of process i in entering the CS. Stage n means the process is in the CS.
    turn[n-1]: turn[j] says which process has to wait in stage j.
The algorithm for process i:

    for (j = 1; j < n; j++) {
        q[i] = j;
        turn[j] = i;
        while ((exists k != i such that q[k] >= j) && (turn[j] == i))
            ;  // busy wait
    }
    q[i] = n;   // stage n: in the critical section
    ... critical section ...
    q[i] = 0;   // exit

(figure: processes advancing through stages 1, 2, 3, 4)

Definition: process a is ahead of process b if q[a] > q[b].
Lemma 1: a process that is ahead of all others advances to the next stage (increments q[i]). Proof: this process is not stuck in the while loop, because the first condition does not hold.
Lemma 2: when a process advances to stage j+1, if it is not ahead of all processes, then there is at least one other process at stage j. Proof: to exit the while loop, another process had to take turn[j] after it.

Lemma 3: if there is more than one process at stage j, then there is at least one process at every stage below j. Proof: use Lemma 2 and prove by induction on the stages.
Lemma 4: the maximal number of processes at stage j is n-j+1. Proof: by Lemma 3, there are at least j-1 processes at lower stages.

Mutual exclusion: by Lemma 4, only one process can be at stage n.
Progress: there is always at least one process that can advance. If a process is ahead of all others, it can advance; if no process is ahead of all others, then there is more than one process at the top stage, and one of them can advance.
Fairness: a process will be passed over no more than n times by each of the other processes (proof: in the notes).
Generality: the algorithm works for any n.

Peterson's Algorithm: summary
Peterson's algorithm creates a mutual exclusion mechanism without any help from the OS. All the success criteria hold for this algorithm. It does use busy waiting (there is no other option without OS support).

Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
One cyclic buffer that can hold up to N items. Producer and consumer use the buffer; the buffer absorbs fluctuations in their rates. The buffer is shared, so protection is required. We use counting semaphores: the number in the semaphore represents the number of available resources of some type.

Semaphore mutex, initialized to 1: protects the indices into the buffer.
Semaphore full, initialized to 0: counts how many slots in the buffer are full (can be read).
Semaphore empty, initialized to N: counts how many slots in the buffer are empty (can be written into).

Producer:
    produce an item
    P(empty); P(mutex);
    add the item to the buffer
    V(mutex); V(full);

Consumer:
    P(full); P(mutex);
    remove an item from the buffer
    V(mutex); V(empty);
    consume the item

Readers-Writers Problem
A data structure is shared among a number of concurrent processes:
Readers only read the data; they do not perform updates.
Writers can both read and write.
The problem: allow multiple readers to read at the same time, but only one writer may access the shared data at a time, and while a writer is writing no reader may read.

Readers-Writers Problem: First Solution
Shared data:
The data structure itself.
Integer readcount, initialized to 0.
Semaphore mutex, initialized to 1: protects readcount.
Semaphore write, initialized to 1: makes sure the writer doesn't use the data at the same time as any readers.

The structure of a writer process:
    P(write);
    writing is performed
    V(write);

Readers-Writers Problem: First Solution
The structure of a reader process:
    P(mutex);
    readcount++;
    if (readcount == 1) P(write);
    V(mutex);
    reading is performed
    P(mutex);
    readcount--;
    if (readcount == 0) V(write);
    V(mutex);

This solution is not perfect: what if a writer is waiting to write, but there are readers that read all the time? Writers are subject to starvation!

Second Solution: Writer Priority
Extra semaphores and variables:
Semaphore read, initialized to 1: inhibits readers when a writer wants to write.
Integer writecount, initialized to 0: counts waiting writers.
Semaphore write_mutex, initialized to 1: controls the updating of writecount.
Semaphore mutex is now called read_mutex.
Semaphore queue, used only in the reader.

The writer:
    P(write_mutex);
    writecount++;                  // counts the number of waiting writers
    if (writecount == 1) P(read);
    V(write_mutex);
    P(write);
    writing is performed
    V(write);
    P(write_mutex);
    writecount--;
    if (writecount == 0) V(read);
    V(write_mutex);

The reader:
    P(queue);
    P(read);
    P(read_mutex);
    readcount++;
    if (readcount == 1) P(write);
    V(read_mutex);
    V(read);
    V(queue);
    reading is performed
    P(read_mutex);
    readcount--;
    if (readcount == 0) V(write);
    V(read_mutex);

The queue semaphore is initialized to 1: we don't want to allow more than one reader at a time in the entry section, otherwise the writer would be blocked by multiple readers when doing P(read).

Dining-Philosophers Problem
Shared data:
Bowl of rice (data set).
Semaphore chopstick[5], each initialized to 1.

Dining-Philosophers Problem Cont.
The structure of philosopher i:
    while (true) {
        think
        P(chopstick[i]);
        P(chopstick[(i + 1) % 5]);
        eat
        V(chopstick[i]);
        V(chopstick[(i + 1) % 5]);
    }
This naive solution can cause deadlock: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one.

Dining Philosophers Problem
This abstract problem demonstrates some fundamental limitations of deadlock-free synchronization. There is no symmetric solution: any symmetric algorithm leads to a symmetric output, that is, everyone eats (which is impossible) or no one does.

Possible solutions:
Use a waiter.
Execute different code for odd/even philosophers.
Give them another chopstick.
Allow at most 4 philosophers at the table.
Randomized (Lehmann-Rabin).

Summary
Concurrency can cause serious problems if not handled correctly. The tools we have seen: atomic operations, critical sections, semaphores and mutexes, and careful design to avoid deadlocks and livelocks.