Parallel and Distributed Programming. Concurrency


Concurrency problems
- race conditions
- synchronization: in hardware (e.g. on multiprocessor machines) and in software (barriers, critical sections, atomic operations)
- mutual exclusion: the critical section and its entry and exit protocols
- desirable features of solutions: liveness, safety, fairness
- possible errors: deadlock, starvation

Mutual exclusion
Locks (flags, barriers):

    int lock = 0;
    thread_procedure() {
        while (lock != 0) {}   // busy wait
        lock = 1;              // race: another thread may pass the test before this write
        critical_section();
        lock = 0;
    }

Problems:
- busy waiting wastes CPU resources
- the procedure is not safe (two threads can pass the while test at the same time) and does not provide liveness

Mutual exclusion
How to solve the problems with the entry and exit protocols for the critical section:
- more complex algorithms: more variables, more reads and writes of shared data
- hardware support: dedicated processor instructions (e.g. test-and-set)
- system support: system procedures implemented with hardware support
- an appropriate API implemented on top of those system procedures

Mutual exclusion
Semaphores: a theoretical construct that allows a correct solution to the mutual exclusion problem. A semaphore is a global variable on which two indivisible, mutually exclusive operations can be performed, traditionally called P (Dutch proberen, wait) and V (Dutch verhogen, signal). Both are usually implemented in the operating system kernel, because they must be atomic.

    P(int s) {
        if (s > 0) s--;
        else suspend_thread();          // block the caller until a V wakes it
    }
    V(int s) {
        if (somebody_sleeps()) wake_thread();
        else s++;
    }

- initialization, e.g. init(int s, int v) { s = v; }
- the implementation of V determines the fairness of the semaphore (e.g. FIFO wake-up order)
- the value of s is the number of threads that may still access the resource; a semaphore initialized to 1 is a binary semaphore

Concurrency problems
The producer-consumer problem: one group of threads produces data and a second group consumes it. How do we ensure efficient progress (no deadlock and no starvation)?
The readers-writers problem: similar to the above, except that production (writing) may be done by only one process at a time, while consumption (reading) may be done by many processes at once.

Concurrency problems
The dining philosophers problem:
- a philosopher either eats or thinks
- philosophers sit around a table; in front of each lies a plate
- between each pair of plates lies a fork; at the center of the table is a bowl of spaghetti
- the problem: eating spaghetti requires two forks (the ones on both sides of the plate)
- how do we ensure the liveness of the philosophers (no one starves)?

Mutual exclusion
Semaphores: tackling the dining philosophers problem. A simple but incorrect solution that allows blocking:

    fork[i], i = 0..4    // five binary semaphores for five forks (initialized to 1)

    philosopher_thread(int i) {      // procedure for the i-th philosopher;
        for (;;) {                   // five concurrent threads, i = 0..4
            think();
            wait(fork[i]);
            wait(fork[(i+1) mod 5]);
            eat();
            signal(fork[i]);
            signal(fork[(i+1) mod 5]);
        }
    }

When does deadlock occur? When all five philosophers pick up their left fork at the same time: each then waits forever for a right fork that will never be released.

Mutual exclusion
Semaphores: tackling the dining philosophers problem. A correct solution: allow at most four philosophers to reach for forks at the same time.

    fork[i], i = 0..4    // five binary semaphores for five forks (initialized to 1)
    permission = 4;      // counting semaphore with initial value 4

    philosopher_thread(int i) {      // procedure for the i-th philosopher;
        for (;;) {                   // five concurrent threads, i = 0..4
            think();
            wait(permission);
            wait(fork[i]);
            wait(fork[(i+1) mod 5]);
            eat();
            signal(fork[i]);
            signal(fork[(i+1) mod 5]);
            signal(permission);
        }
    }

Mutex
The POSIX specification: mutex = mutual exclusion.
- creation: int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr)
- locking: int pthread_mutex_lock(pthread_mutex_t *mutex) (there is also a non-blocking version, pthread_mutex_trylock)
- unlocking: int pthread_mutex_unlock(pthread_mutex_t *mutex)
Responsibility for the proper use of mutexes (so that they guarantee safety and liveness) rests with the programmer.

Condition variables
The POSIX mechanism for waiting until a condition on shared data becomes true. A condition variable is always used in conjunction with a mutex lock.
Basic procedures:
- initialization: int pthread_cond_init(pthread_cond_t *restrict cond, const pthread_condattr_t *restrict attr); or statically: pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
- communication:
    int pthread_cond_wait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex);
    int pthread_cond_signal(pthread_cond_t *cond);
    int pthread_cond_broadcast(pthread_cond_t *cond);

Condition variables
A typical usage pattern:

Main thread:
- declare and initialize global data/variables that require synchronization (such as "count")
- declare and initialize a condition variable object
- declare and initialize an associated mutex
- create threads A and B to do work
- join / continue

Thread A:
- do work up to the point where a certain condition must occur (such as "count" reaching a specified value)
- lock the associated mutex and check the value of the global variable
- call pthread_cond_wait() to perform a blocking wait for a signal from thread B; note that a call to pthread_cond_wait() automatically and atomically unlocks the associated mutex so that it can be used by thread B
- when signalled, wake up; the mutex is automatically and atomically locked again
- explicitly unlock the mutex
- continue

Thread B:
- do work
- lock the associated mutex
- change the value of the global variable that thread A is waiting on
- check the value of the global variable; if it fulfills the desired condition, signal thread A
- unlock the mutex
- continue