Cairo University
Faculty of Computers & Information, Computer Science Department

Theoretical Part

1. Introduction to the Critical Section Problem

A critical section is a segment of code in which a process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section: the execution of critical sections by the processes is mutually exclusive. Each process must request permission to enter its critical section.

2. The Race Condition Problem

Assume, for example, that we have two processes named P1 and P2. Both processes access a common variable called counter: P1 increments this variable by one, while P2 decrements it by one. Assume the initial value of counter is 1 and we execute both processes concurrently. The final value of the variable may be 0, 1, or 2, although the only correct result is 1.

Explanation: Assume that the statement counter++ is implemented as

    register1 = counter;
    register1 = register1 + 1;
    counter = register1;

and, similarly, that counter-- is implemented the same way (using its own register, register2) except that the register value is decremented. The concurrent execution of the two statements may then interleave as follows:

    T0: P1 executes register1 = counter          (register1 = 1)
    T1: P2 executes register2 = counter          (register2 = 1)
    T2: P1 executes register1 = register1 + 1    (register1 = 2)
    T3: P2 executes register2 = register2 - 1    (register2 = 0)
    T4: P1 executes counter = register1          (counter = 2)
    T5: P2 executes counter = register2          (counter = 0)

Notice that this is not the only possible interleaving.

Conclusion: We reach this incorrect state because we allow both processes to access the same variable concurrently, so the outcome of the execution depends on the particular order in which the accesses take place. To solve this problem, we need some form of synchronization to ensure data consistency between cooperating processes/threads.
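The lost-update interleaving above can be reproduced in Java. The following is a minimal sketch of our own (the class name RaceDemo and the iteration count are illustrative assumptions, not part of the handout): two threads repeatedly apply an unsynchronized increment and decrement, so the final value is frequently not the expected 1.

    public class RaceDemo extends Thread {
        static int counter = 1;               // shared, unsynchronized variable
        private final int delta;

        public RaceDemo(int delta) {
            this.delta = delta;
        }

        public void run() {
            for (int i = 0; i < 100000; i++) {
                counter = counter + delta;    // unsynchronized read-modify-write
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread p1 = new RaceDemo(+1);     // plays the role of P1
            Thread p2 = new RaceDemo(-1);     // plays the role of P2
            p1.start();
            p2.start();
            p1.join();
            p2.join();
            System.out.println("counter = " + counter);   // frequently not 1
        }
    }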

Technical Part (Java Synchronization)

Java supports two kinds of thread synchronization:

- Mutual exclusion, which is supported in the Java virtual machine via object locks, enables multiple threads to work independently on shared data without interfering with each other.
- Cooperation, which is supported in the Java virtual machine via the wait and notify methods of class Object, enables threads to work together towards a common goal.

1. Mutual Exclusion

1.1. Object Locks

There is a lock associated with each object. The object lock must be acquired before entering a critical section (synchronized region) and is released after leaving it.

1.2. Synchronized Regions in Java

The Java programming language provides two basic synchronization idioms: synchronized methods and synchronized blocks.

1.2.1. Synchronized Methods

To make a method synchronized, simply add the synchronized keyword to its declaration.

When the virtual machine resolves a reference to a synchronized method, it acquires a lock before invoking the method. For an instance method, the virtual machine acquires the lock associated with the object upon which the method is being invoked. For a class (static) method, it acquires the lock associated with the class to which the method belongs (it locks the Class object). After a synchronized method completes, whether by returning or by throwing an exception, the virtual machine releases the lock.

It is not possible for two invocations of synchronized methods on the same object to interleave: when one thread is executing a synchronized method of an object, all other threads that invoke synchronized methods on the same object block (suspend execution) until the first thread is done with the object.
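As a side note on the class-lock case mentioned above, here is a minimal sketch of our own (the class name StaticCounter is not from the handout):

    // A static synchronized method locks the Class object, not an instance.
    public class StaticCounter {
        private static int c = 0;

        // Acquires the lock of StaticCounter.class before running.
        public static synchronized void increment() {
            c++;
        }

        // Equivalent form: an explicit synchronized block on the Class object.
        public static void decrement() {
            synchronized (StaticCounter.class) {
                c--;
            }
        }
    }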
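Example code:

    public class SynchronizedCounter {
        private int c = 0;

        public synchronized void increment() {
            int register = c;
            register = register + 1;
            c = register;
        }

        public synchronized void decrement() {
            int register = c;
            register = register - 1;
            c = register;
        }

        public synchronized int value() {
            return c;
        }
    }

Trace the following code with the students, once with the synchronized methods above and once with non-synchronized methods:

    public class MyThread extends Thread {
        private SynchronizedCounter sc;
        private String myName;

        public MyThread(String name, SynchronizedCounter sc) {
            this.sc = sc;
            this.myName = name;
        }

        public void run() {
            sc.increment();
            System.out.println("C = " + sc.value() + " in Thread " + myName);
        }
    }

    public class MutexThr {
        public static void main(String[] args) {
            SynchronizedCounter count = new SynchronizedCounter();
            MyThread thrA = new MyThread("A", count);
            MyThread thrB = new MyThread("B", count);
            thrA.start();
            thrB.start();
            try {
                thrA.join();
                thrB.join();
            } catch (InterruptedException e) {
                System.out.println("Join interrupted");
            }
        }
    }

1.2.2. Synchronized Blocks

Another way to create synchronized code is with a synchronized block. Unlike synchronized methods, synchronized blocks must specify the object that provides the intrinsic lock:

    public void addName(String name) {
        synchronized (this) {
            lastName = name;
            nameCount++;
        }
        nameList.add(name);
    }

Synchronized blocks are also useful for improving concurrency with fine-grained synchronization. Suppose, for example, that class MsLunch has two instance fields, c1 and c2, that are never used together. All updates of these fields must be synchronized, but there is no reason to prevent an update of c1 from being interleaved with an update of c2; doing so would reduce concurrency by creating unnecessary blocking. Instead of using synchronized methods or otherwise using the lock associated with this, we create two objects solely to provide locks:

    public class MsLunch {
        private long c1 = 0;
        private long c2 = 0;
        private Object lock1 = new Object();
        private Object lock2 = new Object();

        public void inc1() {
            synchronized (lock1) {
                c1++;
            }
        }

        public void inc2() {
            synchronized (lock2) {
                c2++;
            }
        }
    }

To relate the two idioms (1.2.1 and 1.2.2): a synchronized instance method behaves like a method whose entire body is wrapped in a block synchronized on this. A minimal sketch of our own (the class name BlockCounter is not from the handout):

    public class BlockCounter {
        private int c = 0;

        // Declaring the method synchronized acquires the lock of 'this'...
        public synchronized void incrementA() {
            c++;
        }

        // ...which is equivalent to synchronizing on 'this' inside the body.
        public void incrementB() {
            synchronized (this) {
                c++;
            }
        }
    }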

2. Cooperation

The java.lang.Object class contains five methods for thread cooperation. They can only be invoked from within a synchronized method or block; in other words, the lock associated with an object must be acquired before any of these methods is invoked.

    void wait()
        A thread that currently owns the monitor can suspend itself by executing wait. When a thread executes wait, it releases the object lock and enters the object's wait set. The thread stays suspended in the wait set until, some time later, another thread executes notify on the same object.
    void wait(long timeout)
        Waits until notified or until timeout milliseconds have elapsed.
    void wait(long timeout, int nanos)
        Waits until notified or until timeout milliseconds plus nanos nanoseconds have elapsed.
    void notify()
        Wakes up one thread in the wait set.
    void notifyAll()
        Wakes up all threads in the wait set.
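As a small illustration of wait and notifyAll (a sketch of our own; the class name Gate is not from the handout): one thread blocks until another thread opens the gate.

    public class Gate {
        private boolean open = false;

        // Blocks the caller until the gate is opened. wait() must be called
        // while holding the object's lock, and the condition is re-checked in
        // a loop to guard against spurious wakeups.
        public synchronized void await() throws InterruptedException {
            while (!open) {
                wait();          // releases the lock and joins the wait set
            }
        }

        // Opens the gate and wakes up every thread in the wait set.
        public synchronized void open() {
            open = true;
            notifyAll();
        }
    }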

Technical Problem 1: Counting Semaphore

We will implement a tool for process cooperation and synchronization called a semaphore. A semaphore S is an integer that is accessed only through two standard atomic operations: wait and signal. wait should be called before entering the critical section and signal after leaving it. There are two main types of semaphores: binary semaphores (only one process can enter the critical section) and counting semaphores (more than one process can enter the critical section).

Counting Semaphores

The semaphore count is an integer:
    o A non-negative count always means that there is no waiting process.
    o A count of -n indicates that there are n waiting processes.
    o A count of +n indicates that n resources are available and n requests can be granted without delay.

Atomic methods:
    o wait(sem) -- decrement the semaphore value; if it becomes negative, suspend the process (also referred to as P(sem)).
    o signal(sem) -- increment the semaphore value and allow the first process in the waiting queue to continue (also referred to as V(sem)).

Notice: We will use synchronized methods to ensure that these operations are atomic.

A semaphore can be implemented using busy waiting or by blocking the threads in a waiting queue:

Semaphore using busy waiting:

    Wait(S) {
        while (S <= 0)
            ;            // no-op
        S--;
    }

    Signal(S) {
        S++;
    }

Semaphore using blocking in a waiting queue:

    typedef struct {
        int value;
        struct process *L;   // list of waiting processes
    } semaphore;

    Wait(semaphore S) {
        S.value--;
        if (S.value < 0) {
            add this process to S.L;
            block();           // in Java: Object.wait()
        }
    }

    Signal(semaphore S) {
        S.value++;
        if (S.value <= 0) {
            remove a process P from S.L;
            wakeup(P);         // in Java: Object.notify()
        }
    }
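The busy-waiting variant can also be written in Java with an atomic counter. This is a sketch of our own (the class name SpinSemaphore is an assumption, not the handout's implementation; note that the count never goes negative, matching the busy-waiting pseudocode above):

    import java.util.concurrent.atomic.AtomicInteger;

    class SpinSemaphore {
        private final AtomicInteger value;

        SpinSemaphore(int initial) {
            value = new AtomicInteger(initial);
        }

        // wait(S): spin until the count is positive, then decrement it atomically.
        public void P() {
            while (true) {
                int v = value.get();
                if (v > 0 && value.compareAndSet(v, v - 1)) {
                    return;
                }
                Thread.yield();   // let other threads run while we spin
            }
        }

        // signal(S): increment the count.
        public void V() {
            value.incrementAndGet();
        }
    }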

Notice: The code below is a counting semaphore implemented with blocking, not busy waiting.

    // CS237 Concurrent Programming
    // ===== ========== ===========
    // A simplified version of the semaphore implementation from
    // Stephen Hartley's book.
    class semaphore {
        protected int value = 0;

        protected semaphore() {
            value = 0;
        }

        protected semaphore(int initial) {
            value = initial;
        }

        public synchronized void P() {
            value--;
            if (value < 0) {
                try {
                    wait();
                } catch (InterruptedException e) {
                }
            }
        }

        public synchronized void V() {
            value++;
            if (value <= 0) {
                notify();
            }
        }
    }
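For reference (an addition of ours, not part of the handout): the Java standard library already ships a counting semaphore, java.util.concurrent.Semaphore, whose acquire() and release() correspond to P()/wait and V()/signal.

    import java.util.concurrent.Semaphore;

    class JdkSemaphoreDemo {
        public static void main(String[] args) throws InterruptedException {
            Semaphore spaces = new Semaphore(5);   // five permits, like a buffer with five slots
            spaces.acquire();                      // P(): take a permit, blocking if none is left
            // ... use the shared resource ...
            spaces.release();                      // V(): return the permit
        }
    }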

Technical Problem 2: Producer-Consumer Problem using Semaphores

We need cooperation between the producer and the consumer processes over a bounded buffer, as follows:

- The producer inserts a new element only if there is free space in the buffer.
    o After it produces the element, it notifies the consumer that there is an element ready for consuming.
- The consumer removes an element only if there is a produced element in the buffer.
    o After it consumes the element, it notifies the producer that there is free space in the buffer.

    class buffer {
        private int size = 5;                        // the buffer bound
        private Object store[] = new Object[size];
        private int inptr = 0;
        private int outptr = 0;
        semaphore spaces = new semaphore(size);
        semaphore elements = new semaphore(0);

        public void produce(Object value) {
            spaces.P();
            store[inptr] = value;
            inptr = (inptr + 1) % size;
            elements.V();
        }

        public Object consume() {
            Object value;
            elements.P();
            value = store[outptr];
            outptr = (outptr + 1) % size;
            spaces.V();
            return value;
        }
    }

    // Notice: You can ask the students whether the produce or the consume
    // method needs to be synchronized. The answer is no, because only one
    // thread (the producer thread) calls produce and only one thread (the
    // consumer thread) calls consume, so there is no risk in updating the
    // inptr or outptr variables: each variable is updated by a single thread.

    class producer extends Thread {
        buffer buf;

        public producer(buffer buf) {
            this.buf = buf;
        }

        public void run() {
            for (int i = 1; i <= 50; i++) {
                buf.produce(new Integer(i));
            }
        }
    }

    class consumer extends Thread {
        buffer buf;

        public consumer(buffer buf) {
            this.buf = buf;
        }

        public void run() {
            for (int i = 1; i <= 50; i++) {
                System.out.println(buf.consume());
            }
        }
    }

    class pc {
        static buffer buf = new buffer();

        public static void main(String[] args) {
            producer P = new producer(buf);
            consumer C = new consumer(buf);
            P.start();
            C.start();
        }
    }

For more information, please refer to the following references:

    http://www.artima.com/insidejvm/ed2/threadsynch.html
    http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html
    http://java.sun.com/docs/books/tutorial/essential/concurrency/sync.html
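The notice above relies on there being exactly one producer and one consumer. As a hypothetical extension (our assumption, not part of the handout), if several producer threads shared the buffer, the update of inptr would itself become a critical section; one way to protect it is to replace the produce method of the buffer class above with:

    public void produce(Object value) {
        spaces.P();                  // still wait for a free slot first
        synchronized (this) {        // protect inptr against concurrent producers
            store[inptr] = value;
            inptr = (inptr + 1) % size;
        }
        elements.V();                // signal that an element is available
    }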