Concurrent Programming. CS105 Programming Languages Supplement


Outline
Introduction
Categories
Concepts
Semaphores
Monitors
Message Passing
Statement level concurrency

Introduction: Definitions
Process: execution of a sequence of statements, with private state including associated resources and an execution context
Concurrent program: two or more execution contexts (multithreaded)
Parallel program: a concurrent program with more than one execution context active at the same time
Distributed program: programs run at the same time on a network of processors (without shared memory)

Introduction: Types of concurrency
Instruction level
Statement level
Subprogram (unit) level
Program level

Multiprocessor Architectures
Single Instruction Multiple Data (SIMD): each processor executes the same instruction per step; no synchronization required; vector machines or vector processors
Multiple Instruction Multiple Data (MIMD): each processor executes a private instruction stream; two subtypes, shared memory and distributed; both require synchronization

Categories of Concurrency
Physical vs. logical concurrency: if the language supports logical concurrency, we don't care whether multiple execution contexts are actually active simultaneously
Single thread of control: a concurrent program run on a single processor is virtually multithreaded

Significance of Concurrency
A way of conceptualizing solutions to problems, especially simulation of physical systems
Distributed architectures are increasingly available
Need to write software to take advantage of them
Need language support for that software

Concepts for Concurrency
Tasks/processes
Synchronization
Producer-Consumer Problem
Race condition
Deadlock

Concepts for Concurrency
Tasks or processes: units of a program allowing concurrent execution
Contrast with subprograms:
Implicitly started vs. explicitly called
The program unit invoking a task carries on without waiting
Task completion need not return control to the caller
Synchronization
Cooperative: task A needs a result from task B
Competitive: tasks A and B need the same resource
Producer-Consumer problem
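The contrast with subprograms can be made concrete. A minimal Python sketch (the names `task`, `order`, and the event are illustrative): starting a thread returns control to the caller immediately, unlike a subprogram call, and the task's completion does not return control to the caller.

```python
import threading

order = []
caller_went_first = threading.Event()

def task():
    caller_went_first.wait()         # wait until the caller has moved on
    order.append("task done")

t = threading.Thread(target=task)
t.start()                            # unlike a subprogram call, this returns immediately
order.append("caller continued")     # the invoking unit carries on without waiting
caller_went_first.set()
t.join()                             # explicit wait, only if the caller needs the result
print(order)                         # ['caller continued', 'task done']
```

The event is scaffolding to make the interleaving deterministic for the demonstration; in real code the two threads would simply run concurrently.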

Producer-Consumer Problem
Shared buffer
Producer generates items stored in the buffer
Consumer removes items from the buffer
Must prevent buffer underflow/overflow

Concepts for Concurrency
Race condition: a competitive synchronization problem
Example: one task runs {total++;} while another runs {total *= 2;}, each a fetch, change, store sequence
Multiple possible outcomes:
If the first completes before the second starts, total is incremented, then doubled
If the second completes before the first starts, total is doubled, then incremented
If both fetch, then the first finishes before the second, the increment is overwritten and lost
If both fetch, then the second finishes before the first, the doubling is overwritten and lost
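The four outcomes can be checked by modeling each task as explicit fetch/change/store steps and replaying the interleavings. This is a simulation, not a real data race; the initial value 3 is an illustrative choice, not from the slides.

```python
# Replay interleavings of total++ and total *= 2, each decomposed
# into the fetch, change, store steps the slide describes.
def run(schedule, total=3):
    reg = {}                                   # each task's private register
    for task, step in schedule:
        if step == "fetch":
            reg[task] = total                  # read shared total
        elif step == "change":
            reg[task] = reg[task] + 1 if task == "inc" else reg[task] * 2
        else:                                  # store: write register back
            total = reg[task]
    return total

inc = [("inc", s) for s in ("fetch", "change", "store")]
dbl = [("dbl", s) for s in ("fetch", "change", "store")]

print(run(inc + dbl))   # 8: increment completes before doubling starts
print(run(dbl + inc))   # 7: doubling completes before increment starts
print(run([inc[0], dbl[0]] + inc[1:] + dbl[1:]))  # 6: both fetch, doubling's store wins
print(run([inc[0], dbl[0]] + dbl[1:] + inc[1:]))  # 4: both fetch, increment's store wins
```

Four distinct final values from the same two statements is exactly what makes races hard to debug.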

Concepts for Concurrency
Deadlock: tasks A and B each require resources X and Y
Task A obtains X and task B obtains Y
Each must wait for the other to release its resource
Neither will release, thus neither will complete
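This scenario can be staged deterministically with two locks. The acquire timeouts and coordinating events below are scaffolding added so the sketch terminates and reports instead of hanging; a real deadlock would simply block forever.

```python
import threading

x, y = threading.Lock(), threading.Lock()     # resources X and Y
a_has_x, b_has_y = threading.Event(), threading.Event()
a_tried, b_tried = threading.Event(), threading.Event()
outcome = {}

def task_a():
    with x:                                   # A obtains X
        a_has_x.set()
        b_has_y.wait()                        # ensure B already holds Y
        outcome["a_got_y"] = y.acquire(timeout=0.2)  # blocks: B holds Y
        a_tried.set()
        b_tried.wait()                        # hold X until B's attempt is over

def task_b():
    with y:                                   # B obtains Y
        b_has_y.set()
        a_has_x.wait()                        # ensure A already holds X
        outcome["b_got_x"] = x.acquire(timeout=0.2)  # blocks: A holds X
        b_tried.set()
        a_tried.wait()                        # hold Y until A's attempt is over

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start(); ta.join(); tb.join()
print(outcome)   # both False: neither task obtained the second resource
```

A standard fix is to impose a global acquisition order (always take X before Y); then the circular wait can never form.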

Supporting Concurrency
Managing mutually exclusive access to shared resources:
A process requests exclusive access to a resource
It releases exclusivity when done
Management methods: semaphores, monitors, message passing
A scheduler manages requests and sharing

States of Tasks
New: created but not yet run
Ready: not running but runnable
Running: execution context active on a processor
Blocked: previously running but now waiting
Dead: completed or killed

Language Design Issues
Support for concurrency requires:
Synchronization (both competition and cooperation)
Task scheduling
Task initiation and termination

Outline
Introduction
Semaphores
Monitors
Message Passing
Statement level concurrency

Semaphores
Formulated by Dijkstra (1965)
Data structure consists of an integer and a queue of task descriptors
Provides limited access, or a guard, to a shared resource
The queue ensures delayed requests eventually get served
Originally named P and V, from the Dutch for pass and release
We'll use wait and release

Cooperative Synchronization
Use a shared buffer as the example
Use two semaphores to manage access to the buffer; their counters track its contents
The emptyspots semaphore tracks available capacity
The fullspots semaphore tracks used capacity
The respective queues contain tasks waiting for the respective resource

Cooperative Producer-Consumer
Buffer as an ADT: DEPOSIT adds data, FETCH removes it
DEPOSIT needs a check for available capacity; FETCH needs a check for available data
The semaphores: emptyspots and fullspots
DEPOSIT: if emptyspots is positive, decrement emptyspots first, then increment fullspots after depositing; else the caller is added to the wait queue
FETCH: if fullspots is positive, decrement fullspots first, then increment emptyspots after fetching; else the caller is added to the wait queue

Operations on Semaphores
The wait and release subprograms access the semaphore:

wait(asemaphore)
  if asemaphore's counter > 0 then
    decrement asemaphore's counter
  else
    put the caller in asemaphore's queue
    attempt to transfer control to some ready task
    (if none, deadlock)
  end

release(asemaphore)
  if asemaphore's queue is empty (no task waiting) then
    increment asemaphore's counter
  else
    put the calling task in the task-ready queue
    transfer control to a task from asemaphore's queue
  end
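A sketch of this pseudocode in Python, using a Condition variable whose internal wait list plays the role of the semaphore's task queue. One deliberate simplification: release always increments the counter and a woken waiter re-decrements it, rather than handing off directly; the observable behavior matches the pseudocode.

```python
import threading

class Semaphore:
    """Counter plus a queue of waiting tasks (the Condition's wait list)."""
    def __init__(self, count=0):
        self.count = count
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.count == 0:   # no resource free: join the wait queue
                self.cond.wait()
            self.count -= 1          # counter > 0: just decrement it

    def release(self):
        with self.cond:
            self.count += 1
            self.cond.notify()       # move one waiting task back to ready

# tiny demonstration
s = Semaphore(2)
s.wait(); s.wait()                   # both spots now taken
got = []
def late():
    s.wait()                         # queues until some release arrives
    got.append("served")
t = threading.Thread(target=late)
t.start()
s.release()                          # wakes the queued task
t.join()
print(got)                           # ['served']
```

Production code would use the built-in threading.Semaphore; the point here is only to show the counter-plus-queue structure.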

semaphore fullspots, emptyspots;
fullspots.count := 0;
emptyspots.count := BUFLEN;

task producer;
  loop
    ... produce VALUE ...
    wait(emptyspots);     { wait for a space }
    DEPOSIT(VALUE);
    release(fullspots);   { increase filled spaces }
  end loop;
end producer;

task consumer;
  loop
    wait(fullspots);      { make sure it is not empty }
    FETCH(VALUE);
    release(emptyspots);  { increase empty spaces }
    ... consume VALUE ...
  end loop;
end consumer;
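The same structure can be rendered with Python's built-in threading.Semaphore, whose acquire/release play the roles of wait/release. BUFLEN, the item count, and the deque buffer are illustrative choices, not from the slides.

```python
import threading
from collections import deque

BUFLEN = 3
buffer = deque()
emptyspots = threading.Semaphore(BUFLEN)   # counts free slots
fullspots = threading.Semaphore(0)         # counts filled slots

def producer(n):
    for value in range(n):                 # ... produce VALUE ...
        emptyspots.acquire()               # wait(emptyspots): wait for a space
        buffer.append(value)               # DEPOSIT(VALUE)
        fullspots.release()                # release(fullspots): filled spaces

def consumer(n, out):
    for _ in range(n):
        fullspots.acquire()                # wait(fullspots): not empty
        out.append(buffer.popleft())       # FETCH(VALUE)
        emptyspots.release()               # release(emptyspots): empty spaces

consumed = []
p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10, consumed))
p.start(); c.start(); p.join(); c.join()
print(consumed)                            # all 10 items, in order, no under/overflow
```

With one producer and one consumer, the two semaphores alone are enough; the buffer never holds more than BUFLEN items and FETCH never runs on an empty buffer.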

Competitive Synchronization
Suppose multiple producers and consumers
The previous solution only supports coordination between two cooperating processes
Use a third semaphore to control access to the buffer
A wait on it grants access if the buffer is not in use (counter = 1)
If the counter is 0, the buffer is in use and the caller is placed on the wait queue
(It doesn't really use the counter: a binary semaphore)

semaphore access, fullspots, emptyspots;
access.count := 1;
fullspots.count := 0;
emptyspots.count := BUFLEN;

task producer;
  loop
    ... produce VALUE ...
    wait(emptyspots);     { wait for a space }
    wait(access);         { wait for access }
    DEPOSIT(VALUE);
    release(access);      { relinquish access }
    release(fullspots);   { increase filled spaces }
  end loop;
end producer;

task consumer;
  loop
    wait(fullspots);      { make sure it is not empty }
    wait(access);
    FETCH(VALUE);
    release(access);
    release(emptyspots);  { increase empty spaces }
    ... consume VALUE ...
  end loop;
end consumer;
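A Python sketch of the competitive version, assuming two producers and two consumers (an illustrative choice): the third semaphore, access, is initialized to 1 and so acts as a binary semaphore guarding DEPOSIT and FETCH.

```python
import threading

BUFLEN = 3
buffer = []
access = threading.Semaphore(1)            # binary: guards the buffer itself
emptyspots = threading.Semaphore(BUFLEN)
fullspots = threading.Semaphore(0)

def producer(items):
    for value in items:
        emptyspots.acquire()               # wait for a space
        access.acquire()                   # wait for access
        buffer.append(value)               # DEPOSIT(VALUE)
        access.release()                   # relinquish access
        fullspots.release()                # increase filled spaces

consumed = []
def consumer(n):
    for _ in range(n):
        fullspots.acquire()                # make sure it is not empty
        access.acquire()
        consumed.append(buffer.pop())      # FETCH(VALUE)
        access.release()
        emptyspots.release()               # increase empty spaces

producers = [threading.Thread(target=producer, args=(range(i * 5, i * 5 + 5),))
             for i in range(2)]
consumers = [threading.Thread(target=consumer, args=(5,)) for _ in range(2)]
for t in producers + consumers: t.start()
for t in producers + consumers: t.join()
print(sorted(consumed))                    # every produced item consumed exactly once
```

Note the ordering: each task waits on the counting semaphore before access. Reversing the two waits in the consumer (taking access, then waiting on fullspots) would deadlock, since a producer could never get access to fill the buffer.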

Analysis of Semaphores
Note: semaphore operations must be uninterruptible.
The semaphore solution to synchronization cannot be statically checked for correctness:
Forgetting wait(emptyspots) yields buffer overflow
Forgetting wait(fullspots) yields buffer underflow
Forgetting releases yields deadlock
Similar problems arise for the access semaphore
"The semaphore is an elegant synchronization tool for an ideal programmer who never makes mistakes." (Brinch Hansen, 1973)