Concurrency
6.037: Structure and Interpretation of Computer Programs
Mike Phillips <mpp>
Massachusetts Institute of Technology


Processor speed

[Figure: transistor counts and Moore's law, 1971-2011. http://en.wikipedia.org/wiki/file:transistor_count_and_moore%27s_law_-_2011.svg]

[Figure: MIPS and CPU clock speed vs. year, 1980-2010, log scale. http://csgillespie.wordpress.com/2011/01/25/cpu-and-gpu-trends-over-time/, with data originally from Wikipedia]

Transistor counts keep climbing, but single-processor clock speeds leveled off in the mid-2000s; the extra transistors now go into additional cores.

Multiple processors

Nowadays every laptop has multiple cores in it, and the fastest supercomputers all have thousands of processors. This is not a new problem: the Connection Machine (programmed in Lisp!) had 65,000 processors in the 1980s.

Concurrency

All of our code so far makes use of only one processor. Concurrency is the ability to do more than one computation in parallel. It is a lot easier on the computer than on the programmer!

In purely functional programming, the time of evaluation is irrelevant:

    (define (add-17 x) (+ x 17))
    (add-17 10) ; => 27
    ;; ...later:
    (add-17 10) ; => 27

Just run sequences on different processors and we're done!

Objects with state

...except that this does not work for objects with state:

    (define alexmv (new autonomous-person 'alexmv great-court 3 3))
    ((alexmv 'LOCATION) 'NAME) ; => great-court
    ;; ...later:
    ((alexmv 'LOCATION) 'NAME) ; => great-dome

State and time

The behavior of objects with state depends on the sequence of events in real time. This is fine in a concurrent program where that state is not shared, explicitly or implicitly, between threads.

For example, the autonomous objects in our adventure game conceptually act concurrently. We would like to take advantage of this by letting them run at the same time. But how does the order of execution affect the interactions between them?

Canonical bank example

    (define (make-account balance)
      (define (withdraw amount)
        (set! balance (- balance amount)))
      (define (deposit amount)
        (set! balance (+ balance amount)))
      (define (dispatch msg)
        (cond ((eq? msg 'withdraw) withdraw)
              ((eq? msg 'deposit) deposit)
              ((eq? msg 'balance) balance)
              (else (error "unknown request" msg))))
      dispatch)
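To make the state-dependence concrete, here is a quick interaction with the account object above (a hypothetical REPL session; the results follow directly from the definition):

    (define acct (make-account 100))
    ((acct 'withdraw) 10) ; balance is now 90
    ((acct 'deposit) 5)   ; balance is now 95
    (acct 'balance)       ; => 95

Unlike add-17, the value you observe depends on which operations ran before, and therefore on when you ask.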

Why is time an issue?

    (define alex (make-account 100))
    (define ben alex)
    ((alex 'withdraw) 10)
    ((ben 'withdraw) 25)

If the two withdrawals run one after the other, in either order, the final balance is 65:

    Alex   Bank   Ben
           100
    -10    90
           65     -25

    Alex   Bank   Ben
           100
           75     -25
    -10    65

Race conditions

But each withdrawal is really three steps: access the balance, compute the new value, set the balance. If the two withdrawals interleave, one update can be lost:

    Alex                     Bank   Ben
                             100
    Access balance (100)     100
                             100    Access balance (100)
    New value 100-10 = 90    100
                             100    New value 100-25 = 75
    Set balance 90           90
                             75     Set balance 75

Correct behavior of concurrent programs

One possible requirement: no two operations that change any shared state may occur at the same time. This guarantees correctness, but it is too conservative.

A weaker requirement: a concurrent system must produce the same result as if the processes had run sequentially in some order. This does not require the processes to actually run sequentially, only to produce results as if they had. As a consequence, there may be more than one correct result!

    (define x 10)
    (define f1 (lambda () (set! x (* x x))))
    (define f2 (lambda () (set! x (+ x 1))))
    (parallel-execute f1 f2)

f1 performs steps a, b, c and f2 performs steps d, e:

    a  Look up first x in f1
    b  Look up second x in f1
    c  Assign product of a and b to x
    d  Look up x in f2
    e  Assign sum of d and 1 to x

Here is every interleaving that preserves each process's internal order (a before b before c; d before e), with the value read or written at each step. The final value of x is the value of the last step:

    a b c d e    10 10 100 100 101
    a b d c e    10 10  10 100  11
    a d b c e    10 10  10 100  11
    d a b c e    10 10  10 100  11
    a b d e c    10 10  10  11 100
    a d b e c    10 10  10  11 100
    d a b e c    10 10  10  11 100
    a d e b c    10 10  11  11 110
    d a e b c    10 10  11  11 110
    d e a b c    10 11  11  11 121
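parallel-execute is not a Racket built-in; the lecture sketches an implementation near the end. In the meantime, here is a minimal Racket sketch of the lost-update race above, using plain threads and an explicit sleep to widen the race window (the sleep and the slow-withdraw helper are our own additions, not part of the lecture):

    #lang racket

    (define balance 100)

    ;; A deliberately racy withdraw: read, pause, then write, so the
    ;; other thread can sneak in between the read and the write.
    (define (slow-withdraw amount)
      (let ((old balance))                 ; access balance
        (sleep 0.1)                        ; widen the race window
        (set! balance (- old amount))))    ; set balance from the stale read

    (define t1 (thread (lambda () (slow-withdraw 10))))
    (define t2 (thread (lambda () (slow-withdraw 25))))
    (thread-wait t1)
    (thread-wait t2)
    balance ; => 75 or 90, never the sequential answer 65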

Serializing access to shared state

Processes will execute concurrently, but there will be certain sets of procedures such that only one execution of a procedure in each set is permitted to happen at a time. If some procedure in the set is being executed, then any other process that attempts to execute any procedure in the set will be forced to wait until the first execution has finished. We use serialization to control access to shared variables.

Serializers mark critical regions

We can mark regions of code that cannot overlap in execution time. This adds an additional constraint to the partial ordering imposed by the separate processes.

Assume make-serializer returns a procedure that takes a procedure as input and returns a serialized procedure that behaves like the original, except that if another procedure in the same serialized set is underway, this procedure must wait.

Where do we put the serializers?

    (define x 10)
    (define kelloggs (make-serializer))
    (define f1 (kelloggs (lambda () (set! x (* x x)))))
    (define f2 (kelloggs (lambda () (set! x (+ x 1)))))
    (parallel-execute f1 f2)

With f1 and f2 in the same serialized set, only the two fully sequential interleavings of steps a-e remain:

    a b c d e    10 10 100 100 101
    d e a b c    10 11  11  11 121

We can also put the serializer inside the account object:

    (define (make-account balance)
      (define (withdraw amount)
        (set! balance (- balance amount)))
      (define (deposit amount)
        (set! balance (+ balance amount)))
      (let ((kelloggs (make-serializer)))
        (define (dispatch msg)
          (cond ((eq? msg 'withdraw) (kelloggs withdraw))
                ((eq? msg 'deposit) (kelloggs deposit))
                ((eq? msg 'balance) balance)
                (else (error "unknown request" msg))))
        dispatch))
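A quick sanity check of the serialized account (a hypothetical session; make-serializer itself is implemented later in the lecture):

    (define acct (make-account 100))
    ;; Each message returns a procedure wrapped in the account's
    ;; serializer, so these cannot overlap even across threads:
    ((acct 'withdraw) 10)
    ((acct 'deposit) 25)
    (acct 'balance) ; => 115

Because withdraw and deposit share one serialized set per account, two concurrent withdrawals on the same account can no longer lose an update.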

Multiple shared resources

Serializing each account individually is not enough when one operation touches several accounts:

    (define (exchange account1 account2)
      (let ((difference (- (account1 'balance)
                           (account2 'balance))))
        ((account1 'withdraw) difference)
        ((account2 'deposit) difference)))

    (parallel-execute (lambda () (exchange alex ben))
                      (lambda () (exchange alex mpp)))
    ; alex = 300, ben = 200, mpp = 100

One correct sequential order:

    1. Difference (- alex ben) = 100
    2. Withdraw 100 from alex (has 200)
    3. Deposit 100 into ben (has 300)
    4. Difference (- alex mpp) = 100
    5. Withdraw 100 from alex (has 100)
    6. Deposit 100 into mpp (has 200)

But if the second exchange reads balances between steps 1 and 3 of the first, its difference is computed from stale values, and the final state may be one that no sequential order could produce.

Serializing object access

So we expose each account's serializer, letting callers serialize whole multi-account operations:

    (define (make-account balance)
      (define (withdraw amount)
        (set! balance (- balance amount)))
      (define (deposit amount)
        (set! balance (+ balance amount)))
      (let ((kelloggs (make-serializer)))
        (define (dispatch msg)
          (cond ((eq? msg 'withdraw) withdraw)
                ((eq? msg 'deposit) deposit)
                ((eq? msg 'balance) balance)
                ((eq? msg 'serializer) kelloggs)
                (else (error "unknown request" msg))))
        dispatch))

Serialize access to all variables

    (define (deposit account amount)
      (let ((s (account 'serializer))
            (d (account 'deposit)))
        ((s d) amount)))

    (define (serialized-exchange acct1 acct2)
      (let ((serializer1 (acct1 'serializer))
            (serializer2 (acct2 'serializer)))
        ((serializer1 (serializer2 exchange)) acct1 acct2)))

Suppose Alex attempts to exchange a1 with a2, and Ben attempts to exchange a2 with a1. Imagine that Alex gets the serializer for a1 at the same time that Ben gets the serializer for a2. Now Alex is stalled waiting for the serializer for a2, but Ben is holding it. And Ben is similarly waiting for the serializer for a1, but Alex is holding it. DEADLOCK!
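The lecture stops at the deadlock; one standard fix (the territory of SICP exercise 3.48) is to number the accounts and always acquire serializers in numeric order, so no cycle of waiting processes can form. A minimal sketch, assuming each account answers a hypothetical 'id message with a unique number (our own addition, not in the lecture's account object):

    ;; Acquire serializers in a fixed global order: the outer serializer
    ;; (acquired first) is always the one belonging to the lower id.
    (define (ordered-serialized-exchange acct1 acct2)
      (let ((s1 (acct1 'serializer))
            (s2 (acct2 'serializer)))
        (if (< (acct1 'id) (acct2 'id))  ; 'id is assumed, see above
            ((s1 (s2 exchange)) acct1 acct2)
            ((s2 (s1 exchange)) acct1 acct2))))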

Deadlocks

The classic deadlock case is the Dining Philosophers problem.

Implementation time

Racket has concurrency, using threads. We can implement serializers using a primitive synchronization mechanism called a semaphore:

  - A semaphore counts the number of available resources.
  - Semaphores can be atomically incremented and decremented.
  - An attempt to decrement when there are no available resources causes the thread to block.
  - If the semaphore has at most one resource, it is a mutex ("mutual exclusion").

Serializer

    (define (make-serializer)
      (let ((mutex (make-mutex)))
        (lambda (p)
          (lambda args
            (mutex 'acquire)
            (let ((val (apply p args)))
              (mutex 'release)
              val)))))

Mutex

    (define (make-mutex)
      (let ((sema (make-semaphore 1)))
        (lambda (m)
          (cond ((eq? m 'acquire) (semaphore-wait sema))
                ((eq? m 'release) (semaphore-post sema))
                (else (error "Invalid argument:" m))))))
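A small test of this implementation (our own sketch, not from the lecture): two threads each increment a shared counter 10,000 times. Each increment is a read-modify-write, so unserialized increments could lose updates; wrapped in one serializer, the final count is always 20000.

    (define count 0)
    (define protect (make-serializer))
    (define inc! (protect (lambda () (set! count (+ count 1)))))

    (define (spawn-incrementer)
      (thread (lambda ()
                (for ([i (in-range 10000)])
                  (inc!)))))

    (define workers (list (spawn-incrementer) (spawn-incrementer)))
    (for-each thread-wait workers)
    count ; => 20000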

Parallel evaluation

    (define (parallel-execute p1 p2)
      (let* ((parent (current-thread))
             (t1 (thread (send-parent p1 parent)))
             (t2 (thread (send-parent p2 parent))))
        (list (thread-receive) (thread-receive))))

    (define (send-parent f parent)
      (lambda () (thread-send parent (f))))

(Note that the two results arrive in completion order, not necessarily with p1's result first.)

Thread communication

Shared object state is implicit thread communication. More formally, thread-send and thread-receive form a simple asynchronous message queue. Inter-process communication can also happen through events: sporadic requests to process something. Events are a good model for user interaction.

Concurrency in large systems

It's hard:

  - Coarse-grained locking is inefficient, but ensures correct behavior.
  - Fine-grained locking can be efficient, but is bug-prone and brittle to new kinds of operations.
  - Locks break abstraction barriers.
  - Concurrency opens up whole new classes of bugs (race conditions, deadlock, livelock, etc.), and these issues are recursive.
  - It is also very irritating to debug: non-deterministic behavior, debugging output intermixed between threads, heisenbugs, etc.

But we need to do it.
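As a closing footnote, the parallel-execute sketch above can be exercised on the earlier example (a hypothetical session; in principle any row of the interleaving table is a possible final x):

    (define x 10)
    (parallel-execute (lambda () (set! x (* x x)))
                      (lambda () (set! x (+ x 1))))
    ;; Such short thunks rarely get preempted, so in practice x is
    ;; almost always 101; the other table values remain possible.
    x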