Lecture 26: Concurrency (3/31/14). Today: Reading (P&C Section 7); Objectives (Race conditions); Announcements.

+ Lecture 26: Concurrency
Slides adapted from Dan Grossman

+ Today
- Reading: P&C Section 7
- Objectives: Race conditions
- Announcements: Quiz on Friday

+ This week's programming assignment
- Answer population queries using data from the 2000 U.S. census
- User inputs numRows, numCols (e.g., 8 columns, 6 rows)
- A query consists of the corners of a rectangle; it returns the population inside the query rectangle
- Timing and write-up required
- Wednesday we will discuss class design. Highly recommended you work with a partner!

+ Concurrency
- Correctly and efficiently controlling access by multiple threads to shared resources
- Programming model:
  - Multiple uncoordinated threads
  - Sharing memory locations
  - Interleaved operations
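The rectangle query above can be sketched with a naive scan. This is only an illustration, not the assignment's required design; CensusPoint and the method names here are hypothetical.

```java
import java.util.List;

// A minimal sketch of the population query, assuming a hypothetical
// CensusPoint(row, col, population) record. The assignment's actual
// class design is discussed separately in lecture.
public class CensusQuery {
    record CensusPoint(int row, int col, int population) {}

    // Returns the total population inside the rectangle with the given
    // corners (inclusive), via a simple O(n) scan over all points.
    static int population(List<CensusPoint> points,
                          int rowLo, int colLo, int rowHi, int colHi) {
        int total = 0;
        for (CensusPoint p : points) {
            if (p.row() >= rowLo && p.row() <= rowHi
                    && p.col() >= colLo && p.col() <= colHi) {
                total += p.population();
            }
        }
        return total;
    }
}
```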

+ Re-entrant Locks in Java
- Re-entrant locks are implemented via synchronized blocks:

    synchronized (expression) {
        statements
    }

- synchronized acquires the lock (blocks until it is available)
- expression must evaluate to a non-null object
- statements execute once the lock is acquired
- The lock is released when execution leaves the block for any reason

+ Recap: Bank Account (best code)

    public class BankAccount {
        private int balance = 0;

        synchronized int getBalance() {
            return balance;
        }
        synchronized void setBalance(int x) {
            balance = x;
        }
        synchronized void withdraw(int amount) {
            int b = getBalance();
            if (amount > b)
                throw new WithdrawTooLargeException();
            setBalance(b - amount);
        }
        // ... other operations (e.g., deposit) also synchronized
    }

+ Race Conditions
- A race condition occurs when the computation result depends on the order in which the threads execute (how the threads are interleaved)
- Such bugs, by definition, do not exist in sequential programs
- Typically, the problem is that one thread sees an invariant-violating intermediate state produced by another thread
- Two types of race conditions:
  - Bad interleavings
  - Data races: simultaneous read/write or write/write access to the same memory location

+ Bad Interleaving Example: peek

    class Stack<E> {
        private E[] array; // array to hold elements
        private int index; // points to next open slot

        Stack(int size) {
            array = (E[]) new Object[size];
        }
        synchronized boolean isEmpty() {
            return index == 0;
        }
        synchronized void push(E val) {
            if (index == array.length) throw new ...;
            array[index++] = val;
        }
        synchronized E pop() {
            if (index == 0) throw new ...;
            return array[--index];
        }
    }

+ Bad Interleaving Example: peek
- Implementing peek from a different class
- Forgot to add synchronization!

    public class C {
        static <E> E myPeekHelperWrong(Stack<E> s) {
            E ans = s.pop();
            s.push(ans);
            return ans;
        }
    }

+ Bad Interleaving Example: peek
- peek has no overall effect on the shared data: it is a reader, not a writer
- The overall result is the same stack if there is no interleaving
- But the way it is implemented creates an inconsistent intermediate state
  - Even though the calls to push and pop are synchronized, so there are no data races on the underlying array
- This intermediate state should not be exposed; it leads to several bad interleavings

+ One bad interleaving: peek and push
- Property we want: values are returned from pop in LIFO order
- With peek as written, how can the property be violated?

    Time  Thread 1 (peek)       Thread 2
      |                         push(x)
      |   E ans = s.pop();
      |                         push(y)
      |   s.push(ans);
      |                         E e = s.pop(); // returns x, not y!
      v   return ans;

- Thread 2 pushed x and then y, so under LIFO its pop should return y, but it returns x

+ The solution
- peek needs synchronization to disallow such interleavings
- The key is to make a larger critical section
- Re-entrant locks allow the nested calls to push and pop
- Note: just because all changes to state are done within synchronized pushes and pops doesn't prevent exposing intermediate state

    public class C {
        static <E> E myPeekHelper(Stack<E> s) {
            synchronized (s) {
                E ans = s.pop();
                s.push(ans);
                return ans;
            }
        }
    }
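The bad interleaving in the table can be replayed deterministically by issuing the same calls in the shown order from a single thread; only the scheduling is simulated by call order. The ArrayDeque here is a stand-in for the slide's Stack class.

```java
import java.util.ArrayDeque;

// Replays the peek/push bad interleaving from the table above on a
// single thread. Thread 2 pushes "x" then "y", so LIFO says its pop
// should return "y" -- but the unsynchronized peek's pop/push pair
// slips in between, and Thread 2 gets "x" instead.
public class BadInterleaving {
    static String replay() {
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("x");          // Thread 2: push(x)
        String ans = stack.pop(); // Thread 1: E ans = s.pop();
        stack.push("y");          // Thread 2: push(y)
        stack.push(ans);          // Thread 1: s.push(ans);
        String e = stack.pop();   // Thread 2: E e = s.pop();
        return e;                 // "x": LIFO violated
    }
}
```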

+ Race conditions
- Examples of data races are in the text
- Lesson: do not introduce a data race, even if every interleaving you can think of is correct
- Avoiding race conditions on shared resources is difficult
- Decades of bugs have led to some conventional wisdom: general techniques that are known to work

+ Providing Safe Access
For every memory location (e.g., object field) in your program, you must obey at least one of the following:
1. Thread-local: only one thread ever accesses it
2. Immutable: (after initialization) it is only read, never written
3. Synchronized: locks are used to ensure there are no race conditions
(Picture: within all memory, the thread-local and immutable portions need no locks; only the remainder needs synchronization)

+ Thread-local
Whenever possible, do not share resources
- It is easier to give each thread its own thread-local copy of a resource than to have one copy with shared updates
- This is correct only if the threads do not need to communicate through the resource
  - That is, multiple copies are a correct approach
  - Example: Random objects
- Note: because each call stack is thread-local, you never need to synchronize on local variables
- In typical concurrent programs, the vast majority of objects should be thread-local: shared memory should be rare; minimize it

+ Immutable
Whenever possible, do not update objects
- Make new objects instead
- One of the key tenets of functional programming (studied in 52)
- If a location is only read, never written, then no synchronization is necessary
  - Simultaneous reads are not races and not a problem
- In practice, programmers usually over-use mutation; minimize it
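The Random example above can be expressed directly with Java's ThreadLocal. This is a sketch; the class and field names are ours, not from the slides.

```java
import java.util.Random;

// Each thread that calls local() receives its own Random instance, so
// no synchronization is needed: the resource is never shared.
public class PerThreadRandom {
    private static final ThreadLocal<Random> RNG =
            ThreadLocal.withInitial(Random::new);

    static Random local() {
        return RNG.get();
    }

    // True iff a freshly started thread observes a different instance
    // than the calling thread (demonstrating per-thread copies), while
    // repeated calls on the same thread return the same instance.
    static boolean distinctAcrossThreads() {
        Random mine = local();
        Random[] other = new Random[1];
        Thread t = new Thread(() -> other[0] = local());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            return false;
        }
        return other[0] != mine && local() == mine;
    }
}
```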

+ Guidelines for unavoidable concurrency
- After minimizing the amount of memory that is both (1) thread-shared and (2) mutable, we need guidelines for how to use locks to keep the remaining data consistent
- Guideline #0: use locks to ensure there are no simultaneous read/write or write/write operations on the same field
  - Necessary but not sufficient (see the peek example)

+ Consistent Locking
- Guideline #1: for each location needing synchronization, have a lock that is always held when reading or writing the location
- We say the lock guards the location
- The same lock can (and often should) guard multiple locations
- Clearly document in comments which lock guards each location
- In Java, the guard is often the object containing the location
  - this inside the object's methods
- But it is also common to guard a larger structure with one lock, to ensure mutual exclusion on the whole structure
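Guideline #1 in code: one documented lock guards both fields, and every read and write holds it. The class and its fields are illustrative, not from the slides.

```java
// Consistent locking: the single lock below guards BOTH fields, and it
// is held for every read and write, so no thread ever sees a
// half-updated withdrawal.
public class GuardedAccount {
    private final Object lock = new Object(); // guards balance, withdrawals
    private int balance;      // guarded by lock
    private int withdrawals;  // guarded by lock

    void deposit(int amount) {
        synchronized (lock) { balance += amount; }
    }

    boolean withdraw(int amount) {
        synchronized (lock) {  // one critical section updates both fields
            if (amount > balance) return false;
            balance -= amount;
            withdrawals += amount;
            return true;
        }
    }

    int balance() {
        synchronized (lock) { return balance; } // reads hold the lock too
    }

    // Runs n threads, each depositing 1 perThread times, and returns
    // the final balance (n * perThread if the locking is correct).
    static int stress(int n, int perThread) {
        GuardedAccount acct = new GuardedAccount();
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) acct.deposit(1);
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { /* ignore */ }
        }
        return acct.balance();
    }
}
```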

+ Lock Granularity
- Guideline #2: start with coarse-grained locking and move to fine-grained locking only if contention becomes an issue
- Coarse-grained locking:
  - Fewer locks, i.e., more objects per lock (e.g., one lock for all bank accounts)
  - Simpler to implement
  - Faster/easier to implement operations that access multiple locations
  - Can lead to contention among threads (threads blocked waiting for a lock)
- Fine-grained locking:
  - More locks, i.e., fewer objects per lock
  - May need to acquire multiple locks
  - Alas, will probably lead to bugs!

+ Critical-section granularity
- Guideline #3: do not do expensive computations or I/O inside critical sections, but also don't introduce race conditions
- Critical-section size is orthogonal to lock granularity: it is how much work you do while holding the lock(s)
- If critical sections run too long: performance loss, because other threads are blocked
- If critical sections are too short: splitting one logical operation across sections can lead to bad interleavings
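Guideline #2's coarse-grained starting point, sketched with the slides' bank-accounts example (the class is hypothetical, not a prescribed design): one bank-wide lock guards every balance, so transfer touches two accounts without any lock-ordering protocol.

```java
// Coarse-grained locking: one lock (the Bank object itself) guards
// every account balance, so multi-account operations like transfer
// need only the single bank-wide lock.
public class Bank {
    private final int[] balances;

    Bank(int numAccounts) { balances = new int[numAccounts]; }

    synchronized void deposit(int acct, int amount) {
        balances[acct] += amount;
    }

    synchronized boolean transfer(int from, int to, int amount) {
        if (balances[from] < amount) return false;
        balances[from] -= amount; // both updates happen under one lock,
        balances[to] += amount;   // so no partial transfer is visible
        return true;
    }

    synchronized int balance(int acct) { return balances[acct]; }
}
```

The trade-off from the slide applies directly: this is simple and safe, but every deposit in the whole bank contends for the same lock; per-account locks would reduce contention at the cost of a lock-ordering protocol for transfer.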

+ Atomicity
- Guideline #4: think in terms of what operations need to be atomic
  - Make critical sections just long enough to preserve atomicity
  - Then design the locking protocol to implement the critical sections correctly
- An operation is atomic if no other thread can see it partly executed
  - Atomic as in "appears indivisible"

+ Don't roll your own
- Guideline #5: use built-in libraries whenever they meet your needs
  - ConcurrentHashMap: written by world experts
  - Vector versus ArrayList!
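Guideline #5 in practice: ConcurrentHashMap already provides atomic per-key updates, so a shared counter map needs no hand-rolled locking. A minimal sketch; the WordCount class is ours:

```java
import java.util.concurrent.ConcurrentHashMap;

// Uses the library's atomic merge() instead of a home-grown
// check-then-update sequence (which would be a bad-interleaving bug
// under concurrency, even if each get/put were individually
// thread-safe).
public class WordCount {
    private final ConcurrentHashMap<String, Integer> counts =
            new ConcurrentHashMap<>();

    void add(String word) {
        counts.merge(word, 1, Integer::sum); // atomic read-modify-write
    }

    int count(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```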