Advances in Programming Languages

Advances in Programming Languages Lecture 8: Concurrency Ian Stark School of Informatics The University of Edinburgh Thursday 11 October 2018 Semester 1 Week 4 https://wp.inf.ed.ac.uk/apl18 https://course.inf.ed.uk/apl

Outline 1 Opening 2 Concurrency 3 Basics of concurrency in Java 4 Closing


Homework from Monday
1. Read This: The instructions for the APL written coursework assignment. https://is.gd/apl18coursework
2. Write This: The outline draft for your assignment, following in detail the instructions above.

What's in the course? The lectures cover four sample areas of "advances in programming languages":
- Types: Parameterized, Polymorphic, Dependent, Refined
- Programming for Concurrency
- Augmented Languages for Correctness and Certification
- Programming for Memory and Thread Safety with Rust

Programming-Language Techniques for Concurrency
This is the first of a block of lectures looking at programming-language techniques for concurrent programs and concurrent architectures:
- Introduction, basic Java concurrency
- Concurrency abstractions
- Concurrency in different languages
- Current challenges in concurrency

Outline 1 Opening 2 Concurrency 3 Basics of concurrency in Java 4 Closing

The free lunch is over. "For the past 30 years, computer performance has been driven by Moore's Law; from now on, it will be driven by Amdahl's Law." Doron Rajwan, Intel Power Management Architect

[Figure: Amdahl's Law - speedup against number of processors (1 to 65536) for parallel portions of 50%, 75%, 90% and 95%; even the 95% curve levels off below 20x. Daniels220, Wikimedia Commons]

The free lunch is over. "Concurrency is the next major revolution in how we write software. ... The vast majority of programmers today don't grok concurrency, just as the vast majority of programmers 15 years ago didn't yet grok objects." Herb Sutter, Microsoft, C++ ISO chair. [Figure repeats the Amdahl's Law speedup chart]
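The curves in the chart come from Amdahl's Law: a program whose parallelisable fraction is p, run on n processors, can speed up by at most 1/((1-p) + p/n). A minimal sketch of the calculation (class and method names are illustrative):

```java
public class Amdahl {
    // Speedup predicted by Amdahl's Law for parallel fraction p on n processors
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 95% of the work parallelisable, 65536 processors
        // give a speedup just short of the 20x asymptote
        System.out.println(speedup(0.95, 65536));
    }
}
```

Letting n grow without bound gives the asymptote 1/(1-p), which is why the 95% curve in the figure flattens out below 20x.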

The free lunch is so over https://herbsutter.com/welcome-to-the-jungle/

Why write concurrent programs? Concurrent programming is about writing code that can handle doing more than one thing at a time. There are various motivations, such as:
- Separation of duties (fetch data, render images)
- Efficient use of mixed resources (disk, memory, network)
- Responsiveness (GUI, sensors and devices, embedded systems)
- Speed (multiprocessing, hyperthreading, multi-core, GPGPU)
- Scale (distribution, cloud, elasticity)
- Serving multiple clients (database engine, web server)
Note that the aims here are different to parallel programming, which is generally about the efficient (and speedy) processing of large sets of data.

Other computational modalities have this too. [Figure: labelled diagrams of the human brain - cerebrum lobes (frontal, parietal, temporal, occipital), brainstem (midbrain, pons, medulla), left and right hemispheres, and the cerebellum's divisions]

How hard can it be? [Figure: "What's Wrong With Threads?" - a scale of programmer sophistication from casual to all programmers to wizards, placing Visual Basic programmers, C programmers and C++ programmers in the middle and Threads programmers at the wizard end.] John Ousterhout: Why Threads Are A Bad Idea (for most purposes), USENIX Technical Conference, invited talk, 1996

It's hard to walk and chew gum. Concurrent programming offers much, including entirely new problems:
- Interference: code that is fine on its own may fail if run concurrently.
- Liveness: making sure that a program does anything at all.
- Starvation: making sure that all parts of the program make progress.
- Fairness: making sure that everyone makes reasonable progress.
- Scalability: making sure that more workers means more progress.
- Causality: making sure that things happen in a particular order.
- Safety: making sure that the program always does the right thing.
- Specification: just working out what is the right thing can be tricky.
Concurrent programming is hard, and although there is considerable research, and even progress, on how to do it well, it is often wise to avoid doing it (explicitly) yourself unless absolutely necessary. "The greatest performance improvement of all is when a system goes from not-working to working" (Ousterhout)

Concurrent programming in practice


Processes, Threads, Tasks, Events All operating systems and many programming languages provide for concurrent programming, usually through a notion of process, thread, task, or event handler. The idea is that a process/thread/task/handler captures a single flow of control. A concurrent program will have many of these at a time. A scheduler directs which threads are running at any time, and how control passes among them. There are many design tradeoffs here, concerning memory separation, mutual protection, communication, scheduling, signalling, ... Usually processes are heavyweight (e.g., separate memory space), threads lightweight (shared memory) and tasks lightest. But usage is not precise or consistent. Any complete system will include multiple layers of concurrency, possibly of different kinds. The concurrency presented to a programmer may not match that available on any particular hardware; and there will be a cost in maintaining the illusion of a persistent flow of control. I think it's possible the model of multiple concurrent threads may itself become outdated.

Critical Section. A key issue with multiple explicit threads is to avoid interference through shared memory.

void moveby(int dx, int dy) {
    System.out.println("Moving by "+dx+","+dy);
    x = x+dx;
    y = y+dy;
    System.out.println("Completed move");
}

void moveto(int newx, int newy) {
    System.out.println("Moving to "+newx+","+newy);
    x = newx;
    y = newy;
    System.out.println("Completed move");
}

Critical Section. A key issue with multiple explicit threads is to avoid interference through shared memory.

void moveby(int dx, int dy) {      void moveto(int newx, int newy) {
    ...                                ...
    x = x+dx;                          x = newx;
    y = y+dy;                          y = newy;
    ...                                ...
}                                  }

Because both methods access the fields x and y, it is vital that these two critical sections of code are not executing at the same time.

Interference can result in erroneous states, perhaps violating object invariants and causing crashes, or possibly going undetected until later.

Debugging can be difficult because scheduling is often non-deterministic. "The only thing worse than a problem that happens all the time is a problem that doesn't happen all the time" (Ousterhout)
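The interference described above can be seen concretely: two threads doing unsynchronized read-modify-write updates to a shared field can lose updates. A small demonstration sketch (names are illustrative; the exact final count varies from run to run):

```java
public class RaceDemo {
    static int count = 0;  // shared between threads, with no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;  // read-modify-write: not atomic, so updates can be lost
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Typically prints less than 200000, and a different value each run
        System.out.println("count = " + count);
    }
}
```

This also illustrates why debugging is hard: on a lightly loaded machine the bad interleaving may not happen at all, only to reappear in production.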

Locks and more. There are many ways to ensure critical sections do not interfere, and refinements to make sure that these constraints do not disable desired concurrency:
- Locks
- Mutexes
- Semaphores
- Condition variables
- Monitors
- etc.
Constructs such as these can ensure mutual exclusion or enforce serialization orderings. They are implemented on top of other underlying locking mechanisms, either in hardware (test-and-set, compare-and-swap, ...) or software (spinlock, various busy-wait algorithms).
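To illustrate the hardware end of this list, a spinlock can be built directly on an atomic test-and-set operation. A sketch using compareAndSet from java.util.concurrent.atomic (illustrative only; real code should prefer the library's own locks):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy-wait until we atomically flip the flag from false to true
        while (!locked.compareAndSet(false, true)) {
            // spin
        }
    }

    void unlock() {
        locked.set(false);
    }

    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                counter++;        // critical section, now protected
                lock.unlock();
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter);  // always 200000
    }
}
```

Spinning wastes CPU while waiting, which is why the blocking constructs listed above exist; but a spinlock can win when critical sections are very short.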

Outline 1 Opening 2 Concurrency 3 Basics of concurrency in Java 4 Closing

Concurrency in Java. Java supports concurrent programming as an integral part of the language: threading is always available, and a range of libraries provide high-level abstractions over the basic primitives. The class Thread encapsulates a thread, which can run arbitrary code. Threads have unique identifiers, names, and integer priorities. Parent code can spawn multiple child threads, and then wait for individual children to terminate or (co-operatively) interrupt them.

class Watcher extends Thread {
    public void run() {
        // check what's going on...
    }
}

class WorkerJob implements Runnable {
    public void run() {
        // do some work...
    }
}

Watcher watcher = new Watcher();
watcher.start();
...
WorkerJob job = new WorkerJob();
Thread worker = new Thread(job);
worker.start();
...
worker.join();
watcher.interrupt();
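A minimal runnable version of the spawn-and-join pattern above (the class name and logging are illustrative):

```java
public class ThreadBasics {
    // Spawn a child thread, wait for it, and report whether it is still alive
    static String runWorker() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread worker = new Thread(() -> log.append("working"), "worker-1");
        worker.start();   // run() now executes concurrently with this code
        worker.join();    // block until the child thread terminates
        return log + ":" + worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorker());  // prints working:false
    }
}
```

The join call is what makes the child's work visible to the parent: it both waits for termination and establishes the necessary memory ordering.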

Synchronized methods. Java provides mutual exclusion for critical sections through the synchronized primitive.

synchronized void moveby(int dx, int dy) {
    System.out.println("Moving by "+dx+","+dy);
    x = x+dx;
    y = y+dy;
    System.out.println("Completed move");
}

synchronized void moveto(int newx, int newy) {
    System.out.println("Moving to "+newx+","+newy);
    x = newx;
    y = newy;
    System.out.println("Completed move");
}

Two move methods cannot now execute at the same time on the same object: each synchronized method must acquire a lock before starting and release it when finished.
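The effect can be checked directly: with synchronized methods, concurrent moves no longer lose updates. A sketch (class name and iteration counts are illustrative):

```java
public class SyncPoint {
    private int x = 0, y = 0;

    synchronized void moveby(int dx, int dy) {
        x = x + dx;
        y = y + dy;
    }

    synchronized String position() {
        return x + "," + y;
    }

    static String run() throws InterruptedException {
        SyncPoint p = new SyncPoint();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                p.moveby(1, 1);   // the object's lock is taken for each call
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return p.position();      // always "200000,200000"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Compare this with the unsynchronized race earlier: the only change is the synchronized keyword, but the result is now deterministic.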

Synchronized blocks. Every Java object has an implicit associated lock, used by its synchronized methods. This can also be used to arrange exclusive access to any block of code:

void moveby(int dx, int dy) {
    System.out.println("Moving by "+dx+","+dy);
    synchronized(this) {
        x = x+dx;    // Only this section of the
        y = y+dy;    // code is critical
    }
    System.out.println("Completed move");
}

The locking object need not be this. Sometimes a lock may be better on a contained (or "owned") object, or may be needed on a containing (or "owning") object.
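A sketch of locking on an owned object rather than this (the position method is my addition, for illustration): keeping the lock object private means no outside code can take the same lock and interfere with the locking discipline.

```java
public class Point {
    private int x, y;
    private final Object lock = new Object();  // private lock, never exposed to callers

    void moveby(int dx, int dy) {
        System.out.println("Moving by " + dx + "," + dy);
        synchronized (lock) {  // only the field updates are critical
            x = x + dx;
            y = y + dy;
        }
        System.out.println("Completed move");
    }

    String position() {
        synchronized (lock) {  // reads take the same lock as writes
            return x + "," + y;
        }
    }

    public static void main(String[] args) {
        Point p = new Point();
        p.moveby(3, 4);
        System.out.println(p.position());  // prints 3,4
    }
}
```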

Condition variables. Java refines critical regions with basic condition variables. Synchronized code that finds things are not suitable for it to proceed may wait() on the condition variable associated with its lock. This blocks the code and releases the lock. Another thread can acquire the lock and do some work. Because this may change the situation for the other thread, it should notify() or notifyAll() other threads of this. Threads waiting on the condition variable will be made runnable again, and can check to see if they are now ready to proceed. Having threads block saves on busy waiting, and ensures that they only wake when there is something to check.

A simple blocking method

class Pigeonhole {
    private Object contents = null;

    synchronized void put (Object o) {
        while (contents != null)      // Wait until the pigeonhole is empty
            try { wait(); }
            catch (InterruptedException ignore) { return; }
        contents = o;                 // Fill the pigeonhole
        notifyAll();                  // Tell anyone who might be interested
    }
    ...
}
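The slide shows only put; for completeness, a sketch of how the matching get might look, following the same wait/notifyAll pattern (the get method and the demo main are my additions, not from the slide):

```java
public class Pigeonhole {
    private Object contents = null;

    synchronized void put(Object o) {
        while (contents != null) {       // wait until the pigeonhole is empty
            try { wait(); }
            catch (InterruptedException ignore) { return; }
        }
        contents = o;                     // fill the pigeonhole
        notifyAll();                      // tell anyone who might be interested
    }

    synchronized Object get() {
        while (contents == null) {        // wait until something arrives
            try { wait(); }
            catch (InterruptedException ignore) { return null; }
        }
        Object o = contents;
        contents = null;                  // empty the pigeonhole again
        notifyAll();                      // wake any blocked putters
        return o;
    }

    public static void main(String[] args) throws InterruptedException {
        Pigeonhole hole = new Pigeonhole();
        Thread producer = new Thread(() -> hole.put("message"));
        producer.start();
        System.out.println(hole.get());   // blocks until the producer has put: message
        producer.join();
    }
}
```

Note both methods re-test their condition in a while loop after waking: notifyAll wakes every waiter, and another thread may have emptied or filled the pigeonhole first.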

Outline 1 Opening 2 Concurrency 3 Basics of concurrency in Java 4 Closing

Summary
- Concurrency is essential (efficiency, responsiveness, speed).
- But can be tricky (interference, deadlock, fairness, ...).
- Concurrency can be in the language, in libraries, or in both.
Java provides shared-memory concurrency in the language:
- Locks and critical regions with synchronized;
- Condition variables with wait, notify and notifyAll;
- Explicit spawning of multiple threads.
Java provides further concurrency abstractions in a library: java.util.concurrent.
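As a taste of the library side, a sketch using ExecutorService from java.util.concurrent, which replaces explicit Thread management with a thread pool and futures:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LibraryDemo {
    static int compute() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Submit a Callable; the pool runs it on one of its worker threads
        Future<Integer> sum = pool.submit(() -> 1 + 2 + 3);
        int result = sum.get();   // blocks until the task has completed
        pool.shutdown();          // accept no new tasks; let existing ones finish
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute());  // prints 6
    }
}
```

The library handles thread creation, reuse and scheduling; the programmer only describes tasks and their results.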

Homework for Monday!
1. Do This: Find out what a "data race" is. What happens to C or C++ code with a data race?
2. Read This: Read the first three sections of the Java Concurrency tutorial. http://java.sun.com/docs/books/tutorial/essential/concurrency (Processes and Threads; Thread Objects; Synchronization)
Have another Java concurrency tutorial to recommend? Great! Post on Piazza or mail me.