CS 261 Fall 2017, Mike Lam, Professor. Threads


Parallel computing
Goal: concurrent or parallel computing. Take advantage of multiple hardware units to solve multiple problems simultaneously.
Motivations:
- Maintain high utilization during slow I/O downtime
- Maintain UI responsiveness during computation
- Respond simultaneously to multiple realtime events
- Split up a large problem and solve sub-pieces concurrently to achieve faster time-to-solution (strong scaling)
- Solve larger problems by adding more hardware (weak scaling)

Parallel computing
Process: currently-executing program
- Code and state (PC, stack, data, heap)
- Private address space
Thread: unit of execution or logical flow
- Exists within the context of a single process
- Shares code/data/heap/files with other threads
- Keeps private PC, stack, and registers (stacks are technically shared, but harder to access)

Threads
- One main thread for each process
- Can create multiple peer threads
(Slide diagram: a single-core example.)

POSIX threads (Pthreads)
- POSIX standard interface for threads in C
- Not part of the standard library; requires the -lpthread flag during linking
- pthread_create: spawn a new child thread. Arguments: a pthread_t struct for storing thread info, attributes (or NULL), the thread work routine (a function pointer), and the thread routine's parameter (void*, can be NULL)
- pthread_self: get the current thread ID
- pthread_exit: terminate the current thread (a thread can also terminate implicitly by returning from its thread routine)
- pthread_join: wait for another thread to terminate

Thread creation example:

    #include <stdio.h>
    #include <pthread.h>

    void* work (void* arg) {
        printf("hello from work routine!\n");
        return NULL;
    }

    int main () {
        printf("spawning single child...\n");
        pthread_t child;
        pthread_create(&child, NULL, work, NULL);
        pthread_join(child, NULL);
        printf("done!\n");
        return 0;
    }

(Slide diagram: the main thread create()s the work() peer thread, then join()s it.)

Shared memory
Variables in threaded programs:
- Global variables (shared, single static copy)
- Local variables (multiple copies, one on each stack). Technically still shared if in memory (but harder to access), and not shared if cached in a register. Safer to assume they're private; this is conventional.
- Local static variables (shared, single static copy)
- Heap-allocated variables (shared, dynamic)

Processes vs. threads
Process: currently-executing program
- Code and state (PC, stack, data, heap)
- Created via system call (fork); parent and child continue equally
- Private address space not shared with other processes
- Advantages: isolation, safety, and mutual exclusion
Thread: unit of execution or logical flow
- Exists within a process context
- Created via library call (pthread_create); the child runs a separate routine
- Shared address space with other threads
- Private PC, registers, condition codes, and stack
- Advantages: faster context switching, more shared resources

Parallel patterns
Common pattern: master/worker threads
- One original (main) thread creates multiple child threads
- Each worker thread does a chunk of the work
- Coordinate via shared global data structures, but keep as much data as possible local to each thread
- Main thread waits for workers, then aggregates results
(Slide diagram: the master create()s the workers, then join()s them.)

Issues with shared memory
- Nondeterminism
- Data races and deadlock
(Slide sequence: thread1 and thread2 both call foo(), which updates a shared global x, declared as x: .quad 0. One interleaving is OK; another is a PROBLEM. This will be a major topic in CS 361.)

Mutual exclusion
- Fixing a data race requires some form of mutual exclusion: only one thread at a time should update a particular memory region
- In Pthreads, this can be accomplished using either a mutex or a semaphore (more details in CS 361)
- However, these mechanisms introduce overhead! Threads must perform additional checks before updating memory, and some threads may have to pause and wait before they may continue
- If not implemented carefully, the additional overhead may defeat the purpose of using multiple threads
- Efficient parallel and distributed computing can be hard!

Automatic parallelism
Wouldn't it be great if the compiler could automatically parallelize our programs? This is a HARD problem, but in some cases it is (kind of) possible.
- Approach #1: code annotations in an existing language. Example: OpenMP (CS 450, CS 470)
- Approach #2: a new language designed for parallelism. Examples: HPF and Chapel (CS 430, CS 470)

OpenMP example:

    int a[100];
    int i;
    #pragma omp parallel for
    for (i = 0; i < 100; i++)
        a[i] = i * i;

Chapel example:

    var a: [0..99] int;
    forall i in 0..99 do
        a[i] = i * i;

Parallel systems
- Uniprogramming / batch (1950s; CS 261): one process at a time with complete control of the CPU; minimal OS (mostly for launching programs)
- Multiprogramming / multitasking / time sharing (1960s; CS 261, CS 450): multiple processes take turns on a single CPU; increased utilization, lower response time; the OS handles scheduling and context switching
- (Symmetric) multiprocessing (1970s; CS 361, CS 450, CS 470): multiple processes share multiple CPUs or cores; increased throughput, increased parallelism; the OS handles scheduling, context switching, and communication
- Distributed processing (1980s and onward; CS 361, CS 470): multiple processes share multiple computers; massive scaling; the OS alone is no longer sufficient (other middleware is required)