Parallel Programming with OpenMP. CS240A, T. Yang

A Programmer's View of OpenMP

What is OpenMP?
- Open specification for Multi-Processing
- Standard API for defining multi-threaded shared-memory programs
- openmp.org - talks, examples, forums, etc.
- OpenMP is a portable, threaded, shared-memory programming specification with "light" syntax
- Exact behavior depends on the OpenMP implementation!
- Requires compiler support (C or Fortran)

OpenMP will:
- Allow a programmer to separate a program into serial regions and parallel regions, rather than T concurrently-executing threads
- Hide stack management
- Provide synchronization constructs

OpenMP will not:
- Parallelize automatically
- Guarantee speedup
- Provide freedom from data races

Motivation - OpenMP

  int main() {
    // Do this part in parallel
    printf( "Hello, World!\n" );
    return 0;
  }

Motivation - OpenMP

All OpenMP directives begin with #pragma.

  #include <stdio.h>
  #include <omp.h>

  int main() {
    omp_set_num_threads(4);
    // Do this part in parallel
    #pragma omp parallel
    {
      printf( "Hello, World!\n" );
    }
    return 0;
  }

Each of the four threads executes the printf, so "Hello, World!" is printed four times.

OpenMP parallel region construct

- Block of code to be executed by multiple threads in parallel
- Each thread executes the same code redundantly (SPMD)
- Work within work-sharing constructs is distributed among the threads in a team

C/C++ syntax:
  #pragma omp parallel [ clause [ clause ] ... ] new-line
  structured-block

clause can include the following:
- private (list)
- shared (list)

The OpenMP default is shared variables. To make a variable private, declare it with the clause:
  #pragma omp parallel private (x)

OpenMP Programming Model - Review

Fork-Join Model:
[figure: sequential code forks into a team of threads, which joins back into sequential code]

- OpenMP programs begin as a single process (the master thread) and execute sequentially until the first parallel region construct is encountered
- FORK: the master thread then creates a team of parallel threads
- Statements in the program that are enclosed by the parallel region construct are executed in parallel among the various threads
- JOIN: when the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread
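
A minimal fork-join sketch (the 4-thread request and the messages are illustrative; output order is nondeterministic):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      printf("Before: only the master thread runs\n");      /* sequential part */

      #pragma omp parallel num_threads(4)                    /* FORK: create a team */
      {
          int id = omp_get_thread_num();
          printf("Hello from thread %d of %d\n", id, omp_get_num_threads());
      }                                                      /* JOIN: implicit barrier */

      printf("After: only the master thread runs again\n");  /* sequential part */
      return 0;
  }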

parallel Pragma and Scope - More Examples

  #pragma omp parallel num_threads(2)
  {
    x = 1;
    y = 1 + x;
  }

Each of the two threads executes x=1; y=1+x;. x and y are shared variables, so there is a risk of a data race.

parallel Pragma and Scope - Review

  #pragma omp parallel
  {
    x = 1;
    y = 1 + x;
  }

Assume the number of threads is 2: thread 0 and thread 1 each execute x=1; y=1+x;. x and y are shared variables, so there is a risk of a data race.

parallel Pragma and Scope - Review

  #pragma omp parallel num_threads(2)
  {
    x = 1;
    y = x + 1;
  }

Each thread executes x=1; y=x+1;. x and y are shared variables, so there is a risk of a data race.
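
One hedged way to remove the race above is to give each thread its own copy of x and y with the private clause (a sketch; the printf is added only for illustration):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      int x = 0, y = 0;
      /* private(x, y): each thread works on its own (uninitialized) copies,
         so the concurrent writes no longer race. */
      #pragma omp parallel num_threads(2) private(x, y)
      {
          x = 1;
          y = x + 1;
          printf("thread %d: y = %d\n", omp_get_thread_num(), y);
      }
      /* The original shared x and y are untouched here (still 0). */
      return 0;
  }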

Dividing a for-loop among parallel threads

  for (int i=0; i<8; i++)
    x[i] = 0;

  // run on 4 threads
  #pragma omp parallel
  {
    // Assume number of threads = 4
    int numt = omp_get_num_threads();
    int id = omp_get_thread_num();   // id = 0, 1, 2, or 3
    for (int i=id; i<8; i += numt)
      x[i] = 0;
  }

Thread 0 (id=0) sets x[0] and x[4]; thread 1 (id=1) sets x[1] and x[5]; thread 2 (id=2) sets x[2] and x[6]; thread 3 (id=3) sets x[3] and x[7].

Using #pragma omp parallel for

  for (int i=0; i<8; i++)
    x[i] = 0;

  #pragma omp parallel for
  for (int i=0; i<8; i++)
    x[i] = 0;

The system divides the loop iterations among the threads; with four threads, each thread clears two of the eight elements (the exact assignment depends on the schedule).

OpenMP Data Parallel Construct: Parallel Loop

- The compiler calculates loop bounds for each thread directly from the serial source (computation decomposition)
- The compiler also manages data partitioning
- Synchronization is also automatic (barrier)

Programming Model - Parallel Loops

Requirement for parallel loops:
- No data dependencies (read/write or write/write pairs) between iterations!
- The preprocessor calculates loop bounds and divides the iterations among the parallel threads

  #pragma omp parallel for
  for( i=0; i < 25; i++ ) {
    printf( "Foo" );
  }
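
To make the "no data dependencies" requirement concrete, here is a sketch (array names a and b are illustrative) contrasting a parallelizable loop with one that carries a dependence:

  #include <stdio.h>
  #include <omp.h>
  #define N 8

  int main(void) {
      int i, a[N], b[N];
      for (i = 0; i < N; i++) { a[i] = 0; b[i] = i; }

      /* Safe: iterations neither read nor write each other's data. */
      #pragma omp parallel for
      for (i = 0; i < N; i++)
          a[i] = 2 * b[i];

      /* NOT safe to mark "parallel for": iteration i reads a[i-1], which
         iteration i-1 writes (a loop-carried dependence), so a parallel
         version could read stale values. */
      for (i = 1; i < N; i++)
          a[i] = a[i-1] + b[i];

      printf("a[%d] = %d\n", N-1, a[N-1]);
      return 0;
  }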

Example

  for (i=0; i<max; i++)
    zero[i] = 0;

- Breaks the for loop into chunks and allocates each chunk to a separate thread, e.g. if max = 100 with 2 threads: iterations 0-49 go to thread 0 and 50-99 to thread 1
- The loop must have a relatively simple shape for an OpenMP-aware compiler to parallelize it: the run-time system must be able to determine how many loop iterations to assign to each thread
- No premature exits from the loop are allowed, i.e. no break, return, exit, or goto statements
- In general, don't jump outside of any pragma block

Parallel Statement Shorthand

  #pragma omp parallel
  {
    #pragma omp for
    for(i=0; i<max; i++) { ... }
  }

can be shortened to:

  #pragma omp parallel for
  for(i=0; i<max; i++) { ... }

(when this is the only directive in the parallel section). The same shorthand also works for sections.
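
For the sections case mentioned above, a minimal sketch (the two printf bodies are illustrative) of the combined parallel sections form:

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel sections
      {
          #pragma omp section
          printf("section A ran on thread %d\n", omp_get_thread_num());

          #pragma omp section
          printf("section B ran on thread %d\n", omp_get_thread_num());
      }   /* implicit barrier: both sections have finished here */
      return 0;
  }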

Example: Calculating π

Sequential Calculation of π in C

  #include <stdio.h>
  /* Serial Code */
  static long num_steps = 100000;
  double step;

  void main () {
    int i;
    double x, pi, sum = 0.0;
    step = 1.0/(double)num_steps;
    for (i = 1; i <= num_steps; i++) {
      x = (i - 0.5) * step;
      sum = sum + 4.0 / (1.0 + x*x);
    }
    pi = sum / num_steps;
    printf ("pi = %6.12f\n", pi);
  }

Parallel OpenMP Version (1)

  #include <omp.h>
  #define NUM_THREADS 4
  static long num_steps = 100000;
  double step;

  void main () {
    int i;
    double x, pi, sum[NUM_THREADS];
    step = 1.0/(double) num_steps;
    // assumes the run-time provides NUM_THREADS threads (e.g. OMP_NUM_THREADS=4)
    #pragma omp parallel private ( i, x )
    {
      int id = omp_get_thread_num();
      for (i=id, sum[id]=0.0; i < num_steps; i = i + NUM_THREADS) {
        x = (i+0.5)*step;
        sum[id] += 4.0/(1.0+x*x);
      }
    }
    for (i=1; i < NUM_THREADS; i++)
      sum[0] += sum[i];
    pi = sum[0] / num_steps;
    printf ("pi = %6.12f\n", pi);
  }

OpenMP Reduction

  double avg, sum=0.0, A[MAX];
  int i;
  #pragma omp parallel for private ( sum )   // bug: each thread gets an uninitialized
  for (i = 0; i < MAX; i++)                  // private sum whose value is then discarded
    sum += A[i];
  avg = sum/MAX;

The problem is that we really want the sum over all threads!

Reduction: specifies that one or more variables that are private to each thread are the subject of a reduction operation at the end of the parallel region:
  reduction(operation : var)
where
- operation: the operator to perform on the variables (var) at the end of the parallel region
- var: one or more variables on which to perform the scalar reduction

  double avg, sum=0.0, A[MAX];
  int i;
  #pragma omp parallel for reduction(+ : sum)
  for (i = 0; i < MAX; i++)
    sum += A[i];
  avg = sum/MAX;

Now each thread accumulates a private partial sum (Sum += A[0], Sum += A[1], Sum += A[2], Sum += A[3], ...), and the partial sums are combined into sum when the loop ends.

OpenMP: Parallel Loops with Reductions

OpenMP supports the reduce operation:

  sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for (i=0; i < 100; i++) {
    sum += array[i];
  }

Reduction operators and their initialization values (C and C++):
  +            0
  *            1
  -            0
  bitwise &    ~0
  bitwise |    0
  bitwise ^    0
  logical &&   1
  logical ||   0
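
Reduction is not limited to +; a hedged sketch using the * and max operators (max and min on scalars require OpenMP 3.1 or later; the array contents are illustrative):

  #include <stdio.h>
  #include <omp.h>
  #define N 16

  int main(void) {
      int i, a[N];
      for (i = 0; i < N; i++) a[i] = i + 1;

      long long prod = 1;   /* matches the * operator's init value of 1 */
      int biggest = 0;
      /* Each thread keeps private partial results, combined when the loop ends. */
      #pragma omp parallel for reduction(*:prod) reduction(max:biggest)
      for (i = 0; i < N; i++) {
          prod *= a[i];
          if (a[i] > biggest) biggest = a[i];
      }
      printf("product = %lld, max = %d\n", prod, biggest);
      return 0;
  }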

Calculating π, Version (1) - review

  #include <omp.h>
  #define NUM_THREADS 4
  static long num_steps = 100000;
  double step;

  void main () {
    int i;
    double x, pi, sum[NUM_THREADS];
    step = 1.0/(double) num_steps;
    #pragma omp parallel private ( i, x )
    {
      int id = omp_get_thread_num();
      for (i=id, sum[id]=0.0; i < num_steps; i = i + NUM_THREADS) {
        x = (i+0.5)*step;
        sum[id] += 4.0/(1.0+x*x);
      }
    }
    for (i=1; i < NUM_THREADS; i++)
      sum[0] += sum[i];
    pi = sum[0] / num_steps;
    printf ("pi = %6.12f\n", pi);
  }

Version 2: parallel for, reduction

  #include <omp.h>
  #include <stdio.h>
  static long num_steps = 100000;
  double step;

  void main () {
    int i;
    double x, pi, sum = 0.0;
    step = 1.0/(double) num_steps;
    #pragma omp parallel for private(x) reduction(+:sum)
    for (i=1; i <= num_steps; i++) {
      x = (i-0.5)*step;
      sum = sum + 4.0/(1.0+x*x);
    }
    pi = sum / num_steps;
    printf ("pi = %6.8f\n", pi);
  }

Loop Scheduling in the parallel for pragma

  #pragma omp parallel for
  for (i=0; i<max; i++)
    zero[i] = 0;

- The master thread creates additional threads, each with a separate execution context
- All variables declared outside the for loop are shared by default, except for the loop index, which is private per thread (Why?)
- Implicit barrier synchronization at the end of the for loop
- The index range is divided sequentially per thread: thread 0 gets 0, 1, ..., (max/n)-1; thread 1 gets max/n, max/n+1, ..., 2*(max/n)-1 (Why?)

Impact of the Scheduling Decision

- Load balance: is there the same amount of work in each iteration? Are the processors working at the same speed?
- Scheduling overhead: static decisions are cheap because they require no run-time coordination; dynamic decisions have overhead that depends on the complexity and frequency of the decisions
- Data locality: particularly within cache lines for small chunk sizes; scheduling also affects data reuse on the same processor

OpenMP environment variables

OMP_NUM_THREADS
- sets the number of threads to use during execution
- when dynamic adjustment of the number of threads is enabled, the value of this environment variable is the maximum number of threads to use
- for example:
    setenv OMP_NUM_THREADS 16          [csh, tcsh]
    export OMP_NUM_THREADS=16          [sh, ksh, bash]

OMP_SCHEDULE
- applies only to do/for and parallel do/for directives that have the schedule type RUNTIME
- sets the schedule type and chunk size for all such loops
- for example:
    setenv OMP_SCHEDULE "GUIDED,4"     [csh, tcsh]
    export OMP_SCHEDULE="GUIDED,4"     [sh, ksh, bash]
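
OMP_SCHEDULE only affects loops that use the RUNTIME schedule type; a small sketch (the loop body is illustrative) showing where that hooks in:

  #include <stdio.h>
  #include <omp.h>
  #define N 100

  int main(void) {
      int i, x[N];
      /* schedule(runtime): the schedule type and chunk size are read from
         OMP_SCHEDULE when the program runs, e.g.
         export OMP_SCHEDULE="GUIDED,4" */
      #pragma omp parallel for schedule(runtime)
      for (i = 0; i < N; i++)
          x[i] = i * i;
      printf("x[%d] = %d\n", N-1, x[N-1]);
      return 0;
  }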

Programming Model - Loop Scheduling

The schedule clause determines how loop iterations are divided among the thread team:

- static([chunk]) divides the iterations statically between the threads
  - each thread receives [chunk] iterations, rounding as necessary to account for all iterations
  - default [chunk] is ceil( # iterations / # threads )
- dynamic([chunk]) allocates [chunk] iterations per thread, allocating an additional [chunk] iterations when a thread finishes
  - forms a logical work queue consisting of all loop iterations
  - default [chunk] is 1
- guided([chunk]) allocates dynamically, but [chunk] is exponentially reduced with each allocation
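
A sketch of the schedule clause on a loop whose iterations have uneven cost (the workload is artificial, chosen only to make the imbalance visible):

  #include <stdio.h>
  #include <omp.h>
  #define N 1000

  int main(void) {
      int i;
      double total = 0.0;

      /* dynamic,8: threads take 8 iterations at a time from a shared work
         queue, which helps here because the inner loop cost grows with i.
         Swapping in schedule(static) or schedule(guided, 8) changes only
         how iterations are handed out. */
      #pragma omp parallel for schedule(dynamic, 8) reduction(+:total)
      for (i = 0; i < N; i++) {
          double s = 0.0;
          for (int j = 0; j < i; j++)
              s += 1.0 / (j + 1);
          total += s;
      }
      printf("total = %f\n", total);
      return 0;
  }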

Loop scheduling options (2) [figure]

Programming Model - Data Sharing

Parallel programs often employ two types of data:
- Shared data: visible to all threads, similarly named
- Private data: visible to a single thread (often stack-allocated)

PThreads:
- global-scoped variables are shared
- stack-allocated variables are private

OpenMP:
- shared variables are shared
- private variables are private

  // shared, globals
  int bigdata[1024];

  void* foo(void* bar) {
    // private, stack
    int tid;

    #pragma omp parallel \
        shared ( bigdata ) \
        private ( tid )
    {
      /* Calculation goes here */
    }
  }

Programming Model - Synchronization

OpenMP synchronization constructs:

- Critical sections (named or unnamed); no explicit locks/mutexes needed
    #pragma omp critical
    { /* Critical code here */ }

- Barrier directives
    #pragma omp barrier

- Explicit lock functions, when all else fails (may require the flush directive)
    omp_set_lock( &l );    /* l is an omp_lock_t */
    /* Code goes here */
    omp_unset_lock( &l );

- Single-thread regions within parallel regions (master, single directives)
    #pragma omp single
    { /* Only executed once */ }
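
The lock calls above are abbreviated on the slide; a fuller sketch of the explicit-lock API (the shared counter is illustrative):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      int count = 0;
      omp_lock_t l;
      omp_init_lock(&l);                /* a lock must be initialized before use */

      #pragma omp parallel num_threads(4)
      {
          omp_set_lock(&l);             /* acquire: only one thread at a time */
          count++;                      /* protected update */
          omp_unset_lock(&l);           /* release */
      }

      omp_destroy_lock(&l);             /* clean up after the parallel region */
      printf("count = %d\n", count);    /* prints 4 */
      return 0;
  }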

omp critical vs. atomic

  int sum = 0;
  #pragma omp parallel for
  for (int j=1; j < n; j++) {
    int x = j*j;
    #pragma omp critical
    {
      sum = sum + x;   // One thread enters the critical section at a time.
    }
  }

* May also use
    #pragma omp atomic
    x += expr;
Atomic is faster, but supports only limited arithmetic updates such as ++, --, +=, -=, *=, /=, &=, |=.
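
A compilable side-by-side of the two forms sketched above (n = 1000 and the j*j body are assumptions carried over from the slide):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      int n = 1000, sum1 = 0, sum2 = 0;

      #pragma omp parallel for
      for (int j = 1; j < n; j++) {
          int x = j * j;

          #pragma omp critical
          {
              sum1 = sum1 + x;   /* arbitrary code, one thread at a time */
          }

          #pragma omp atomic
          sum2 += x;             /* single simple update, typically cheaper */
      }

      printf("sum1 = %d, sum2 = %d\n", sum1, sum2);   /* both give the same total */
      return 0;
  }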

OpenMP Timing

Elapsed wall-clock time:
  double omp_get_wtime(void);
- returns elapsed wall-clock time in seconds
- time is measured per thread; no guarantee can be made that two distinct threads measure the same time
- time is measured from some point in the past, so subtract the results of two calls to omp_get_wtime to get elapsed time
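
A sketch of the measure-by-subtraction pattern (the harmonic-sum loop is just a stand-in workload):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      long i, n = 100000000;
      double sum = 0.0;

      double start = omp_get_wtime();            /* seconds since some time in the past */
      #pragma omp parallel for reduction(+:sum)
      for (i = 0; i < n; i++)
          sum += 1.0 / (i + 1);
      double elapsed = omp_get_wtime() - start;  /* difference of two calls = elapsed time */

      printf("sum = %f, elapsed = %f s\n", sum, elapsed);
      return 0;
  }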

Parallel Matrix Multiply: run tasks Ti (T1, T2, ...) in parallel on multiple threads [figure: task-to-thread assignment]

Matrix Multiply in OpenMP

  // C[M][N] = A[M][P] * B[P][N]
  start_time = omp_get_wtime();
  #pragma omp parallel for private(tmp, j, k)
  for (i=0; i<M; i++) {            // outer loop iterations spread across the threads
    for (j=0; j<N; j++) {          // inner loops run inside a single thread
      tmp = 0.0;
      for (k=0; k<P; k++) {
        /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
        tmp += A[i][k] * B[k][j];
      }
      C[i][j] = tmp;
    }
  }
  run_time = omp_get_wtime() - start_time;
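
A self-contained version of the fragment above; the matrix sizes (M = N = P = 200) and the constant initial values are assumptions added only so it runs on its own:

  #include <stdio.h>
  #include <omp.h>
  #define M 200
  #define N 200
  #define P 200

  static double A[M][P], B[P][N], C[M][N];

  int main(void) {
      int i, j, k;
      double tmp;
      for (i = 0; i < M; i++) for (k = 0; k < P; k++) A[i][k] = 1.0;
      for (k = 0; k < P; k++) for (j = 0; j < N; j++) B[k][j] = 2.0;

      double start_time = omp_get_wtime();
      /* Outer loop iterations are spread across the threads; each (j,k)
         inner computation runs entirely inside one thread. */
      #pragma omp parallel for private(tmp, j, k)
      for (i = 0; i < M; i++) {
          for (j = 0; j < N; j++) {
              tmp = 0.0;
              for (k = 0; k < P; k++)
                  tmp += A[i][k] * B[k][j];
              C[i][j] = tmp;
          }
      }
      double run_time = omp_get_wtime() - start_time;

      printf("C[0][0] = %f (expected %d), time = %f s\n", C[0][0], 2 * P, run_time);
      return 0;
  }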

OpenMP Summary

- OpenMP is a compiler-based technique for creating concurrent code from (mostly) serial code
- OpenMP can enable (easy) parallelization of loop-based code with fork-join parallelism
    #pragma omp parallel
    #pragma omp parallel for
    #pragma omp parallel private ( i, x )
    #pragma omp atomic
    #pragma omp critical
    #pragma omp for reduction(+ : sum)
- OpenMP performs comparably to manually-coded threading
- It is not a silver bullet for all applications