Introduction to OpenMP

1 Introduction to OpenMP. NTNU IT, HPC Section. John Floan. Notur / NTNU HPC: http://www.notur.no/ and www.hpc.ntnu.no/

2 Plan for the day:
- Introduction to OpenMP and parallel programming
- Tutorial 1. Parallel region, thread creation (exercise: Hello world)
- Tutorial 2. Parallel for/do loop and data sharing (exercises: sine and matrix multiplication)
- Tutorial 3. Synchronization: critical and atomic directives (exercise: pi)
- Tutorial 4. Reduction (pi)
- How to optimize my sequential code with OpenMP?
- Memory allocation

3 OpenMP (Open Multi-Processing). OpenMP supports multi-platform shared-memory parallel programming in C/C++ and Fortran. It is a portable, scalable model with a simple and flexible interface for developing parallel applications that run on anything from a laptop to a supercomputer. OpenMP is implemented in several Fortran and C/C++ compilers, such as GNU, IBM, Intel, Portland, Cray, HP and Microsoft. OpenMP follows an SPMD (Single Program, Multiple Data) model: each thread redundantly executes the same code. See http://openmp.org.
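As a minimal, hedged illustration (not one of the tutorial programs; the file name hello.c and the GNU flag -fopenmp are just examples), an OpenMP program in C only needs the omp.h header, a compiler directive and an OpenMP-aware compiler flag:

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      /* Every thread in the team redundantly executes this block (SPMD). */
      #pragma omp parallel
      {
          printf("Hello from thread %d\n", omp_get_thread_num());
      }
      return 0;
  }

Compile and run (GNU compiler): gcc -fopenmp hello.c -o hello && OMP_NUM_THREADS=4 ./hello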

4 NTNU systems. See http://docs.notur.no/ntnu

Vilje (SGI Altix 8600):
              Each node    Total
  Cores       16           23040
  Nodes       -            1440
  Memory      32 GB        46 TB
  Storage     -            1000 TB
  Flops       -            479 Tflops (peak)

Kongull (HP):
              Each node                          Total
  Cores       12                                 1116
  Nodes       -                                  93
  Memory      48 GB (44 nodes), 24 GB (49 nodes) 3 TB
  Storage     -                                  73 TB
  Flops       -                                  11 Tflops

5 Parallel computation. A program can be split up to run on several processors that run in parallel. A sequential program performs its initialization, serial computation and post-processing on one processor (execution time t_serial); the parallel program performs the computation as p parallel parts (execution time t_parallel). Speedup: S = t_serial / t_parallel, where t_serial is the execution time of the single-core/single-processor program and t_parallel is the execution time of the multicore/multiprocessor program. The speedup on p processors/cores is written S_p.
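For example, if the sequential program takes t_serial = 100 s and the parallel version on p = 4 cores takes t_parallel = 30 s, the speedup is S_4 = 100 / 30 ≈ 3.3, somewhat below the ideal speedup of 4.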

6 What is a parallel computation on a multicore, shared-memory processor? Supercomputers, clusters and PCs/laptops today have processors with several cores and shared memory. Processors with 2, 4, 6 and 8 cores are common today. Intel MIC (Many Integrated Cores) processors will arrive in 2012 and will have more than 50 cores. NTNU: Vilje (2012): 16 cores (2 processors) per node, 1440 nodes = 23 040 cores. Njord (2006): 16 cores (8 processors) per node, 192 nodes = 2 976 cores. Kongull (2010): 12 cores (2 processors) per node, 93 nodes = 1 116 cores.

7 Multi-core processor. Figure 1: Intel i7 Sandy Bridge, a 4-core processor connected to RAM (caches: L1 < 500 kB, L2 < 10 MB, L3 < 40 MB).
- Each core can run its own program block (thread), simultaneously with the other cores.
- All cores share all the memory, with fast memory access.
- All communication between the threads is via variables (shared memory).

8 Example: matrix calculation B = c * A, where A and B are m x n matrices and c is a constant. Sequential computation: all computation is carried out on only one processor or core.

Program:
  Init the matrix A
  for i = 1 to m
    for j = 1 to n
      B(i,j) = c * A(i,j)

Benefits: fine for small computations, fast memory access and no memory conflicts. Drawback: limited memory space (GB) and sequential computation.

9 Parallel computation with MPI on a cluster. The matrix is split up and scattered to several computers/nodes, which are interconnected via IP, InfiniBand or another high-performance serial link.

Program:
  Master: initialize the matrix.
  Master splits up and spreads the matrix to all nodes (node 1, node 2, node 3, node 4).
  for i = 1 to m_mynode
    for j = 1 to n_mynode
      myB(i,j) = c * localA(i,j)

Benefits: more memory space (TB) and parallel computation on each node. Drawback: communication latency between the nodes.

10 Parallel computation with OpenMP and shared memory. The matrix remains in memory and each core/thread in the processor computes its part of the matrix in parallel (thread 0 in core 0, thread 1 in core 1, thread 2 in core 2, thread 3 in core 3).

Program:
  Set a compiler directive for a parallel region.
  for i = 1 to m
    for j = 1 to n
      B(i,j) = c * A(i,j)

Benefits: parallel computation and low communication latency between the cores. Drawback: smaller memory space (GB) and possible memory conflicts.

11 OpenMP runtime library routines and environment variables.

Important environment variable: OMP_NUM_THREADS (e.g. export OMP_NUM_THREADS=8; ./myprogram). This environment variable sets the number of threads; the default is the maximum number of threads.

OpenMP runtime library routines (some routines useful for testing and debugging):
  omp_set_num_threads(n)  // Set the number of threads to n before a parallel region
  omp_get_num_threads()   // Get the number of OpenMP threads; returns an integer
  omp_get_thread_num()    // Get the current thread number; returns an integer
  omp_get_wtime()         // Get wall-clock time in seconds; returns double / real(8)
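A small C sketch (not one of the tutorial programs) that combines these routines to time a parallel region could look like this:

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      double t0, t1;

      omp_set_num_threads(4);      /* Request 4 threads for the next parallel region */
      t0 = omp_get_wtime();        /* Wall-clock time before the region */

      #pragma omp parallel
      {
          printf("Thread %d of %d\n",
                 omp_get_thread_num(),    /* Current thread number */
                 omp_get_num_threads());  /* Number of threads in the team */
      }

      t1 = omp_get_wtime();
      printf("Elapsed time: %f s\n", t1 - t0);
      return 0;
  }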

12 Tutorials. Some information:

Login at Ve/Vilje and Kongull:
  Ve: ssh -X ve.hpc.ntnu.no
  Kongull: ssh -X kongull.hpc.ntnu.no

Programs: copy all files from /work/floan/tutorials/openmp to your home folder.

Compile your program: make helloworld (or make mult, make pi).

Run a job. Note: do not start a job interactively (./myprogram).
  Njord: llsubmit helloworld_c.sh (Fortran: ..._f.sh)
  Kongull: qsub helloworld_c.sh (note: on Kongull, module load intel)
  (There is a job script for each tutorial.)

Check the queue status:
  Njord: llq
  Kongull: qstat

13 Tutorial 1. Parallel region and thread creation. A parallel region is the part of the program where the program is spread across several threads and cores. Before and after a parallel region the program runs on one thread (the master thread). The transition from one thread into a parallel region is called a fork; the transition back to one thread (the master thread) is called a join.

14 Example.

C:
  ...
  // 1 thread (master thread)
  // Fork to several threads in parallel
  #pragma omp parallel
  {
    do_something_in_parallel();
  }
  // Join to 1 thread
  ...

Fortran:
  !$OMP PARALLEL
    call do_something_in_parallel()
  !$OMP END PARALLEL

15 Synchronization: barrier. Each thread waits until all threads arrive.

C:       #pragma omp barrier
Fortran: !$OMP BARRIER

16 Master construct. The master construct specifies a structured block that is executed only by the master thread of the team. There is no implied barrier on entry to, or exit from, the master construct.

C:       #pragma omp master
Fortran: !$OMP MASTER

Single construct. The single construct specifies that the associated structured block is executed by only one of the threads in the team (not necessarily the master thread). There is an implicit barrier at the end of the single block.

C:       #pragma omp single
Fortran: !$OMP SINGLE

(Example: ex_barrier.c)

17 Example: barrier and master.

C:
  #pragma omp parallel
  {
    do_many_things_in_parallel();

    // All threads wait here until all arrive.
    #pragma omp barrier

    #pragma omp master
    {
      // Only the master thread will call this function:
      save_results();
    }

    // All threads wait here until all arrive.
    #pragma omp barrier

    do_many_other_things_in_parallel();
  }

Fortran:
  !$OMP PARALLEL
    call do_many_things_in_parallel()
  ! All threads wait here until all arrive.
  !$OMP BARRIER
  !$OMP MASTER
    ! Only the master thread will call this subroutine:
    call save_results()
  !$OMP END MASTER
  ! All threads wait here until all arrive.
  !$OMP BARRIER
    call do_many_other_things_in_parallel()
  !$OMP END PARALLEL
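For comparison, a short sketch (not in the tutorial files) of the single construct; unlike master, it has an implicit barrier at the end of the block:

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      #pragma omp parallel
      {
          printf("work by thread %d\n", omp_get_thread_num());

          #pragma omp single
          {
              /* Executed by exactly one thread (not necessarily the master);
                 the other threads wait at the implicit barrier at the end. */
              printf("single block executed by thread %d\n", omp_get_thread_num());
          }

          printf("more work by thread %d\n", omp_get_thread_num());
      }
      return 0;
  }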

18 Exercise a. Hello world. Modify the sequential Hello world program and add a parallel region around the printf/write statement.

C:
  int main()
  {
    printf("Hello world\n");
  }

Fortran:
  program helloworld
    write(6,*) 'Hello world'
  end program helloworld

- Compile your program: make helloworld
- Submit the batch job helloworld_c.sh (C) or helloworld_f.sh (Fortran)
- Open the output file helloworld_xxxxxxxx.

Exercise b. Use OpenMP runtime library routines. Modify your program to print the number of threads and the thread number, like this:
  Number of threads: 16   (always first)
  Hello world from thread 1
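One possible solution sketch for exercise b (the exact output format expected by the batch script may differ):

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      #pragma omp parallel
      {
          #pragma omp master
          printf("Number of threads: %d\n", omp_get_num_threads());

          /* Make sure the thread count is printed before the greetings. */
          #pragma omp barrier

          printf("Hello world from thread %d\n", omp_get_thread_num());
      }
      return 0;
  }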

19 Tutorial 2. Parallel for/do loop and data sharing. OpenMP automatically splits the for loop across several threads and sends a copy of the block to each core. This construction is called worksharing and is written as follows:

C:
  #pragma omp parallel for
  for (i = 0; i < n; i++)
    do_something();

Fortran:
  !$omp parallel do
  do i = 1, n
    call do_something()
  end do
  !$omp end parallel do

It is also allowed to write the parallel for/do loop as:

C:
  #pragma omp parallel
  {
    #pragma omp for
    for (i = 0; ...
    ...
  }

Fortran:
  !$omp parallel
  !$omp do
  do i = 1, n
  ...

20 Example: 4 threads and n = 40. OpenMP divides the for/do loop into chunks; here the chunk size is 10.

C:       #pragma omp parallel for
         for (i = 1; i <= n; i++)
Fortran: !$omp parallel do
         do i = 1, n

  Thread 1: i = 1 to 10
  Thread 2: i = 11 to 20
  Thread 3: i = 21 to 30
  Thread 4: i = 31 to 40

Note: it is important that the parallel for/do loop is iteration independent, i.e. one iteration must not depend on the iteration before, because the loop iterations are not executed in sequential order (a counter-example is shown below). (Example: ex_loop.c)
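As a hedged counter-example (not from the tutorial files), the loop below must not be parallelized as written, because iteration i reads the value written by iteration i-1:

  #include <stdio.h>
  #define N 8

  int main(void)
  {
      double a[N] = {1.0}, b[N] = {0.0};
      int i;

      /* Loop-carried dependence: a[i] depends on a[i-1] from the previous
         iteration, so the iterations cannot safely be shared among threads
         with "#pragma omp parallel for". */
      for (i = 1; i < N; i++)
          a[i] = a[i-1] + b[i];

      printf("a[N-1] = %f\n", a[N-1]);
      return 0;
  }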

21 Data sharing: shared, private and firstprivate clauses. All variables declared outside a parallel region are shared inside the parallel region by default. Note: the for/do loop iterator (e.g. i) is automatically made private/local inside the parallel region.

Shared: makes a variable shared inside a parallel region.

Private: variables from outside can be made private inside the parallel region, but the private copy starts out with no defined value.

22 Example 1. Private.

C:
  int i;
  int n = 1000;
  double tmp = 0;
  ...
  #pragma omp parallel for private(tmp)
  for (i = 0; i < n; i++)
  {
    // tmp is local (private) to each thread
    tmp = check(A[i]);
    if (tmp > 0)
      A[i] = tmp;
  }
  ...

Fortran:
  integer :: i, n
  real :: tmp
  n = 1000
  tmp = 0
  ...
  !$omp parallel do private(tmp)
  do i = 1, n
    tmp = check(A(i))
    if (tmp > 0) then
      A(i) = tmp
    end if
  end do
  !$omp end parallel do
  ...

23 Firstprivate: variables from outside can be made private inside the parallel region, with each private copy initialized to the corresponding value from the master thread.

24 Example 2. Firstprivate.

C:
  int main()
  {
    int a = 0, b = 1;
    int i;
    ...
    #pragma omp parallel for firstprivate(a,b)
    for (i = 0; i < 16; i++)
    {
      A[i] = func(a) + func(b);
      a++;
      b++;
    }
  } // End main

Fortran:
  program tut3ex2
    integer :: a, b, i
    a = 1
    b = 0
    ...
    !$omp parallel do firstprivate(a,b)
    do i = 1, 16
      A(i) = func(a) + func(b)
      a = a + 1
      b = b + 1
    end do
    !$omp end parallel do
  end program tut3ex2

25 Exercise a. Sine and cosine.

a1) This program computes each element of Y from the corresponding element of X: Y(i) = 2 * X(i). Measure the execution time for the sequential code. Modify the program forsin with a parallel for/do loop. Measure the execution time again and calculate the speedup.
  module load intelcomp
  Compile: make forsin

a2) Add sin and cos to equation 1: Y(i) = sin(X(i)) + cos(X(i)). Measure the execution time (sequential and parallel) and calculate the speedup. Why is the speedup different between exercise a1 and a2?

Exercise b. Matrix multiplication, C = AB. Measure the execution time for the sequential code. Modify the program mult with a parallel for/do loop. Measure the execution time again and calculate the speedup.
  Compile: make mult
  (Try with size = 1000, 2000 and 3000.)

26 Tutorial 3. Synchronization: critical and atomic directives. OpenMP does not protect a variable or a region by default. If several threads update the same variable at the same time, one thread's update can be lost, which causes wrong results.

Critical: provides mutual exclusion; only one thread at a time can enter a critical region. Example:
  C:       #pragma omp critical
           calculate(b,n);
  Fortran: !$omp critical
           call calculate(b,n)

Atomic: also provides mutual exclusion, but applies only to the update of a single memory location. Example:
  C:       #pragma omp atomic
           x += tmp;
  Fortran: !$omp atomic
           x = x + tmp
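A small, self-contained sketch (not one of the tutorial programs) of protecting the update of a shared sum:

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      const int n = 1000;
      double sum = 0.0;
      int i;

      #pragma omp parallel for
      for (i = 0; i < n; i++) {
          double tmp = 0.001 * i;    /* Private work for this iteration */

          /* Only one thread at a time may update the shared variable.
             A critical region (#pragma omp critical) would also work, but
             it protects a whole block and is usually more expensive. */
          #pragma omp atomic
          sum += tmp;
      }

      printf("sum = %f\n", sum);
      return 0;
  }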

27 Exercise. Calculation of pi (3.14159265358979...). To calculate pi we can use this formula:

  integral from 0 to 1 of 4.0 / (1 + x^2) dx = pi

Create a parallel version of pi.c or pi.f90 (make pi). Calculate the speedup S (measure the execution time before and after adding OpenMP).
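One possible approach (a sketch only; the distributed pi.c may be structured differently) approximates the integral with the midpoint rule and protects the accumulation with atomic, as in Tutorial 3:

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      const long n = 1000000;             /* Number of integration steps (assumed value) */
      const double h = 1.0 / (double)n;   /* Width of each interval */
      double pi = 0.0;
      long i;

      #pragma omp parallel for
      for (i = 0; i < n; i++) {
          double x = (i + 0.5) * h;       /* Midpoint of interval i */

          #pragma omp atomic              /* Serialize the update of the shared sum */
          pi += 4.0 / (1.0 + x * x) * h;  /* Integrand 4/(1+x^2) times interval width */
      }

      printf("pi is approximately %.12f\n", pi);
      return 0;
  }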

28 Tutorial 4. Reduction. The OpenMP reduction clause: reduction(op:list). A local copy of each variable in the list is made and initialized depending on the operator op (e.g. 0 for +). The compiler finds the standard reduction expressions containing op and uses them to update the local copy. The local copies are then reduced into a single value and combined with the original global value.

29 Example: average.

C:
  double ave = 0;
  double A[n];
  int i;
  put_something_in(A);
  #pragma omp parallel for reduction(+:ave)
  for (i = 0; i < n; i++)
    ave += A[i];
  ave = ave / n;

Fortran:
  real :: ave
  real, dimension(n) :: A
  integer :: i
  ave = 0
  call put_something_in(A)
  !$omp parallel do reduction(+:ave)
  do i = 1, n
    ave = ave + A(i)
  end do
  !$omp end parallel do
  ave = ave / n

30 Reduction operators:

  C/C++:   +, *, -, &, |, ^, &&, ||
  Fortran: +, *, -, .AND., .OR., .EQV., .NEQV.

31 Exercise. Modify your pi program to use reduction. Calculate the speedup (measure the execution time before and after adding OpenMP).
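With reduction, the atomic update in the previous sketch is replaced by a private copy of pi per thread that OpenMP combines at the end of the loop, e.g. (fragment only, using the same variables as the sketch above):

  /* Same midpoint-rule loop as before, now using a reduction instead of atomic. */
  #pragma omp parallel for reduction(+:pi)
  for (i = 0; i < n; i++) {
      double x = (i + 0.5) * h;
      pi += 4.0 / (1.0 + x * x) * h;
  }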

32 Typical speedup performance (figure with three speedup curves):
1. The program scales.
2. Part of the program cannot be, or is not, parallelized.
3. Typical memory conflict.

33 Allocating memory in C: automatic allocation. Example:

  { // Inside a scope
    int a[10][10];
  }

- The array is not reachable outside the scope and is deleted when the instruction pointer leaves the scope.
- The array is stored on the stack; the stack has fast access.
- The stack is limited: allocating more memory than is available can crash the program with a stack overflow.
- For large memory allocations, use dynamic memory allocation.

34 Dynamic memory allocation.

  {
    int n = 1000;                 // Size of the array
    double *array;

    // Allocate the array
    array = (double *) malloc(sizeof(double) * n);
    ...
    myfunction(n, array);
    ...
    free(array);
  }

- Dynamically allocated arrays are stored on the heap.
- You can allocate (almost) all available memory space.
- The heap has slower access than the stack.
- The array is not deleted when the instruction pointer leaves the scope; you must call free(array) yourself.

35 How to optimize my sequential code with OpenMP.
1. Measure the time consumption of each for loop (see the timing sketch below).
2. Start with the loop that has the highest time consumption.
3. Check that the loop iterations are independent of each other.
4. Check for critical regions/variables.
5. Modify other sequential loops into parallel loops where possible.
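For step 1, omp_get_wtime() can be used to time an individual loop before and after parallelization; a minimal sketch (the loop body is just a placeholder, not code from the tutorial):

  #include <stdio.h>
  #include <omp.h>

  #define N 10000000

  int main(void)
  {
      static double a[N];   /* static: keeps the large array off the limited stack */
      double t0, t1;
      int i;

      t0 = omp_get_wtime();

      /* Candidate loop: the iterations are independent, so it may be parallelized. */
      #pragma omp parallel for
      for (i = 0; i < N; i++)
          a[i] = 2.0 * i;

      t1 = omp_get_wtime();
      printf("Loop time: %f s (a[0] = %f)\n", t1 - t0, a[0]);
      return 0;
  }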