
Introduction to OpenMP
Slides prepared by: Farzana Rahman

Definition of OpenMP
Application Program Interface (API) for shared-memory parallel programming.
Directive-based approach with library support.
Targets existing applications and widely used languages:
  Fortran API released October '97
  C/C++ API released October '98
Supports multiple platforms.

OpenMP Specification
Application Program Interface (API) for shared-memory parallel programming.
Maintained by a non-profit organization: www.openmp.org
Full reference manual: http://www.openmp.org/specs

A Programmer's View of OpenMP
OpenMP is a portable, threaded, shared-memory programming specification with light syntax.
  Exact behavior depends on the OpenMP implementation!
  Requires compiler support (C/C++ or Fortran).
OpenMP will:
  Allow a programmer to separate a program into serial regions and parallel regions, rather than into concurrently executing threads.
  Provide synchronization constructs.
OpenMP will not:
  Parallelize automatically.
  Guarantee speedup.
  Provide freedom from data races.

Multithreading in OpenMP
OpenMP is an implementation of multithreading, a method of parallelization in which a master thread forks a specified number of slave threads and a task is divided among them.
The threads then run concurrently, with the runtime environment allocating threads to different processors depending on usage, machine load, and other factors.
The number of threads can be set by the runtime environment based on environment variables, or in code using library functions.
The OpenMP function prototypes are in the header file "omp.h" in C/C++.
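As a minimal sketch of this model (not from the original slides; file name is illustrative), a C program whose parallel region prints each thread's id, compiled with e.g. gcc -fopenmp hello.c:

  #include <omp.h>
  #include <stdio.h>

  int main(void) {
      #pragma omp parallel    /* master forks a team; each thread runs the block */
      {
          printf("Hello from thread %d of %d\n",
                 omp_get_thread_num(), omp_get_num_threads());
      }                       /* join: only the master thread continues */
      return 0;
  }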

OpenMP programming model
Shared-memory, thread-based parallelism: OpenMP is based upon the existence of multiple threads in the shared-memory programming paradigm. A shared-memory process consists of multiple threads.
Explicit parallelism: OpenMP is an explicit (not automatic) programming model, offering the programmer full control over parallelization.
Fork-join model: OpenMP uses the fork-join model of parallel execution. All OpenMP programs begin as a single process: the master thread executes sequentially until the first parallel region construct is encountered.

OpenMP programming model (Cont.)
FORK: the master thread creates a team of parallel threads. The statements in the program that are enclosed by the parallel region construct are then executed in parallel among the various team threads.
JOIN: when the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread.

OpenMP programming model (Cont.)
Compiler-directive based: OpenMP parallelism is specified through the use of compiler directives.
Nested parallelism support: the API facilitates placement of parallel constructs inside other parallel constructs. Implementations may or may not support this feature.
Dynamic threads: the API provides for dynamically altering the number of threads used to execute different parallel regions. Implementations may or may not support this feature.
I/O: OpenMP specifies nothing about parallel I/O. This is particularly important if multiple threads attempt to write to or read from the same file. If every thread conducts I/O on a different file, the issues are not as significant. It is entirely up to the programmer to ensure that I/O is conducted correctly within the context of a multithreaded program.
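Both optional features are toggled through the runtime library. A hedged sketch (these are real OpenMP routines, but an implementation may ignore the requests):

  #include <omp.h>

  int main(void) {
      omp_set_dynamic(1);   /* allow the runtime to adjust team sizes */
      omp_set_nested(1);    /* request support for nested parallel regions */
      /* omp_get_dynamic() and omp_get_nested() report what was granted */
      return 0;
  }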

OpenMP Interface Model
Directives and pragmas:
  control structures
  work sharing
  synchronization
  data scope attributes: private, firstprivate, lastprivate, shared, reduction
Runtime library routines:
  control and query routines: number of threads, throughput mode, nested parallelism
  lock API
Environment variables:
  runtime environment: schedule type, max #threads, nested parallelism, throughput mode

OpenMP Directives Format
  #pragma omp       required for all OpenMP C/C++ directives
  directive-name
  [clause, ...]     optional; clauses can be in any order, and repeated as necessary unless otherwise restricted
  newline
Example:
  #pragma omp parallel default(shared) private(beta, pi)

OpenMP Directives
parallel   Specifies a region that will be executed in parallel.
do/for     Shares iterations of a loop across a team of threads.
sections   Breaks work into separate, discrete sections; each section is executed by a thread.
single     Serializes a section of code.

OpenMP Synchronization
master    Specifies a region that is to be executed only by the master thread of the team.
critical  Specifies a region of code that must be executed by only one thread at a time.
barrier   Synchronizes all threads in the team.

OpenMP Synchronization (Cont.)
atomic    Specifies that a memory location can only be updated by one thread at a time.
flush     Identifies a point at which all thread-visible variables are written back to memory.
ordered   Specifies that iterations of the enclosed loop will be executed in the same order as if they were executed on a serial processor.

OpenMP basic syntax
Most of the constructs in OpenMP are compiler directives:
  #pragma omp construct [clause [clause] ...]
Function prototypes and types are in the file:
  #include <omp.h>
Most OpenMP constructs apply to a structured block: a block of one or more statements with one point of entry at the top and one point of exit at the bottom. It is OK to have an exit() within the structured block.

Thread Creation: Parallel Regions
You create threads in OpenMP with the parallel construct. For example, to create a 4-thread parallel region:

  double A[1000];
  omp_set_num_threads(4);
  #pragma omp parallel
  {
      int ID = omp_get_thread_num();
      pooh(ID, A);
  }

Each thread calls pooh with its own thread number ID.

Synchronization
High-level synchronization:
  critical
  atomic
  barrier
  ordered
Low-level synchronization:
  flush
  locks (both simple and nested)

Synchronization: critical
Mutual exclusion: only one thread at a time can enter a critical region.

  float res;
  #pragma omp parallel
  {
      float B;
      int i, id, nthrds;
      id = omp_get_thread_num();
      nthrds = omp_get_num_threads();
      for (i = id; i < niters; i += nthrds) {
          B = big_job(i);
          #pragma omp critical
          consume(B, res);
      }
  }

Synchronization: atomic
atomic provides mutual exclusion, but only applies to the update of a memory location (the update of X in the following example):

  #pragma omp parallel
  {
      double tmp, B;
      B = DOIT();
      tmp = big_ugly(B);
      #pragma omp atomic
      X += tmp;
  }

The loop worksharing constructs
The loop worksharing construct splits up loop iterations among the threads in a team:

  #pragma omp parallel
  {
      #pragma omp for
      for (I = 0; I < N; I++) {
          NEAT_STUFF(I);
      }
  }

Combined parallel/worksharing construct
OpenMP shortcut: put the parallel and the worksharing directive on the same line.

  double res[MAX];
  int i;
  #pragma omp parallel
  {
      #pragma omp for
      for (i = 0; i < MAX; i++) { res[i] = huge(); }
  }

is equivalent to:

  #pragma omp parallel for
  for (i = 0; i < MAX; i++) { res[i] = huge(); }

Reduction
OpenMP reduction clause: reduction(op : list)
Inside a parallel or work-sharing construct:
  A local copy of each list variable is made and initialized according to the op (e.g. 0 for +).
  The compiler finds standard reduction expressions containing op and uses them to update the local copy.
  Local copies are reduced into a single value and combined with the original global value.
The variables in list must be shared in the enclosing parallel region.

OpenMP: Reduction operands/initial values
Many different associative operands can be used with reduction; the initial values are the ones that make sense mathematically. For C/C++:
  +    0
  *    1
  -    0
  &    ~0
  |    0
  ^    0
  &&   1
  ||   0

Synchronization: barrier
Barrier: each thread waits until all threads arrive.

  #pragma omp parallel shared(A, B, C) private(id)
  {
      id = omp_get_thread_num();
      A[id] = big_calc1(id);
      #pragma omp barrier
      #pragma omp for
      for (i = 0; i < N; i++) { C[i] = big_calc3(i, A); }
      /* implicit barrier at the end of a for worksharing construct */
      #pragma omp for nowait
      for (i = 0; i < N; i++) { B[i] = big_calc2(C, i); }
      /* nowait: no implicit barrier here */
      A[id] = big_calc4(id);
  }

Master Construct
The master construct denotes a structured block that is executed only by the master thread. The other threads just skip it.

  #pragma omp parallel
  {
      do_many_things();
      #pragma omp master
      {
          exchange_boundaries();
      }
      #pragma omp barrier
      do_many_other_things();
  }

Single worksharing Construct
The single construct denotes a block of code that is executed by only one thread (not necessarily the master thread). A barrier is implied at the end of the single block.

  #pragma omp parallel
  {
      do_many_things();
      #pragma omp single
      {
          exchange_boundaries();
      }
      do_other_things();
  }

Synchronization: ordered
Used in conjunction with a DO or SECTIONS construct to impose a serial order on the execution of a section of code.

  #pragma omp parallel private(tmp)
  #pragma omp for ordered reduction(+:res)
  for (I = 0; I < N; I++) {
      tmp = NEAT_STUFF(I);
      #pragma omp ordered
      res += consum(tmp);
  }

Data Sharing
Data sharing is specified at the start of a parallel region or worksharing construct using the SHARED and PRIVATE clauses.
All variables in the SHARED clause are shared among the members of a team; the application must synchronize access to these variables.
All variables in the PRIVATE clause are private to each team member.

Data Sharing (Cont.)
The application must also:
  Initialize PRIVATE variables at the start of a parallel region, unless the FIRSTPRIVATE clause is specified. In that case, the PRIVATE copy is initialized from the global copy at the start of the construct on which the FIRSTPRIVATE clause appears.
  Update the global copy of a PRIVATE variable at the end of a parallel region. The LASTPRIVATE clause of a DO directive enables updating the global copy from the team member that executed the sequentially last iteration of the loop.
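A short sketch of these clauses (not from the original slides; the variable and values are illustrative):

  #include <stdio.h>

  int main(void) {
      int i, x = 10;
      /* each thread's private x starts at 10 (firstprivate);
         the global x is updated from the last iteration (lastprivate) */
      #pragma omp parallel for firstprivate(x) lastprivate(x)
      for (i = 0; i < 100; i++) {
          x = i;
      }
      printf("x = %d\n", x);   /* prints 99 */
      return 0;
  }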

Sections worksharing Construct
Use the non-iterative worksharing SECTIONS directive to divide the enclosed sections of code among the team. Each section is executed just once, by one thread.
Each section should be preceded by a SECTION directive, except for the first section. The last section ends at the END SECTIONS directive.
When a thread completes its section and there are no undispatched sections, it waits at the END SECTIONS directive unless you specify NOWAIT.
The SECTIONS directive takes an optional comma-separated list of clauses that specifies which variables are PRIVATE, FIRSTPRIVATE, LASTPRIVATE, or REDUCTION.
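In C/C++ the same construct looks like the following sketch (the work_on_* functions are hypothetical stand-ins for independent tasks):

  #include <stdio.h>

  static void work_on_x(void) { printf("x done\n"); }
  static void work_on_y(void) { printf("y done\n"); }

  int main(void) {
      #pragma omp parallel
      {
          #pragma omp sections   /* implicit barrier at the end unless nowait */
          {
              #pragma omp section
              work_on_x();       /* runs exactly once, on some thread */
              #pragma omp section
              work_on_y();       /* may run concurrently with work_on_x */
          }
      }
      return 0;
  }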

Schedule clause: loop worksharing constructs
The schedule clause affects how loop iterations are mapped onto threads:
schedule(static[,chunk])   Deal out blocks of iterations of size chunk to each thread.
schedule(dynamic[,chunk])  Each thread grabs chunk iterations off a queue until all iterations have been handled.
schedule(guided[,chunk])   Threads dynamically grab blocks of iterations; the block size starts large and shrinks as the calculation proceeds.
schedule(runtime)          Schedule and chunk size are taken from the OMP_SCHEDULE environment variable.
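For example (a sketch; unbalanced_work is a hypothetical function whose cost varies per iteration), dynamic scheduling smooths out load imbalance:

  /* hypothetical: iteration cost varies with i */
  void unbalanced_work(int i);

  void run(int n) {
      int i;
      #pragma omp parallel for schedule(dynamic, 4)
      for (i = 0; i < n; i++)
          unbalanced_work(i);   /* threads take 4 iterations at a time from a queue */
  }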

Summary

Shared Memory Model
[Diagram: threads thread1 through thread5, each with its own private memory, all attached to a common shared memory.]
Data can be shared or private.
Shared data is accessible by all threads.
Private data can be accessed only by the thread that owns it.
Data transfer is transparent to the programmer.

Critical Construct

  sum = 0;
  #pragma omp parallel private(lsum)
  {
      lsum = 0;
      #pragma omp for
      for (i = 0; i < n; i++) {
          lsum = lsum + A[i];
      }
      #pragma omp critical
      {
          sum += lsum;
      }
  }

Threads wait their turn; only one thread at a time executes the critical section.

Reduction Clause

  sum = 0;
  #pragma omp parallel for reduction(+:sum)
  for (i = 0; i < n; i++) {
      sum = sum + A[i];
  }

sum is a shared variable.

OpenMP example: PI

  !$OMP PARALLEL PRIVATE(x, i)
        write (*,1004) omp_get_thread_num()
  !$OMP DO REDUCTION(+:sum)
        DO i = 1, 1000000000, 1
           x = step*((-0.5)+i)
           sum = sum + 4.0/(1.0+x**2)
        ENDDO
  !$OMP END DO NOWAIT
  !$OMP END PARALLEL

OpenMP example (Cont.)
Running OpenMP applications on Steele:

  qsub -I -l nodes=1:ppn=8
  module avail
  module load intel
  ifort omp_pi.f -o omp_pi -openmp
  setenv OMP_NUM_THREADS 4
  time ./omp_pi

OpenMP Scoping
The OpenMP data scope attribute clauses are used to explicitly define how variables should be scoped. They include:
  PRIVATE
  FIRSTPRIVATE
  LASTPRIVATE
  SHARED
  DEFAULT
  REDUCTION
  COPYIN
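One defensive idiom (a sketch, not from the slides; scale and tmp are illustrative names) is DEFAULT(NONE), which forces every variable used in the region to be scoped explicitly:

  void scale(double *a, int n) {
      double tmp;
      #pragma omp parallel for default(none) shared(a, n) private(tmp)
      for (int i = 0; i < n; i++) {
          tmp = 2.0 * a[i];   /* tmp is per-thread scratch; no races */
          a[i] = tmp;
      }
  }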

OpenMP Runtime Library Routines
  OMP_SET_NUM_THREADS
  OMP_GET_NUM_THREADS
  OMP_GET_MAX_THREADS
  OMP_GET_THREAD_NUM
  OMP_GET_NUM_PROCS
  OMP_IN_PARALLEL
  OMP_SET_DYNAMIC
  OMP_GET_DYNAMIC
  OMP_SET_NESTED
  OMP_GET_NESTED
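A sketch of a few of these routines called from C (lower-case names, declared in omp.h):

  #include <omp.h>
  #include <stdio.h>

  int main(void) {
      omp_set_num_threads(4);                          /* request team size */
      printf("procs: %d\n", omp_get_num_procs());      /* available processors */
      printf("in parallel? %d\n", omp_in_parallel());  /* 0 here: serial region */
      #pragma omp parallel
      {
          #pragma omp single
          printf("team of %d threads\n", omp_get_num_threads());
      }
      return 0;
  }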

OpenMP Lock Routines
  OMP_INIT_LOCK
  OMP_DESTROY_LOCK
  OMP_SET_LOCK
  OMP_UNSET_LOCK
  OMP_TEST_LOCK
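A sketch of the simple-lock lifecycle in C (the counter is illustrative):

  #include <omp.h>

  int main(void) {
      omp_lock_t lock;
      int counter = 0;

      omp_init_lock(&lock);
      #pragma omp parallel
      {
          omp_set_lock(&lock);    /* blocks until the lock is acquired */
          counter++;              /* protected update */
          omp_unset_lock(&lock);
      }
      omp_destroy_lock(&lock);
      return 0;
  }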

For More Information
  www.openmp.org
  www.llnl.gov

The End