A brief introduction to OpenMP


A brief introduction to OpenMP Alejandro Duran Barcelona Supercomputing Center

Outline
1 Introduction
2 Writing OpenMP programs
3 Data-sharing attributes
4 Synchronization
5 Worksharings
6 Task parallelism
Alex Duran (BSC), A brief introduction to OpenMP, 10th October 2011


Introduction: What is OpenMP?
An API extension to the C, C++ and Fortran languages for writing parallel programs for shared-memory machines.
The current version is 3.1 (July 2011).
Supported by most compiler vendors: Intel, IBM, PGI, Oracle, Cray, Fujitsu, HP, GCC, ...
A natural fit for multicores, as it was designed for SMPs.
Maintained by the Architecture Review Board (ARB), a consortium of industry and academia.
http://www.openmp.org

Introduction: A bit of history
OpenMP Fortran 1.0 — 1997
OpenMP C/C++ 1.0 — 1998
OpenMP Fortran 1.1 — 1999
OpenMP Fortran 2.0 — 2000
OpenMP C/C++ 2.0 — 2002
OpenMP 2.5 — 2005
OpenMP 3.0 — 2008
OpenMP 3.1 — 2011

Introduction: Target machines
Shared multiprocessors. [Figure: several chips connected to memory through a memory interconnect.]

Introduction: Shared memory
[Figure: two CPUs reading and writing the same variable x in shared memory.]
Memory is shared across different processors.
Communication and synchronization happen implicitly through shared memory.

Introduction: Including multicores/SMTs
[Figure: a chip with two cores, each with private L1 caches, a shared on-chip L2 cache, an off-chip cache, and memory.]

Introduction: More commonly, NUMA
[Figure: several chips, each with its own local memory, connected by an interconnect.]
Access to memory addresses is not uniform.
Memory migration and locality are very important.

Introduction: Why OpenMP?
Mature standard and implementations; standardizes the practice of the last 20 years.
Good performance and scalability.
Portable across architectures.
Incremental parallelization; (mostly) maintains the sequential version.
A high-level language (some people may say a medium-level language :-)).
Supports both task and data parallelism.
Communication is implicit.

Introduction: Why not OpenMP?
Communication is implicit (beware false sharing).
The flat memory model can lead to poor performance on NUMA machines.
Incremental parallelization can create a false sense of glory (or failure).
No support for accelerators.
No error-recovery capabilities.
Difficult to compose; pipelines are difficult.


Writing OpenMP programs: OpenMP at a glance
[Figure: OpenMP components. Constructs are handled by the compiler; the OpenMP API, environment variables, and ICVs are handled by the OpenMP runtime library; everything sits on top of the OS threading libraries running on the SMP's CPUs.]

Writing OpenMP programs: OpenMP directives syntax
In Fortran, through a specially formatted comment:

    sentinel construct [clauses]

where sentinel is one of !$omp, C$OMP, or *$OMP in fixed format, and !$omp in free format.
In C/C++, through a compiler directive:

    #pragma omp construct [clauses]

OpenMP syntax is ignored if the compiler does not recognize OpenMP.

Writing OpenMP programs: Hello world!

    int id;
    char *message = "Hello world!";
    #pragma omp parallel private(id)
    {
        id = omp_get_thread_num();
        printf("Thread %d says: %s\n", id, message);
    }

Creates a parallel region of OMP_NUM_THREADS threads; all threads execute the same code.
id is private to each thread; each thread gets its own id in the team.
message is shared among all threads.

Writing OpenMP programs: Execution model
OpenMP uses a fork-join model: the master thread spawns a team of threads that join at the end of the parallel region.
Threads in the same team can collaborate to do work.
[Figure: the master thread forking and joining parallel regions, including a nested parallel region.]

Writing OpenMP programs: Memory model
OpenMP defines a weak, relaxed memory model: threads can see different values for the same variable.
Memory consistency is only guaranteed at specific points (synchronization constructs, parallelism creation points, ...). Luckily, the default points are usually enough.
Variables can have shared or private visibility for each thread.


Data-sharing attributes: Data environment
When creating a new parallel region (and in other cases), a new data environment must be constructed for the threads. It is defined by means of clauses on the construct: shared, private, firstprivate, default, threadprivate (not a clause!), ...

Data-sharing attributes: Shared
When a variable is marked shared, all threads see the same variable, but not necessarily the same value. Some kind of synchronization is usually needed to update it correctly.

Data-sharing attributes: Private
When a variable is marked private, the variable inside the construct is a new variable of the same type with an undefined value. It can be accessed without any kind of synchronization.

Data-sharing attributes: Firstprivate
When a variable is marked firstprivate, the variable inside the construct is a new variable of the same type, but it is initialized to the original variable's value. In a parallel construct this means all threads get a different variable with the same initial value. It can be accessed without any kind of synchronization.

Data-sharing attributes: example

    int x = 1, y = 1, z = 1;
    #pragma omp parallel shared(x) private(y) firstprivate(z) \
            num_threads(2)
    {
        x++;                 /* unsafe update of a shared variable */
        y++;                 /* y starts with an undefined value   */
        z++;
        printf("%d\n", x);   /* prints 2 or 3: unsafe update!      */
        printf("%d\n", y);   /* prints any number                  */
        printf("%d\n", z);   /* prints 2                           */
    }

The parallel region has only two threads.

Data-sharing attributes: Threadprivate storage
How to parallelize code that uses global variables, static variables, or class-static members? Use threadprivate storage: the threadprivate construct creates a per-thread copy of such variables.

Data-sharing attributes: Threadprivate storage

    char *foo() {
        static char buffer[BUF_SIZE];
        #pragma omp threadprivate(buffer)
        ...
        return buffer;
    }

Creates one static copy of buffer per thread, so foo can now be called by multiple threads at the same time.
Simpler than redefining the interface, but more costly.


Synchronization: Why synchronization?
Threads need to synchronize to impose an ordering on their sequences of actions. OpenMP provides several synchronization mechanisms: barrier, critical, atomic, taskwait, and low-level locks.

Synchronization: Barrier

    #pragma omp parallel
    {
        foo();
        #pragma omp barrier
        bar();
    }

Synchronizes all threads of the team: all foo occurrences are forced to happen before any bar occurrence.

Synchronization: The critical construct

    int x = 1;
    #pragma omp parallel num_threads(2)
    {
        #pragma omp critical
        x++;                 /* only one thread at a time here */
    }
    printf("%d\n", x);       /* prints 3! */

Synchronization: The atomic construct

    int x = 1;
    #pragma omp parallel num_threads(2)
    {
        #pragma omp atomic
        x++;                 /* only one thread at a time updates x */
    }
    printf("%d\n", x);       /* prints 3! */

atomic is specially supported by hardware primitives.

Synchronization: Locks
OpenMP provides lock primitives for low-level synchronization:
omp_init_lock — initializes the lock
omp_set_lock — acquires the lock
omp_unset_lock — releases the lock
omp_test_lock — tries to acquire the lock (won't block)
omp_destroy_lock — frees the lock's resources


Worksharings
Worksharing constructs divide the execution of a code region among the threads of a team: threads cooperate to do the work. This is a better way to split work than using thread ids.
In OpenMP, there are four worksharing constructs: loop worksharing, single, section, workshare.
Restriction: worksharings cannot be nested.

Worksharings: The for construct

    void foo(int **m, int N, int M) {
        int i, j;
        #pragma omp parallel
        #pragma omp for private(j)
        for (i = 0; i < N; i++)
            for (j = 0; j < M; j++)
                m[i][j] = 0;
    }

The newly created threads cooperate to execute all the iterations of the loop.
Loop iterations must be independent.
The loop variable i is automatically privatized; the inner variable j must be privatized explicitly.

Worksharings: The reduction clause

    int vector_sum(int n, int v[n]) {
        int i, sum = 0;
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += v[i];
        return sum;
    }

A common pattern: all threads accumulate into a shared variable. It is efficiently solved with the reduction clause: each thread's private copy is initialized to the identity value at the start of the loop, and the shared variable is updated with the partial value of each thread at the end.

Worksharings: The schedule clause
The schedule clause determines which iterations are executed by each thread. It is important to choose it, for performance reasons only.
The possible schedules are: STATIC, STATIC,chunk, DYNAMIC[,chunk], GUIDED[,chunk], AUTO, RUNTIME.
STATIC gives good locality and low overhead but may suffer load imbalance; DYNAMIC and GUIDED give worse locality and higher overhead but better load balance.

Worksharings: The single construct

    int main(int argc, char **argv) {
        #pragma omp parallel
        #pragma omp single
        printf("Hello world!\n");
        return 0;
    }

This program outputs just one "Hello world!".


Task parallelism: Task parallelism in OpenMP
[Figure: a team of threads taking work from a shared task pool.]
Parallelism is extracted from several pieces of code. Tasks allow parallelizing very unstructured parallelism: unbounded loops, recursive functions, ...

Task parallelism: What is a task in OpenMP?
Tasks are work units whose execution may be deferred (they can also be executed immediately).
Tasks are composed of: the code to execute, a data environment (initialized at creation time), and internal control variables (ICVs).
Threads of the team cooperate to execute them.

Task parallelism: When are tasks created?
Parallel regions create tasks: one implicit task is created and assigned to each thread, so all task concepts only make sense inside a parallel region.
Each thread that encounters a task construct packages the code and data and creates a new explicit task.

Task parallelism: List traversal

    void traverse_list(List l) {
        Element e;
        for (e = l->first; e; e = e->next)
            #pragma omp task
            process(e);
    }

e is firstprivate.

Task parallelism: Taskwait

    void traverse_list(List l) {
        Element e;
        for (e = l->first; e; e = e->next)
            #pragma omp task
            process(e);
        #pragma omp taskwait
    }

taskwait suspends the current task until all its children have completed, so all tasks are guaranteed to be complete at that point.
Now we need some threads to execute the tasks.

Task parallelism: List traversal, completing the picture

    List l;
    #pragma omp parallel
    traverse_list(l);

This would generate multiple traversals: we need a way to have a single thread execute traverse_list. The single construct does exactly that:

    List l;
    #pragma omp parallel
    #pragma omp single
    traverse_list(l);

One thread creates the tasks of the traversal; all threads cooperate to execute them.

Task parallelism: Another example, a search problem

    void search(int n, int j, bool *state) {
        int i;
        if (n == j) {
            /* good solution, count it */
            mysolutions++;
            return;
        }
        /* try each possible solution */
        for (i = 0; i < n; i++)
            #pragma omp task
            {
                bool *new_state = alloca(sizeof(bool) * n);
                memcpy(new_state, state, sizeof(bool) * n);
                new_state[j] = i;
                if (ok(j + 1, new_state))
                    search(n, j + 1, new_state);
            }
        #pragma omp taskwait
    }

Task parallelism: Summary
OpenMP...
allows you to incrementally parallelize applications for SMPs,
has good support for data and task parallelism,
requires you to pay attention to locality,
and has many other features beyond this short presentation.
http://www.openmp.org

The End
Thanks for your attention!