
Linux Magazine, March 2004
Copyright Linux Magazine 2004

EXTREME LINUX
Using OpenMP, Part 3
by Forrest Hoffman

This is the third and final column in a series on shared memory parallelization using OpenMP. Often used to improve the performance of scientific models on symmetric multi-processor (SMP) machines or SMP nodes in a Linux cluster, OpenMP consists of a portable set of compiler directives, library calls, and environment variables. It's supported by a wide range of FORTRAN and C/C++ compilers for Linux and commercial supercomputers.

OpenMP is based on the fork-and-join model of execution, in which a team of threads is spawned (or forked) at the beginning of a concurrent section of code (called a parallel region) and subsequently killed (or joined) at the end of the parallel region. OpenMP is portable across platforms and is intended for use in programs that execute correctly either sequentially (that is, when compiled without OpenMP enabled) or in parallel (with OpenMP enabled).

An introduction to the concepts and syntax of OpenMP directives was presented in January's column (available online at http://www.linux-mag.com/2004-01/extreme_01.html). February's column (available online at http://www.linux-mag.com/2004-02/extreme_01.html) covered more directives and all of the library functions and environment variables. Both previous columns included example C code demonstrating many of the features of OpenMP. This month's column presents the remaining directives and OpenMP's data environment clauses.

Reviewing Constructs

OpenMP directives take the form

    #pragma omp directive-name [clause[[,] clause]...]

and sit just above the structured code blocks that they affect.
A directive, along with all the clauses that modify it and the subsequent structured block of code, constitutes what is called a construct. We've already seen how to use the parallel construct; it's the fundamental construct that starts parallel execution. The work-sharing constructs -- for, sections, and single -- distribute the execution of the associated program statements among the thread team members that encounter them. Combined parallel work-sharing constructs are shortcuts for parallel regions containing only one work-sharing construct. The combined constructs are parallel for (used in January's example program) and parallel sections.

The Last of the Directives

The sections and parallel sections directives are used to declare blocks of code that can be executed concurrently. While the for and parallel for directives spread loop iterations across thread team members, sections and parallel sections spread non-iterative blocks of code across the threads in a team. Each section, or structured block, is executed once by one of the threads.

For example, some code may call a series of subroutines to compute physics processes on each surface of a cube. Since processes on each face can be computed independently and each has its own subroutine, the sections or parallel sections directives can be used to tell the compiler that computations for each section of code may completely overlap. Such a construct might look like this:

    void do_physics()
    {
        #pragma omp parallel sections
        {
            #pragma omp section
            top_physics();
            #pragma omp section
            bottom_physics();
            #pragma omp section
            left_physics();
            #pragma omp section
            right_physics();
            #pragma omp section
            front_physics();
            #pragma omp section
            rear_physics();
        }
    }

Here, we used the combined parallel sections directive instead of having separate parallel and sections directives. Within the structured block of the parallel sections construct, each statement that may be concurrently executed has its own section directive. As a result, the program is free to completely overlap the computation of all these subroutines by distributing them among the threads in the team.

When the code snippet above is compiled (with sufficiently time-consuming subroutines), it should be about twice as fast when run with two threads (with OpenMP enabled) as when compiled and run without OpenMP. In the example below, the code is first compiled and run with OpenMP disabled. Then the code is compiled with OpenMP support (enabled by the -mp flag on the compile line when using the Portland Group compiler) and run with two threads on a dual-processor Pentium III.

    [node01]$ pgcc -O -o sections sections.c

    [node01]$ time ./sections
    real    0m41.205s
    user    0m41.201s
    sys     0m0.002s
    [node01]$ pgcc -mp -O -o sections sections.c
    [node01]$ OMP_NUM_THREADS=2 time ./sections
    41.19user 0.15system 0:20.70elapsed 199%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (134major+14minor)pagefaults 0swaps

As you can see, the serial version ran in 41.2 seconds. The OpenMP parallel version (using two threads) still consumed 41.2 seconds of user time, but the real elapsed time was only 20.7 seconds. Therefore, using only very simple compiler directives, we were able to use both processors of an SMP machine to cut wallclock time in half.

The single directive in a parallel region identifies a block of code to be executed by only one thread in the team. The thread that executes this code block need not be the master thread; the block is usually executed by the first thread that encounters it. The following code snippet demonstrates this feature.

    #pragma omp parallel private(tid)
    {
        tid = omp_get_thread_num();
        #pragma omp single
        printf("%d: Starting process_block1\n", tid);
        process_block1();
        #pragma omp single nowait
        printf("%d: Starting process_block2\n", tid);
        process_block2();
        #pragma omp single
        printf("%d: All done\n", tid);
    }

The code contains a parallel region in which the variable tid is private to each thread. Within the parallel region, a single directive sits above each of the printf statements so that the messages are printed only once no matter how many threads are executing statements in the parallel region. The thread id, obtained from the call to omp_get_thread_num() and stored in the private variable tid, is printed by whichever thread executes each printf statement. When the program is compiled and run, you can see that thread one executed the first and third print statements, while thread zero (the master thread) executed the one in the middle.

    [node01]$ pgcc -mp -O -o single single.c
    [node01]$ OMP_NUM_THREADS=2 ./single
    1: Starting process_block1
    0: Starting process_block2
    1: All done

There is an implied barrier at the end of a single construct. As a result, after one thread executes the print statement, all other threads must "catch up" to the barrier point before they all simultaneously execute the next statements. The nowait clause can be used to eliminate the implied barrier. In the example code above, all threads begin executing process_block1() simultaneously because of the single construct above it. However, threads may begin executing process_block2() at slightly different times, because the nowait clause is specified as part of the single construct above process_block2().

The master directive is similar to the single directive, except that it requires the master thread -- and only the master thread -- to execute the adjoining code block.

The critical directive is used to identify a section of code within a parallel region that should be executed by only one thread at a time. This directive should be used with caution, because too many criticals can result in frequent synchronization, slowing down processing. While critical constructs could be used for updating counters or performing similar reductions on global shared variables within parallel loops, the reduction clause is often better suited to that task. The critical directive is often useful for queuing applications in which calls are made to obtain new requests from a shared queue. A critical directive above a function call that returns a request identifier prevents two or more threads from requesting a new identifier at the same time, preventing a race condition. For example, in the following code snippet, the critical directive sits above the call to get_next_request():

    #pragma omp parallel shared(request_queue) private(request_id, request_status)
    {
        for (;;) {
            #pragma omp critical (get_request)
            request_id = get_next_request(request_queue);
            printf("processing request %d\n", request_id);
            request_status = process_request(request_id);
            update_request_status(request_id, request_status);
        }
    }

As a result, this function is called by only one thread at a time, ensuring that each thread receives a unique request identifier. Notice that the critical construct is contained within a parallel construct that identifies request_queue as a shared variable and request_id and request_status as variables private to each thread.

The barrier directive provides a means of synchronizing all threads in a team. When it is encountered in the program, each thread in the team waits for all other team members to reach the same point before they all collectively resume execution of the subsequent statements in parallel.

The barrier directive is often useful for ensuring that all threads have completed some phase of work prior to exchanging results, as in the following code example.

    #pragma omp parallel
    {
        work_phase1();
        #pragma omp barrier
        exchange_results();
        work_phase2();
    }

Here, work_phase1() is executed simultaneously by all threads in the team. As each thread returns from the routine, it waits for all threads to complete work_phase1() prior to calling exchange_results() and executing work_phase2(). In general, barriers should be avoided except where necessary to preserve the integrity of the data environment. Spending valuable time synchronizing threads that could operate completely independently is not a good use of computer time.

The atomic directive ensures that a memory location is updated atomically, instead of allowing multiple threads to write to the same location at once. Only certain simple expressions may be used in an atomic construct. For example, the following piece of code contains a parallel for construct with an atomic directive inside the loop to protect against simultaneous updates of an element of the ts array, which is accessed through an index array.

    #pragma omp parallel for shared(ts, index)
    for (i = 0; i < SIZE; i++) {
        #pragma omp atomic
        ts[index[i]] += compute1(i);
    }

The advantage of using the atomic directive in this case is that different elements of ts can be updated simultaneously. If a critical directive had been used instead, all updates to ts would be serialized, resulting in poor performance.

The flush directive is used to synchronize shared objects in memory across a team of threads. A list of variables to be synchronized can be provided with the flush directive. Alternatively, flush without a variable list synchronizes all shared objects (and probably incurs more overhead).

The ordered directive identifies a block of code that is executed in the order in which its iterations would run if the loop were executed sequentially. An ordered directive must be within the extent of a for or parallel for construct, and the for or parallel for must also specify an ordered clause. In the following example, the compute1() routine is called within a parallel for construct containing an ordered clause. The print statement in compute1() has an ordered directive above it so that the output is generated in the expected sequential order.

    void compute1(int i)
    {
        int tid;

        tid = omp_get_thread_num();
        #pragma omp ordered
        printf("%d: compute1 called for iteration %d\n", tid, i);
        /* lots of work removed from here */
    }

    int main(int argc, char **argv)
    {
        int i;

        #pragma omp parallel for ordered schedule(dynamic)
        for (i = 0; i < 10; i++)
            compute1(i);
        exit(0);
    }

The parallel for directive also carries a schedule(dynamic) clause, which causes each iteration to be assigned, in order, to the next available thread. In the output below, iteration 0 is assigned to the master thread (thread 0) and iteration 1 is assigned to thread 1. Since thread 1 completes its work first, it becomes available and is assigned iteration 2, the very next iteration.

    [node01]$ OMP_NUM_THREADS=2 time ./ordered
    0: compute1 called for iteration 0
    1: compute1 called for iteration 1
    1: compute1 called for iteration 2
    0: compute1 called for iteration 3
    0: compute1 called for iteration 4
    1: compute1 called for iteration 5
    0: compute1 called for iteration 6
    1: compute1 called for iteration 7
    0: compute1 called for iteration 8
    1: compute1 called for iteration 9
    48.83user 0.16system 0:24.66elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (144major+16minor)pagefaults 0swaps

Thread Data Environment

The data environment for the OpenMP threads in a team is controlled by the threadprivate directive and a variety of data-sharing clauses. We've already used the most common of these clauses -- private and shared -- in examples. Table One contains a list of all OpenMP clauses, including the data-sharing attribute clauses, and the directives with which they may be used.

Table One: All OpenMP clauses and the directives with which they may be used

    Clause          Directives
    copyin          parallel
    copyprivate    single
    default         parallel
    firstprivate    parallel, for, sections, single
    if              parallel
    lastprivate     for, sections
    nowait          for, sections, single
    num_threads     parallel
    ordered         for
    private         parallel, for, sections, single
    reduction       parallel, for, sections
    schedule        for
    shared          parallel

The threadprivate directive is used to make the data objects specified in a list along with the directive private to each thread. As usual, the list is contained within parentheses and separated by commas. This amounts to creating a copy of each variable for every thread in the team. Each copy is initialized once, prior to the first reference to that copy. As with all private objects, one thread may not reference another thread's copy of a threadprivate object. Within serial and master regions of the program, the master thread's copy of the object is used. threadprivate objects persist outside the parallel region in which they are copied only if the dynamic thread mechanism is disabled and the number of threads doesn't change.

The threadprivate directive must precede all references to any of the variables or objects in its list. In the following example, a counter variable called counter is declared and then followed by a threadprivate directive at the same level (not within a subroutine) and prior to being referenced. In main(), a parallel loop calls bump_counter() ten times, printing out the counter's value in each iteration.

    int counter = 0;
    #pragma omp threadprivate(counter)

    int bump_counter()
    {
        counter++;
        return counter;
    }

    int main(int argc, char **argv)
    {
        int i;

        #pragma omp parallel for
        for (i = 0; i < 10; i++) {
            bump_counter();
            printf("%d: i=%d and my copy of counter = %d\n",
                omp_get_thread_num(), i, counter);
        }
        exit(0);
    }

When run without OpenMP (or with only one thread), a single copy of counter is bumped ten times, resulting in a final value of 10. As seen below, when the program is run with two threads, each copy of counter is bumped five times. This loop executes so quickly that all the output from thread zero appears before the output from thread one.

    [node01]$ OMP_NUM_THREADS=2 ./tp
    0: i=0 and my copy of counter = 1
    0: i=1 and my copy of counter = 2
    0: i=2 and my copy of counter = 3
    0: i=3 and my copy of counter = 4
    0: i=4 and my copy of counter = 5
    1: i=5 and my copy of counter = 1
    1: i=6 and my copy of counter = 2
    1: i=7 and my copy of counter = 3
    1: i=8 and my copy of counter = 4
    1: i=9 and my copy of counter = 5

In addition to the threadprivate directive, a number of data-sharing attribute clauses may be used with other directives to control whether data objects are shared or private, as well as how they are initialized before, and saved after, the associated code block. If an existing variable is not specified in a sharing attribute clause or threadprivate directive when a parallel or work-sharing construct is encountered, it is shared. Static variables and heap-allocated memory are also shared, although the pointer to heap memory may be either private or shared. Automatic variables declared within a parallel region are private.

Most clauses accept a comma-separated list of variables contained within parentheses. A variable can't appear in more than one clause, except with the firstprivate and lastprivate clauses. Not all clauses are valid for all directives; Table One provides a list of clauses and the directives with which they may be used. The combined parallel work-sharing constructs parallel for and parallel sections accept the same clauses as the for and sections constructs, respectively.

As we've already seen in previous examples, the private clause declares variables to be private to each thread in a team. When objects are declared private, new objects with automatic storage duration are allocated on each thread, and these new private variables are used for the extent of the construct. The original objects have an indeterminate value upon entry to and exit from the construct.

The firstprivate clause has the same behavior as the private clause, except with regard to initialization of the private object. When used with a parallel construct, the firstprivate clause causes the specified variables to be initialized to the values of the original objects as they exist immediately prior to the parallel construct for the thread that encounters it. With a work-sharing construct, the initial value of each new private object is the value of the original object just prior to the point at which the participating thread encountered the construct.

In a similar fashion, the lastprivate clause behaves just like private, except that the final values of the specified variables are saved to the original objects upon exit from the parallel or work-sharing construct. Variables not assigned a value in the last iteration of a for or parallel for construct, or by the last section of a sections or parallel sections construct, have indeterminate values upon exit from the construct.

The shared clause makes the specified objects shared among all threads in a team.
It is usually not necessary to specify objects created outside a construct as shared, since this is the default behavior. However, the default clause, which takes either shared or none as a parameter, may be used to change this behavior. Specifying default(none) requires that each variable be listed explicitly in a data-sharing attribute clause unless it's declared within the parallel construct.

The reduction clause performs a reduction, using a specified operator, on the scalar variables that appear in its variable list. We used this clause in previous examples to sum up scalar variables across threads. Like the private clause, the reduction clause tells the compiler to create a private copy of the specified variables for each thread. Then, at the end of the region for which the clause was specified, the original object is updated to reflect the combined result from all the threads, based on the operator specified in the reduction clause.

The copyin clause provides a way to assign the same value to threadprivate variables for each thread in a team. The value of each variable in a copyin clause is copied from the master thread to the private copies on every other thread at the beginning of a parallel region. Similarly, the copyprivate clause, which may only appear with the single directive, may be used to broadcast to all threads the values of variables from the thread that executed the single construct. This updating of private variables on each thread occurs after the execution of the code within the single construct and before any thread has left the implied barrier at the end of the construct.

These data-sharing attribute clauses provide a powerful mechanism for manipulating the data environment for threads. Using the clauses, you can avoid writing your own shared memory data handling software. With a small number of fairly simple directives and powerful clauses, OpenMP can often be a very easy way to take advantage of shared memory systems for modeling and data processing. When combined with MPI for distributed memory parallelism, it can further improve performance and resource utilization on SMP clusters.

We didn't discuss nesting of OpenMP directives, and some details of directive and clause restrictions have been glossed over. So when you are ready to add OpenMP to your own code, be sure to read the specification documents on the OpenMP web site at http://www.openmp.org.

Forrest Hoffman is a computer modeling and simulation researcher at Oak Ridge National Laboratory. He can be reached at forrest@climate.ornl.gov.