OpenMP 4.5: Threading, vectorization & offloading
1 OpenMP 4.5: Threading, vectorization & offloading Michal Merta 2nd of March 2018
2 Agenda
- Introduction
- The Basics
- OpenMP Tasks
- Vectorization with OpenMP 4.x
- Offloading to Accelerators
3 Why OpenMP?
- CPU frequency is no longer increasing significantly.
- The number of transistors still increases (Moore's law holds).
- The number of cores is increasing.
- SIMD vector capabilities are increasing.
- Parallelization is therefore inevitable.
4 Why OpenMP?
OpenMP 4.5 now enables you to use the same standard for:
- threading/tasking your application in shared memory,
- vectorization for the SIMD units,
- accelerating your code using GPUs/many-core coprocessors.
Less need for a combination of CUDA, OpenCL, OpenGL, OpenACC, intrinsics, Cilk, TBB, ...
5 The Basics
6 Fork-join model
- The program starts with one thread, the master.
- The program branches off for parallel execution: worker threads are spawned at parallel regions.
- Parallel code is executed by the master and the workers.
- Between parallel regions the workers are put to sleep.
- There is a synchronization barrier after each parallel region.
(Image: Wikipedia)
7 Parallel region
The parallel region is created using the #pragma omp parallel directive and is executed by a newly created team of threads. The number of threads can be specified by an environment variable (export OMP_NUM_THREADS=...) or via the num_threads(...) clause. An if clause can be used to control whether the region is executed in parallel.

#pragma omp parallel
{
  printf("hello");
  printf("world");
}

#pragma omp parallel num_threads(2)
{
  // Executed by 2 threads
  ...
}

#pragma omp parallel if(a > 1)
{
  // Executed in parallel only if a > 1
  ...
}
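As a minimal, self-contained sketch of the fork-join behavior (the thread count of 4 is an arbitrary choice for illustration), compile with -fopenmp (GCC/Clang) or -qopenmp (Intel):

#include <stdio.h>
#include <omp.h>

int main(void) {
  // Fork: a team of 4 threads executes the region.
  #pragma omp parallel num_threads(4)
  {
    printf("hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
  }
  // Join: back to the single master thread here.
  return 0;
}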
8 For loops
If only a parallel construct is used, all threads execute the same code. To speed up the program we have to employ worksharing. Loops usually account for much of a program's runtime, and the most common worksharing construct is for: it distributes the loop's iterations among the threads in a team.

#pragma omp parallel
#pragma omp for
for (int i = 0; i < 100; ++i)
  a[i] = b[i] + c[i];

// Equivalent combined form:
#pragma omp parallel for
for (int i = 0; i < 100; ++i)
  a[i] = b[i] + c[i];
9 For loop: canonical loop form
OpenMP is only able to parallelize loops in the canonical form:

for (initialize; test; increment) ...

- initialize: an expression of the form var = lb, where var is an integer or random access iterator and lb is loop invariant,
- test: an expression of the form var operator b, where b is loop invariant and operator is one of <, <=, >, >=,
- increment: an expression of the form ++var, var++, --var, var--, var += incr, var -= incr, var = var + incr, var = incr + var, var = var - incr.

The loop body must not contain statements that allow the loop to be exited prematurely (break, return, exit, goto).
10 Loop scheduling
We can influence the way the iterations are distributed among threads:
- schedule(static [, chunk]): iterations are divided into blocks of chunk size and assigned to threads in a round-robin fashion.
- schedule(dynamic [, chunk]): iterations are divided into blocks of chunk size; when a thread finishes, it is assigned the next block that hasn't been executed yet.
- schedule(guided [, chunk]): similar to dynamic, but starts with an implementation-defined block size that exponentially decreases towards chunk.
The default on most implementations is schedule(static). A usage sketch follows below.
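As a sketch of how the clause is written (the helper function and the chunk size of 16 are assumptions for illustration), dynamic scheduling pays off when iteration costs vary:

double expensive_computation(int i);  // hypothetical, cost varies with i

void fill(double *result, int n) {
  // Hand out chunks of 16 iterations as threads become free,
  // so slow iterations do not stall a statically assigned block.
  #pragma omp parallel for schedule(dynamic, 16)
  for (int i = 0; i < n; ++i)
    result[i] = expensive_computation(i);
}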
11 Single construct
The single construct ensures that only one thread executes the given structured block. It can be used for I/O, memory allocation/deallocation, or creating tasks.

#pragma omp parallel
{
  ...
  #pragma omp single
  {
    // Only one thread executes the block
    // while others wait for work.
    ...
  }
}
12 Synchronization
A data race occurs when two threads access the same memory without proper synchronization: if between two synchronization points at least one thread writes to a memory location from which at least one other thread reads, the result is not deterministic (race condition).

double sum = 0.0;
#pragma omp parallel for
for (int i = 0; i < 1000; ++i)
  sum = sum + a[i];  // data race on sum!
13 Synchronization: critical section
Ensures only one thread at a time executes a given block.

double sum = 0.0;
#pragma omp parallel for
for (int i = 0; i < 1000; ++i)
{
  #pragma omp critical
  sum = sum + a[i];
}
14 Synchronization: atomic operation
The memory update in the next instruction is performed atomically (only the update itself, not the whole statement). A compiler may use special hardware instructions for better performance than with critical.

double sum = 0.0;
#pragma omp parallel for
for (int i = 0; i < 1000; ++i)
{
  // The word "update" is optional
  #pragma omp atomic update
  sum = sum + a[i];
}

The atomic construct supports the following operations:
++x; --x; x++; x--;
x += expr; x -= expr; x *= expr; x /= expr;
x &= expr; x |= expr; x ^= expr; x <<= expr; x >>= expr;
x = x + expr; x = x - expr; x = x * expr; x = x / expr;
x = x & expr; x = x | expr; x = x ^ expr; x = x << expr; x = x >> expr;
x = expr + x; x = expr - x; x = expr * x; x = expr / x;
x = expr & x; x = expr | x; x = expr ^ x; x = expr << x; x = expr >> x;
15 Synchronization: reduction
A reduction operator is applied to all variables in the list. Syntax: reduction(operator : list). The result is provided in the associated reduction variable.

double sum = 0.0;
#pragma omp parallel for reduction(+ : sum)
for (int i = 0; i < 1000; ++i)
  sum = sum + a[i];
16 Synchronization: user-defined reduction
It is possible to create a user-defined reduction:

#pragma omp declare reduction (reduction-identifier : typename-list : combiner) [initializer(expression)]

- reduction-identifier gives a name to the operator,
- typename-list: list of types to which it applies,
- combiner: expression specifying how to combine values,
- initializer can specify the initializing value of the operator.

For example:
#pragma omp declare reduction (merge : std::vector<int> : \
  omp_out.insert(omp_out.end(), omp_in.begin(), omp_in.end()))
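A sketch of how such a declared reduction might be used (the even-number filter is an arbitrary illustration): each thread fills a private vector, and the merge combiner concatenates the per-thread results at the end:

#include <vector>

std::vector<int> collect_even(const std::vector<int>& data) {
  std::vector<int> out;
  #pragma omp declare reduction (merge : std::vector<int> : \
    omp_out.insert(omp_out.end(), omp_in.begin(), omp_in.end()))
  #pragma omp parallel for reduction(merge : out)
  for (std::size_t i = 0; i < data.size(); ++i)
    if (data[i] % 2 == 0)        // keep even values (illustrative filter)
      out.push_back(data[i]);
  return out;
}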
17 Synchronization: the barrier construct
Barriers are implicit or explicit. All tasks created by any thread of the current team are guaranteed to be completed at barrier exit. All worksharing constructs have an implicit barrier; in some cases it can be removed with the nowait clause.

#pragma omp barrier
18 Data Scope
There are two types of data in a parallel region: shared or private.
- shared: any data declared outside a parallel region is shared by default (any thread using a variable x accesses the same memory location).
- private: any variable declared inside the block following an OpenMP directive is local to the executing thread.
- firstprivate: a private variable initialized with the value of the original variable.
- lastprivate: after the parallel region the variable is set to the value of the private version of whichever thread executed the final iteration.

int i, j;
int k = 0;
double x;
#pragma omp parallel for shared(i) private(j) \
  firstprivate(k) lastprivate(x)
19 Controlling thread affinity
Thread affinity becomes important on multi-socket systems.
OMP_PLACES defines a series of places to which threads are assigned:
- threads: each place corresponds to a single hardware thread,
- cores: each place corresponds to a single core (consisting of one or more threads),
- sockets: each place corresponds to a single socket (consisting of one or more cores).
OMP_PROC_BIND controls the binding policy:
- false: thread affinity disabled, the runtime may move threads between places,
- true: locks threads to cores,
- spread: spreads threads evenly among the places,
- close: packs threads close to the master in the places list,
- master: collocates threads with the master.
A typical configuration is shown below.
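For example, to pin one thread per core and spread them evenly across the sockets of a hypothetical two-socket, 16-core node, one might set (a sketch; the thread count depends on your machine):

export OMP_PLACES=cores       # one place per physical core
export OMP_PROC_BIND=spread   # distribute threads evenly over the places
export OMP_NUM_THREADS=16     # assumed: 2 sockets x 8 cores
./my_app                      # hypothetical executable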
20 OpenMP Tasks
21 Tasks in OpenMP
OpenMP specification version 3.0 introduced a new feature called tasking. Tasking enables parallelization of applications where units of work are generated dynamically, as in recursive structures or while loops. In OpenMP, an explicit task is specified using the task directive:

#pragma omp task [clause] ...
22 Task Execution
When a thread encounters a task construct, it may choose to execute the task immediately or defer its execution until a later time. If execution is deferred, the task is placed in a task pool, and the thread that eventually executes it may differ from the thread that originally encountered it.
23 Data Scoping with Tasks
Data scoping clauses: shared(list), private(list), firstprivate(list).
- Static and global variables are shared.
- Automatic storage (local) variables are private.
- In orphaned tasks, variables are firstprivate by default.
- In non-orphaned tasks, variables inherit the shared attribute of the enclosing context: they are firstprivate unless shared in the enclosing context.
The sketch below illustrates these defaults.
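A small sketch of the defaults (variable names are illustrative): a global variable stays shared inside the task, while a thread-local one is captured firstprivate:

#include <stdio.h>

int g = 1;                      // global: shared
int main(void) {
  #pragma omp parallel
  #pragma omp single
  {
    int local = 2;              // local, not shared in the enclosing context
    #pragma omp task
    {
      // g is shared (inherited); local is firstprivate by default,
      // i.e. the task works on a copy initialized with 2.
      printf("g=%d local=%d\n", g, local);
    }
    #pragma omp taskwait        // wait before local goes out of scope
  }
  return 0;
}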
24-25 Example: Fibonacci

Sequential code:

int main(int argc, char* argv[]) {
  fib(N);
}

int fib(int n) {
  if (n < 2) return n;
  int x = fib(n - 1);
  int y = fib(n - 2);
  return x + y;
}

Parallel code:

int main(int argc, char* argv[]) {
  #pragma omp parallel
  #pragma omp single
  fib(N);
}

int fib(int n) {
  if (n < 2) return n;
  int x, y;
  #pragma omp task shared(x)
  x = fib(n - 1);
  #pragma omp task shared(y)
  y = fib(n - 2);
  #pragma omp taskwait
  return x + y;
}
26 barrier vs. taskwait vs. taskgroup
barrier directive: all tasks created by any thread of the current team are guaranteed to be completed at barrier exit.
#pragma omp barrier
taskwait directive: the encountering task is suspended until its child tasks are complete. Applies only to direct children, not descendants!
#pragma omp taskwait
taskgroup directive: waits on completion of child tasks and their descendants.
- Deeper synchronization than taskwait.
- Can be restricted to a subset of tasks (as opposed to barrier).
- Can be used for cancellation.
#pragma omp taskgroup
{ ... }
27 Example: taskwait vs. taskgroup

Taskwait:
int main() {
  #pragma omp parallel
  #pragma omp single
  {
    #pragma omp task
    {
      #pragma omp critical
      printf("Task 1\n");
    }
    #pragma omp task
    {
      sleep(1);
      #pragma omp critical
      printf("Task 2\n");
    }
    #pragma omp taskwait   // waits for Task 1 and Task 2
    #pragma omp task
    {
      #pragma omp critical
      printf("Task 3\n");
    }
  }
}

Taskgroup:
int main() {
  #pragma omp parallel
  #pragma omp single
  {
    #pragma omp taskgroup
    {
      #pragma omp task
      {
        #pragma omp critical
        printf("Task 1\n");
      }
      #pragma omp task
      {
        sleep(1);
        #pragma omp critical
        printf("Task 2\n");
      }
    } /* end of taskgroup */
    #pragma omp task
    {
      #pragma omp critical
      printf("Task 3\n");
    }
  }
}
28 Task dependency

#pragma omp task depend(dependency-type : list)

A task dependence is fulfilled when the predecessor task has completed.
- in dependency-type: the generated task will be a dependent task of all previously generated sibling tasks that reference at least one of the list items in an out or inout clause.
- out and inout dependency-type: the generated task will be a dependent task of all previously generated sibling tasks that reference at least one of the list items in an in, out, or inout clause.

// Without dependencies: the tasks may run in any order
#pragma omp task
x = f();
#pragma omp task
y = g(x);

// With dependencies: the second task waits for the first
#pragma omp task depend(out : x)
x = f();
#pragma omp task depend(in : x)
y = g(x);
29 taskloop construct
Parallelizes a loop by creating tasks for one or more iterations of the loop: the loop is cut into chunks and a task is created for each chunk. It inherits clauses from the worksharing and task constructs and provides better load balancing in some cases. Additional clauses:
- grainsize(grain-size): chunks have at least grain-size and at most 2*grain-size loop iterations,
- num_tasks(num-tasks): create num-tasks tasks for the iterations of the loop.

#pragma omp parallel shared(a, b, c)
#pragma omp single
{
  #pragma omp task
  long_running_comp();          // can execute concurrently

  #pragma omp taskloop grainsize(1000)
  for (int i = 0; i < N; ++i)   // can execute concurrently
    c[i] = a[i] + b[i];
}
30 Cancellation
The user can request cancellation of a construct: threads/tasks are cancelled and execution continues after the end of the construct. Applicable to: parallel, for, taskgroup, sections. Threads/tasks stop execution at a certain point (a cancellation point), not immediately.

#pragma omp parallel shared(matrix)
{
  #pragma omp for
  for (int row = 0; row < rows; row++)
  {
    for (int col = 0; col < cols; col++)
    {
      if (matrix(row, col) == 0)
      {
        #pragma omp cancel for
      }
      #pragma omp cancellation point for
    }
  }
}
31 Vectorization with OpenMP 4.x
32 Vectorization with OpenMP 4.x (Image: Intel)
33 Creating a Code Suitable for Vectorization I.

Original code:

double *x = new double[2*M];
// x stored in Array of Structures
// x = [x1_1, x1_2, x2_1, x2_2, ...]
...
// quadrature points
double *wx = new double[M];
...
double f = 0.0;
double a = get_a();
for (int i = 0; i < M; ++i)
{
  f = eval(x + 2*i, a);
  entry += f * wx[i];
}
return entry;

double eval(double *x, double a)
{
  return (sin(x[0]) + cos(x[1])) * a;
}
34 Creating a Code Suitable for Vectorization I.

Code with aligned memory allocations and AoS converted to SoA (original code as on the previous slide):

// allocate aligned arrays for 1st
// and 2nd coordinates of points x
double *x_1 = (double*) _mm_malloc(M * sizeof(double), 64);
double *x_2 = (double*) _mm_malloc(M * sizeof(double), 64);
// x stored in Structure of Arrays
// x_1 = [x1_1, x2_1, x3_1, ...]
// x_2 = [x1_2, x2_2, x3_2, ...]
...
double *w = (double*) _mm_malloc(M * sizeof(double), 64);
...
double f = 0.0;
double a = get_a();
for (int i = 0; i < M; ++i)
{
  f = eval(x_1[i], x_2[i], a);
  entry += f * w[i];
}
return entry;

double eval(double &x_1, double &x_2, double &a)
{
  return (sin(x_1) + cos(x_2)) * a;
}
35 SIMD Construct
The simd construct can be applied to a loop to indicate that the loop can be transformed into a SIMD loop (that is, multiple iterations of the loop can be executed concurrently using SIMD instructions). Syntax:

#pragma omp simd [clause[, clause] ...]
for-loop

The for-loop has to be in canonical loop form (see OpenMP 4.5 API, Section 2.6).
36 SIMD Construct Clauses
- safelen(n1[,n2] ...): n1, n2, ... must be powers of 2; the compiler can assume vectorization with a vector length (VL) of n1, n2, ... to be safe.
- private(v1, v2, ...): variables private to each iteration.
- lastprivate(...): the last value is copied out from the last iteration instance.
- linear(v1:step1, v2:step2, ...): for every iteration of the original scalar loop v1 is incremented by step1, etc.; in the vectorized loop it is therefore incremented by step1 * VL.
- reduction(operator:v1, v2, ...): variables v1, v2, ... are reduction variables for operation operator.
- collapse(n): combine nested loops, i.e. collapse them into one.
- aligned(v1:base, v2:base, ...): tells the compiler that variables v1, v2, ... are aligned; the default is the architecture-specific alignment.
37 SIMD Construct Example
Ignore data dependencies, indirectly mitigate control flow dependence, and assert alignment. (Image: Intel; a sketch of such a loop follows below.)
Get the info from the optimization report by compiling with -qopt-report=[0-5].
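Since the slide's example survives only as an image, here is a hedged sketch of a loop using those clauses (the array names and the safelen value are assumptions, not the slide's code):

void update(double *A, const double *B, int n) {
  double x;
  // Assert: no dependence closer than 8 iterations, A and B are
  // 64-byte aligned, and x is private to each SIMD lane.
  #pragma omp simd safelen(8) aligned(A, B : 64) private(x)
  for (int i = 0; i < n; ++i) {
    x = A[i] + B[i];
    if (x > 0.0)               // control flow gets masked (if-converted)
      A[i] = x * x;
  }
}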
38 Declare SIMD Construct
SIMD-enabled function (aka declare simd construct): the declare simd construct can be applied to a function [...] to enable the creation of one or more versions that can process multiple arguments using SIMD instructions from a single invocation from a SIMD loop. Syntax:

#pragma omp declare simd [clause[, clause] ...]
function definition or declaration

Intent: express the work as scalar operations (a kernel) and let the compiler create a vector version of it. The size of the vectors can be specified at compile time (SSE, AVX, ...), which makes it portable!
39 Declare SIMD Construct Clauses
- simdlen(len): len must be a power of 2; allow as many elements per argument (the default is implementation specific).
- linear(v1:step1, v2:step2, ...): defines v1, v2, ... to be private to the SIMD lane and to have a linear (step1, step2, ...) relationship when used in the context of a loop.
- uniform(a1, a2, ...): arguments a1, a2, ... are not treated as vectors (constant values across SIMD lanes).
- inbranch / notinbranch: the SIMD-enabled function is called only inside branches, or never.
- aligned(a1:base, a2:base, ...): tells the compiler that arguments a1, a2, ... are aligned; the default is the architecture-specific alignment.
40 Declare SIMD Construct Example
Ignore data dependencies, indirectly mitigate control flow dependence, and assert alignment. (Image: Intel; a sketch follows below.)
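Again the slide's code survives only as an image; the following is a hedged sketch of a SIMD-enabled function and a calling loop (the names and the simdlen of 4 are assumptions):

// Vector variants of kernel are generated: a stays scalar across
// lanes, and the function is never called from a masked branch.
#pragma omp declare simd uniform(a) simdlen(4) notinbranch
double kernel(double x, double a) {
  return x * x + a;
}

void run(const double *restrict in, double *restrict out, double a, int n) {
  #pragma omp simd
  for (int i = 0; i < n; ++i)
    out[i] = kernel(in[i], a);  // invoked via the vector variant
}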
41 OpenMP 4.5
OpenMP 4.5 was ratified in November 2015. New OpenMP simd linear clause modifiers:
- linear(val(var):[step]) (default): specifies that the value of each list item on each lane corresponds to the value of the list item upon entry to the function plus the logical number of the lane times linear-step.
- linear(uval(var):[step]) (C++, Fortran): similar to val, but each invocation uses the same storage location for each SIMD lane. For val, a vector of addresses (references) is passed to the vector variant of the routine; for uval, only one address (reference) is passed, which may improve performance.
- linear(ref(var):step) (C++, Fortran): specifies that the storage location of each list item on each lane corresponds to an array at the storage location upon entry to the function, indexed by the logical number of the lane times linear-step.
Additionally, declare simd is newly available for C++ virtual functions.
42 Creating a Code Suitable for Vectorization II.

Original code:

double *x = new double[2*M];
// x stored in Array of Structures
// x = [x1_1, x1_2, x2_1, x2_2, ...]
...
// quadrature points
double *wx = new double[M];
...
double f = 0.0;
double a = get_a();
for (int i = 0; i < M; ++i)
{
  f = eval(x + 2*i, a);
  entry += f * wx[i];
}
return entry;

double eval(double *x, double a)
{
  return (sin(x[0]) + cos(x[1])) * a;
}

Code with aligned memory allocations, SoA, and vectorization pragmas:

double *x_1 = (double*) _mm_malloc(M * sizeof(double), 64);
double *x_2 = (double*) _mm_malloc(M * sizeof(double), 64);
...
double *w = (double*) _mm_malloc(M * sizeof(double), 64);
...
double f = 0.0;
double a = get_a();
#pragma omp simd reduction(+ : entry) \
  aligned(x_1, x_2, w) private(f) \
  simdlen(4)
for (int i = 0; i < M; ++i)
{
  f = eval(x_1[i], x_2[i], a);
  entry += f * w[i];
}
return entry;

#pragma omp declare simd \
  linear(ref(x_1, x_2)) uniform(a) \
  simdlen(4) notinbranch aligned(x_1, x_2)
double eval(double &x_1, double &x_2, double &a)
{
  return (sin(x_1) + cos(x_2)) * a;
}
43 Offloading to Accelerators
44 Execution model
The model is host-centric: the execution of an OpenMP program starts on the host device, and it may offload target regions to a target device. If a target device is not present, not supported, or not available, the target region is executed by the host device. The most important OpenMP constructs:
- #pragma omp target
- #pragma omp target data
- #pragma omp target update
- #pragma omp declare target / #pragma omp end declare target
- #pragma omp teams, #pragma omp distribute
45 target data construct
Creates a device data environment for the extent of the region:

#pragma omp target data [clause] ...

Here, clause may be device(integer-expression), map(map-type: list), or if(scalar-expression). The map clause maps a variable from the current task's data environment to the device data environment associated with the construct:
- alloc: each new corresponding list item has an undefined initial value,
- to: each new corresponding list item is initialized with the value of the original list item,
- from: on exit from the region, the value of the corresponding list item is assigned to the original list item,
- tofrom: combination of the previous two; this is the default.
A usage sketch follows below.
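As a sketch (function and array names are illustrative), a target data region can keep arrays resident on the device across several target regions, avoiding repeated transfers:

void twice_then_add(const double *a, double *b, int n) {
  // a and b stay resident on the device for both kernels;
  // b is copied back to the host only at the end of the region.
  #pragma omp target data map(to: a[0:n]) map(from: b[0:n])
  {
    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
      b[i] = 2.0 * a[i];

    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
      b[i] += a[i];             // reuses the device-resident data
  }
}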
46 target construct
Creates a device data environment and executes the construct on the same device: in addition to what target data does, it specifies that the associated region is executed by a device. The encountering task waits for the device to complete the target region.

#pragma omp target [clause] ...

Here, clause may be device(integer-expression), map(map-type: list), or if(scalar-expression).
47 target update construct
Makes the corresponding list items in the device data environment consistent with their original list items, according to the specified motion clauses:

#pragma omp target update motion-clause [clause]

motion-clause: to(list), from(list).
clause: device(integer-expression), if(scalar-expression).
A sketch follows below.
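A hedged sketch of refreshing host data mid-region (names are illustrative): the host reads intermediate results without tearing down the device data environment:

void print_progress(const double *v, int n);  // hypothetical host routine

void two_pass(double *v, int n) {
  #pragma omp target data map(tofrom: v[0:n])
  {
    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
      v[i] *= 0.5;                            // first device pass

    #pragma omp target update from(v[0:n])    // refresh the host copy
    print_progress(v, n);                     // host reads the interim state

    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
      v[i] += 1.0;                            // second pass, data stays on device
  }
}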
48 declare target directive
Specifies that variables, functions (C, C++, Fortran), and subroutines (Fortran) are mapped to a device.
- If a list item is a function or subroutine, a device-specific version of the routine is created that can be called from a target region.
- If a list item is a variable, the original variable is mapped to a corresponding variable in the initial device data environment for all devices (if the variable is initialized, it is mapped with the same value).
Both the declaration and the definition of a function must have a declare target directive.

#pragma omp declare target
...
#pragma omp end declare target

A sketch follows below.
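For instance (a sketch; the names are illustrative), a device-callable function and a device-resident coefficient could be declared like this:

#pragma omp declare target
double coeff = 2.0;                 // mapped to the device with value 2.0
double scale(double x) {            // a device version is generated
  return coeff * x;
}
#pragma omp end declare target

void run(double *v, int n) {
  #pragma omp target teams distribute parallel for map(tofrom: v[0:n])
  for (int i = 0; i < n; ++i)
    v[i] = scale(v[i]);             // calls the device version
}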
49 teams construct
Creates a league of thread teams; the master thread of each team executes the region.
- The number of teams is determined by the num_teams clause; the number of threads in each team is determined by the thread_limit clause.
- Use omp_get_team_num() to identify the current team.
- The teams region is executed by the master thread of each team; threads other than the master do not begin execution until the master thread encounters a parallel region.
- Threads in different teams cannot synchronize with each other.
- The construct must be perfectly nested in a target construct.
- Only special OpenMP constructs can be nested inside a teams construct: distribute, parallel, parallel for, and parallel sections.
50 distribute construct
A worksharing construct for target and teams regions: it distributes the iterations of a loop across the master threads of the teams executing the region. There is no implicit barrier at the end of the construct. Clause dist_schedule(kind, chunk-size): kind must be static; it distributes chunks of chunk-size across the master threads of the teams in a round-robin fashion.

double sum = 0.0;
int i, i0;
#pragma omp target map(to: B[0:N], C[0:N]) map(tofrom: sum)
#pragma omp teams num_teams(num_teams) thread_limit(block_threads) \
  reduction(+: sum)
#pragma omp distribute
for (i0 = 0; i0 < N; i0 += block_size)
{
  #pragma omp parallel for reduction(+: sum)
  for (i = i0; i < min(i0 + block_size, N); ++i)
    sum += B[i] * C[i];
}
51 Other examples

double sum = 0.0;
int i;
#pragma omp target teams map(to: B[0:N], C[0:N]) \
  map(tofrom: sum) reduction(+: sum)
#pragma omp distribute parallel for reduction(+: sum)
for (i = 0; i < N; ++i)
  sum += B[i] * C[i];

double sum = 0.0;
int i;
#pragma omp target map(to: B[0:N], C[0:N]) map(tofrom: sum)
#pragma omp teams num_teams(8) thread_limit(16) reduction(+: sum)
#pragma omp distribute parallel for reduction(+: sum) \
  dist_schedule(static, 1024) schedule(static, 64)
for (i = 0; i < N; ++i)
  sum += B[i] * C[i];

init(v1, v2, N);
int i;
#pragma omp target teams map(to: v1[0:N], v2[0:N]) map(from: p[0:N])
#pragma omp distribute simd
for (i = 0; i < N; ++i)
  p[i] = v1[i] * v2[i];