COSC 6374 Parallel Computation. Introduction to OpenMP. Some slides based on material by Barbara Chapman (UH) and Tim Mattson (Intel)


Edgar Gabriel, Fall 2015

OpenMP
OpenMP provides a thread programming model at a high level. The user does not need to specify all the details, especially with respect to the assignment of work to threads and the creation of threads: the user makes the strategic decisions, and the compiler figures out the details.

OpenMP Overview
OpenMP is a shared memory model: threads communicate by sharing variables. Unintended sharing of data causes race conditions. A race condition occurs when the program's outcome changes as the threads are scheduled differently. To control race conditions, use synchronization to protect data conflicts. Synchronization is expensive, so change how data is accessed to minimize the need for synchronization.

Syntax details
Most of the constructs in OpenMP are compiler directives.
For C and C++, directives are pragmas with the form:

    #pragma omp construct [clause [clause] ...]

Include file:

    #include <omp.h>

For Fortran, the directives are comments and take one of the forms:

    Fixed form:  C$OMP construct [clause [clause] ...]
    Free form (but works for fixed form too):  !$OMP construct [clause [clause] ...]

The OpenMP library module:

    use omp_lib
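To make the C syntax concrete, here is a minimal, self-contained program (an illustrative addition, not from the slides) combining the pragma form, the omp.h include, and two runtime library calls:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* the pragma applies to the following structured block */
        #pragma omp parallel
        {
            /* each thread queries its own id from the runtime library */
            printf("thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

With gcc, OpenMP programs are typically compiled with the -fopenmp flag, e.g. gcc -fopenmp hello.c -o hello.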

Structured blocks (C/C++)
Most OpenMP constructs apply to structured blocks. A structured block is a block with one point of entry at the top and one point of exit at the bottom. The only branches allowed are STOP statements in Fortran and exit() in C/C++. In C/C++, a block is a single statement or a group of statements between curly braces:

    {
        id = omp_get_thread_num();
        res[id] = do_work(id);
    }

    #pragma omp for
    for (i=0; i<N; i++) {
        res[i] = big_calc(i);
        A[i] = B[i] + res[i];
    }

Structured block boundaries: a structured block (the goto stays inside the block):

    #pragma omp parallel
    {
        int id = omp_get_thread_num();
    more:
        res[id] = do_big_job(id);
        if (!conv(res[id])) goto more;
    }
    printf("All done\n");

Not a structured block (the gotos jump out of and back into the block):

    #pragma omp parallel
    {
        int id = omp_get_thread_num();
    more:
        res[id] = do_big_job(id);
        if (conv(res[id])) goto done;
        goto more;
    }
    done:
    if (!really_done()) goto more;

Parallel Regions
Threads are created using the omp parallel pragma. Each thread executes a copy of the code within the structured block:

    double A[1000];
    #pragma omp parallel
    {
        int ID = omp_get_thread_num();
        pooh(ID, A);    /* do some work */
    }
    printf("all done\n");

How many threads are created?
- Environment variable to set the number of threads
- Runtime function
- System default used if no additional information is given

Parallel Regions: the fork-join model of OpenMP
Threads are created at the beginning of a parallel region and destroyed at the end of the parallel region (conceptually):

    Sequential execution:  double A[1000];
    Parallel execution:    pooh(0,A)  pooh(1,A)  pooh(2,A)  pooh(3,A)
    Sequential execution:  printf("all done\n");

Threads wait at the end of the parallel region for all threads to finish before proceeding (i.e. a barrier).

A multi-threaded Hello world program
Starting point: sequential hello world.

    int main(int argc, char **argv)
    {
        int ID = 0;
        printf("hello(%d) ", ID);
        printf("world(%d)\n", ID);
    }

The multi-threaded version:

    #include <omp.h>
    int main(int argc, char **argv)
    {
        #pragma omp parallel
        {
            int ID = omp_get_thread_num();
            printf("hello(%d) ", ID);
            printf("world(%d)\n", ID);
        }
        return 0;
    }

Sample output:

    hello(1) hello(0) world(1) world(0) hello(3) hello(2) world(3) world(2)

OpenMP Library routines
Modify/check the number of threads:
    omp_set_num_threads(), omp_get_num_threads(), omp_get_thread_num(), omp_get_max_threads()
Are we in a parallel region?
    omp_in_parallel()
How many processors are in the system?
    omp_get_num_procs()

Example: vector add operation
Sequential code:

    for (i=0; i<N; i++)
        a[i] = a[i] + b[i];

OpenMP parallel version:

    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        int Nthrds = omp_get_num_threads();
        int istart = id * N / Nthrds;
        int iend = (id+1) * N / Nthrds;
        int i;
        for (i=istart; i<iend; i++)
            a[i] = a[i] + b[i];
    }
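A complete, runnable version of this SPMD-style vector add (a sketch assembled from the fragment above; the array size and initialization are assumptions, not from the slides):

    #include <stdio.h>
    #include <omp.h>

    #define N 1000   /* assumed array size, for illustration */

    int main(void)
    {
        double a[N], b[N];
        int j;
        for (j = 0; j < N; j++) { a[j] = j; b[j] = 2*j; }

        #pragma omp parallel
        {
            /* each thread computes its own contiguous block of iterations */
            int id     = omp_get_thread_num();
            int Nthrds = omp_get_num_threads();
            int istart = id * N / Nthrds;
            int iend   = (id + 1) * N / Nthrds;
            int i;
            for (i = istart; i < iend; i++)
                a[i] = a[i] + b[i];
        }

        printf("a[N-1] = %f\n", a[N-1]);   /* expect 3*(N-1) */
        return 0;
    }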

Example: vector add operation (cont.)
All variables declared inside of the parallel region are considered to be private to each thread: each thread has its own copy of the variable. Variables declared outside of a parallel region are shared among threads, unless explicitly changed by the user. If istart and iend are not chosen carefully, false sharing can lead to cache coherence problems, e.g. when a[iend] on the thread with id=x and a[istart] on the thread with id=x+1 are in the same cache line.

OpenMP work-sharing constructs
The for work-sharing construct splits up loop iterations among the threads:

    #pragma omp for
    for (i=0; i<N; i++)
        neat_stuff(i);

By default, there is a barrier at the end of the omp for. Use the nowait clause to turn off the barrier, e.g.:

    #pragma omp for nowait

nowait is useful between two consecutive, independent omp for loops.
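A self-contained sketch of the nowait idiom between two independent loops (an illustrative addition, not from the slides; note it is safe only because the second loop touches neither a nor b):

    #include <stdio.h>
    #include <omp.h>
    #define N 1000

    int main(void)
    {
        static double a[N], b[N], c[N], d[N];
        int i;
        for (i = 0; i < N; i++) { a[i] = i; c[i] = i; }

        #pragma omp parallel
        {
            #pragma omp for nowait    /* no barrier: threads finishing this  */
            for (i = 0; i < N; i++)   /* loop go straight on to the next one */
                b[i] = 2.0 * a[i];

            #pragma omp for           /* independent of the first loop; its  */
            for (i = 0; i < N; i++)   /* implicit end-of-loop barrier stays  */
                d[i] = c[i] + 1.0;
        }
        printf("b[1]=%g d[1]=%g\n", b[1], d[1]);
        return 0;
    }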

Example: vector add operation with omp for

    #pragma omp parallel
    #pragma omp for
    for (i=0; i<N; i++)
        a[i] = a[i] + b[i];

Much simpler code than the previous OpenMP version. The loop variable i is automatically made private on each thread. This does not define how the loop iterations are distributed among the threads; the schedule clause can be used to influence the work distribution, e.g.:

    #pragma omp for schedule(static)

OpenMP for construct: the schedule clause
The schedule clause affects how loop iterations are mapped onto threads:
- schedule(static [,chunk]): deal out blocks of iterations of size chunk to each thread.
- schedule(dynamic [,chunk]): each thread grabs chunk iterations off a queue until all iterations have been handled.
- schedule(guided [,chunk]): threads dynamically grab blocks of iterations; the size of the block starts large and shrinks down to size chunk as the calculation proceeds.
- schedule(runtime): schedule and chunk size are taken from the OMP_SCHEDULE environment variable.
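A runnable sketch (an addition, not from the slides; work() is a placeholder whose cost grows with i) showing when dynamic scheduling pays off:

    #include <stdio.h>
    #include <omp.h>
    #define N 64

    /* placeholder loop body with a cost that grows with i */
    static double work(int i)
    {
        double s = 0.0;
        for (int k = 0; k < i * 1000; k++)
            s += k * 1e-9;
        return s;
    }

    int main(void)
    {
        static double result[N];
        int i;
        /* dynamic scheduling: threads grab chunks of 4 iterations as they
           become free, instead of receiving one fixed block up front */
        #pragma omp parallel for schedule(dynamic, 4)
        for (i = 0; i < N; i++)
            result[i] = work(i);
        printf("result[N-1] = %g\n", result[N-1]);
        return 0;
    }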

OpenMP work-sharing constructs: sections
The sections work-sharing construct gives a different structured block to each thread. By default, there is a barrier at the end of the omp sections; use the nowait clause to turn off the barrier.

    #pragma omp parallel
    #pragma omp sections
    {
        #pragma omp section
        x_calculation();
        #pragma omp section
        y_calculation();
        #pragma omp section
        z_calculation();
    }

OpenMP work-sharing constructs: single
The single construct denotes a block of code that is executed by only one thread. A barrier is implied at the end of the single block.

    #pragma omp parallel
    {
        do_many_things();
        #pragma omp single
        exchange_boundaries();
        do_many_other_things();
    }
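A self-contained sketch (names are illustrative, not from the slides) showing sections and single together:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        {
            #pragma omp sections   /* each section runs on one thread */
            {
                #pragma omp section
                printf("section A on thread %d\n", omp_get_thread_num());
                #pragma omp section
                printf("section B on thread %d\n", omp_get_thread_num());
            }   /* implicit barrier here */

            #pragma omp single     /* exactly one thread executes this block */
            printf("single on thread %d\n", omp_get_thread_num());
        }
        return 0;
    }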

Combined parallel/work-share constructs
OpenMP shortcut: put the parallel and the work-share construct on the same line. These two forms are equivalent:

    double res[MAX];
    int i;
    #pragma omp parallel
    #pragma omp for
    for (i=0; i<MAX; i++)
        res[i] = huge();

    double res[MAX];
    int i;
    #pragma omp parallel for
    for (i=0; i<MAX; i++)
        res[i] = huge();

Data Environment: default storage attributes
Shared memory programming model: most variables are shared by default. Global variables are SHARED among threads:
- C: file scope variables, static variables
- Fortran: COMMON blocks, SAVE variables, MODULE variables
But not everything is shared:
- Stack variables in sub-programs called from parallel regions are PRIVATE.
- Automatic variables within a statement block are PRIVATE.
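A short sketch (an addition, not from the slides; the variable names are illustrative) making the default storage attributes visible:

    #include <stdio.h>
    #include <omp.h>

    double g = 1.0;            /* file-scope variable: SHARED among threads */

    void sub(void)
    {
        double stackvar = 0.0; /* on each thread's stack: PRIVATE */
        stackvar += omp_get_thread_num();
        printf("thread %d: stackvar=%g (private), g=%g (shared)\n",
               omp_get_thread_num(), stackvar, g);
    }

    int main(void)
    {
        #pragma omp parallel
        {
            int local = 42;    /* automatic variable in the block: PRIVATE */
            (void)local;
            sub();             /* called from the parallel region */
        }
        return 0;
    }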

Data Sharing Examples

    double A[10];
    int main()
    {
        int index[10];
        #pragma omp parallel
        work(index);
        printf("%d\n", index[0]);
    }

    void work(int *index)
    {
        float temp[10];
        int count;
        ...
        return;
    }

A and index are shared by all threads; temp is local to each thread.

Data Environment: changing storage attributes
One can selectively change storage attributes on constructs using the following clauses: SHARED, PRIVATE, FIRSTPRIVATE, THREADPRIVATE. All these clauses apply to the OpenMP construct, NOT to the entire region. The value of a private variable inside a parallel loop can be transmitted to a global value outside the loop with LASTPRIVATE. The default status can be modified with DEFAULT(PRIVATE | SHARED | NONE).

Private Clause
private(var) creates a local copy of var for each thread. The value is uninitialized. The private copy is not storage-associated with the original, and the original is undefined at the end of the construct.

    int var = 13;
    #pragma omp parallel for private(var)
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Each thread gets its own var, which is however not initialized. Regardless of initialization, var is undefined at the end of the parallel region.

Firstprivate Clause
firstprivate is a special case of private: it initializes each private copy with the corresponding value from the master thread.

    int var = 13;
    #pragma omp parallel for firstprivate(var)
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Each thread gets its own var with an initial value of 13. Regardless of initialization, var is undefined at the end of the parallel region.
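A runnable sketch (an addition, not from the slides) contrasting the two clauses; the exact output varies with the thread count:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        int var = 13;

        #pragma omp parallel firstprivate(var)
        {
            /* every thread starts from the master's value 13 and then
               modifies only its own private copy */
            var += omp_get_thread_num();
            printf("thread %d sees var=%d\n", omp_get_thread_num(), var);
        }

        /* with private(var) instead, the copies would start uninitialized;
           per the slides, var is formally undefined here after the region */
        printf("after the region: var=%d\n", var);
        return 0;
    }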

Lastprivate Clause
lastprivate passes the value of a private variable from the last iteration to a global variable.

    int var = 13;
    #pragma omp parallel for firstprivate(var) lastprivate(var)
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Each thread gets its own var with an initial value of 13. var is defined as its value at the last sequential iteration (i.e. for j=999).

OpenMP: a data environment test
Consider this example of PRIVATE and FIRSTPRIVATE:

    int A=1, B=1, C=1;
    #pragma omp parallel private(B) firstprivate(C)
    { ... }

Are A, B, C local to each thread or shared inside the parallel region? What are their initial values inside and after the parallel region?
Inside this parallel region:
- A is shared by all threads; it equals 1.
- B and C are local to each thread. B's initial value is undefined; C's initial value equals 1.
Outside this parallel region:
- The values of B and C are undefined.

Default Clause
Note that the default storage attribute is DEFAULT(SHARED), so there is no need to specify it. To change the default: DEFAULT(PRIVATE) makes each variable in the static extent of the parallel region private, as if specified in a private clause; it mostly saves typing. DEFAULT(NONE) means no default for variables in the static extent: the storage attribute must be listed for each variable in the static extent. Only the Fortran API supports default(private); C/C++ only has default(shared) or default(none).

OpenMP Reduction
Combines an accumulation operation across threads:

    reduction (op : list)

Inside a parallel or a work-sharing construct:
- A local copy of each list variable is made and initialized depending on the op (e.g. 0 for +).
- The compiler finds standard reduction expressions containing op and uses them to update the local copy.
- Local copies are reduced into a single value and combined with the original global value.
The variables in list must be shared in the enclosing parallel region.

Example from last lecture, slightly modified
Sequential version:

    var = 0;
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Parallel version:

    int var = 0;
    #pragma omp parallel for firstprivate(var) lastprivate(var)
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

The parallel code shown here does not lead to the same result as the sequential solution; the code in the last lecture was used to demonstrate the behavior of the firstprivate and lastprivate clauses.

Example from last lecture, cont.
E.g. with 2 threads, assume thread 0 has j = 0..499 and thread 1 has j = 500..999. The lastprivate clause ensures that the listed variable contains its value at the last sequential iteration (i.e. j=999). Hence in the parallel code

    var = Σ_{j=500}^{999} j = 374750

whereas in the sequential code

    var = Σ_{j=0}^{999} j = 499500

OpenMP reduction solution
Sequential version:

    var = 0;
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Parallel code leading to the identical result as the sequential version:

    var = 0;
    #pragma omp parallel for reduction(+:var)
    for (j=0; j<1000; j++)
        var = var + j;
    printf("%d\n", var);

Reduction operands/initial values
A range of associative operands can be used with reduction. Initial values are the ones that make sense mathematically.

    Operand   Initial value
    +         0
    *         1
    -         0
    .AND.     .true.
    .OR.      .false.
    .NEQV.    .false.
    .EQV.     .true.
    MIN*      largest positive number
    MAX*      most negative number
    iand      all bits on
    ior       0
    ieor      0
    &         ~0
    |         0
    ^         0
    &&        1
    ||        0

* MIN and MAX are not available in C/C++.

OpenMP: Synchronization
High level synchronization: critical, atomic, barrier, ordered.
Low level synchronization: flush, locks (both simple and nested).

Synchronization: critical
Only one thread at a time can enter a critical region.

    int val = 13;
    #pragma omp parallel
    {
        int lval = 0;    /* declared inside the region, hence private */
        #pragma omp for
        for (i=0; i<1000; i++)
            lval = lval + i;
        #pragma omp critical
        val = val + lval;
    }
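The slide lists atomic among the high-level constructs but shows only critical; the following self-contained sketch (an addition, not from the slides) protects the same shared update with atomic, which guards a single memory update and is typically cheaper than a general critical section:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        int val = 13, i;

        #pragma omp parallel
        {
            int lval = 0;             /* private partial sum */
            #pragma omp for
            for (i = 0; i < 1000; i++)
                lval += i;

            /* atomic protects this single read-modify-write of val */
            #pragma omp atomic
            val += lval;
        }
        printf("val = %d\n", val);    /* 13 + 499500 = 499513 */
        return 0;
    }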

Synchronization: barrier
Barrier: each thread waits until all threads arrive. Remember: there are implicit barriers at the end of work-sharing constructs and parallel regions.

    #pragma omp parallel shared(A,C) private(id)
    {
        id = omp_get_thread_num();
        A[id] = big_calc1(id);
        #pragma omp barrier
        #pragma omp for
        for (i=0; i<N; i++)
            C[i] = big_calc3(i, A);
    }

OpenMP Library Routines
To use a known, fixed number of threads in a program: tell the system that you don't want dynamic adjustment of the number of threads, set the number of threads, then check the number of threads you got. Note: the system may give you fewer threads than requested. If the precise number of threads matters, test for it and respond accordingly.

OpenMP Library Routines (cont.)

    int num_threads;
    omp_set_dynamic(0);
    omp_set_num_threads(omp_get_num_procs());
    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        #pragma omp single
        num_threads = omp_get_num_threads();
        do_lots_of_stuff(id);
    }

OpenMP Environment Variables
Control how omp for schedule(runtime) loop iterations are scheduled:
    OMP_SCHEDULE "schedule[, chunk_size]"
Set the default number of threads to use:
    OMP_NUM_THREADS int_literal

How to set environment variables depends on the shell that you use. E.g. using bash (the default on shark):

    export OMP_NUM_THREADS=8

or add export OMP_NUM_THREADS=8 to the .bashrc file in your home directory and log in freshly.
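To see schedule(runtime) and OMP_SCHEDULE interact, here is a small self-contained sketch (an illustrative addition, not from the slides):

    #include <stdio.h>
    #include <omp.h>
    #define N 16

    int main(void)
    {
        int i;
        /* scheduling kind and chunk size are read from the
           OMP_SCHEDULE environment variable at run time */
        #pragma omp parallel for schedule(runtime)
        for (i = 0; i < N; i++)
            printf("iteration %d on thread %d\n", i, omp_get_thread_num());
        return 0;
    }

Run it with, e.g., export OMP_SCHEDULE="dynamic,2" before starting the program to get dynamic scheduling with chunk size 2.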

Example: Numerical Integration
Mathematically, we know that

    ∫₀¹ 4.0/(1+x²) dx = π

We can approximate the integral as a sum of N rectangles:

    Σ_{i=1}^{N} F(x_i) Δx ≈ π

where each rectangle has width Δx and height F(x_i) at the middle of interval i.

PI Program: an example

    static long num_steps = 100000;  /* value lost in transcription;
                                        100000 is the value commonly used */
    double step;
    int main(int argc, char **argv)
    {
        int i;
        double x, pi, sum = 0.0;
        step = 1.0/(double) num_steps;
        for (i=0; i<num_steps; i++) {
            x = (i+0.5)*step;
            sum = sum + 4.0/(1.0+x*x);
        }
        pi = step * sum;
        return 0;
    }

First solution: using parallel regions only
Promote the scalar sum to an array dimensioned by the number of threads to avoid a race condition.

    #define NUM_THREADS 2           /* constant lost in transcription; 2 for illustration */
    static long num_steps = 100000;
    double step;
    int main(int argc, char **argv)
    {
        int i, id, nthreads;
        double x, pi, sum[NUM_THREADS];
        step = 1.0/(double) num_steps;
        omp_set_num_threads(NUM_THREADS);
        #pragma omp parallel private(i, id, x)
        {
            id = omp_get_thread_num();
            #pragma omp single
            nthreads = omp_get_num_threads();
            for (i=id, sum[id]=0.0; i<num_steps; i=i+nthreads) {
                x = (i+0.5)*step;
                sum[id] += 4.0/(1.0+x*x);
            }
        }
        for (i=0, pi=0.0; i<nthreads; i++)
            pi += sum[i] * step;
    }

Performance will suffer due to false sharing of the sum array.

Second solution: using a critical region

    #define NUM_THREADS 2
    static long num_steps = 100000;
    double step;
    int main(int argc, char **argv)
    {
        int i, id, nthreads;
        double x, pi = 0.0, sum;
        step = 1.0/(double) num_steps;
        omp_set_num_threads(NUM_THREADS);
        #pragma omp parallel private(i, id, x, sum)
        {
            id = omp_get_thread_num();
            #pragma omp single
            nthreads = omp_get_num_threads();
            for (i=id, sum=0.0; i<num_steps; i=i+nthreads) {
                x = (i+0.5)*step;
                sum += 4.0/(1.0+x*x);
            }
            #pragma omp critical
            pi += sum * step;
        }
    }

No array, so no false sharing. Note: this method of combining partial sums doesn't scale very well.

Third solution: parallel for with a reduction

    static long num_steps = 100000;
    double step;
    int main(int argc, char **argv)
    {
        int i;
        double x, pi, sum = 0.0;
        step = 1.0/(double) num_steps;
        omp_set_num_threads(NUM_THREADS);
        #pragma omp parallel for private(x) reduction(+:sum)
        for (i=0; i<num_steps; i++) {
            x = (i+0.5)*step;
            sum = sum + 4.0/(1.0+x*x);
        }
        pi = step * sum;
    }

For good OpenMP implementations, reduction is more scalable than critical. In practice, you set the number of threads by setting the environment variable OMP_NUM_THREADS.
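For reference, a complete, compilable version of the reduction solution (a sketch: the printf and the timing via omp_get_wtime are additions, not from the slides):

    #include <stdio.h>
    #include <omp.h>

    static long num_steps = 100000;   /* number of rectangles */

    int main(void)
    {
        int i;
        double x, pi, sum = 0.0;
        double step = 1.0 / (double) num_steps;

        double t0 = omp_get_wtime();
        #pragma omp parallel for private(x) reduction(+:sum)
        for (i = 0; i < num_steps; i++) {
            x = (i + 0.5) * step;          /* midpoint of interval i */
            sum += 4.0 / (1.0 + x * x);    /* each thread accumulates its own copy */
        }
        pi = step * sum;

        printf("pi ~= %.12f (computed in %f s)\n", pi, omp_get_wtime() - t0);
        return 0;
    }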
