Teaching parallel programming on multi-core and many-core processing architectures


P. Bakowski (SMTR)

This presentation includes two parts:

1. Teaching parallel programming for multi-core processors (ARM-based architectures)
2. Teaching massively parallel programming on many-core architectures (NVIDIA GPUs)

The first part is based on the multi-core ARM architectures integrated in modern SoCs and embedded systems. These systems operate under the control of the Linux operating system. The parallel programming environment is provided through OpenMP mechanisms (directives and pragmas). The second part is based on NVIDIA GPUs and the CUDA programming environment. The CUDA programming exercises also include the use of OpenCV, which provides image capture and recording operations, and the use of OpenGL for graphic operations on the video buffer.

Part 1: Teaching parallel programming with OpenMP on embedded systems

1. Introduction

In this part we show how to exploit low-cost development boards to teach parallel programming on multi-core ARM SoCs.

The great majority of smart-phones and embedded multimedia devices are based on the ARM processor architecture and on the associated circuits integrated into Systems on Chip (SoCs). ARM processors are designed by ARM Holdings as models (IP cores) and licensed to a number of system design companies. The SoCs are developed on the basis of these models and then fabricated in licensed fabs.

Historically, the ARM processor architecture has evolved from simple micro-controllers with no cache memory to complex multi-core architectures including memory management and multimedia processing units. In our study we are concerned only with the recent ARM architecture version 7 (ARMv7), which is often implemented as multi-core Cortex-A7, A9, and A15.

The architectural performance of a processor can be roughly measured by the number of instructions that the processor executes in one clock cycle. Over the last two decades, technological evolution has allowed the clock frequency to increase from several tens of MHz for ARM micro-controllers to 1 or 2 GHz for recent implementations of ARM cores. If we take an average of 3 instructions per clock cycle (for example on the Cortex-A15) and a clock frequency of 1.5 GHz, we obtain a processor performance of 4.5 giga-instructions per second. This is quite close to the performance of modern Intel x86 processors.

A great quality of ARM processors and ARM-based SoCs is their low power consumption (about 1 W per giga-instruction per second). This is the principal reason for the use of these circuits in smart-phones and tablets. As ARM SoCs contain complete systems including multimedia units, memory blocks, and I/O controllers, they allow smart-phones and tablets to be built from only a few building blocks. Such a high level of integration lowers the production cost and increases the reliability of the devices.

Nowadays the same ARM SoCs are used to build low-cost and open-source development boards. The open-source aspect is related mainly to the software that drives the programmable components of the system. In our case the development boards run under the control of Linux.

Linux kernels and the related packages provide the system software targeting specific development boards with specific SoCs. The Linux kernel operates on single-core or multi-core ARM processors and the related memory management units; additional modules drive the different SoC blocks such as the GPU/VPU or the WiFi and webcam circuits. Altogether, the hardware in the form of small development boards with multi-core SoCs and the open-source software (Linux plus packages) provides an excellent platform for teaching embedded multiprocessing technology.

The hardware side:

In our study we use two kinds of ARM SoCs and computing units:
- the first kind is based on Odroid development boards, including Odroid-U2/X2 and Odroid-XU; the Odroid-U2/X2 boards are based on a quad-core ARM Cortex-A9 architecture, while the Odroid-XU integrates a quad-core Cortex-A15 SoC;
- the second kind of boards is based on the RK3188 quad-core Cortex-A9 SoC integrated into the Radxa board and the RKM MK802IV dongle.

ARM Cortex-A9 and Cortex-A15 multi-core architecture

The ARM Cortex-A9 MPCore is a 32-bit multi-core processor providing up to 4 cache-coherent Cortex-A9 cores, each implementing the ARMv7 instruction set architecture. The architecture of this processor is outlined below. Each core owns one I-cache and one D-cache. The chip control unit contains the bus snoop unit, the cache-to-cache transfer unit, and the coherence unit necessary to organize shared memory operations; shared memory operations are essential for communication between the threads. In addition, up to 8 MB of L2 cache may be added through the optional L2 cache controller.

This architecture is the basis of the Exynos-4212 and Rockchip RK3188 SoCs integrated in the boards used in our practical classes. Both SoCs implement the ARM NEON SIMD (Single Instruction, Multiple Data) engine, which is used to process multimedia formats and digital signals. It accelerates signal processing algorithms, leading to a large increase in performance: the engine is faster than a general-purpose core for these workloads because it performs the same instruction on multiple sets of data in parallel.
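To make the NEON description above concrete, here is a minimal sketch (an illustration added to these notes, not one of the course exercises) that adds four pairs of single-precision floats with a single SIMD instruction, using the GCC NEON intrinsics from arm_neon.h; it assumes an ARMv7 compiler with NEON enabled (e.g. -mfpu=neon).

#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    float32x4_t va = vld1q_f32(a);      /* load 4 floats into one 128-bit NEON register */
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vr = vaddq_f32(va, vb); /* one instruction adds all 4 lanes in parallel */
    vst1q_f32(r, vr);                   /* store the 4 results back to memory */

    for (int i = 0; i < 4; i++)
        printf("%.1f\n", r[i]);
    return 0;
}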

The second kind of multi-core architecture, the ARM Cortex-A15 MPCore (used in the Exynos 5 Octa), offers roughly 40% higher performance than the Cortex-A9 MPCore. The Cortex-A15 MPCore also integrates a low-latency level-2 cache controller supporting up to 4 MB of L2 cache per cluster.

The software side:

The development of the examples is based on the OpenMP programming interface. At its core, OpenMP is a set of compiler directives and function calls that enable you to run sections of your code in parallel on a shared-memory parallel computer (multi-core).

1.1 OpenMP operational mode

OpenMP assumes you will work with threads, which are basically lightweight processes that share the same memory address space within a single program. If one thread makes a change to a variable that all threads can see, then the next access to that variable will use the new value. As it turns out, this model is fairly easy to think about: imagine all your variables in a big block of memory, with all CPUs able to see all the variables. This picture is not strictly true, but it is a good first approximation. The threads all see this memory and can modify it. The threads can also all perform I/O operations (file access, printing, and so on), so something as simple as our HelloWorld application may in fact be able to run in parallel, although I/O operations generally need to serialize access to global system state (file pointers, etc.).

1.2 Compiler directives

OpenMP operates mostly via compiler directives. A compiler directive is an annotation (a #pragma in C) that the compiler can simply ignore if it is not building for OpenMP:

#pragma omp parallel
{
    ...code...
}

where the ...code... part is called the parallel region. That is the area of the code that you want to try to run in parallel, if possible. When the compiler sees the start of the parallel region, it creates a pool of threads. When the program runs, these threads start executing and are controlled by the information in the directives. Without additional directives, we simply have a bunch of threads. A way to visualize this is to imagine an implicit loop around your parallel region, with one iteration per CPU/core; unlike an explicit loop, these iterations all occur at the same time.

The number of threads is controlled by an environment variable, OMP_NUM_THREADS. If it is not set, it could default to 1 or to the number of cores on your machine. Just to be sure, you may want to do the following:

export OMP_NUM_THREADS=`grep 'processor' /proc/cpuinfo | wc -l`
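The thread count can also be queried and set from inside the program through the OpenMP run-time library. The following small sketch (added here for illustration, not part of the original text) requests one thread per core reported by the run time, independently of OMP_NUM_THREADS:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* request one thread per core for the parallel regions that follow */
    omp_set_num_threads(omp_get_num_procs());

    #pragma omp parallel
    {
        printf("hello from one of the threads\n");
    }
    return 0;
}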

Now we are ready to parallelize hello.c. As a first step, let's put in the explicit compiler directives and do nothing else:

#include "stdio.h"

int main(int argc, char *argv[])
{
    #pragma omp parallel
    {
        printf("Hello World\n");
    }
    return(0);
}

Notice that the parallel region is enclosed in a block denoted by an opening and a closing brace, that is:

{
    printf ...
}

Now let's compile this program with the following command line:

% gcc -fopenmp -o HelloMulticore HelloMulticore.c

and run it:

% ./HelloMulticore

to get one "Hello World" line per thread. By adjusting the value of the OMP_NUM_THREADS environment variable, we can adjust the number of execution threads. If we set 1 thread, we get one print statement:

export OMP_NUM_THREADS=1
% ./HelloMulticore

We can also set more threads (8) than cores (4):

export OMP_NUM_THREADS=8
% ./HelloMulticore

It should be pointed out as well that we really should insert a preprocessor directive in our code, in order to pull in the function prototypes and constants for use with OpenMP:

#include <omp.h>

If you want, you can enclose this in an #ifdef construct:

#ifdef _OPENMP
#include <omp.h>
#endif

This way, that code will only be used if the compiler has been told to use OpenMP.
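For example, the following sketch (an addition for illustration, not from the original text) builds and runs both with and without -fopenmp; without OpenMP the pragma is ignored, the _OPENMP macro is undefined, and the program simply falls back to serial execution:

#include "stdio.h"
#ifdef _OPENMP
#include <omp.h>
#endif

int main(int argc, char *argv[])
{
    #pragma omp parallel
    {
#ifdef _OPENMP
        printf("Hello from thread %d\n", omp_get_thread_num());
#else
        printf("Hello from the single serial thread\n");
#endif
    }
    return(0);
}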

Before we do something useful with this, let's explore a few functions that might be helpful. We can use several OpenMP function calls to query and control our environment. The most frequently used functions are those that return the number of threads operating and the current thread ID; there are several others that are useful. Our new program incorporates several of these, along with a few tricks I have found useful over the years. The new hello code (HelloMulticoreID.c) now looks like this:

#include "stdio.h"
#include <omp.h>

int main(int argc, char *argv[])
{
    #pragma omp parallel
    {
        int NCPU, tid, NPR, NTHR;

        /* get the total number of CPUs/cores available for OpenMP */
        NCPU = omp_get_num_procs();
        /* get the current thread ID in the parallel region */
        tid = omp_get_thread_num();
        /* get the total number of threads available in this parallel region */
        NPR = omp_get_num_threads();
        /* get the total number of threads requested */
        NTHR = omp_get_max_threads();

        /* only execute this on the master thread! */
        if (tid == 0)
        {
            printf("%i : NCPU\t= %i\n", tid, NCPU);
            printf("%i : NTHR\t= %i\n", tid, NTHR);
            printf("%i : NPR\t= %i\n", tid, NPR);
        }
        printf("%i : I am thread %i out of %i\n", tid, tid, NPR);
    }
    return(0);
}

We can compile and run it with 8 threads:

% gcc -fopenmp -o HelloMulticoreID HelloMulticoreID.c
export OMP_NUM_THREADS=8
% ./HelloMulticoreID
1 : I am thread 1 out of 8
2 : I am thread 2 out of 8
0 : NCPU = 8
0 : NTHR = 1
0 : NPR = 8
0 : I am thread 0 out of 8
7 : I am thread 7 out of 8
3 : I am thread 3 out of 8
4 : I am thread 4 out of 8
5 : I am thread 5 out of 8
6 : I am thread 6 out of 8

The first number you see there is the thread number or thread ID (the tid variable in the program). Notice that the output does not necessarily come out in thread order. If you examine it closely, you might notice that it does not come out in time order either, though that is pretty close.

One of the tricks that I have learned and use is to tag each line with either the thread ID or the time, and then sort the output:

% ./HelloMulticoreID | sort -n
0 : I am thread 0 out of 8
0 : NCPU = 8
0 : NPR = 8
0 : NTHR = 1
1 : I am thread 1 out of 8
2 : I am thread 2 out of 8
3 : I am thread 3 out of 8
4 : I am thread 4 out of 8
5 : I am thread 5 out of 8
6 : I am thread 6 out of 8
7 : I am thread 7 out of 8

Now we can tell which thread did what, though the time ordering is still off. It is a simple matter of programming to replace the thread ID with a time value that can be sorted. When I do this, I usually change the print format lines to look something like this:

"%.3f D[%i]... ", timestamp, tid, ...

Once I have added this, I can see what happened, in the order that it happened, and still get the thread ID data. Items like this are helpful when debugging parallel programs. Now it is time to move on to where the majority of the power of OpenMP becomes apparent to end users.

1.3 Loops

OpenMP helps you in a number of clever ways, allowing you to add threading to your program without thinking through all the details of thread setup and tear-down. It will also effectively re-engineer the loops that you tell it to re-engineer for you. As usual, it helps to start out with an understanding of where your program is spending its time; without that, you could wind up parallelizing a loop that takes very little time in the overall execution path while ignoring the expensive code. Additionally, it is important to test runs of various sizes, so that you understand which portions of your code scale well and which portions may need assistance. Profiling is the best way to do this; here we will use a poor man's profiler, basically timing calipers around sections of the code (a small sketch of this technique is given right after the ray-tracing pseudo-code below). This technique gives data with approximately millisecond resolution; specifically, it is limited by the clock timer tick interrupt rate, which could be anywhere from 10 milliseconds down to 1 millisecond. You will not get single-shot data more accurate than the timer resolution; to get better resolution, you need to iterate enough that you can calculate an average time per iteration. It is also worth noting that OS jitter (management interrupts for disk I/O, network I/O, and running an OS in general) will also impact your measurements somewhat, so please take this into account if you would like more precise data.

Ray tracing example

To start with, suppose you are writing a ray tracing program. Without going too much into the details of how ray tracing works, it simply goes through each pixel of the screen and, using lighting, texture, and geometry information, determines the color of that pixel; the program then goes on to the next pixel and repeats the process. The important thing to note here is that the calculation for each pixel is completely separate from the calculation for any other pixel, which makes this program highly suitable for OpenMP. Consider the following pseudo-code:

for(int x=0; x < width; x++)
    for(int y=0; y < height; y++)
        finalimage[x][y] = RenderPixel(x, y, &scenedata);
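As promised above, here is a minimal sketch of the timing-caliper and timestamp-tagging tricks using omp_get_wtime(); the work() function and the output format are hypothetical placeholders added for illustration, not part of the original course code.

#include <omp.h>
#include <stdio.h>

/* hypothetical placeholder for the work being measured */
static void work(void)
{
    volatile double s = 0.0;
    for (int i = 0; i < 5000000; i++)
        s += i * 0.5;
}

int main(void)
{
    double t0 = omp_get_wtime();         /* opening caliper */

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        work();
        /* tag each line with an elapsed-time stamp and the thread ID,
           so that the output can be piped through 'sort -n' */
        printf("%.6f D[%i] finished\n", omp_get_wtime() - t0, tid);
    }

    double t1 = omp_get_wtime();         /* closing caliper */
    printf("%.6f total elapsed seconds\n", t1 - t0);
    return 0;
}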

Returning to the ray-tracing pseudo-code above: it simply goes through each pixel of the screen and calls a function, RenderPixel, to determine the final color of that pixel. Note that the results are simply stored in an array. The entire scene being rendered is stored in a variable, scenedata, whose address is passed to the RenderPixel function. Because each pixel is independent of all other pixels, and because RenderPixel is expected to take a noticeable amount of time, this small snippet of code is a prime candidate for parallelization with OpenMP. Consider the following modified pseudo-code:

#pragma omp parallel for
for(int x=0; x < width; x++)
    for(int y=0; y < height; y++)
        finalimage[x][y] = RenderPixel(x, y, &scenedata);

The only change to the code is the line directly above the outer for loop. This compiler directive tells the compiler to parallelize the for loop with OpenMP. On a quad-core processor the performance of the program can improve by up to 300% (a 4x speedup) with the addition of just one line of code, which is remarkable. In practice, true linear or super-linear speedups are rare, while near-linear speedups are very common.

There are a few important things you need to keep in mind when parallelizing for loops, or any other sections of code, with OpenMP. For example, take a look at variable y in the pseudo-code above. Because the variable is effectively declared inside the parallelized region, each thread will have a unique and private value of y. However, consider the following buggy code example:

int x,y;
#pragma omp parallel for
for(x=0; x < width; x++)
    for(y=0; y < height; y++)
        finalimage[x][y] = RenderPixel(x, y, &scenedata);

The above code has a serious bug in it. The only thing that changed is that the variables x and y are now declared outside the parallelized region. When we use the compiler directive to parallelize the outer for loop with OpenMP, the compiler already knows that the loop variable x will take different values in different threads. However, the other variables, y, finalimage, and scenedata, are all shared by default, meaning that their values will be the same for all threads, and all threads have access to read and write these shared variables. The code above is buggy because variable y should be different for each thread. Declaring y inside the parallelized region is one way to guarantee that a variable will be private to each thread, but there is another way to accomplish this:

int x,y;
#pragma omp parallel for private(y)
for(x=0; x < width; x++)
    for(y=0; y < height; y++)
        finalimage[x][y] = RenderPixel(x, y, &scenedata);

Instead of declaring variable y inside the parallel region, we declare it outside the parallel region and explicitly mark it as private in the OpenMP directive. This effectively gives each thread an independent variable called y; each thread has access only to its own copy of this variable.
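A defensive practice related to this scoping pitfall (an addition for illustration, not part of the original text) is the default(none) clause: it makes the compiler reject the directive unless every variable used in the region is given an explicit sharing attribute, so a forgotten private(y) is caught at compile time. Applied to the same pseudo-code:

int x,y;
/* default(none): every variable must be scoped explicitly;
   the loop variable x is automatically private */
#pragma omp parallel for default(none) private(y) shared(width, height, finalimage, scenedata)
for(x=0; x < width; x++)
    for(y=0; y < height; y++)
        finalimage[x][y] = RenderPixel(x, y, &scenedata);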

2. Application examples

In the following part we build two complete matrix-oriented applications: the first is a dot product algorithm, the second is matrix multiplication.

2.1 Dot product

The following OpenMP example is a program which computes the dot product of two arrays a and b (that is, sum(a[i]*b[i])) using a sum reduction. The input variables a and b are shared arrays of double values.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#define N 1000

int main (int argc, char *argv[])
{
    double a[N], b[N];
    double sum = 0.0;
    int i, n, tid;

    /* Start a number of threads */
    #pragma omp parallel shared(a) shared(b) private(i,tid)
    {
        tid = omp_get_thread_num();

        /* Only one of the threads does this */
        #pragma omp single
        {
            n = omp_get_num_threads();
            printf("Number of threads = %d\n", n);
        }

        /* Initialize a and b */
        #pragma omp for
        for (i=0; i < N; i++)
        {
            a[i] = 1.0;
            b[i] = 1.0;
        }

        /* Parallel for loop computing the sum of a[i]*b[i] */
        #pragma omp for reduction(+:sum)
        for (i=0; i < N; i++)
            sum += a[i]*b[i];

    } /* End of parallel region */

    printf("Sum = %2.1f\n", sum);
    exit(0);
}

The reduction clause specifies two things:
1. When control enters the parallel region, each thread in the region gets a thread-private copy of sum, initialized to the identity element of the + operation (that is, 0).
2. When control leaves the parallel region, the original sum is updated by combining its value with the final values of the thread-private copies, using +.
Since + is associative, the final sum has the same value as it would for serial execution of the code.
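To illustrate what the reduction clause does, the following sketch (added here for illustration, not part of the original example) computes the same dot product "by hand": each thread accumulates an explicit thread-private partial sum, and the partial sums are combined into the shared total inside a critical section. reduction(+:sum) is roughly equivalent to this, but usually more efficient.

#include <omp.h>
#include <stdio.h>
#define N 1000

int main(void)
{
    double a[N], b[N];
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 1.0; }

    #pragma omp parallel private(i)
    {
        double partial = 0.0;          /* thread-private partial sum, starts at the identity 0 */

        #pragma omp for
        for (i = 0; i < N; i++)
            partial += a[i] * b[i];

        #pragma omp critical           /* combine the partial sums one thread at a time */
        sum += partial;
    }

    printf("Sum = %2.1f\n", sum);
    return 0;
}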

2.2 Matrix Multiplication

With multiple threads you do not automatically get a performance advantage over serial execution; if the parallel work is too fine-grained, you usually get a performance penalty instead. That is why we look at how to maximize the fraction of the program/algorithm that executes in parallel. The core algorithm in the matmul.c code is a triply nested for loop. The main multiplication loop looks like this:

/* matrix multiply
 *
 * c[i][j] = a_row[i] dot b_col[j] for all i,j
 * a_row[i] -> a[i][0 .. dim-1]
 * b_col[j] -> b[0 .. dim-1][j]
 */
for(i=0; i<dim; i++)
    for(j=0; j<dim; j++)
    {
        dot = 0.0;
        for(k=0; k<dim; k++)
            dot += a[i][k]*b[k][j];
        c[i][j] = dot;
    }

You will probably notice immediately that the interior loop is a sum reduction; it is basically a dot product between two vectors. That inner loop is executed N squared times, so if we placed the parallelization directives around it, the parallel region would be set up and torn down N squared times. The setup and tear-down are not free: they have a non-zero time cost, which would become abundantly clear if we parallelized that inner loop. In general, for most parallelization efforts, you want to enclose the maximum amount of work within the parallel region. This rule suggests that you really want to put the directives outside the outermost loop, or as high up in the loop hierarchy as possible. So we insert a simple parallel-for directive as follows:

#pragma omp parallel for private(i,j,k,dot) shared(a,b,c)
for(i=0; i<dim; i++)
    for(j=0; j<dim; j++)
    {
        dot = 0.0;
        for(k=0; k<dim; k++)
            dot += a[i][k]*b[k][j];
        c[i][j] = dot;
    }

In this example, private(i,j,k,dot) tells the compiler which variables are private to each thread (i.e. not shared between threads), and shared(a,b,c) indicates which ones are shared across threads. Notice that we have not specified anything about the dimensions of the arrays.

With a relatively simple code adjustment (that some compilers might do for you if you can coax them), you can see significantly better performance in parallel. What we do is increase the amount of work done per iteration by unrolling the outer loop. Our code now looks like this (dot is now a four-element array):

#pragma omp parallel for private(i,j,k,dot) shared(a,b,c) firstprivate(dim)
for(i=0; i<dim; i+=4)
    for(j=0; j<dim; j++)
    {
        dot[0]=dot[1]=dot[2]=dot[3]=0.0;
        for(k=0; k<dim; k++)
        {

            dot[0] += a[i+0][k]*b[k][j];
            dot[1] += a[i+1][k]*b[k][j];
            dot[2] += a[i+2][k]*b[k][j];
            dot[3] += a[i+3][k]*b[k][j];
        }
        c[i+0][j]=dot[0];
        c[i+1][j]=dot[1];
        c[i+2][j]=dot[2];
        c[i+3][j]=dot[3];
    }

This version of the program operates on four rows at a time (we unrolled the loop), thus increasing the amount of work done per iteration. We have also reduced the cache-miss penalty per iteration by reusing some of the more expensive elements (the b[k][j] values). In addition, we added the firstprivate(dim) clause, which specifies that each thread should have its own instance of the variable and that the variable should be initialized with the value it had before the parallel construct. Of course, after all of these modifications, we check the results to make sure that the addition of parallelization has not also caused the addition of bugs!

2.3 Comparing sequential and parallel matrix multiplication

The following is the complete code for matrix multiplication, including a performance test. The comparison is done between sequential and parallel execution of the same algorithm. The code also contains several parameters to be used for testing and debugging; for example, the initial test and debugging runs use a small matrix size (10 rows x 10 columns). Note the use of the DEBUG constant, set to 1 (true) for a run with debugging operations.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_THREADS 4    // number of threads used
#define DEBUG 0         // normal run
#define NRA 440         // number of rows in matrix A
#define NCA 440         // number of columns in matrix A
#define NCB 440         // number of columns in matrix B

/* Use smaller matrices for testing and debugging */
/*
#define DEBUG 1         // debug run
#define NRA 10          // number of rows in matrix A
#define NCA 10          // number of columns in matrix A
#define NCB 10          // number of columns in matrix B
*/

int main (int argc, char *argv[])
{
    int tid, nthreads, i, j, k;
    double **a, **b, **c;
    double *a_block, *b_block, *c_block;
    double **res;
    double *res_block;
    double starttime, stoptime;

    a = (double **) malloc(NRA*sizeof(double *)); /* matrix a to be multiplied */
    b = (double **) malloc(NCA*sizeof(double *)); /* matrix b to be multiplied */
    c = (double **) malloc(NRA*sizeof(double *)); /* result matrix c */

    a_block = (double *) malloc(NRA*NCA*sizeof(double));
    b_block = (double *) malloc(NCA*NCB*sizeof(double));
    c_block = (double *) malloc(NRA*NCB*sizeof(double));

    /* Result matrix for the sequential algorithm */
    res = (double **) malloc(NRA*sizeof(double *));
    res_block = (double *) malloc(NRA*NCB*sizeof(double));

    for (i=0; i<NRA; i++)   /* Initialize pointers to the rows of a */
        a[i] = a_block+i*NCA;
    for (i=0; i<NCA; i++)   /* Initialize pointers to the rows of b */
        b[i] = b_block+i*NCB;
    for (i=0; i<NRA; i++)   /* Initialize pointers to the rows of c */
        c[i] = c_block+i*NCB;
    for (i=0; i<NRA; i++)   /* Initialize pointers to the rows of res */
        res[i] = res_block+i*NCB;

    /* A static allocation of the matrices would be done like this */
    /* double a[NRA][NCA], b[NCA][NCB], c[NRA][NCB]; */

    /* Spawn a parallel region explicitly scoping all variables */
    #pragma omp parallel shared(a,b,c,nthreads) private(tid,i,j,k) num_threads(NR_THREADS)
    {
        tid = omp_get_thread_num();
        if (tid == 0)                 /* Only thread 0 prints */
        {
            nthreads = omp_get_num_threads();
            printf("Starting matrix multiplication with %d threads\n", nthreads);
            printf("Initializing matrices...\n");
        }

        /*** Initialize matrices ***/
        /* nowait: no need to synchronize the threads until the last matrix has been initialized */
        #pragma omp for nowait
        for (i=0; i<NRA; i++)
            for (j=0; j<NCA; j++)
                a[i][j] = (double) (i+j);

        #pragma omp for nowait
        for (i=0; i<NCA; i++)
            for (j=0; j<NCB; j++)
                b[i][j] = (double) (i*j);

        #pragma omp for               /* We synchronize the threads after this */
        for (i=0; i<NRA; i++)
            for (j=0; j<NCB; j++)
                c[i][j] = 0.0;

        if (tid == 0)                 /* Thread zero measures the execution time */
            starttime = omp_get_wtime();

        /* Do matrix multiply sharing iterations on outer loop */
        /* If DEBUG is TRUE display who does which iterations */
        /* printf("Thread %d starting matrix multiply...\n",tid); */

        #pragma omp for
        for (i=0; i<NRA; i++)
        {
            if (DEBUG) printf("Thread=%d did row=%d\n", tid, i);
            for (j=0; j<NCB; j++)
                for (k=0; k<NCA; k++)
                    c[i][j] += a[i][k] * b[k][j];
        }

        if (tid == 0)
        {
            stoptime = omp_get_wtime();
            printf("Time for parallel matrix multiplication: %3.2f s\n", stoptime-starttime);
        }
    }   /*** End of parallel region ***/

    starttime = omp_get_wtime();

    /* Do a sequential matrix multiplication and compare the results */
    for (i=0; i<NRA; i++)
        for (j=0; j<NCB; j++)
        {
            res[i][j] = 0.0;
            for (k=0; k<NCA; k++)
                res[i][j] += a[i][k]*b[k][j];
        }

    stoptime = omp_get_wtime();
    printf("Time for sequential matrix multiplication: %3.2f s\n", stoptime-starttime);

    /* Check that the results are the same as in the parallel solution.
       Actually, you should not compare floating-point values for equality
       like this, but instead compute the difference between the two values
       and check that it is smaller than a very small value epsilon. However,
       since all values in the matrices here are integer-valued, this works. */
    for (i=0; i<NRA; i++)
        for (j=0; j<NCB; j++)
        {
            if (res[i][j] == c[i][j])
                ;   /* Everything is OK if they are equal */
            else
                printf("Different result %5.1f != %5.1f in %d %d\n",
                       res[i][j], c[i][j], i, j);
        }

    /* If DEBUG is true, print the results. Use smaller matrices for this. */
    if (DEBUG)
    {
        printf("Result Matrix:\n");
        for (i=0; i<NRA; i++)
        {
            for (j=0; j<NCB; j++)
                printf("%6.1f ", c[i][j]);
            printf("\n");
        }
    }

    printf("Done.\n");
    exit(0);
}

To do: analyze and test the above code using different options.

Summary

We have shown how to use an OpenMP-capable compiler (gcc) on Linux machines. The OpenMP examples shown range from simple hello programs to parallel matrix multiplication, and all demonstrate excellent performance. With this short presentation you should get a sense that OpenMP is both powerful and easy to use. There is plenty more to learn, but at least you have started working with OpenMP. As SoC units surge to eight and even 16 or more cores, OpenMP is likely to become increasingly important for application development.
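The comment in the code above points out that floating-point results should normally not be compared with ==. As a small illustration (an addition to these notes, not part of the original program), the comparison loop could instead include <math.h> at the top of the file and use an epsilon-based test:

/* replacing the equality test inside the comparison loops */
if (fabs(res[i][j] - c[i][j]) > 1e-9)
    printf("Different result %5.1f != %5.1f in %d %d\n",
           res[i][j], c[i][j], i, j);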

References

Using OpenMP: Portable Shared Memory Parallel Programming, B. Chapman, G. Jost, R. van der Pas, MIT Press, 2008.
OpenMP Application Program Interface, OpenMP Architecture Review Board, 2013.
