Parallel processing with OpenMP: #pragma omp
Slide 1: Parallel processing with OpenMP (#pragma omp)
Slide 2: Forms of parallelism
- Bit-level parallelism: long words
- Instruction-level parallelism: automatic
- SIMD: vector instructions, vector types
- Multiple threads: OpenMP
- GPU: CUDA
- GPU + CPU in parallel: CUDA
Slide 3: Multi-core computers
Modern CPUs have multiple cores (Maari-A: 4). Each core has its own execution units and runs its own thread of code, independently. Each core has access to the main memory. Some resources are shared (e.g. some caches).
Slide 4: Multi-core programming
Simply launch multiple threads; this is easy with OpenMP. If you have 4 threads running in Maari-A, each thread will be on a different core. If you have 1000 threads, the operating system will have to do some time slicing.
Slide 5: OpenMP
An extension of C, C++ and Fortran. Standardised and widely supported. Just compile and link your code with:
    gcc -fopenmp
    g++ -fopenmp
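For example, a minimal complete program might look like this (a sketch; the file name hello.c is illustrative, while omp_get_thread_num and omp_get_num_threads are standard OpenMP runtime functions):

#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel
    {
        // every thread in the team executes this block once
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

Build and run: gcc -fopenmp hello.c -o hello && ./hello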
Slide 6: OpenMP
You add #pragma omp directives to your code to tell the compiler what to parallelise and how. The compiler and operating system take care of everything else. You can often write your code so that it works fine even if all the #pragmas are ignored.
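One common idiom for keeping such code buildable even without -fopenmp is to guard the runtime calls with the standard _OPENMP macro (a sketch; the serial fallbacks are illustrative):

#ifdef _OPENMP
#include <omp.h>
#else
// serial fallbacks: a non-OpenMP compiler simply ignores the #pragmas
static int omp_get_thread_num(void)  { return 0; }
static int omp_get_num_threads(void) { return 1; }
#endif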
Slide 7:
a();
#pragma omp parallel
{
    b();
}
c();
[Diagram: a() runs first in one thread; each of the 4 threads runs b(); after the parallel region, one thread runs c().]
Slide 8:
a();
#pragma omp parallel
{
    b();
    #pragma omp for
    for (int i = 0; i < 10; ++i) {
        c(i);
    }
}
d();
[Diagram: each thread runs b(); the iterations c(0)...c(9) are divided among the threads; after the parallel region, one thread runs d().]
Slide 9:
a();
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 10; ++i) {
        c(i);
    }
    d();
}
e();
[Diagram: the iterations are divided among the threads; every thread waits at the implicit barrier at the end of the for loop, then each thread runs d(); finally one thread runs e().]
Slide 10:
a();
#pragma omp parallel
{
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        c(i);
    }
    d();
}
e();
[Diagram: with nowait there is no barrier after the loop, so each thread runs d() as soon as its own iterations are done; e() runs after the region.]
Slide 11: The following two forms are equivalent:
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 10; ++i) {
        c(i);
    }
}
=
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    c(i);
}
Slide 12:
a();
#pragma omp parallel
{
    b();
    #pragma omp critical
    {
        c();
    }
}
d();
[Diagram: each thread runs b(); the calls to c() execute one thread at a time; one thread runs d() after the region.]
Slide 13:
a();
#pragma omp parallel
{
    b();
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        c(i);
    }
    #pragma omp critical
    {
        d();
    }
}
e();
[Diagram: each thread runs b() and its share of c(0)...c(9); the calls to d() execute one thread at a time; e() runs after the region.]
Slide 14:
// shared variable
int sum_shared = 0;
#pragma omp parallel
{
    // private variables (one for each thread)
    int sum_local = 0;
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        sum_local += i;
    }
    #pragma omp critical
    {
        sum_shared += sum_local;
    }
}
print(sum_shared);
Slide 15:
global_initialisation();
#pragma omp parallel
{
    local_initialisation();
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        do_some_work(i);
    }
    #pragma omp critical
    {
        update_global_data();
    }
}
report_result();
Slide 16: OpenMP: memory model. What can be done without synchronisation?
Slide 17: OpenMP memory model
A contract between the programmer and the system. Each thread has a local temporary view; there is one global memory. Threads read and write their temporary view, which may or may not be consistent with memory. Consistency is guaranteed only after a flush.
Slide 18: OpenMP memory model
An implicit flush happens e.g.:
- when entering/leaving parallel regions
- when entering/leaving critical regions
Mutual exclusion: for critical regions.
Slide 19:
int a = 0;
#pragma omp parallel
{
    #pragma omp critical
    {
        a += 1;
    }
}
[Diagram: the temporary views of thread 1 and thread 2, and the contents of main memory, over time; the values are guaranteed to be consistent only at the flushes around each critical section.]
Slide 20: Simple rules
Permitted (without explicit synchronisation):
- multiple threads reading, no thread writing
- one thread writing, the same thread reading
Forbidden (without explicit synchronisation):
- multiple threads writing
- one thread writing, another thread reading
Slide 21: Simple rules
The smallest meaningful unit is one array element. Many threads can access the same array; just be careful if they access the same array element, even if you try to manipulate different bits.
Slide 22: Simple rules
Safe:
thread 1: p[0] = q[0] + q[1]
thread 2: p[1] = q[1] + q[2]
thread 3: p[2] = q[2] + q[3]
Slide 23: Simple rules
Safe:
thread 1: p[0] = p[0] + q[1]
thread 2: p[1] = p[1] + q[2]
thread 3: p[2] = p[2] + q[3]
Slide 24: Simple rules
Not permitted without synchronisation:
thread 1: p[0] = q[0] + p[1]
thread 2: p[1] = q[1] + p[2]
thread 3: p[2] = q[2] + p[3]
Data race, unspecified behaviour.
Slide 25: Simple rules
Not permitted without synchronisation:
thread 1: p[0] = q[0] + q[1]
thread 2: p[0] = q[1] + q[2]
thread 3: p[0] = q[2] + q[3]
Data race, unspecified behaviour.
Slide 26: Simple rules
Not permitted without synchronisation:
thread 1: p[0] = 1
thread 2: p[0] = 1
thread 3: p[0] = 1
Data race, unspecified behaviour.
Slide 27: Filtering is very easy
void filter(const int* data, int* result) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        result[i] = compute(data[i]);
    }
}
Slide 28: Filtering is very easy
static void median(const array_t x, array_t y) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            int ia = (i + n - 1) % n;
            int ib = (i + 1) % n;
            int ja = (j + n - 1) % n;
            int jb = (j + 1) % n;
            y[i][j] = median(x[i][j], x[ia][j], x[ib][j],
                             x[i][ja], x[i][jb]);
        }
    }
}
Slide 29: OpenMP: variables. Private or shared?
Slide 30: (the same example as on slide 14)
// shared variable
int sum_shared = 0;
#pragma omp parallel
{
    // private variables (one for each thread)
    int sum_local = 0;
    #pragma omp for nowait
    for (int i = 0; i < 10; ++i) {
        sum_local += i;
    }
    #pragma omp critical
    {
        sum_shared += sum_local;
    }
}
print(sum_shared);
Slide 31: Two kinds of variables
Shared variables:
- shared among all threads
- be very careful with data races!
Private variables:
- each thread has its own variable
- safe and easy
Slide 32:
// OK:
for (int i = 0; i < n; ++i) {
    float tmp = x[i];
    y[i] = tmp * tmp;
}

// OK:
float tmp;
for (int i = 0; i < n; ++i) {
    tmp = x[i];
    y[i] = tmp * tmp;
}
Slide 33:
// OK:
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    float tmp = x[i];
    y[i] = tmp * tmp;
}

// Bad (data race):
float tmp;
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    tmp = x[i];
    y[i] = tmp * tmp;
}
Slide 34:
// OK (just unnecessarily complicated):
#pragma omp parallel
{
    float tmp;
    #pragma omp for
    for (int i = 0; i < n; ++i) {
        tmp = x[i];
        y[i] = tmp * tmp;
    }
}
Slide 35: Two kinds of variables
Shared variables and private variables. If necessary, you can customise this:
#pragma omp parallel private(x)
#pragma omp parallel shared(x)
#pragma omp parallel firstprivate(x)
Seldom needed; the defaults are usually fine (see the sketch below).
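A sketch of what firstprivate adds over private (the variable x and its value are illustrative):

int x = 10;
#pragma omp parallel firstprivate(x)
{
    // each thread gets its own copy of x, initialised to 10;
    // with private(x) the copies would start out uninitialised
    x += 1;   // changes only this thread's copy
}
// here the original x is still 10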
Slide 36: Best practices
Use subroutines! It is much easier to avoid accidents with shared variables this way. Keep the function with the #pragmas as short as possible: e.g. just call another function in a for loop (see the sketch below).
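A minimal sketch of this structure (the function names are illustrative):

// all temporaries live inside the function, so they are automatically private
void process_one(int i);

void process_all(int n) {
    // the function containing the #pragma stays short and simple
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        process_one(i);
    }
}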
Slide 37: OpenMP: synchronisation. Critical sections and atomics.
Slide 38:
// Good, no critical section needed:
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    ++v[i];
}

// Bad, very slow:
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    #pragma omp critical
    {
        ++v[i];
    }
}
Slide 39: The same comparison with measured running times:
- good version (no critical section): 4 ms
- bad version (critical section inside the loop): ? ms
Slide 40:
// Bad: no data race, but the output is still undefined:
int a = 0;
#pragma omp parallel
{
    int b;
    #pragma omp critical
    {
        b = a;
        ++b;
    }
    #pragma omp critical
    {
        a = b;
    }
}
Slide 41:
// OK:
int a = 0;
#pragma omp parallel
{
    int b;
    #pragma omp critical
    {
        b = a;
        ++b;
        a = b;
    }
}
Slide 42:
// Bad: all threads write to the same output stream:
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    std::cout << v << std::endl;
}

// OK (but no guarantees on the order of the lines):
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    #pragma omp critical
    {
        std::cout << v << std::endl;
    }
}
Slide 43: Naming critical sections
You can give names to critical sections:
#pragma omp critical (myname)
Different threads can simultaneously be inside critical sections with different names. No name = the same name.
Slide 44:
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    b();
    #pragma omp critical (xxx)
    {
        c();
    }
    #pragma omp critical (yyy)
    {
        d();
    }
}
[Diagram: one thread can be inside c() while another is inside d(), because the two critical sections have different names.]
Slide 45:
#pragma omp parallel for
for (int i = 0; i < 10; ++i) {
    int v = calculate(i);
    #pragma omp critical (result)
    {
        result += v;
    }
    #pragma omp critical (output)
    {
        std::cout << v << std::endl;
    }
}
Slide 46: Atomic operations
Like a tiny critical section, but very restricted: only for e.g. updating a single variable. Much more efficient.
Slide 47:
// Sequential: 200 ms
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    ++p[l];
}

// Parallel with atomic: 70 ms
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    #pragma omp atomic
    ++p[l];
}

// Parallel with critical: ? ms
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    int l = v[i] % m;
    #pragma omp critical
    {
        ++p[l];
    }
}
Slide 48: OpenMP: scheduling. #pragma omp for
Slide 49:
a();
#pragma omp parallel for
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
[Diagram: the default schedule splits the iterations into consecutive blocks: thread 1 runs c(0)...c(3), thread 2 runs c(4)...c(7), thread 3 runs c(8)...c(11), thread 4 runs c(12)...c(15).]
Slide 50:
// Good memory locality:
// each thread scans a consecutive part of the array
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    c(x[i]);
}
[Diagram: the array in memory divided into four consecutive parts, one per thread.]
Slide 51:
a();
#pragma omp parallel for
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
[Diagram: the same loop when the iterations take different amounts of time: with the default schedule, some threads finish long before the others.]
Slide 52:
a();
#pragma omp parallel for schedule(static,1)
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
[Diagram: schedule(static,1) assigns iterations round-robin: thread 1 runs c(0), c(4), c(8), c(12); thread 2 runs c(1), c(5), c(9), c(13); thread 3 runs c(2), c(6), c(10), c(14); thread 4 runs c(3), c(7), c(11), c(15).]
Slide 53:
a();
#pragma omp parallel for schedule(dynamic)
for (int i = 0; i < 16; ++i) {
    c(i);
}
d();
[Diagram: with dynamic scheduling, each thread grabs the next available iteration as soon as it finishes its previous one.]
Slide 54: OpenMP scheduling
for (int i = 0; i < n; ++i) ++v[i];
Performance (n = ?):
- sequential: 50 ms
- parallel: 50 ms
- schedule(static,1): 200 ms
- schedule(dynamic): 4000 ms
Slide 55: OpenMP scheduling
for (int i = 0; i < n; ++i) v[i] = sqrt(i);
Performance (n = ?):
- sequential: 800 ms
- parallel: 300 ms
- schedule(static,1): 300 ms
- schedule(dynamic): 4000 ms
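A standard middle ground, not shown on the slides, is to give dynamic scheduling a chunk size, so that threads grab blocks of iterations and the per-iteration scheduling overhead is amortised (the chunk size 1000 is illustrative; measure to tune it):

#pragma omp parallel for schedule(dynamic, 1000)
for (int i = 0; i < n; ++i) {
    v[i] = sqrt(i);   // each thread now requests work 1000 iterations at a time
}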
Slide 56: OpenMP: reductions. Just a convenient shorthand.
Slide 57:
int g = 0;
#pragma omp parallel
{
    int l = 0;
    #pragma omp for
    for (int i = 0; i < n; ++i) {
        l += v[i];
    }
    #pragma omp atomic
    g += l;
}

is equivalent to:

int g = 0;
#pragma omp parallel for reduction(+:g)
for (int i = 0; i < n; ++i) {
    g += v[i];
}
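The same shorthand works with other operators; for example, a maximum reduction (a sketch assuming OpenMP 3.1 or later, where max/min reductions are available):

int m = v[0];
#pragma omp parallel for reduction(max:m)
for (int i = 1; i < n; ++i) {
    // each thread tracks its own maximum; OpenMP combines them at the end
    if (v[i] > m) m = v[i];
}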
Slide 58: OpenMP: speedups in practice. Measure!
Slide 59: [Plot: images/second as a function of the number of threads; series: linear, window, MF1, MF2.]
Slide 60: [Plot: images/second as a function of the number of threads; annotations mark the 4 cores and the hyper-threading region.]
Slide 61: [Plot: images/second as a function of the number of threads; series: linear, window, MF3.]
Slide 62: [Plot: images/second as a function of the number of threads on a machine with 2 x 12 cores; series: linear, window, MF3.]
Slide 63: Hyper-threading
Maari-A: 4 physical cores; each core can run 2 hardware threads. OpenMP default: use 4 x 2 = 8 threads.
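The default can be overridden; a sketch using the standard OpenMP runtime API and environment variable:

#include <omp.h>

int main(void) {
    omp_set_num_threads(4);   // e.g. one thread per physical core
    #pragma omp parallel
    {
        // ... work ...
    }
    return 0;
}
// equivalently, from the shell: OMP_NUM_THREADS=4 ./program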
Slide 64: Without hyper-threading
Each core has 1 thread (1 instruction stream). The CPU looks at the instruction stream (up to a certain distance) and executes the next possible instruction: it must be independent, and an execution unit must be available.
Slide 65: With hyper-threading
Each core has 2 threads (2 instruction streams). The CPU looks at both instruction streams: possibly more opportunities for finding instructions that can be executed now.
Slide 66: Example: multiply and add
Maari-A computers have Ivy Bridge CPUs. They have independent parallel units for floating-point multiplication (vectorised) and floating-point addition (vectorised). Throughput: one + and one * per cycle.
Slide 67: Example: multiply and add
Code: only additions (all independent). 1 instruction / cycle. Hyper-threading does not help: one thread keeps the + unit busy, and the * unit has nothing to do.
Slide 68: Example: multiply and add
Code: only multiplications (all independent). 1 instruction / cycle. Hyper-threading does not help: one thread keeps the * unit busy, and the + unit has nothing to do.
Slide 69: Example: multiply and add
Code: additions and multiplications interleaved (all independent). 2 instructions / cycle. Hyper-threading does not help: one thread is enough to keep both units busy.
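For instance, a loop body like this offers both kinds of operations (a sketch; whether the instructions actually interleave as intended depends on the compiler and the CPU):

for (int i = 0; i < n; ++i) {
    c[i] = a[i] + b[i];   // independent additions: + unit
    d[i] = a[i] * b[i];   // independent multiplications: * unit
}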
Slide 70: Example: multiply and add
Code: a long run of additions followed by a long run of multiplications (all independent). 1 instruction / cycle. The CPU does not see far enough ahead: first the + unit is busy while the * unit is idle, then the + unit is idle while the * unit is busy.
Slide 71: Example: multiply and add
Code: thread 1 runs only additions, thread 2 runs only multiplications (all independent). 1 instruction / cycle without hyper-threading; 2 instructions / cycle with hyper-threading.
Slide 72: Example: multiply and add
If everything is already perfectly interleaved in your code, hyper-threading does not help; getting no speedup may be a good sign! Hyper-threading may make it easier to program: perfect instruction-level parallelism is not necessary for maximum performance.
Slide 73: Example: multiply and add
Maybe it is a good idea not to rely too much on hyper-threading? Typically only the more expensive CPUs support hyper-threading; you may get almost the same performance with cheaper CPUs and a careful implementation.
Slide 74: OpenMP: summary
Slide 75: OpenMP: summary
OpenMP splits work across multiple threads and benefits from multiple CPU cores. Each thread still needs to worry about e.g.:
- instruction-level parallelism
- vectorisation
- getting data from memory to the CPU