ZKI-Tagung Supercomputing, Okt.: Parallel Patterns for Composing and Nesting Parallelism in HPC Applications


1 ZKI-Tagung Supercomputing, Okt. Parallel Patterns for Composing and Nesting Parallelism in HPC Applications. Hans Pabst, Software and Services Group, Intel Corporation

2 Agenda: Introduction; Parallel Patterns Today; What about HPC?; Summary

3 What is a Parallel Pattern? Design patterns: encoded expertise that captures the "quality without a name" distinguishing truly excellent designs; a small number of patterns can support a wide range of applications. Parallel pattern: a commonly occurring combination of task distribution and data sharing (data access).

4 Parallel Patterns: Stencil, Superscalar sequence, Reduce, Partition, Gather/scatter, Speculative selection, Map, Scan and Recurrence, Pack and Expand, Nest, Pipeline, Search and Match

5 Why Parallel Patterns? Patterns act as a DSL for parallel programming: ready-to-use pieces that efficiently map concurrency to hardware parallelism, are less error-prone (data races, etc.), and are more productive. And what is the reality? Parallel programming is still difficult, and standard languages progress slowly.

6 References: Details a pattern language for parallel algorithm design; represents the authors' hypothesis of how to think about parallel programming; source code samples in MPI, OpenMP, and Java. Patterns for Parallel Programming, Timothy G. Mattson, Beverly A. Sanders, Berna L. Massingill, Addison-Wesley, 2005, ISBN

7 References (cont.): Teaches parallel programming in a new, more effective manner. It's about effective parallel programming, not about any specific hardware. Structured Parallel Programming: Patterns for Efficient Computation, Michael McCool, Arch D. Robison, James Reinders, Morgan Kaufmann, (c) 2012.

8 References (cont.): e/guyb/papers/ble90.pdf ation/hpc/composerxe/enus/2011update/tbbxe/design_patterns.pdf Arch D. Robison: Composable Parallel Patterns with Intel Cilk Plus, IEEE,

9 Finding Concurrency. Practical approach: optimize sequential code.* Use a profiler, such as OProfile or Intel VTune Amplifier XE, to identify the hotspots of an application; then check whether the hotspots consist of independent tasks and whether those tasks can execute independently. Theoretical approach: design document. Examine the design components, services, etc., and find components that contain independent operations. Other approach: parallel patterns (practical?). Find the parallel pattern and make it explicit in the code, either with a library-based model (e.g., C++11, Intel TBB) or with language primitives (e.g., C++11, Intel Cilk Plus). * Parallelizing a sequential algorithm may never lead to the best known parallel algorithm.

10 Intel Advisor: a tool for scalability and what-if analysis. Modeling: use code annotations to introduce parallelism. Evaluation: estimate speedup and check correctness. GUI-driven assistant (5 steps). Productivity and safety: parallel correctness is checked based on a correct program; non-intrusive API. It's not auto-parallelization, and it does not modify the code.

11 Intel Advisor: Five Steps
1. Survey the application (profiler)
2. Annotate the application (API)
3. Check suitability (estimation)
4. Check correctness (analysis)
5. Apply parallel model (user)
Idea: Correctly parallelized application

12 Intel Advisor: Quicksort Example
template<typename I> void serial_qsort(I begin, I end) {
  typedef typename std::iterator_traits<I>::value_type T;
  if (begin != end) {
    const I pivot = end - 1;
    const I middle = std::partition(begin, pivot,
                                    std::bind2nd(std::less<T>(), *pivot));
    std::swap(*pivot, *middle);
    ANNOTATE_SITE_BEGIN(Parallel Region);
    ANNOTATE_TASK_BEGIN(Left Partition);
    serial_qsort(begin, middle);
    ANNOTATE_TASK_END(Left Partition);
    ANNOTATE_TASK_BEGIN(Right Partition);
    serial_qsort(middle + 1, end);
    ANNOTATE_TASK_END(Right Partition);
    ANNOTATE_SITE_END(Parallel Region);
  }
}
* Pure Quicksort (i.e., no attempt to avoid the tail recursion).

13 Intel Advisor: Fortran w/ Annotations
subroutine sgemv(res, mat, vec, nrows, ncols)
  implicit none
  integer :: nrows, ncols
  real(kind=4), intent(out), dimension(0:nrows-1) :: res
  real(kind=4), intent(in), dimension(0:ncols-1) :: vec
  real(kind=4), intent(in), dimension(0:nrows*ncols-1) :: mat
  integer :: i, u, v
  ANNOTATE_SITE_BEGIN("parallel region")
  do i = 0, nrows - 1
    ANNOTATE_ITERATION_TASK("I")
    u = i * ncols
    v = u + ncols - 1
    res(i) = DOT_PRODUCT(mat(u:v), vec)
  end do
  ANNOTATE_SITE_END("parallel region")
end subroutine

14 Intel Advisor: Fortran (cont.)
Subset of Intel Advisor annotations:
#define ANNOTATE_SITE_BEGIN(NAME)      call annotate_site_begin(NAME)
#define ANNOTATE_SITE_END(NAME)        call annotate_site_end()
#define ANNOTATE_ITERATION_TASK(NAME)  call annotate_iteration_task(NAME)
#define ANNOTATE_TASK_BEGIN(NAME)      call annotate_task_begin(NAME)
#define ANNOTATE_TASK_END(NAME)        call annotate_task_end()
#define ANNOTATE_LOCK_ACQUIRE(ADDRESS) call annotate_lock_acquire(ADDRESS)
#define ANNOTATE_LOCK_RELEASE(ADDRESS) call annotate_lock_release(ADDRESS)
Enable source code preprocessing:
$ ifort -fpp source_code_with_macros.f90
Advisor module (advisor_annotate) and library:
-I$ADVISOR_XE_2013_DIR/include/intel64 -L$ADVISOR_XE_2013_DIR/lib64 -ladvisor

15 Intel Advisor: Finding Concurrency. Introduce as-if parallelism and evaluate it safely; works for C, C++, and Fortran; threading-model agnostic.

16 Agenda: Introduction; Parallel Patterns Today; What about HPC?; Summary

17 OpenCL*: Elemental functions in particular, or kernel functions in general; similar to a vectorizable loop body. [figure: work-items grouped by tiling] Tiling serves multiple purposes: it maps to the memory hierarchy, and the ND range corresponds to loop blocking (N>1: nested loops). Synchronization.

18 Fortran: Standardization advances quickly ('03, '08, '13), contrary to a dying language. Cornerstones: array notation ("slices"), elemental functions, Fortran coarrays.

19 C++: Intel Threading Building Blocks (Library-Based Approach)
Generic Parallel Algorithms: an efficient, scalable way to exploit the power of multi-core without having to start from scratch
Task scheduler: the engine that empowers the parallel algorithms; employs task-stealing to maximize concurrency
Concurrent Containers: concurrent access, and a scalable alternative to containers that are externally locked for thread-safety
TBB Flow Graph
Thread Local Storage: scalable implementation of thread-local data that supports an unbounded number of thread-local objects
Synchronization Primitives: user-level and OS wrappers for mutual exclusion, ranging from atomic operations to several flavors of mutexes and condition variables
Memory Allocation: per-thread scalable memory manager and false-sharing-free allocators
Miscellaneous: thread-safe timers
Threads: OS API wrappers


21 C++: Intel Threading Building Blocks (Generic Algorithms)
Loop parallelization: parallel_for and parallel_reduce (load-balanced parallel execution over a fixed number of independent iterations), parallel_deterministic_reduce (run-to-run reproducible results), parallel_scan (computes a parallel prefix: y[i] = y[i-1] op x[i])
Parallel algorithms for streams: parallel_do (use for an unstructured stream or pile of work; can add additional work to the pile while running), parallel_for_each (parallel_do without an additional work feeder), pipeline / parallel_pipeline (a linear pipeline of stages; each stage can be parallel, serial in-order, or serial out-of-order; uses cache efficiently)
Parallel sorting: parallel_sort
Parallel function invocation: parallel_invoke (parallel execution of a number of user-specified functions)
Computational graph: flow::graph (implements dependencies between tasks; passes messages between tasks)
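To make the reduction entry above concrete, here is a minimal sketch (not from the slides) of a sum written with tbb::parallel_reduce; the headers and the lambda-based overload are standard TBB, while the function itself is purely illustrative.
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>
#include <vector>

// Illustrative only: sum the elements of v with a task-based reduction.
double parallel_sum(const std::vector<double>& v) {
  return tbb::parallel_reduce(
    tbb::blocked_range<std::size_t>(0, v.size()),   // iteration space, split by the scheduler
    0.0,                                            // identity of the reduction
    [&](const tbb::blocked_range<std::size_t>& r, double acc) {
      for (std::size_t i = r.begin(); i != r.end(); ++i) acc += v[i];
      return acc;                                   // partial sum for this sub-range
    },
    [](double a, double b) { return a + b; });      // combine partial sums
}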

22 C++11, a.k.a. C++0B: it's late, but not too late. Core language extension: a memory consistency model. Standard library: parallelism can now rely on the memory consistency model. What about Pthreads? Not portable*, and it lacks a memory consistency model. * Remember, Intel Architecture supports sequentially consistent atomics efficiently (a "strong" memory consistency model).

23 C++11 Core language: lambda expressions, type inference, etc.; range-based for loops; TLS, atomics. Standard Template Library (STL): std::thread, std::async; synchronization (atomics, locks, and conditions). No tasks!
struct background {
  background() : i(0), thread(work, this) {}
  ~background() { thread.join(); }
  static void work(background* self) {
    ++self->i; // do something
  }
  int i;
  std::thread thread;
};
// work in the background
background task;
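Since the slide mentions std::async but only shows std::thread, here is a hedged sketch (not from the slides) of task-style usage with std::async; note that, unlike a task scheduler such as TBB or Cilk Plus, plain std::async provides no work-stealing or load balancing.
#include <future>
#include <numeric>

// Illustrative divide-and-conquer sum expressed with std::async "tasks".
long long parallel_sum(const long long* first, const long long* last) {
  if (last - first < 4096)                 // small range: sum sequentially
    return std::accumulate(first, last, 0LL);
  const long long* mid = first + (last - first) / 2;
  auto left = std::async(std::launch::async, parallel_sum, first, mid); // left half as a "task"
  const long long right = parallel_sum(mid, last);                      // right half on this thread
  return left.get() + right;                                            // join
}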

24 Thread vs. Task (N threads, M tasks). HW view: a thread is a stream of instructions (along with a given state) executing on a hardware resource, e.g., core0. OS view: stores the machine state (registers, etc.); at least one thread per process. User view: a task describes work to do; it is not so interesting who the actual worker will be. User code: code that runs in its own process space (separate from the OS kernel). A scheduler maps the M tasks onto the N threads.

25 Intel Cilk Plus: a tasking model with only three keywords; library-based only as far as hyperobjects are concerned. Really, the "Plus" does not mean C++: it is for C and C++ programmers. Vectorization model: array notation and SIMD functions ("elemental"); pragmas: almost all are promoted to OpenMP 4.0. Array notation is accepted in GNU* GCC mainline!
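For orientation, a minimal sketch (not from the slides) that uses all three Cilk Plus keywords; it assumes a compiler with Cilk Plus support (e.g., icc, or GCC with -fcilkplus).
#include <cilk/cilk.h>

// cilk_spawn / cilk_sync: fork-join tasking.
int fib(int n) {
  if (n < 2) return n;
  int x = cilk_spawn fib(n - 1); // child may run on another worker
  int y = fib(n - 2);            // continue on the current worker
  cilk_sync;                     // wait for the spawned child
  return x + y;
}

// cilk_for: parallel loop over independent iterations.
void scale(float* a, float s, int n) {
  cilk_for (int i = 0; i < n; ++i)
    a[i] *= s;
}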

26 Intel Cilk Plus: Writing Vector Code
Array Notation:
A[:] = B[:] + C[:];
Elemental Function:
__declspec(vector) float ef(float a, float b) { return a + b; }
A[:] = ef(B[:], C[:]);
SIMD Directive:
#pragma simd
for (int i = 0; i < N; ++i) { A[i] = B[i] + C[i]; }
Auto-Vectorization:
for (int i = 0; i < N; ++i) { A[i] = B[i] + C[i]; }

27 Intel Cilk Plus and OpenMP 4.0
The programmer (i.e., you!) is responsible for correctness.
Available clauses (both OpenMP and Intel versions):
PRIVATE, FIRSTPRIVATE, LASTPRIVATE --- like OpenMP
REDUCTION
COLLAPSE (OpenMP 4.0; for nested loops)
LINEAR (additional induction variables)
SAFELEN (OpenMP 4.0) / VECTORLENGTH (Intel only)
ALIGNED (OpenMP 4.0)
ASSERT (Intel only; "vectorize or die!")
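A minimal sketch (not from the slides) of how a few of these clauses appear on an OpenMP 4.0 SIMD loop; the 64-byte alignment of a and b is an assumption stated in the clause.
// Dot product with an explicit SIMD loop; assumes a and b are 64-byte aligned.
float dot(const float* a, const float* b, int n) {
  float sum = 0.0f;
  #pragma omp simd reduction(+:sum) aligned(a,b:64) safelen(16)
  for (int i = 0; i < n; ++i)
    sum += a[i] * b[i];
  return sum;
}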

28 Intel Cilk Plus: Targeting Multicore
void qsort(int* begin, int* end) {
  if (begin != end) {
    int* pivot = end - 1;
    int* middle = std::partition(begin, pivot,
                                 std::bind2nd(std::less<int>(), *pivot));
    std::swap(*pivot, *middle);
    cilk_spawn qsort(begin, middle);
    qsort(middle + 1, end);
    cilk_sync;
  }
}
(std::bind2nd is deprecated in C++11)

29 Intel Cilk Plus: Lock Free (Reducers or Hyperobjects)
int accumulate(const int* array, std::size_t size) {
  cilk::reducer_opadd<int> result(0);
  cilk_for (std::size_t i = 0; i < size; ++i) {
    result += array[i];
  }
  return result.get_value();
}

30 Intel Cilk Plus: Array Notation. Corresponds to vector processing (SIMD); an explicit construct to express vectorization; the compiler assumes no aliasing of pointers. Synonyms: array notation, array section, array slice, vector. Syntax: [start:size] or [start:size:stride]; [:] means all elements*. * Only works for array shapes known at compile time.

31 Intel Cilk Plus: Notation Example
Array notation:
y[0:10:10] = sin(x[20:10:2]);
Corresponding loop:
for (int i = 0, j = 0, k = 20; i < 10; ++i, j += 10, k += 2) {
  y[j] = sin(x[k]);
}

32 Intel Cilk Plus: Array Operators
Most C/C++ operators work with array sections.
Element-wise operators: a[0:10] * b[4:10] (rank and size must match)
Scalar expansion: a[10:10] * c
Assignment and evaluation: the RHS is evaluated before the assignment, e.g., a[1:8] = a[0:8] + 1; a parallel assignment to the LHS may require a temporary.
Gather and scatter: a[idx[0:1024]] = 0; b[idx[0:1024]] = a[0:1024]; c[0:512] = a[idx[0:512:2]]

33 Intel Cilk Plus: Array Op. (cont.)
Index generation: a[:] = __sec_implicit_index(rank)
Shift operators: b[:] = __sec_shift(a[:], signed shift_val, fill_val); b[:] = __sec_rotate(a[:], signed shift_val)
Cast operation (array dimensionality): e.g., float[100] -> float[10][10]

34 Intel Cilk Plus: Array Reductions
Built-in reductions: __sec_reduce_add(a[:]), __sec_reduce_mul(a[:]), __sec_reduce_min(a[:]), __sec_reduce_max(a[:]), __sec_reduce_min_ind(a[:]), __sec_reduce_max_ind(a[:]), __sec_reduce_all_zero(a[:]), __sec_reduce_all_nonzero(a[:]), __sec_reduce_any_nonzero(a[:])
User-defined reductions: result = __sec_reduce(initial, a[:], fn-id); void __sec_reduce_mutating(reduction, a[:], fn-id)
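As a hedged illustration (not from the slides), a dot product that combines an element-wise array-section product with one of the built-in reductions; it assumes a compiler with Cilk Plus array notation support.
// Dot product over two array sections of length n.
float dot(const float* a, const float* b, int n) {
  // Element-wise multiply of the two sections, then sum the resulting section.
  return __sec_reduce_add(a[0:n] * b[0:n]);
}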

35 Intel Cilk Plus: SIMD Functions
__declspec(vector) void kernel(int& result, int a, int b) { result = a + b; }

void sum(int* result, const int* a, const int* b, std::size_t size) {
  cilk_for (std::size_t i = 0; i < size; i += 8) {
    const std::size_t n = std::min<std::size_t>(size - i, 8);
    kernel(result[i:n], a[i:n], b[i:n]);
  }
}
* For example, the remainder could also be handled separately (outside of the loop).

36 Intel Threading Runtimes: Summary. Open sourced: Intel Threading Building Blocks, Intel Cilk Plus, Intel OpenMP*.

37 Agenda: Introduction; Parallel Patterns Today; What about HPC?; Summary

38 Intel TBB*: Affinity Partitioner
Applicable to parallel_for, parallel_reduce, and parallel_scan.
void sum(int* result, const int* a, const int* b, std::size_t size) {
  static affinity_partitioner partitioner;
  parallel_for(
    blocked_range<std::size_t>(0, size),
    [=](const blocked_range<std::size_t>& r) {
      for (std::size_t i = r.begin(); i != r.end(); ++i) {
        result[i] = a[i] + b[i];
      }
    },
    partitioner);
}
* That's not HPC!!! Well, that's no religion; and yes, it's C++, and people use it for HPC.

39 OpenMP 4.0: proc_bind clause
Give the system more information about the mapping needed at each parallel region.
void fft_func(...) {
  nthr = mkl_domain_get_max_threads(MKL_FFT);
  #pragma omp parallel num_threads(nthr) proc_bind(spread)
  {
    // do an FFT
  }
}
void blas_func(...) {
  nthr = mkl_domain_get_max_threads(MKL_BLAS);
  #pragma omp parallel num_threads(nthr) proc_bind(close)
  {
    // execute a BLAS routine
  }
}
Summary: gives you the affinity you're looking for; actual implementations may currently be expensive; it is already supported in the Intel Compiler 14.0, so you can play with it right away!

40 Hybrid Parallelism: MPI + Threading
Pinning and affinity: I_MPI_PIN_DOMAIN=socket, KMP_AFFINITY=compact,1
Intel Xeon Phi: MPI + offload; MPI symmetric model; host/coprocessor only
mpiexec.hydra $* \
  -host $HOST -n 1 \
  -env I_MPI_PIN_DOMAIN=socket \
  -env OMP_NUM_THREADS=2 \
  -env KMP_AFFINITY=compact,1 \
  $ROOT/compile/linux-isc-xeon-mpi${OGL}/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
  : \
  -host $HOST -n $RNKS \
  -env I_MPI_PIN_DOMAIN=socket \
  -env OMP_NUM_THREADS=30 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=compact,1 \
  $ROOT/compile/linux-isc-xeon-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
  : \
  -host mic0 -n $MICR \
  -env LD_LIBRARY_PATH=$MIC_LD_LIBRARY_PATH \
  -env I_MPI_PIN_DOMAIN=node \
  -env OMP_NUM_THREADS=224 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=balanced \
  -env KMP_PLACE_THREADS=${MICT}T \
  $ROOT/compile/linux-isc-mic-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
  : \
  -host mic1 -n $MICR \
  -env LD_LIBRARY_PATH=$MIC_LD_LIBRARY_PATH \
  -env I_MPI_PIN_DOMAIN=node \
  -env OMP_NUM_THREADS=224 \
  -env OMP_SCHEDULE=dynamic \
  -env KMP_AFFINITY=balanced \
  -env KMP_PLACE_THREADS=${MICT}T \
  $ROOT/compile/linux-isc-mic-mpi/tachyon $ROOT/scenes/teapot.dat \
  -camfile $ROOT/scenes/teapot.cam \
  -nosave \
  $NULL
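For context, a minimal hybrid MPI + OpenMP skeleton in C++ (a sketch, not from the slides): one MPI rank per pinning domain with OpenMP threads inside each rank, while the actual pinning is controlled externally via I_MPI_PIN_DOMAIN and KMP_AFFINITY as shown above.
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
  int provided = 0;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  #pragma omp parallel
  {
    // Each thread of each rank reports where it runs; real work goes here.
    #pragma omp critical
    std::printf("rank %d/%d thread %d/%d\n",
                rank, size, omp_get_thread_num(), omp_get_num_threads());
  }
  MPI_Finalize();
  return 0;
}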

41 Agenda: Introduction; Parallel Patterns Today; What about HPC?; Summary

42 Summary. Prerequisites for composing and nesting parallelism: tasks in addition to (or instead of) plain threads, and a runtime with a dynamic scheduler (nesting!). Challenges: scheduling and load balancing; determinism and reproducibility.

43

44 Legal Disclaimer & Optimization Notice INFORMATION IN THIS DOCUMENT IS PROVIDED AS IS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Copyright 2013, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries. Optimization Notice Intel s compilers may or may not optimize to the same degree for non-intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision # Copyright 2013, 2012, Intel Corporation. All rights reserved.
