Native Computing and Optimization on Intel Xeon Phi


1 Native Computing and Optimization on Intel Xeon Phi ISC 2015 Carlos Rosales

2 Overview Why run native? What is a native application? Building a native application Running a native application Setting affinity and pinning tasks Optimization Vectorization Alignment Parallelization Profiling with VTune

3 What is a native application? It is an application built to run exclusively on the MIC coprocessor. MIC is not binary compatible with the host processor: the instruction set is similar to Pentium, but not all 64-bit scalar extensions are included. MIC has 512-bit vector extensions, but does NOT have MMX, SSE, or AVX extensions. Native applications can't be used on the host CPU, and host binaries can't run on the MIC.

4 Why run a native application? It is possible to log in and run applications on the MIC without any host intervention. It is an easy way to get acquainted with the properties of the MIC: performance studies, single-card scaling tests (OMP/MPI), and no issues with data exchange with the host. The native code will probably perform quite well on the host CPU once you build a host version.

5 Building a native application Cross-compile on the host (login or compute nodes); no compilers are installed on the coprocessors. MIC is fully supported by the Intel C/C++ and Fortran compilers (v13+):
icc -openmp -mmic mysource.c -o myapp.mic
ifort -openmp -mmic mysource.f90 -o myapp.mic
The -mmic flag causes the compiler to generate a native MIC executable. It is convenient to use a .mic extension to differentiate MIC executables.
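As a quick check of the toolchain, a minimal OpenMP test program (an illustrative sketch, not part of the original slides) can be cross-compiled with the commands above and run on the coprocessor:

/* hello_mic.c -- minimal native OpenMP test (illustrative sketch)
 * build: icc -openmp -mmic hello_mic.c -o hello.mic
 * run:   ssh mic0 ./hello.mic   (or ./hello.mic via the implicit launcher)
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        /* Each thread reports its ID; the thread count is controlled by
           (MIC_)OMP_NUM_THREADS as described on the following slides. */
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}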

6 Running a native application on Stampede Options to run on mic0 from a compute node:
1. Traditional ssh remote command execution: c% ssh mic0 ls (clumsy if environment variables or directory changes are needed)
2. Interactive login to the mic: c% ssh mic0, then use it as a normal server
3. Explicit launcher: c% micrun ./a.out.mic
4. Implicit launcher: c% ./a.out.mic

7 Native Application Launcher The micrun launcher has three nice features: It propagates the current working directory. It propagates the shell environment (with translation): environment variables that need to be different on host and coprocessor must be defined using the MIC_ prefix on the host, e.g.
c% export MIC_OMP_NUM_THREADS=183
c% export MIC_KMP_AFFINITY="verbose,balanced"
It propagates the command return code back to the host shell. These features work whether the launcher is used explicitly or implicitly.

8 Environment Variables on the MIC If you ssh to mic0 and run directly from the card, use the regular names: OMP_NUM_THREADS, KMP_AFFINITY, I_MPI_PIN_PROCESSOR_LIST. If you use the launcher, use the MIC_ prefix to define them on the host: MIC_OMP_NUM_THREADS, MIC_KMP_AFFINITY, MIC_I_MPI_PIN_PROCESSOR_LIST. You can also define a different prefix:
export MIC_ENV_PREFIX=MYMIC
MYMIC_OMP_NUM_THREADS=...

9 Native Execution Quirks The MIC runs a lightweight version of Linux. Some tools are missing: w, numactl. Some tools have reduced functionality: ps. Relatively few libraries have been ported to the coprocessor environment. The environment needs to be propagated to the coprocessor; this makes the implicit or explicit launcher approach even more important.

10 Best Practices For Running Native Apps Always bind processes to cores. For MPI tasks (more on this in the next presentation): I_MPI_PIN, I_MPI_PIN_MODE, I_MPI_PIN_PROCESSOR_LIST. For threads: KMP_AFFINITY={compact, scatter, balanced} or KMP_AFFINITY=explicit,proclist=[0,1,2,3,4]. Adding verbose will dump the full affinity information when the run starts. Adding granularity=fine binds to specific thread contexts and may help in codes with heavy L1 cache reuse. The MIC is a single chip, so there is no need for numactl. If other affinity options can't be used, the taskset command is available. A verification sketch follows below.
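One quick way to confirm that the binding took effect is to have each OpenMP thread report the logical CPU it is running on; a minimal sketch, assuming the glibc sched_getcpu() call is available in the coprocessor's Linux environment:

/* where_am_i.c -- print the logical CPU each OpenMP thread runs on
 * build: icc -openmp -mmic where_am_i.c -o where.mic
 * run:   MIC_KMP_AFFINITY=verbose,balanced ./where.mic
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        /* sched_getcpu() returns the OS logical processor number, which maps
           to a (core, thread context) pair as shown on the next slides. */
        printf("OpenMP thread %3d running on logical CPU %3d\n",
               omp_get_thread_num(), sched_getcpu());
    }
    return 0;
}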

11 KMP_AFFINITY Example (figure: diagram comparing compact, balanced, and scatter thread placement across cores)

12 Logical to Physical Processor Mapping Hardware: 61 physical cores, each with 4 thread contexts, for 244 logical cores (0-243). The mapping is not what you are used to!
OS proc 1 maps to package 0 core 0 thread 0
OS proc 2 maps to package 0 core 0 thread 1
OS proc 3 maps to package 0 core 0 thread 2
OS proc 4 maps to package 0 core 0 thread 3
OS proc 5 maps to package 0 core 1 thread 0
[...]
OS proc 0 maps to package 0 core 60 thread 0
OS proc 241 maps to package 0 core 60 thread 1
OS proc 242 maps to package 0 core 60 thread 2
OS proc 243 maps to package 0 core 60 thread 3
OpenMP threads start binding at logical core 1, not logical core 0. For the compact mapping, 240 OpenMP threads are mapped to the first 60 cores, so there is no contention for the core containing logical core 0 (the core that the O/S uses most). For the scatter and balanced mappings, contention for logical core 0 begins at 61 threads. There is not much performance impact unless the O/S is very busy, but it is best to avoid core 60 for native and symmetric jobs. A small helper that captures this mapping is sketched below.
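The pattern above can be captured in a small helper; a sketch, assuming the Stampede KNC mapping shown on this slide (61 cores, 4 thread contexts per core, OS proc 0 on the last core):

/* knc_map.c -- convert an OS logical processor number to (core, thread)
 * following the mapping listed above
 * build: icc -std=c99 knc_map.c -o knc_map */
#include <stdio.h>

static void os_proc_to_core_thread(int proc, int *core, int *thread)
{
    if (proc == 0) {                 /* OS proc 0 is core 60, thread 0 */
        *core = 60; *thread = 0;
    } else if (proc >= 241) {        /* procs 241-243 fill the rest of core 60 */
        *core = 60; *thread = proc - 240;
    } else {                         /* procs 1-240 fill cores 0-59, four at a time */
        *core = (proc - 1) / 4;
        *thread = (proc - 1) % 4;
    }
}

int main(void)
{
    int core, thread;
    for (int proc = 0; proc < 244; proc++) {
        os_proc_to_core_thread(proc, &core, &thread);
        printf("OS proc %3d -> core %2d thread %d\n", proc, core, thread);
    }
    return 0;
}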

13 How Do I Tune Native Applications? Vectorization and parallelization are critical! Single-thread scalar performance is roughly that of a ~1 GHz Pentium. The vector width is 512 bits: 8 double precision values / 16 single precision values, so you don't want to lose factors of 8-16 in performance. Compiler reports provide important information about how effective the compiler was at vectorization. Start with a simple code; the compiler reports can be very long and hard to follow, and there are lots of options and reports. Details at: /composerxe/compiler/cpp-lin/index.htm

14 Vectorization Reports New syntax for Intel 15 and above:
Set the optimization report level: -opt-report=0-5
Set the optimization report phase: -opt-report-phase=vec
Set the optimization report output destination: -opt-report-file=stdout
The default optimization report verbosity level is 2, and by default the optimization report output is saved in a file with the extension .optrpt. For Intel 13 you can still use -vec-report#, where # is a number from 1 to 7 in increasing level of verbosity.

15 Vectorization Report Characteristics Increasing verbosity levels provide more detailed diagnostic information about every loop, including: loops successfully vectorized, loops not vectorized and the reasons why, specific dependency information for failures to vectorize, array alignment for each loop, unrolling depth for each loop, and the estimated speedup factor for each vectorized loop. Quirks: for Intel 13 most/all of the vectorization messages are repeated with the line number of the call site; ignore these and look at the messages with the line number of the actual loop. The reported reasons for not vectorizing are not very helpful; look at the specific dependency info and remember the C aliasing rules. This has improved significantly with Intel 15.
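The report excerpts on the next slides refer to a small test code, vector.c, whose source is not reproduced here. A minimal sketch of a code that would produce similar reports (the loop line numbers and array sizes are illustrative assumptions, not the original source):

/* vector.c (illustrative reconstruction, not the original source)
 * build: icc -mmic -O3 -std=c99 -opt-report=5 -opt-report-phase=vec vector.c -o vector.mic */
#include <stdlib.h>
#include <stdio.h>
#define N 4096

int main(int argc, char **argv)
{
    static float x[N], y[N], z[N];

    for (int i = 0; i < N; i++) x[i] = rand() / (float)RAND_MAX;  /* not vectorized: rand() call */
    for (int i = 0; i < N; i++) y[i] = rand() / (float)RAND_MAX;  /* not vectorized: rand() call */

    for (int i = 1; i < N; i++)
        y[i] = y[i-1] + x[i];            /* assumed FLOW dependence on y prevents vectorization */

    for (int rep = 0; rep < 100; rep++)  /* outer loop: "inner loop was already vectorized" */
        for (int i = 0; i < N; i++)
            z[i] = x[i] + y[i];          /* LOOP WAS VECTORIZED */

    printf("%f\n", z[N/2]);
    return 0;
}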

16 Vectorization Report Example
Begin optimization report for: main(int, char **)
Report from: Vector optimizations [vec]
LOOP BEGIN at ./vector.c(34,2)
  remark #15527: loop was not vectorized: function call to rand(void) cannot be vectorized [./vector.c(34,33)]
LOOP END
LOOP BEGIN at ./vector.c(35,2)
  remark #15527: loop was not vectorized: function call to rand(void) cannot be vectorized [./vector.c(35,33)]
LOOP END
(Explicit reasons for lack of vectorization)
LOOP BEGIN at ./vector.c(38,5)
  remark #15344: loop was not vectorized: vector dependence prevents vectorization. First dependence is shown below. Use level 5 report for details
  remark #15346: vector dependence: assumed FLOW dependence between line 38 and line 38
LOOP END
(Guidance for how to get additional information)
LOOP BEGIN at ./vector.c(43,2)
  remark #15542: loop was not vectorized: inner loop was already vectorized
  LOOP BEGIN at ./vector.c(45,3)
    remark #15300: LOOP WAS VECTORIZED
  LOOP END
LOOP END

17 Detailed Vectorization Report
Report from: Vector optimizations [vec]
LOOP BEGIN at ./vector.c(38,5)
  remark #15344: loop was not vectorized: vector dependence prevents vectorization
  remark #15346: vector dependence: assumed FLOW dependence between y line 38 and y line 38
LOOP END
(Variable with dependency mentioned explicitly)
LOOP BEGIN at ./vector.c(45,3)
  remark #15388: vectorization support: reference x has aligned access [./vector.c(46,4)]
  remark #15388: vectorization support: reference y has aligned access [./vector.c(46,4)]
  remark #15388: vectorization support: reference z has aligned access [./vector.c(46,4)]
  remark #15399: vectorization support: unroll factor set to 8
  remark #15300: LOOP WAS VECTORIZED
(Alignment and unroll factor mentioned explicitly)
  remark #15475: --- begin vector loop cost summary ---
  remark #15476: scalar loop cost: 11
  remark #15477: vector loop cost:
  remark #15478: estimated potential speedup:
  remark #15479: lightweight vector operations: 5
  remark #15488: --- end vector loop cost summary ---
LOOP END
(Loop cost and estimated speedup)

18 Detailed Vectorization Report (Host)
Report from: Vector optimizations [vec]
LOOP BEGIN at ./vector.c(38,5)
  remark #15344: loop was not vectorized: vector dependence prevents vectorization
  remark #15346: vector dependence: assumed FLOW dependence between y line 38 and y line 38
LOOP END
LOOP BEGIN at ./vector.c(45,3)
  remark #15388: vectorization support: reference x has aligned access [./vector.c(46,4)]
  remark #15388: vectorization support: reference y has aligned access [./vector.c(46,4)]
  remark #15388: vectorization support: reference z has aligned access [./vector.c(46,4)]
  remark #15399: vectorization support: unroll factor set to 4
  remark #15300: LOOP WAS VECTORIZED
  remark #15448: unmasked aligned unit stride loads: 2
  remark #15449: unmasked aligned unit stride stores: 1
  remark #15475: --- begin vector loop cost summary ---
  remark #15476: scalar loop cost: 11
  remark #15477: vector loop cost:
  remark #15478: estimated potential speedup:
  remark #15479: lightweight vector operations: 4
  remark #15480: medium-overhead vector operations: 1
  remark #15488: --- end vector loop cost summary ---
LOOP END
(Unroll set to 4, not 8, because of AVX's 256-bit width; memory access information; much lower potential speedup)

19 OpenMP complicates things Remember that OpenMP will split loops in ways that can break 64-Byte alignment. Alignment depends on the thread count, so multiple loop versions are generated by the compiler:
LOOP BEGIN at ./vector_omp.c(44,2)
  remark #15542: loop was not vectorized: inner loop was already vectorized
  LOOP BEGIN at ./vector_omp.c(46,3)
  <Peeled>
  LOOP END
  LOOP BEGIN at ./vector_omp.c(46,3)
    remark #15300: LOOP WAS VECTORIZED
  LOOP END
  LOOP BEGIN at ./vector_omp.c(46,3)
  <Alternate Alignment Vectorized Loop>
  LOOP END
  LOOP BEGIN at ./vector_omp.c(46,3)
  <Remainder>
    remark #15301: REMAINDER LOOP WAS VECTORIZED
  LOOP END
LOOP END
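Alignment is easier for the compiler to exploit when the data is explicitly aligned and the compiler is told about it. A minimal sketch, assuming an OpenMP 4.0-capable compiler such as Intel 15 and 64-byte allocations via the standard posix_memalign call (this is an illustrative example, not code from the slides):

/* aligned_omp.c -- 64-byte aligned arrays with an OpenMP 4.0 simd loop (sketch)
 * build: icc -openmp -mmic -O3 aligned_omp.c -o aligned.mic */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1 << 20;
    float *x, *y, *z;

    /* Allocate on 64-byte boundaries to match the 512-bit vector registers. */
    if (posix_memalign((void **)&x, 64, n * sizeof(float)) ||
        posix_memalign((void **)&y, 64, n * sizeof(float)) ||
        posix_memalign((void **)&z, 64, n * sizeof(float)))
        return 1;

    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The aligned clause asserts that x, y, and z point to 64-byte aligned data,
       so the compiler can use aligned loads/stores and generate fewer
       peeled/alternate-alignment loop versions. */
    #pragma omp parallel for simd aligned(x, y, z : 64)
    for (int i = 0; i < n; i++)
        z[i] = x[i] + y[i];

    printf("%f\n", z[n / 2]);
    free(x); free(y); free(z);
    return 0;
}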

20 Tuning Tools Currently there is no support for gprof when compiling native applications. Profiling is supported by Intel's VTune product, available on Stampede (example coming up!). For MPI parallel applications the Intel Trace Analyzer and Collector (ITAC) supports MIC execution, also available on Stampede (more later).

21 Starting with VTune Prepare the code: compile your code with "-g", and remember to add any optimization flags after the -g flag or you will revert to no optimization. Collect the performance data: load the VTune module with "module load vtune", then get an interactive session or prepare a Slurm batch script. Use the command line collection, for example, for hotspots on MIC:
amplxe-cl -collect knc-hotspots -mrte-mode=native -no-summary -app-working-dir . -- ./myapp

22 Preset Profiling Sets MIC: knc-hotspots, knc-bandwidth, knc-general-exploration. Host: bandwidth, general-exploration, hotspots, advanced-hotspots, snb-memory-access, snb-access-contention, snb-port-saturation, snb-core-port-saturation.

23 Important Notes You probably want to use "-target-duration-type=medium" for MIC collections, as this makes the post-processing much faster. I believe this samples every 100 ms. Use "amplxe-cl --help" to get additional help; there are many options for collection and I will not cover them all in this presentation. You can also generate text-based reports, but you have to do that on the development nodes, since VTune needs access to the license server for that:
amplxe-cl -report summary ./r000hs
You may want to use the Linux "xrandr" command to change the size of the remote desktop that is opened.

24 Plain Text Reports
amplxe-cl -report summary -r ./r000hs
General Exploration Metrics
  Parameter                 r000hs
  CPU Time
  Clockticks
  Instructions Retired
  CPI Rate
  Cache Usage               0.0
  Vectorization Usage       0.0
  TLB Usage                 0.0
Collection and Platform Info
  Parameter                 r000hs
  Application Command Line  ./vec.mic
  User Name                 carlos
  Operating System          Intel MIC Platform Software Stack (Built by Poky 7.0) 3.3 \n \l
  Computer Name             c mic0.stampede.tacc.utexas.edu
  Result Size

25 Plain Text Reports (II)
CPU
  Parameter          r000hs
  Name               Intel(R) Xeon(R) E5 processor
  Frequency
  Logical CPU Count  244
Summary
  Elapsed Time:
  CPU Time:
  Average CPU Usage:
  CPI Rate:
Event summary
  Hardware Event Type    Hardware Event Count:Self    Hardware Event Sample Count:Self    Events Per Sample
  CPU_CLK_UNHALTED
  INSTRUCTIONS_EXECUTED

26 Analyzing the Collected Performance Data Open a new Stampede session in a different terminal and establish a VNC session. Find out how (if you have not done it before) in the Stampede user guide under "remote desktop access". Fire up the VTune GUI and point it to the collected data (typically a directory named r00*):
amplxe-gui ./r000hs

27 Open Collected Data

28 Summary Tab (Hotspots) The CPU usage histogram is a good tool to judge overall device utilization. It looks terrible here because this is serial code!

29 Summary Tab (General Exploration) No binding was specified for this run, so we see some jumping around. Very small case, so 100% L1 hits and no L2 (read) misses.

30 Looking at the Bottlenecks High-level overview; we need to filter the results. The bottom panel allows filtering to our application only (next slide).

31 Filtered Data The filtered view shows only our application's information. Multiple threads are shown separately in the bottom panel for OMP code. Time to look at the source code!

32 Source View Extremely useful and easy to use. Once the hotspots are identified, you can examine the assembly code.

33 Assembly View Selected source lines correspond to shaded assembly lines in the right-hand side panel. Easiest way I know of correlating hotspots with generated assembly!

34 Generating New Projects

35 Specifying the Analysis Type Lots of options here, including a handy command line button at the bottom

36 Command Line Options

37 Performance Tuning Notes Xeon Phi always has multithreading enabled: four thread contexts per physical core, registers are replicated, and the L1D, L1I, and (private, unified) L2 caches are shared among the thread contexts of a core. Vector unit instruction issue limitation: the vector unit can issue one instruction per cycle and the L1D cache can deliver 64 Bytes (1 vector register) every cycle, but a single thread can only issue a vector instruction every other cycle. You therefore need at least two threads per core to fully utilize the vector unit. Using 3-4 threads does not increase the maximum issue rate, but often helps tolerate latency.

38 Intel reference material
Main Software Developers web page:
A list of links to very good training material at:
Many answers can also be found in the Intel forums:
Specific information about building and running native applications:
Debugging:

39 More Intel reference Material Search for these by document number; this is more likely to get the most recent version than searching for the document number via Google. Primary Reference: Intel Xeon Phi Coprocessor System Software Developers Guide (document or ). Advanced Topics: Intel Xeon Phi Coprocessor Instruction Set Architecture Reference Manual (document ); Intel Xeon Phi Coprocessor (codename: Knights Corner) Performance Monitoring Units (document ). WARNING: Intel sometimes describes the number of vector registers as 16 in these documents. The actual number is 32.

40 Questions? For more information:
