Intel Many Integrated Core (MIC) Programming Intel Xeon Phi
1 Intel Many Integrated Core (MIC) Programming Intel Xeon Phi Dmitry Petunin Intel Technical Consultant 1
2 Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED AS IS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. Copyright 2012, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Core, Phi, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
3 Agenda Introduction Data Access Semantics Explicit Offloading Implicit Offloading Examples 3
4 Agenda Introduction Data Access Semantics Explicit Offloading Implicit Offloading Examples 4
5 Architecture Overview (Diagram: ring interconnect of cores, each with an L2 slice, plus GDDR memory controllers, attached to the host via PCIe.) >50 cores, 8GB GDDR5 memory, 8 memory controllers, 16 GDDR5 channels, up to 5.5 GT/s, PCIe Gen2 (client) x16 per direction, ECC 5
6 Architecture Overview Core: instruction decode, scalar unit with scalar registers, vector unit with vector registers, 32k ICache, 32k DCache, 512K L2 cache, 4-thread interleave. Pentium scalar instruction set (x87!), fully functional, in-order operation, full 64-bit addressing, 4 HW threads/core, 2-cycle decoder, 2-issue. Two pipelines: scalar and vector/scalar. 6
7 Architecture Overview ISA/Registers Standard Intel64 registers (EM64T): rax rbx rcx rdx rsi rdi rsp rbp r8–r15. 32 512-bit SIMD registers: zmm0–zmm31. 8 mask registers (16 bit wide): k0 (special, don't use)–k7. No xmm (SSE/128-bit) and ymm (AVX/256-bit) registers! x87 present 7
8 Architecture Overview Vector Instructions Vector Instruction Format 3 operand form with explicit destination register instruction destination, source1, source2 Source registers are not destroyed Very compact code (Most) MIC instructions can be masked instruction destination {mask}, source1, source2 Result of masking is non-destructive, i.e. destination values are preserved Example: vaddps zmm1{k1},zmm2,zmm3 dest mask source1 source2 8
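The non-destructive masking described above can be illustrated with a scalar sketch in plain C (this is an emulation of the semantics, not actual KNC intrinsics; the function name `masked_add` is illustrative): lanes whose mask bit is set receive source1 + source2, while lanes whose mask bit is clear keep their previous destination value.

```c
#include <stdint.h>

/* Scalar model of vaddps zmm1{k1}, zmm2, zmm3 over 16 float lanes:
 * lanes whose mask bit is 0 keep their previous destination value
 * (non-destructive masking); source arrays are never modified. */
static void masked_add(float dst[16], uint16_t mask,
                       const float src1[16], const float src2[16]) {
    for (int lane = 0; lane < 16; ++lane) {
        if (mask & (1u << lane))
            dst[lane] = src1[lane] + src2[lane];
        /* else: dst[lane] is preserved */
    }
}
```

Calling `masked_add(dst, 0x0003, a, b)` updates only lanes 0 and 1 and leaves all other destination lanes untouched, which is exactly what the {k1} write-mask does in hardware.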
9 Knights Corner Software Architecture Overview IA Benefit: Wide Range of Development Options (listed from ease of use to fine control)
Parallelization options: Intel Math Kernel Library, MPI*, OpenMP*, Intel Threading Building Blocks, Intel Cilk Plus, OpenCL*, Pthreads*
Vector options: Intel Math Kernel Library, auto vectorization, semi-auto vectorization (#pragma vector, ivdep, simd), Intel Cilk Plus Array Notation, C/C++ vector classes (F32vec16, F64vec8), OpenCL*, intrinsics
10 Agenda Introduction Data Access Semantics Explicit Offloading Implicit Offloading Examples 10
11 Spectrum of Programming Models and Mindsets
Multi-core centric (Xeon): Multi-Core Hosted, for general purpose serial and parallel computing; Offload, for codes with highly-parallel phases.
Symmetric: codes with balanced needs.
Many-core centric (MIC): Many-Core Hosted, for highly-parallel codes.
(Diagram: Main(), Foo(), and MPI_*() are placed on the Xeon, on the MIC, or on both, depending on the model.)
Range of models to meet application needs 11
12 Options for Offloading Application Code Intel Parallel Studio XE 2013 for MIC supports three models: Automatic offload o Only triggers offload when a MIC device is present o Offload supported in the MKL library Offload pragmas o Only trigger offload when a MIC device is present o Safely ignored by non-MIC compilers Offload keywords o Only trigger offload when a MIC device is present o Language extensions; need conditional compilation to be ignored Offloading and parallelism are orthogonal: offloading only transfers control to the MIC device(s); parallelism needs to be exploited by a second model (e.g. OpenMP*) 12
13 Automatic offload with Math Kernel Library Intel Math Kernel Library (Intel MKL) void foo() /* Intel Math Kernel Library */ { float *A, *B, *C; /* Matrices */ sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N); } Implicit automatic offloading requires no code changes: simply link with the offload MKL library. The Intel High Performance Math Kernel Library is applicable to both multicore (Intel Xeon processor) and many-core (Intel Xeon Phi coprocessor) programming 13
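Automatic offload is typically switched on and tuned through environment variables rather than code. The variable names below are taken from the Intel MKL documentation, but the exact set and defaults vary by MKL version, so treat this as a hedged sketch of a typical configuration:

```shell
# Enable MKL automatic offload for eligible routines (e.g. sgemm)
export MKL_MIC_ENABLE=1
# Optionally restrict offload to specific coprocessor cards
export OFFLOAD_DEVICES=0,1
# Optionally fix the fraction of work sent to the coprocessor(s)
export MKL_MIC_WORKDIVISION=0.8
```

With `MKL_MIC_ENABLE` unset or 0, the same binary runs entirely on the host, which is what makes automatic offload a zero-code-change model.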
14 Heterogeneous Compiler Data Transfer Overview The host CPU and the Intel Xeon Phi coprocessor do not share physical or virtual memory in hardware. Two offload data transfer models are available:
1. Explicit copy o Programmer designates variables that need to be copied between host and card in the offload directive o Syntax: pragma/directive-based o C/C++ example: #pragma offload target(mic) in(data:length(size)) o Fortran example: !dir$ offload target(mic) in(a1:length(size))
2. Implicit copy o Programmer marks variables that need to be shared between host and card o The same variable can then be used in both host and coprocessor code o Runtime automatically maintains coherence at the beginning and end of offload statements o Syntax: keyword-extension based o Example: _Cilk_shared double foo; _Offload func(y); 14
15 Agenda Memory Basics Data Access Semantics Explicit Offloading Implicit Offloading Programming Examples 15
16 Heterogeneous Compiler Offload using Explicit Copies Data Movement Default treatment of in/out variables in a #pragma offload statement. At the start of an offload: o Space is allocated on the coprocessor o in variables are transferred to the coprocessor At the end of an offload: o out variables are transferred from the coprocessor o Space for both types (as well as inout) is deallocated on the coprocessor (Diagram, for #pragma offload inout(pa:length(n)): 1 Allocate on MIC, 2 Copy over, 3 execute the offloaded code, 4 Copy back, 5 Free.) 16
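The default in/out treatment above can be sketched as a minimal function (the function name `scale` and its parameters are illustrative, not from the deck). `in` copies the source array to the card at the start of the offload and `out` copies the result back at the end; a compiler that does not know #pragma offload simply ignores it and runs the loop on the host.

```c
/* Minimal explicit-copy sketch: `a` is an input-only buffer, `b` is
 * output-only. On an Intel compiler targeting MIC, the runtime
 * allocates both on the card, copies `a` over, runs the loop there,
 * copies `b` back, and frees the card-side storage. On other
 * compilers the pragma is ignored and the loop runs on the host. */
void scale(const float *a, float *b, int n, float factor) {
    #pragma offload target(mic) in(a:length(n)) out(b:length(n))
    for (int i = 0; i < n; ++i)
        b[i] = factor * a[i];
}
```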
17 Heterogeneous Compiler Offload using Explicit Copies Modifier Example
float reduction(float *data, int numberof) {
  float ret = 0.f;
  #pragma offload target(mic) in(data:length(numberof))
  {
    #pragma omp parallel for reduction(+:ret)
    for (int i = 0; i < numberof; ++i)
      ret += data[i];
  }
  return ret;
}
Note: this copies numberof elements to the coprocessor, not numberof*sizeof(float) bytes; the compiler knows data's type 17
18 Heterogeneous Compiler Offload using Explicit Copies C/C++ syntax and semantics:
Offload pragma: #pragma offload <clauses> <statement block> allows the next statement block to execute on Intel MIC Architecture or the host CPU
Keyword for variable & function definitions: __attribute__((target(mic))) compiles a function for, or allocates a variable on, both the CPU and Intel MIC Architecture
Entire blocks of code: #pragma offload_attribute(push, target(mic)) ... #pragma offload_attribute(pop) marks entire files or large blocks of code for generation on both the host CPU and Intel MIC Architecture
Data transfer: #pragma offload_transfer target(mic) initiates asynchronous data transfer, or initiates and completes synchronous data transfer
Synchronization: #pragma offload_wait signal(signal_slot) waits for asynchronous offload processes to complete 18
19 Heterogeneous Compiler Offload using Explicit Copies Fortran syntax and semantics:
!dir$ omp offload <clause> <OpenMP construct> executes the next OpenMP* parallel construct on Intel MIC Architecture
!dir$ offload <clauses> <statement> executes the next statement (function call) on Intel MIC Architecture
!dir$ attributes offload:<mic> :: <rtnname> compiles a function or variable for the CPU and Intel MIC Architecture
!dir$ offload_transfer target(mic) initiates asynchronous data transfer, or initiates and completes synchronous data transfer 19
20 Heterogeneous Compiler Offload using Explicit Copies Clauses Variables and pointers are restricted to scalars, structs of scalars, and arrays of scalars.
Target specification: target(name[:card_number]) where to run the construct
Conditional offload: if (condition) Boolean expression
Inputs: in(var-list [modifiers]) copy from host to coprocessor
Outputs: out(var-list [modifiers]) copy from coprocessor to host
Inputs & outputs: inout(var-list [modifiers]) copy host to coprocessor and back when the offload completes
Non-copied data: nocopy(var-list [modifiers]) data is local to the target
Async offload: signal(signal-slot) trigger async offload
Async offload: wait(signal-slot) wait for completion 20
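Several of these clauses combine naturally in one pragma. The sketch below (function name `square_inplace` and the 4096-element threshold are illustrative) uses target(mic:0) to pick a card, if(...) to offload only when the problem is large enough to amortize the transfer, and inout to copy the buffer both ways; when the condition is false, or the pragma is ignored, the loop runs on the host:

```c
/* Conditional offload sketch: offload to card 0 only for large
 * problems; `data` is copied to the card before the loop and back
 * afterwards (inout). Small problems, or builds where the pragma is
 * unknown, execute the loop directly on the host. */
void square_inplace(float *data, int n) {
    #pragma offload target(mic:0) if(n > 4096) inout(data:length(n))
    for (int i = 0; i < n; ++i)
        data[i] = data[i] * data[i];
}
```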
21 Heterogeneous Compiler Offload using Explicit Copies Modifiers Variables and pointers are restricted to scalars, structs of scalars, and arrays of scalars.
Specify pointer length: length(element-count-expr) copy N elements of the pointer's type
Control pointer memory allocation: alloc_if(condition) allocate memory to hold the data referenced by the pointer if condition is TRUE
Control freeing of pointer memory: free_if(condition) free the memory used by the pointer if condition is TRUE
Control target data alignment: align(expression) specify minimum memory alignment on the target 21
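The main use of alloc_if/free_if is to keep a card-side buffer alive across several offloads instead of reallocating and retransferring it each time. A hedged sketch (the names `persistent_offload`, `buf`, `cnt`, and `passes` are illustrative): allocate once with alloc_if(1) free_if(0), reuse with alloc_if(0) free_if(0), and free at the end with free_if(1). On a host-only build the pragmas are ignored and the loop body just runs in place.

```c
/* Buffer-reuse sketch: one card-side allocation serves `passes`
 * offloads. Each offload copies the buffer both ways (inout) but
 * neither allocates nor frees the card-side storage. */
void persistent_offload(float *buf, int cnt, int passes) {
    /* allocate card-side storage only; transfer nothing yet */
    #pragma offload_transfer target(mic:0) nocopy(buf:length(cnt) alloc_if(1) free_if(0))
    for (int p = 0; p < passes; ++p) {
        /* reuse the existing allocation and keep it afterwards */
        #pragma offload target(mic:0) inout(buf:length(cnt) alloc_if(0) free_if(0))
        for (int i = 0; i < cnt; ++i)
            buf[i] += 1.0f;
    }
    /* release card-side storage; transfer nothing */
    #pragma offload_transfer target(mic:0) nocopy(buf:length(cnt) alloc_if(0) free_if(1))
}
```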
22 Agenda Memory Basics Data Access Semantics Explicit Offloading Implicit Offloading Programming Examples 22
23 Heterogeneous Compiler Offload using Implicit Copies A section of memory is maintained at the same virtual address on both the host and the Intel MIC Architecture coprocessor. Reserving the same address range on both devices allows: seamless sharing of complex pointer-containing data structures; elimination of user marshaling and data management; use of simple language extensions to C/C++. (Diagram: host memory and coprocessor memory reserve the same address range for the shared data used by the offload code.) 23
24 Heterogeneous Compiler Offload using Implicit Copies When shared memory is synchronized: automatically, around offloads (so memory is only synchronized on entry to, or exit from, an offload call); only modified data is transferred between the CPU and the coprocessor. Dynamic memory you wish to share must be allocated with special functions: _Offload_shared_malloc, _Offload_shared_aligned_malloc, _Offload_shared_free, _Offload_shared_aligned_free. This allows transfer of C++ objects; pointers are no longer an issue when they point to shared data. Well-known methods can be used to synchronize access to shared data and prevent data races within offloaded code, e.g. locks, critical sections, etc. This model is integrated with the Intel Cilk Plus parallel extensions. Note: not supported for Fortran; available for C/C++ only 24
25 Heterogeneous Compiler Implicit: Offloading using _Offload Example
// Shared variable declaration for pi
_Cilk_shared float pi;
// Shared function declaration for compute
_Cilk_shared void compute_pi(int count) {
  int i;
  #pragma omp parallel for reduction(+:pi)
  for (i = 0; i < count; i++) {
    float t = (float)((i + 0.5f) / count);
    pi += 4.0f / (1.0f + t*t);
  }
}
void findpi() {
  int count = 10000;
  // Initialize shared global variables
  pi = 0.0f;
  // Compute pi on target
  _Offload compute_pi(count);
  pi /= count;
} 25
26 Heterogeneous Compiler Keyword _Cilk_shared for Data/Functions
Function: int _Cilk_shared f(int x) { return x+1; } versions are generated for both CPU and card; may be called from either side
Global: _Cilk_shared int x = 0; visible on both sides
File/Function static: static _Cilk_shared int x; visible on both sides, only to code within the file/function
Class: class _Cilk_shared x { }; class methods, members, and operators are available on both sides
Pointer to shared data: int _Cilk_shared *p; p is local (not shared), can point to shared data
A shared pointer: int *_Cilk_shared p; p is shared; should only point at shared data
Entire blocks of code: #pragma offload_attribute(push, _Cilk_shared) ... #pragma offload_attribute(pop) marks entire files or large blocks of code _Cilk_shared using this pragma 26
27 Heterogeneous Compiler Implicit: Offloading using _Offload
Offloading a function call: x = _Offload func(y); func executes on the coprocessor if possible. x = _Offload_to(card_number) func(y); func must execute on the specified coprocessor
Offloading asynchronously: x = _Cilk_spawn _Offload func(y); non-blocking offload
Offloading a parallel for-loop: _Offload _Cilk_for(i=0; i<n; i++) { a[i] = b[i] + c[i]; } the loop executes in parallel on the target; the loop is implicitly outlined as a function call 27
28 Heterogeneous Compiler Command-line Options Offload-specific arguments to the Intel Compiler:
Generate host+coprocessor code: -offload-build (deprecated; offload is now the default, where previously only host code was generated)
Produce a report of offload data transfers at compile time (not runtime): -opt-report-phase:offload
Add Intel MIC Architecture compiler switches: -offload-copts="switches"
Add Intel MIC Architecture archiver switches: -offload-aropts="switches"
Add Intel MIC Architecture linker switches: -offload-ldopts="switches"
Example: icc -g -O2 -mkl -offload-build -offload-copts="-g -O3" -offload-ldopts="-L/opt/intel/composerxe_mic/mkl/lib/mic" foo.c 28
29 Module Outline Memory Basics Data Access Semantics Explicit Offloading Implicit Offloading Examples 29
30 Example 1: Using MKL for offloading Lapack and Blas routines
int main() {
  // initialize variables
  #pragma offload target(mic) in(transa, transb, N, alpha, beta) \
    in(A:length(matrix_elements)) in(B:length(matrix_elements)) \
    inout(C:length(matrix_elements))
  {
    sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
  }
  // continue code
}
sgemm performs C = beta*C + alpha*A*B; transa and transb regulate the transposition of A and B, and the Ns define the sizes of the matrices (see documentation). C is input and output; all others are input only. MKL will automatically make optimal use of MIC 30
31 Example 2: Simultaneous computation on host and accelerator (Host: prework, compute workload in parallel, postwork; Target: compute workload in parallel.) When using a straight #pragma offload, the host blocks until completion of the offloaded region or function. To obtain maximum performance, it is necessary to keep the host working while the offload computes. 31
32 Example 2: Simultaneous computation on host and accelerator - OpenMP
double __attribute__((target(mic))) myworkload(double input) {
  // do something useful here
  return result;
}
int main(void) {
  // ... initialize variables
  #pragma omp parallel sections
  {
    #pragma omp section
    {
      #pragma offload target(mic)
      result1 = myworkload(input1);
    }
    #pragma omp section
    result2 = myworkload(input2);
  }
}
The function is generated for both MIC and CPU. Two threads are created in an OpenMP sections environment: one thread executes the offload code on MIC, the other executes the same function on the host 32
33 Example 2: Simultaneous computation on host and accelerator - Cilk
_Cilk_shared double myworkload(double input) {
  // do something useful here
  return result;
}
int main() {
  result1 = _Cilk_spawn _Offload myworkload(input2);
  result2 = myworkload(input1);
  _Cilk_sync;
}
The function is generated for both MIC and CPU. One thread is spawned and executes the offload code on MIC; the host executes the same function and then waits 33
34 Example 3: Asynchronous Transfer & Double Buffering Overlap computation and communication; generalizes to data domain decomposition. (Diagram: after pre-work on the host, each iteration sends the next data block to the target while the target processes the previous block; the last iteration only processes.) 34
35 Example 3 Using Signals
This does nothing except allocate an array:
#pragma offload_transfer target(mic:0) \
  nocopy(in1:length(cnt) alloc_if(1) free_if(0))
Start an asynchronous transfer, tracked by signal in1:
#pragma offload_transfer target(mic:0) in(in1:length(cnt) alloc_if(0) free_if(0)) signal(in1)
Start once the completion of the transfer of in1 is signaled:
#pragma offload target(mic:0) nocopy(in1) wait(in1) \
  out(res1:length(cnt) alloc_if(0) free_if(0))
This does nothing except free an array:
#pragma offload_transfer target(mic:0) \
  nocopy(in1:length(cnt) alloc_if(0) free_if(1)) 35
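The four snippets above combine into one allocate / async-send / wait-and-compute / free flow. A hedged, self-contained sketch (the helper `process` and the function name `offload_with_signal` are illustrative; with the pragmas ignored on a host-only compiler, the data is simply processed in place):

```c
/* Illustrative stand-in for the real computation. */
static void process(const float *in, float *out, int cnt) {
    for (int i = 0; i < cnt; ++i)
        out[i] = 2.0f * in[i];
}

void offload_with_signal(float *in1, float *res1, int cnt) {
    /* 1. allocate card-side storage only */
    #pragma offload_transfer target(mic:0) nocopy(in1:length(cnt) alloc_if(1) free_if(0))
    /* 2. asynchronous transfer, tracked by signal(in1) */
    #pragma offload_transfer target(mic:0) in(in1:length(cnt) alloc_if(0) free_if(0)) signal(in1)
    /* 3. compute once the transfer has completed; copy results back */
    #pragma offload target(mic:0) nocopy(in1) wait(in1) out(res1:length(cnt) alloc_if(0) free_if(0))
    process(in1, res1, cnt);
    /* 4. free card-side storage only */
    #pragma offload_transfer target(mic:0) nocopy(in1:length(cnt) alloc_if(0) free_if(1))
}
```

Between steps 2 and 3 the host is free to do other work; that gap is what the double-buffering code on the next slides exploits.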
36 Example 3: Double Buffering I
int main(int argc, char* argv[]) {
  // Allocate & initialize in1, res1, in2, res2 on host
  #pragma offload_transfer target(mic:0) in(cnt) \
    nocopy(in1, res1, in2, res2 : length(cnt) alloc_if(1) free_if(0))
  // Only allocate arrays on card with alloc_if(1), no transfer
  do_async_in();
  #pragma offload_transfer target(mic:0) \
    nocopy(in1, res1, in2, res2 : length(cnt) alloc_if(0) free_if(1))
  // Only free arrays on card with free_if(1), no transfer
  return 0;
} 36
37 Example 3: Double Buffering II
void do_async_in() {
  float lsum;
  int i;
  lsum = 0.0f;
  // Send buffer in1
  #pragma offload_transfer target(mic:0) in(in1 : length(cnt) \
    alloc_if(0) free_if(0)) signal(in1)
  for (i = 0; i < iter; i++) {
    if (i % 2 == 0) {
      // Send buffer in2
      #pragma offload_transfer target(mic:0) if(i != iter - 1) \
        in(in2 : length(cnt) alloc_if(0) free_if(0)) signal(in2)
      // Once in1 is ready (signal!) process in1
      #pragma offload target(mic:0) nocopy(in1) wait(in1) \
        out(res1 : length(cnt) alloc_if(0) free_if(0))
      {
        compute(in1, res1);
      }
      lsum = lsum + sum_array(res1);
    } else { 37
38 Example 3: Double Buffering III
    } else {
      // Send buffer in1
      #pragma offload_transfer target(mic:0) if(i != iter - 1) \
        in(in1 : length(cnt) alloc_if(0) free_if(0)) signal(in1)
      // Once in2 is ready (signal!) process in2
      #pragma offload target(mic:0) nocopy(in2) wait(in2) \
        out(res2 : length(cnt) alloc_if(0) free_if(0))
      {
        compute(in2, res2);
      }
      lsum = lsum + sum_array(res2);
    }
  } // for
  async_in_sum = lsum / (float)iter;
} // do_async_in() 38
40 Module Outline Memory Basics Data Access Semantics Explicit Offloading Implicit Offloading Examples 40
41 Extension SCIF SCIF is the Host-KNC communications backbone. It provides communication capabilities within a single platform (node); low latency, low overhead communication; a uniform API for communication across the host's PCI Express* system busses; and directly exposes DMA capabilities for high-bandwidth transfer. Fully exposed (/usr/include/scif.h) 41
42 SCIF Symmetric Communications Interface The SCIF driver provides a reliable connection-based messaging layer, as well as functionality that abstracts RMA operations. The SCIF API is documented in the Intel MIC SCIF API Reference Manual for User Mode Linux and the Intel MIC SCIF API Reference Manual for Kernel Mode Linux. A common API is exposed for use in both user mode (ring 3) and kernel mode (ring 0), with the exception of slight differences in signature; several functions are available only in user mode, and several only in kernel mode. 42
43 SCIF - Nodes and Ports SCIF node: a physical endpoint in the SCIF network. The host and MIC Architecture devices are SCIF nodes (all cores under a single OS). Each node has a node identifier assigned at boot time. Node IDs are generally based on PCIe discovery order; the host node is always assigned ID 0. SCIF port: a logical destination on a SCIF node. Within a node, a SCIF port may be referred to by its number, a 16-bit integer, similar to an IP port. SCIF port identifier: unique across a SCIF network, comprising both a node identifier and a local port number (analogous to a complete TCP/IP address with port) 43
44 SCIF Connections Functionality
scif_epd_t scif_open(void); create a new endpoint
int scif_bind(scif_epd_t epd, uint16_t pn); bind the endpoint to a port
int scif_listen(scif_epd_t epd, int backlog); set the endpoint to listen
int scif_connect(scif_epd_t epd, struct scif_portid* dst); request a connection to a listening endpoint
int scif_accept(scif_epd_t epd, struct scif_portid* peer, scif_epd_t* newepd, int flags); accept the connection request
int scif_close(scif_epd_t epd); close the connection 44
45 SCIF Opening a connection
Connecting side: epdi = scif_open(); scif_bind(epdi, pm); scif_connect(epdi, (nj, pn))
Listening side: epdj = scif_open(); scif_bind(epdj, pn); scif_listen(epdj, qlen); scif_accept(*nepd, peer) 45
47 Summary The Intel Xeon Phi coprocessor supports native execution and host-centric computing with offloading. The tool chain fully supports the traditional way of (cross-)compiling and optimization for the coprocessor. Programmers can choose between explicit offloading and implicit offloading to best utilize the coprocessor 47
More informationpymic: A Python* Offload Module for the Intel Xeon Phi Coprocessor
* Some names and brands may be claimed as the property of others. pymic: A Python* Offload Module for the Intel Xeon Phi Coprocessor Dr.-Ing. Michael Klemm Software and Services Group Intel Corporation
More informationAlexei Katranov. IWOCL '16, April 21, 2016, Vienna, Austria
Alexei Katranov IWOCL '16, April 21, 2016, Vienna, Austria Hardware: customization, integration, heterogeneity Intel Processor Graphics CPU CPU CPU CPU Multicore CPU + integrated units for graphics, media
More informationJim Cownie, Johnny Peyton with help from Nitya Hariharan and Doug Jacobsen
Jim Cownie, Johnny Peyton with help from Nitya Hariharan and Doug Jacobsen Features We Discuss Synchronization (lock) hints The nonmonotonic:dynamic schedule Both Were new in OpenMP 4.5 May have slipped
More informationInstallation Guide and Release Notes
Intel C++ Studio XE 2013 for Windows* Installation Guide and Release Notes Document number: 323805-003US 26 June 2013 Table of Contents 1 Introduction... 1 1.1 What s New... 2 1.1.1 Changes since Intel
More informationObtaining the Last Values of Conditionally Assigned Privates
Obtaining the Last Values of Conditionally Assigned Privates Hideki Saito, Serge Preis*, Aleksei Cherkasov, Xinmin Tian Intel Corporation (* at submission time) 2016/10/04 OpenMPCon2016 Legal Disclaimer
More informationAgenda. Optimization Notice Copyright 2017, Intel Corporation. All rights reserved. *Other names and brands may be claimed as the property of others.
Agenda VTune Amplifier XE OpenMP* Analysis: answering on customers questions about performance in the same language a program was written in Concepts, metrics and technology inside VTune Amplifier XE OpenMP
More informationInstallation Guide and Release Notes
Intel Parallel Studio XE 2013 for Linux* Installation Guide and Release Notes Document number: 323804-003US 10 March 2013 Table of Contents 1 Introduction... 1 1.1 What s New... 1 1.1.1 Changes since Intel
More informationUsing Intel VTune Amplifier XE for High Performance Computing
Using Intel VTune Amplifier XE for High Performance Computing Vladimir Tsymbal Performance, Analysis and Threading Lab 1 The Majority of all HPC-Systems are Clusters Interconnect I/O I/O... I/O I/O Message
More informationOpenMP * 4 Support in Clang * / LLVM * Andrey Bokhanko, Intel
OpenMP * 4 Support in Clang * / LLVM * Andrey Bokhanko, Intel Clang * : An Excellent C++ Compiler LLVM * : Collection of modular and reusable compiler and toolchain technologies Created by Chris Lattner
More informationMICHAL MROZEK ZBIGNIEW ZDANOWICZ
MICHAL MROZEK ZBIGNIEW ZDANOWICZ Legal Notices and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY
More informationGetting Started with Intel SDK for OpenCL Applications
Getting Started with Intel SDK for OpenCL Applications Webinar #1 in the Three-part OpenCL Webinar Series July 11, 2012 Register Now for All Webinars in the Series Welcome to Getting Started with Intel
More informationIntel Many Integrated Core (MIC) Architecture
Intel Many Integrated Core (MIC) Architecture Karl Solchenbach Director European Exascale Labs BMW2011, November 3, 2011 1 Notice and Disclaimers Notice: This document contains information on products
More informationKevin O Leary, Intel Technical Consulting Engineer
Kevin O Leary, Intel Technical Consulting Engineer Moore s Law Is Going Strong Hardware performance continues to grow exponentially We think we can continue Moore's Law for at least another 10 years."
More informationMore performance options
More performance options OpenCL, streaming media, and native coding options with INDE April 8, 2014 2014, Intel Corporation. All rights reserved. Intel, the Intel logo, Intel Inside, Intel Xeon, and Intel
More informationIntel Parallel Studio XE 2015 Composer Edition for Linux* Installation Guide and Release Notes
Intel Parallel Studio XE 2015 Composer Edition for Linux* Installation Guide and Release Notes 23 October 2014 Table of Contents 1 Introduction... 1 1.1 Product Contents... 2 1.2 Intel Debugger (IDB) is
More informationBei Wang, Dmitry Prohorov and Carlos Rosales
Bei Wang, Dmitry Prohorov and Carlos Rosales Aspects of Application Performance What are the Aspects of Performance Intel Hardware Features Omni-Path Architecture MCDRAM 3D XPoint Many-core Xeon Phi AVX-512
More informationIntel Math Kernel Library Perspectives and Latest Advances. Noah Clemons Lead Technical Consulting Engineer Developer Products Division, Intel
Intel Math Kernel Library Perspectives and Latest Advances Noah Clemons Lead Technical Consulting Engineer Developer Products Division, Intel After Compiler and Threading Libraries, what s next? Intel
More informationUsing Intel Transactional Synchronization Extensions
Using Intel Transactional Synchronization Extensions Dr.-Ing. Michael Klemm Software and Services Group michael.klemm@intel.com 1 Credits The Tutorial Gang Christian Terboven Michael Klemm Ruud van der
More informationEfficiently Introduce Threading using Intel TBB
Introduction This guide will illustrate how to efficiently introduce threading using Intel Threading Building Blocks (Intel TBB), part of Intel Parallel Studio XE. It is a widely used, award-winning C++
More informationIntel Xeon Phi Programmability (the good, the bad and the ugly)
Intel Xeon Phi Programmability (the good, the bad and the ugly) Robert Geva Parallel Programming Models Architect My Perspective When the compiler can solve the problem When the programmer has to solve
More informationCompiling for Scalable Computing Systems the Merit of SIMD. Ayal Zaks Intel Corporation Acknowledgements: too many to list
Compiling for Scalable Computing Systems the Merit of SIMD Ayal Zaks Intel Corporation Acknowledgements: too many to list Takeaways 1. SIMD is mainstream and ubiquitous in HW 2. Compiler support for SIMD
More informationUsing Intel Inspector XE 2011 with Fortran Applications
Using Intel Inspector XE 2011 with Fortran Applications Jackson Marusarz Intel Corporation Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS
More informationGAP Guided Auto Parallelism A Tool Providing Vectorization Guidance
GAP Guided Auto Parallelism A Tool Providing Vectorization Guidance 7/27/12 1 GAP Guided Automatic Parallelism Key design ideas: Use compiler to help detect what is blocking optimizations in particular
More informationIFS RAPS14 benchmark on 2 nd generation Intel Xeon Phi processor
IFS RAPS14 benchmark on 2 nd generation Intel Xeon Phi processor D.Sc. Mikko Byckling 17th Workshop on High Performance Computing in Meteorology October 24 th 2016, Reading, UK Legal Disclaimer & Optimization
More informationIntel Math Kernel Library (Intel MKL) Team - Presenter: Murat Efe Guney Workshop on Batched, Reproducible, and Reduced Precision BLAS Georgia Tech,
Intel Math Kernel Library (Intel MKL) Team - Presenter: Murat Efe Guney Workshop on Batched, Reproducible, and Reduced Precision BLAS Georgia Tech, Atlanta February 24, 2017 Acknowledgements Benoit Jacob
More informationVectorization Advisor: getting started
Vectorization Advisor: getting started Before you analyze Run GUI or Command Line Set-up environment Linux: source /advixe-vars.sh Windows: \advixe-vars.bat Run GUI or Command
More informationBitonic Sorting. Intel SDK for OpenCL* Applications Sample Documentation. Copyright Intel Corporation. All Rights Reserved
Intel SDK for OpenCL* Applications Sample Documentation Copyright 2010 2012 Intel Corporation All Rights Reserved Document Number: 325262-002US Revision: 1.3 World Wide Web: http://www.intel.com Document
- Intel Parallel Studio XE 2011 for Windows*: Installation Guide and Release Notes
- Intel Advisor XE: Vectorization Optimization
- Bitonic Sorting (Intel OpenCL SDK Sample Documentation)
- Intel Xeon Phi Coprocessor: Technical Resources (Intel Xeon Phi Coprocessor Workshop, Pawsey Centre & CSIRO)
- Intel Math Kernel Library (Intel MKL) BLAS (Victor Kostin, Intel MKL Dense Solvers team manager)
- Programming in C++: Exercises ("Programování v C++: cvičení", Michal Brabec)
- Introduction to the Xeon Phi Programming Model (Fabio Affinito, CINECA)
- Intel Xeon Phi Coprocessor: A Guide to Using It on the Cray XC40
- Intel Tools for High Performance Python (Korean-language deck on high-performance Python for data analysis)
- Expressing and Analyzing Dependencies in Your C++ Application (Pablo Reble, Intel)
- Code Modernization and Optimization Using the OpenMP* Programming Model for Threading and SIMD Parallelism
- Intel Direct Sparse Solver for Clusters (Alexander Kalinkin, Anton Anders, Roman Anders)
- Ernesto Su, Hideki Saito and Xinmin Tian, Intel (OpenMPCon 2017)
- Graphics Performance Analyzer for Android
- Mikhail Dvorskiy, Jim Cownie and Alexey Kukanov: the Parallel STL (C++17 algorithms with an execution-policy argument)
- Finding and Fixing Resource Leaks with Intel Inspector XE (tutorial)
- Intel Array Building Blocks: Productivity, Performance, and Portability with Intel Parallel Building Blocks (CERN openlab workshop, 2010)
- Intel Xeon Phi Programming (University of Copenhagen, September 2015)
- Eliminate Threading Errors to Improve Program Stability (Intel Parallel Studio XE thread-checking tutorial)
- Surabhi Jain et al.: presented at the ROME workshop (in conjunction with IPDPS 2018), Vancouver
- Programming for the Intel Many Integrated Core Architecture (James Reinders)
- Manycore Processors: lecture notes defining manycore chips and manycore accelerators
- Sample for OpenCL* and DirectX* Video Acceleration Surface Sharing: User's Guide (Intel SDK for OpenCL* Applications)
- Intel Parallel Studio XE 2015
- Diego Caballero and the Vectorizer Team, Intel (Euro LLVM Developers Meeting, Bristol, April 2018)
- Visualizing and Finding Optimization Opportunities with the Intel Advisor Roofline Feature (Klaus-Dieter Oertel, Intel Software Developer Conference, Frankfurt 2017)
- Stanislav Bratanov, Roman Belenov and Ludmila Pakhomova: Intel Processor Trace
- What's New in VTune Amplifier XE (Naftaly Shalev, Intel)
- Demonstrating Performance Portability of a Custom OpenCL Data Mining Application to the Intel Xeon Phi Coprocessor (Alexander Heinecke et al.)
- Parallel Hybrid Computing (Stéphane Bihan, CAPS)