Intel Math Kernel Library: Perspectives and Latest Advances. Noah Clemons, Lead Technical Consulting Engineer, Developer Products Division, Intel
2 After Compiler and Threading Libraries, what's next? Intel Performance Libraries: where your workload, Intel Integrated Performance Primitives (Intel IPP), and Intel Math Kernel Library (Intel MKL) intersect. The libraries generate hand-tuned Intel AVX code and work with any compiler. They provide functions for common algorithms; thread-safe functions with no blocking calls; multiple-OS support with minimal OS overhead; and a user-replaceable memory allocation mechanism. Harness architecture features (CPU, SIMD) with performance libraries tailored for a wide variety of computational domains.
3 Agenda: What is Intel Math Kernel Library (Intel MKL)? Intel MKL support for the Intel Xeon Phi coprocessor. Intel MKL usage models on Intel Xeon Phi: Automatic Offload, Compiler Assisted Offload, Native Execution. Performance roadmap. Conditional Numerical Reproducibility. Where to find more information.
4 Intel MKL is the industry's leading math library.* Linear Algebra: BLAS, LAPACK, sparse solvers, ScaLAPACK. Fast Fourier Transforms: multidimensional (up to 7D), FFTW interfaces, cluster FFT. Vector Math: trigonometric, hyperbolic, exponential, logarithmic, power/root, rounding. Vector Random Number Generators: congruential, recursive, Wichmann-Hill, Mersenne Twister, Sobol, Niederreiter, non-deterministic. Summary Statistics: kurtosis, variation coefficient, quantiles and order statistics, min/max, variance-covariance. Data Fitting: splines, interpolation, cell search. Supported platforms range from multicore CPUs and the Intel MIC coprocessor to multicore clusters and clusters mixing multicore and many-core nodes. (*Source: 2012 Evans Data North American developer survey.)
5 Intel MKL Support for the Intel Xeon Phi Coprocessor. Intel MKL 11.0 supports the Intel Xeon Phi coprocessor. Heterogeneous computing: takes advantage of both the multicore host and the many-core coprocessors. Optimized for wider (512-bit) SIMD instructions. Flexible usage models: Automatic Offload offers transparent heterogeneous computing; Compiler Assisted Offload allows fine offloading control; native execution uses the coprocessors as independent nodes. Using Intel MKL on Intel Xeon Phi: performance scales from multicore to many-core, familiarity of architecture and programming models, code re-use, faster time-to-market.
6 Usage Models on the Intel Xeon Phi Coprocessor. Automatic Offload: no code changes required; automatically uses both host and target; transparent data transfer and execution management. Compiler Assisted Offload: explicit control of data transfer and remote execution using compiler offload pragmas/directives; can be used together with Automatic Offload. Native Execution: uses the coprocessors as independent nodes; input data is copied to targets in advance.
7 Execution Models. The spectrum runs from multicore-centric (Intel Xeon) to many-core-centric (Intel Xeon Phi): multicore-hosted (general-purpose serial and parallel computing), offload (codes with highly parallel phases), symmetric (codes with balanced needs), and many-core-hosted (highly parallel codes).
8 Automatic Offload (AO). Offloading is automatic and transparent. By default, Intel MKL decides when to offload and how to divide the work between host and targets. Users enjoy host and target parallelism automatically, and can still control work division to fine-tune performance.
9 How to Use Automatic Offload. Using Automatic Offload is easy: call a function, mkl_mic_enable(), or set an environment variable, MKL_MIC_ENABLE=1. What if there is no coprocessor in the system? The code runs on the host as usual, without any penalty.
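As a quick sketch of the two enablement paths above (the variable and function names are as given on the slide):

```shell
# Enable Automatic Offload for everything launched from this shell;
# with no coprocessor present, MKL simply runs on the host.
export MKL_MIC_ENABLE=1

# The equivalent from C code, before the first MKL call:
#   rc = mkl_mic_enable();
echo "MKL_MIC_ENABLE=$MKL_MIC_ENABLE"
```

Either path affects all subsequent AO-enabled MKL calls in the process.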
10 Automatic Offload Enabled Functions. A selective set of MKL functions is AO-enabled; only functions with sufficient computation to offset the data transfer overhead are subject to AO. In 11.0, AO-enabled functions include Level-3 BLAS (?GEMM, ?TRSM, ?TRMM) and the LAPACK "3 amigos" (LU, QR, Cholesky).
11 Work Division Control in Automatic Offload. Function call: mkl_mic_set_workdivision(MKL_TARGET_MIC, 0, 0.5) offloads 50% of the computation only to the 1st card. Environment variable: MKL_MIC_0_WORKDIVISION=0.5 offloads 50% of the computation only to the 1st card.
12 Tips for Using Automatic Offload. AO works only when matrix sizes are large: ?GEMM offloads only when M, N > 2048; ?TRSM/?TRMM offload only when M, N > 3072. Square matrices may give better performance. Work division settings are just hints to the MKL runtime. To disable AO after it has been enabled, call mkl_mic_disable(), or call mkl_mic_set_workdivision(MKL_TARGET_HOST, 0, 1.0), or set MKL_HOST_WORKDIVISION=100.
13 Tips for Using Automatic Offload. Threading control tips: avoid using the OS core of the target, which handles data transfer and housekeeping tasks. Example (on a 60-core card): MIC_KMP_AFFINITY=explicit,granularity=fine,proclist=[1-236:1]. Also prevent thread migration on the host: KMP_AFFINITY=granularity=fine,compact,1,0. Note that different environment variables apply on the host and the coprocessor: OMP_NUM_THREADS vs. MIC_OMP_NUM_THREADS, KMP_AFFINITY vs. MIC_KMP_AFFINITY.
14 Compiler Assisted Offload (CAO). Offloading is explicitly controlled by compiler pragmas or directives. All MKL functions can be offloaded in CAO; in comparison, only a subset of MKL is subject to AO. CAO can leverage the full potential of the compiler's offloading facility, with more flexibility in data transfer and remote execution management. A big advantage is data persistence: transferred data can be reused across multiple operations.
15 How to Use Compiler Assisted Offload. Offload an MKL call the same way you would offload any function call to the coprocessor. An example in C:

#pragma offload target(mic) \
    in(transa, transb, N, alpha, beta) \
    in(A:length(matrix_elements)) \
    in(B:length(matrix_elements)) \
    in(C:length(matrix_elements)) \
    out(C:length(matrix_elements) alloc_if(0))
{
    sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
}
16 How to Use Compiler Assisted Offload. An example in Fortran:

!DEC$ ATTRIBUTES OFFLOAD : TARGET( MIC ) :: SGEMM
!DEC$ OMP OFFLOAD TARGET( MIC ) &
!DEC$ IN( TRANSA, TRANSB, M, N, K, ALPHA, BETA, LDA, LDB, LDC ), &
!DEC$ IN( A: LENGTH( NCOLA * LDA )), &
!DEC$ IN( B: LENGTH( NCOLB * LDB )), &
!DEC$ INOUT( C: LENGTH( N * LDC ))
!$OMP PARALLEL SECTIONS
!$OMP SECTION
      CALL SGEMM( TRANSA, TRANSB, M, N, K, ALPHA, &
                  A, LDA, B, LDB, BETA, C, LDC )
!$OMP END PARALLEL SECTIONS
17 Data Persistence with CAO.

__declspec(target(mic)) static float *A, *B, *C, *C1;

// Transfer matrices A, B, and C to the coprocessor; do not de-allocate A and B
#pragma offload target(mic) \
    in(transa, transb, M, N, K, alpha, beta, LDA, LDB, LDC) \
    in(A:length(NCOLA * LDA) free_if(0)) \
    in(B:length(NCOLB * LDB) free_if(0)) \
    inout(C:length(N * LDC))
{
    sgemm(&transa, &transb, &M, &N, &K, &alpha, A, &LDA, B, &LDB, &beta, C, &LDC);
}

// Transfer matrix C1 to the coprocessor and reuse matrices A and B
#pragma offload target(mic) \
    in(transa1, transb1, M, N, K, alpha1, beta1, LDA, LDB, LDC1) \
    nocopy(A:length(NCOLA * LDA) alloc_if(0) free_if(0)) \
    nocopy(B:length(NCOLB * LDB) alloc_if(0) free_if(0)) \
    inout(C1:length(N * LDC1))
{
    sgemm(&transa1, &transb1, &M, &N, &K, &alpha1, A, &LDA, B, &LDB, &beta1, C1, &LDC1);
}

// De-allocate A and B on the coprocessor
#pragma offload target(mic) \
    nocopy(A:length(NCOLA * LDA) free_if(1)) \
    nocopy(B:length(NCOLB * LDB) free_if(1))
{ }
18 Tips for Using Compiler Assisted Offload. Use data persistence to avoid unnecessary data copying and memory allocation/deallocation. Thread affinity: avoid using the OS core. Example for a 60-core coprocessor: MIC_KMP_AFFINITY=explicit,granularity=fine,proclist=[1-236:1]. Use huge (2 MB) pages for memory allocation in user code: MIC_USE_2MB_BUFFERS=64K. The value of MIC_USE_2MB_BUFFERS is a threshold; e.g., allocations of 64 KB or larger will use huge pages.
19 Using AO and CAO in the Same Program. Users can use AO for some MKL calls and CAO for others in the same program. This is only supported by Intel compilers. Work division must be set explicitly for AO; otherwise, all MKL AO calls are executed on the host.
20 Native Execution. Programs can be built to run only on the coprocessor by using the -mmic build option. Tips for using MKL in native execution: use all threads to get the best performance (for a 60-core coprocessor, MIC_OMP_NUM_THREADS=240); set thread affinity with KMP_AFFINITY=explicit,proclist=[1-240:1,0,241,242,243],granularity=fine; use huge pages for memory allocation via the mmap() system call or the open-source libhugetlbfs.so library (more details on the Intel Developer Zone).
21 Suggestions on Choosing Usage Models. Choose native execution for highly parallel code, or when you want to use the coprocessors as independent compute nodes. Choose AO when a sufficient Byte/FLOP ratio makes offload beneficial: Level-3 BLAS functions (?GEMM, ?TRMM, ?TRSM), and LU and QR factorization (in upcoming release updates). Choose CAO when there is enough computation to offset the data transfer overhead, or when transferred data can be reused by multiple operations. You can always run on the host if offloading does not achieve better performance.
22 Linking Examples. AO (the same way as building code for Xeon):
icc -O3 -mkl sgemm.c -o sgemm.exe
Native (using -mmic):
icc -O3 -mmic -mkl sgemm.c -o sgemm.exe
CAO (using -offload-option):
icc -O3 -openmp -mkl \
    -offload-option,mic,ld,"-L$MKLROOT/lib/mic -Wl,--start-group \
    -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group" \
    sgemm.c -o sgemm.exe
23 Intel MKL Link Line Advisor. A web tool to help users choose the correct link-line options; also available offline in the MKL product package.
24 Performance Roadmap. The following components are well optimized for Intel Xeon Phi coprocessors: BLAS Level 3, and much of Levels 1 and 2; Sparse BLAS (?CSRMV, ?CSRMM); LU, Cholesky, and QR factorization; FFTs (1D/2D/3D, single and double precision, real-to-complex and complex-to-complex); VML (real floating-point functions); random number generators (MT19937, MT2203, MRG32k3a; discrete uniform and geometric distributions).
25 Where to Find More Code Examples. In $MKLROOT/examples/mic_samples: ao_sgemm (AO example), dexp (VML example, vdexp), dgaussian (double-precision Gaussian RNG), fft (complex-to-complex 1D FFT), sexp (VML example, vsexp), sgaussian (single-precision Gaussian RNG), sgemm (SGEMM example), sgemm_f (SGEMM example, Fortran 90), sgemm_reuse (SGEMM with data persistence), sgeqrf (QR factorization), sgetrf (LU factorization), spotrf (Cholesky), solverc (PARDISO examples).
26 Where to Find MKL Documentation. How to use MKL on Intel Xeon Phi coprocessors: the "Using Intel Math Kernel Library on Intel Xeon Phi Coprocessors" section in the User's Guide, and the "Support Functions for Intel Many Integrated Core Architecture" section in the Reference Manual. Tips, app notes, etc. can also be found on the Intel Developer Zone.
27 Run-to-run Numerical Reproducibility with the Intel Math Kernel Library and Intel Composer XE 2013
28 Agenda: Why do floating-point results vary? Reproducibility in the Intel compilers. New reproducibility features in Intel MKL.
29 Ever seen something like this? (The slide shows five consecutive runs of the same executable, C:\Users\me>test.exe, printing slightly different numeric results each time; the values themselves are not in the transcription.)
30 Or this on different processors? (The slide shows the same test.exe producing one set of results on an Intel Xeon Processor E5540 and a different set on a second Intel Xeon processor.)
31 Why do results vary? The root cause of variations in Intel MKL results is floating-point rounding: the order of computation matters. A double-precision example where (a+b)+c ≠ a+(b+c): with a = 2^-63, b = 1, c = -1, the mathematical result is 2^-63; evaluating (a+b)+c gives 0 (a+b rounds to 1), while a+(b+c) gives 2^-63 (b+c is exactly 0). Both are correct IEEE results, yet they differ.
32 Why might the order of operations change in a computer program? Optimizations: code is optimized to use all the processor features (instruction sets) available on the system where the program is run, and memory alignment affects the grouping of data in registers. Parallelism: most functions are threaded across multiple cores/processors to use as many cores as will give good scalability, and some algorithms use asynchronous, non-deterministic task scheduling for optimal performance. Many optimizations require a change in the order of operations.
33 Why are reproducible results important for Intel MKL users? Technical/legacy: software correctness is determined by comparison to previous "gold" results. Debugging/porting: when developing and debugging, a higher degree of run-to-run stability is required to find potential problems. Legal: accreditation or approval of software might require exact reproduction of previously defined results. Customer perception: developers may understand the technical issues with reproducibility but still require reproducible results, since end users or customers will be disconcerted by the inconsistencies. Source: correspondence with Kai Diethelm of GNS; see his whitepaper.
34 What are the ingredients for reproducibility? Source code, tools, compilers, and libraries.
35 Floating Point Semantics. The -fp-model (/fp: on Windows) compiler switch lets you choose the floating-point semantics at a coarse granularity. It lets you specify the compiler rules for: value safety (our main focus), FP expression evaluation, FPU environment access, precise FP exceptions, and FP contractions (fused multiply-add).
36 The -fp-model and /fp: Switches. fast[=1] (default): allows value-unsafe optimizations. fast=2: allows additional optimizations. precise: value-safe optimizations only. except: enables floating-point exception semantics. strict: precise + except + disable fma + don't assume the default floating-point environment. Recommendation for reproducibility: -fp-model precise, for reproducible results and for ANSI/IEEE standards compliance (C++ and Fortran).
37 Reassociation.

#include <iostream>
#define N 100
int main() {
    float a[N], b[N];
    float c = -1.0F, tiny = 1.e-20F;
    for (int i = 0; i < N; i++) a[i] = 1.0F;
    for (int i = 0; i < N; i++) {
        a[i] = a[i] + c + tiny;
        b[i] = 1 / a[i];
    }
    std::cout << "a = " << a[0] << " b = " << b[0] << "\n";
}

-fp-model precise disables reassociation and enforces C standard conformance (left-to-right evaluation), but may carry a significant performance penalty. Parentheses are respected only in value-safe mode!
38 Run-to-Run Variations (single-threaded). Data alignment may vary from run to run due to changes in the external environment, e.g., a malloc of a string containing the date, time, user name, or directory: the size of that allocation affects the alignment of subsequent mallocs. The compiler may peel scalar iterations off the start of a loop until subsequent memory accesses are aligned, so that the main loop kernel can be vectorized efficiently. For reduction loops, this changes the composition of the partial sums, hence the rounding and the final result. This occurs for both gcc and icc when compiling for Intel AVX. To avoid it, align data with _mm_malloc(size, 32) (icc only) or mkl_malloc(size, 32) (Intel MKL), or compile with -fp-model precise (icc) or without -ffast-math (larger performance impact).
39 Typical Performance Impact. For the SPEC CPU2006 fp benchmark suite compiled with -O2 or -O3, the geomean performance reduction due to -fp-model precise and -fp-model source is 12%-15% (Intel Compiler XE 2011, 12.0; measured on an Intel Xeon 5650 system with dual 6-core processors at 2.67 GHz, 24 GB memory, 12 MB cache, SLES* 10 x64 SP2). Performance impact can vary between applications. Use -fp-model precise to improve floating-point reproducibility while limiting the performance impact.
40 How did Intel MKL handle reproducibility historically? Through MKL 10.3 (Nov. 2011), the recommendation was to align your input/output arrays using the Intel MKL memory manager and to call sequential Intel MKL, which meant the user needed to handle threading themselves.
41 New! Balancing Reproducibility and Performance: Conditional Numerical Reproducibility (CNR). Memory alignment: align memory (try the Intel MKL memory allocation functions); 64-byte alignment for processors in the next few years. Number of threads: set the number of threads to a constant, or use the sequential libraries. Deterministic task scheduling: ensures that FP operations occur in a fixed order to give reproducible results. Code path control: maintains consistent code paths across processors, which will often mean lower performance on the latest processors. Goal: achieve the best performance possible for cases that require reproducibility.
42 Controls for CNR features 42
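The transcription dropped the slide's table of controls; as a hedged sketch based on the MKL 11.0 documentation, the CNR knobs are the MKL_CBWR environment variable and a matching support function, mkl_cbwr_set():

```shell
# Pin MKL to a single reproducible code path for this run.
# COMPATIBLE is the most portable (and slowest) branch; branches such
# as SSE4_2 or AVX trade portability for speed on matching processors.
export MKL_CBWR=COMPATIBLE

# The equivalent from C code, before any other MKL call:
#   status = mkl_cbwr_set(MKL_CBWR_COMPATIBLE);
echo "MKL_CBWR=$MKL_CBWR"
```

With either control set, AO-style dispatching to processor-specific kernels is constrained, which is why CNR can cost performance on newer hardware.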
43 Why "Conditional"? In Intel MKL 11.0, reproducibility is currently available under certain conditions. Within a single operating system / architecture: reproducibility applies only within each OS/architecture combination (Linux* IA-32 or Intel 64, Windows* IA-32 or Intel 64, Mac OS X IA-32 or Intel 64), not between them. Within a particular version of Intel MKL: results in version 11.0 update 1 may differ slightly from results in version 11.0. The reproducibility controls in Intel MKL only affect Intel MKL functions; other tools and other parts of your code can still cause variation. Send us your feedback!
44 CNR Impact on Performance of the Intel Optimized LINPACK Benchmark
45 What's next?
46 Further Resources on Reproducibility. Documentation: reference manuals, user guides, and getting-started guides for Intel MKL, Intel Fortran Composer XE 2013, and Intel C++ Composer XE 2013. Knowledge base: CNR in Intel MKL 11.0; Consistency of Floating-Point Results. Support: the Intel MKL user forum, the Intel compiler forums (IVF, Fortran, and C++), and Intel Premier support. Feedback survey:
47 Summary. When writing programs for reproducible results: write your source code with reproducibility in mind; use -fp-model precise with Intel compilers; use the new Conditional Numerical Reproducibility (CNR) features in Intel MKL. Evaluate CNR in Intel Math Kernel Library 11.0, Intel Composer XE 2013, Intel Parallel Studio XE 2013, and Intel Cluster Studio XE 2013. Provide feedback:
50 Legal Disclaimer & Optimization Notice INFORMATION IN THIS DOCUMENT IS PROVIDED AS IS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Copyright, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries. Optimization Notice Intel s compilers may or may not optimize to the same degree for non-intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #
More informationParallel Programming. The Ultimate Road to Performance April 16, Werner Krotz-Vogel
Parallel Programming The Ultimate Road to Performance April 16, 2013 Werner Krotz-Vogel 1 Getting started with parallel algorithms Concurrency is a general concept multiple activities that can occur and
More informationIntel Math Kernel Library (Intel MKL) Sparse Solvers. Alexander Kalinkin Intel MKL developer, Victor Kostin Intel MKL Dense Solvers team manager
Intel Math Kernel Library (Intel MKL) Sparse Solvers Alexander Kalinkin Intel MKL developer, Victor Kostin Intel MKL Dense Solvers team manager Copyright 3, Intel Corporation. All rights reserved. Sparse
More informationThis guide will show you how to use Intel Inspector XE to identify and fix resource leak errors in your programs before they start causing problems.
Introduction A resource leak refers to a type of resource consumption in which the program cannot release resources it has acquired. Typically the result of a bug, common resource issues, such as memory
More informationA Simple Path to Parallelism with Intel Cilk Plus
Introduction This introductory tutorial describes how to use Intel Cilk Plus to simplify making taking advantage of vectorization and threading parallelism in your code. It provides a brief description
More informationIntroduction of Xeon Phi Programming
Introduction of Xeon Phi Programming Wei Feinstein Shaohao Chen HPC User Services LSU HPC/LONI Louisiana State University 10/21/2015 Introduction to Xeon Phi Programming Overview Intel Xeon Phi architecture
More informationEfficiently Introduce Threading using Intel TBB
Introduction This guide will illustrate how to efficiently introduce threading using Intel Threading Building Blocks (Intel TBB), part of Intel Parallel Studio XE. It is a widely used, award-winning C++
More informationIntel Xeon Phi Coprocessor. Technical Resources. Intel Xeon Phi Coprocessor Workshop Pawsey Centre & CSIRO, Aug Intel Xeon Phi Coprocessor
Technical Resources Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPETY RIGHTS
More informationWhat s P. Thierry
What s new@intel P. Thierry Principal Engineer, Intel Corp philippe.thierry@intel.com CPU trend Memory update Software Characterization in 30 mn 10 000 feet view CPU : Range of few TF/s and
More informationIntel Math Kernel Library
Intel Math Kernel Library Release 7.0 March 2005 Intel MKL Purpose Performance, performance, performance! Intel s scientific and engineering floating point math library Initially only basic linear algebra
More informationIntel Many Integrated Core (MIC) Architecture
Intel Many Integrated Core (MIC) Architecture Karl Solchenbach Director European Exascale Labs BMW2011, November 3, 2011 1 Notice and Disclaimers Notice: This document contains information on products
More informationEliminate Threading Errors to Improve Program Stability
Eliminate Threading Errors to Improve Program Stability This guide will illustrate how the thread checking capabilities in Parallel Studio can be used to find crucial threading defects early in the development
More informationCode modernization and optimization for improved performance using the OpenMP* programming model for threading and SIMD parallelism.
Code modernization and optimization for improved performance using the OpenMP* programming model for threading and SIMD parallelism. Parallel + SIMD is the Path Forward Intel Xeon and Intel Xeon Phi Product
More informationJim Cownie, Johnny Peyton with help from Nitya Hariharan and Doug Jacobsen
Jim Cownie, Johnny Peyton with help from Nitya Hariharan and Doug Jacobsen Features We Discuss Synchronization (lock) hints The nonmonotonic:dynamic schedule Both Were new in OpenMP 4.5 May have slipped
More informationOptimizing the operations with sparse matrices on Intel architecture
Optimizing the operations with sparse matrices on Intel architecture Gladkikh V. S. victor.s.gladkikh@intel.com Intel Xeon, Intel Itanium are trademarks of Intel Corporation in the U.S. and other countries.
More informationUsing the Intel Math Kernel Library (Intel MKL) and Intel Compilers to Obtain Run-to-Run Numerical Reproducible Results
Using the Intel Math Kernel Library (Intel MKL) and Intel Compilers to Obtain Run-to-Run Numerical Reproducible Results by Todd Rosenquist, Technical Consulting Engineer, Intel Math Kernal Library and
More informationIntel C++ Compiler User's Guide With Support For The Streaming Simd Extensions 2
Intel C++ Compiler User's Guide With Support For The Streaming Simd Extensions 2 This release of the Intel C++ Compiler 16.0 product is a Pre-Release, and as such is 64 architecture processor supporting
More informationCase Study. Optimizing an Illegal Image Filter System. Software. Intel Integrated Performance Primitives. High-Performance Computing
Case Study Software Optimizing an Illegal Image Filter System Intel Integrated Performance Primitives High-Performance Computing Tencent Doubles the Speed of its Illegal Image Filter System using SIMD
More informationOverview of Intel Xeon Phi Coprocessor
Overview of Intel Xeon Phi Coprocessor Sept 20, 2013 Ritu Arora Texas Advanced Computing Center Email: rauta@tacc.utexas.edu This talk is only a trailer A comprehensive training on running and optimizing
More informationGrowth in Cores - A well rehearsed story
Intel CPUs Growth in Cores - A well rehearsed story 2 1. Multicore is just a fad! Copyright 2012, Intel Corporation. All rights reserved. *Other brands and names are the property of their respective owners.
More informationVisualizing and Finding Optimization Opportunities with Intel Advisor Roofline feature
Visualizing and Finding Optimization Opportunities with Intel Advisor Roofline feature Intel Software Developer Conference Frankfurt, 2017 Klaus-Dieter Oertel, Intel Agenda Intel Advisor for vectorization
More informationMikhail Dvorskiy, Jim Cownie, Alexey Kukanov
Mikhail Dvorskiy, Jim Cownie, Alexey Kukanov What is the Parallel STL? C++17 C++ Next An extension of the C++ Standard Template Library algorithms with the execution policy argument Support for parallel
More informationEliminate Threading Errors to Improve Program Stability
Introduction This guide will illustrate how the thread checking capabilities in Intel Parallel Studio XE can be used to find crucial threading defects early in the development cycle. It provides detailed
More informationEliminate Memory Errors to Improve Program Stability
Eliminate Memory Errors to Improve Program Stability This guide will illustrate how Parallel Studio memory checking capabilities can find crucial memory defects early in the development cycle. It provides
More informationMore performance options
More performance options OpenCL, streaming media, and native coding options with INDE April 8, 2014 2014, Intel Corporation. All rights reserved. Intel, the Intel logo, Intel Inside, Intel Xeon, and Intel
More informationIFS RAPS14 benchmark on 2 nd generation Intel Xeon Phi processor
IFS RAPS14 benchmark on 2 nd generation Intel Xeon Phi processor D.Sc. Mikko Byckling 17th Workshop on High Performance Computing in Meteorology October 24 th 2016, Reading, UK Legal Disclaimer & Optimization
More informationUsing Intel VTune Amplifier XE and Inspector XE in.net environment
Using Intel VTune Amplifier XE and Inspector XE in.net environment Levent Akyil Technical Computing, Analyzers and Runtime Software and Services group 1 Refresher - Intel VTune Amplifier XE Intel Inspector
More informationHans Pabst, January 24 th 2013 Software and Services Group Intel Corporation
VPE Swiss Workshop: Infrastruktur für rechenintensive Anwendungen Intel Xeon Phi Product Family An Overview Hans Pabst, January 24 th 2013 Software and Services Group Intel Corporation Agenda Overview
More informationBecca Paren Cluster Systems Engineer Software and Services Group. May 2017
Becca Paren Cluster Systems Engineer Software and Services Group May 2017 Clusters are complex systems! Challenge is to reduce this complexity barrier for: Cluster architects System administrators Application
More informationEliminate Memory Errors to Improve Program Stability
Introduction INTEL PARALLEL STUDIO XE EVALUATION GUIDE This guide will illustrate how Intel Parallel Studio XE memory checking capabilities can find crucial memory defects early in the development cycle.
More informationExpressing and Analyzing Dependencies in your C++ Application
Expressing and Analyzing Dependencies in your C++ Application Pablo Reble, Software Engineer Developer Products Division Software and Services Group, Intel Agenda TBB and Flow Graph extensions Composable
More informationIntel Parallel Studio XE 2011 for Windows* Installation Guide and Release Notes
Intel Parallel Studio XE 2011 for Windows* Installation Guide and Release Notes Document number: 323803-001US 4 May 2011 Table of Contents 1 Introduction... 1 1.1 What s New... 2 1.2 Product Contents...
More informationUsing Intel VTune Amplifier XE for High Performance Computing
Using Intel VTune Amplifier XE for High Performance Computing Vladimir Tsymbal Performance, Analysis and Threading Lab 1 The Majority of all HPC-Systems are Clusters Interconnect I/O I/O... I/O I/O Message
More informationVincent C. Betro, Ph.D. NICS March 6, 2014
Vincent C. Betro, Ph.D. NICS March 6, 2014 NSF Acknowledgement This material is based upon work supported by the National Science Foundation under Grant Number 1137097 Any opinions, findings, and conclusions
More informationAlexei Katranov. IWOCL '16, April 21, 2016, Vienna, Austria
Alexei Katranov IWOCL '16, April 21, 2016, Vienna, Austria Hardware: customization, integration, heterogeneity Intel Processor Graphics CPU CPU CPU CPU Multicore CPU + integrated units for graphics, media
More informationObtaining the Last Values of Conditionally Assigned Privates
Obtaining the Last Values of Conditionally Assigned Privates Hideki Saito, Serge Preis*, Aleksei Cherkasov, Xinmin Tian Intel Corporation (* at submission time) 2016/10/04 OpenMPCon2016 Legal Disclaimer
More informationIntel Xeon Phi Coprocessor
Intel Xeon Phi Coprocessor A guide to using it on the Cray XC40 Terminology Warning: may also be referred to as MIC or KNC in what follows! What are Intel Xeon Phi Coprocessors? Hardware designed to accelerate
More informationIntel Xeon Phi Coprocessor
Intel Xeon Phi Coprocessor http://tinyurl.com/inteljames twitter @jamesreinders James Reinders it s all about parallel programming Source Multicore CPU Compilers Libraries, Parallel Models Multicore CPU
More informationIntel Many Integrated Core (MIC) Programming Intel Xeon Phi
Intel Many Integrated Core (MIC) Programming Intel Xeon Phi Dmitry Petunin Intel Technical Consultant 1 Legal Disclaimer & INFORMATION IN THIS DOCUMENT IS PROVIDED AS IS. NO LICENSE, EXPRESS OR IMPLIED,
More informationHigh Performance Computing The Essential Tool for a Knowledge Economy
High Performance Computing The Essential Tool for a Knowledge Economy Rajeeb Hazra Vice President & General Manager Technical Computing Group Datacenter & Connected Systems Group July 22 nd 2013 1 What
More informationOverview of Data Fitting Component in Intel Math Kernel Library (Intel MKL) Intel Corporation
Overview of Data Fitting Component in Intel Math Kernel Library (Intel MKL) Intel Corporation Agenda 1D interpolation problem statement Computation flow Application areas Data fitting in Intel MKL Data
More informationCrosstalk between VMs. Alexander Komarov, Application Engineer Software and Services Group Developer Relations Division EMEA
Crosstalk between VMs Alexander Komarov, Application Engineer Software and Services Group Developer Relations Division EMEA 2 September 2015 Legal Disclaimer & Optimization Notice INFORMATION IN THIS DOCUMENT
More informationCompiling for Scalable Computing Systems the Merit of SIMD. Ayal Zaks Intel Corporation Acknowledgements: too many to list
Compiling for Scalable Computing Systems the Merit of SIMD Ayal Zaks Intel Corporation Acknowledgements: too many to list Takeaways 1. SIMD is mainstream and ubiquitous in HW 2. Compiler support for SIMD
More informationBitonic Sorting. Intel SDK for OpenCL* Applications Sample Documentation. Copyright Intel Corporation. All Rights Reserved
Intel SDK for OpenCL* Applications Sample Documentation Copyright 2010 2012 Intel Corporation All Rights Reserved Document Number: 325262-002US Revision: 1.3 World Wide Web: http://www.intel.com Document
More informationGuy Blank Intel Corporation, Israel March 27-28, 2017 European LLVM Developers Meeting Saarland Informatics Campus, Saarbrücken, Germany
Guy Blank Intel Corporation, Israel March 27-28, 2017 European LLVM Developers Meeting Saarland Informatics Campus, Saarbrücken, Germany Motivation C AVX2 AVX512 New instructions utilized! Scalar performance
More informationBitonic Sorting Intel OpenCL SDK Sample Documentation
Intel OpenCL SDK Sample Documentation Document Number: 325262-002US Legal Information INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL
More informationRunning Docker* Containers on Intel Xeon Phi Processors
Running Docker* Containers on Intel Xeon Phi Processors White Paper March 2017 Revision 001 Document Number: 335644-001US Notice: This document contains information on products in the design phase of development.
More informationTuning Python Applications Can Dramatically Increase Performance
Tuning Python Applications Can Dramatically Increase Performance Vasilij Litvinov Software Engineer, Intel Legal Disclaimer & 2 INFORMATION IN THIS DOCUMENT IS PROVIDED AS IS. NO LICENSE, EXPRESS OR IMPLIED,
More informationIntel Visual Fortran Compiler Professional Edition 11.0 for Windows* In-Depth
Intel Visual Fortran Compiler Professional Edition 11.0 for Windows* In-Depth Contents Intel Visual Fortran Compiler Professional Edition for Windows*........................ 3 Features...3 New in This
More informationConsistency of Floating-Point Results using the Intel Compiler or Why doesn t my application always give the same answer?
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 Consistency of Floating-Point Results using the Intel Compiler or Why doesn t my application
More informationImproving Numerical Reproducibility in C/C++/Fortran
Improving Numerical Reproducibility in C/C++/Fortran Steve Lionel Intel Corporation steve.lionel@intel.com 1 The Three Objectives Accuracy Reproducibility Performance Pick two Reproducibility Consistent
More informationContributors: Surabhi Jain, Gengbin Zheng, Maria Garzaran, Jim Cownie, Taru Doodi, and Terry L. Wilmarth
Presenter: Surabhi Jain Contributors: Surabhi Jain, Gengbin Zheng, Maria Garzaran, Jim Cownie, Taru Doodi, and Terry L. Wilmarth May 25, 2018 ROME workshop (in conjunction with IPDPS 2018), Vancouver,
More informationThe Intel Xeon Phi Coprocessor. Dr-Ing. Michael Klemm Software and Services Group Intel Corporation
The Intel Xeon Phi Coprocessor Dr-Ing. Michael Klemm Software and Services Group Intel Corporation (michael.klemm@intel.com) Legal Disclaimer & Optimization Notice INFORMATION IN THIS DOCUMENT IS PROVIDED
More information