Visualizing and Finding Optimization Opportunities with Intel Advisor Roofline feature

Kirill Rogozhin, Intel
Intel Software Developer Conference London, 2017


From old HPC principles to a modern performance model

Old HPC principles:
1. Balance principle (e.g. Kung, 1986): hardware and software parameters considered together.
2. Compute density, intensity, machine balance (FLOP/byte or byte/FLOP ratio for an algorithm or for hardware). E.g. Kennedy, Carr (1988, 1994): "Improving the Ratio of Memory Operations to Floating-Point Operations in Loops".

More research was catalyzed by the growing memory wall/gap and by GPGPU:
- 2008, Berkeley: generalized into the Roofline performance model. Williams, Waterman, Patterson: "Roofline: an insightful visual performance model for multicore".
- 2014: cache-aware Roofline model. Ilic, Pratas, Sousa (INESC-ID/IST, Technical University of Lisbon).

Memory Wall (Patterson, 2011)


Density, intensity, machine balance

Arithmetic Intensity = total FLOPs computed / total bytes transferred

Variants in Intel Advisor:
- AI (Arithmetic Intensity) = total FLOPs computed / total bytes transferred between CPU and memory subsystem. Implemented in 2017 Update 1.
- OI (Operational Intensity) = total FLOPs computed / total bytes transferred between DRAM (or MCDRAM) and LLC. WIP.
- AI (int + float) = total (IntOps + FLOPs) computed / total bytes transferred between CPU and memory. WIP.

Agenda
- Roofline origins
- Roofline as a visual performance model
- Roofline: under the hood
- Cache-aware vs. traditional Roofline
- Roofline interpretation
- Customer adoption; internal and external collaboration; next steps
- Customer use cases; value proposition

Roofline [1] is a visual performance model (NERSC+Intel, Intel HPC Dev Conf '17)

[chart: attainable performance (GFLOP/s) vs. arithmetic intensity (FLOP/byte), bounded by a computation limit and a bandwidth slope]

Roofline is a visually intuitive performance model used to bound the performance of various numerical methods and operations running on multicore, manycore, or accelerator processor architectures.

Roofline automation in Intel Advisor
- Each roof (slope) gives the peak CPU/memory throughput of your PLATFORM (benchmarked).
- Each dot represents a loop or function in YOUR APPLICATION (profiled).
- Interactive mapping to source and performance profile.
- Synergy between Vectorization Advisor and Roofline: FMA example.
- Customizable chart.

Roofline model: am I bound by VPU/CPU or by memory?

[chart: three loops A, B, C plotted against the roofs] What makes loops A, B, C different?

Advisor Roofline: under the hood

Roofline application profile:
- Axis Y: FLOP/s = #FLOP (mask aware) / #seconds
- Axis X: AI = #FLOP / #bytes

How each ingredient is obtained:
- Seconds: user-mode sampling; root access not needed.
- Roofs: microbenchmarks; actual peak for the current configuration.
- #FLOP: binary instrumentation; does not rely on CPU counters.
- #Bytes: binary instrumentation; counts operand sizes (not cachelines).

Getting Roofline in Advisor: FLOP/s = #FLOP / seconds

Step 1: Survey
- Non-intrusive, representative.
- Output: seconds (+ much more).

Step 2: FLOPs
- Precise, instrumentation based.
- Physically counts the number of instructions executed.
- Output: #FLOP count, mask utilization, #bytes.

Mask utilization and FLOPS profiler

Why is mask utilization important?

for (i = 0; i <= MAX; i++)
    c[i] = a[i] + b[i];

With no condition in the loop body, all 8 lanes of the vector add (a[i..i+7] + b[i..i+7] -> c[i..i+7]) are active: utilization = 100%.

Why is mask utilization important?

for (i = 0; i <= MAX; i++)
    if (cond(i))
        c[i] = a[i] + b[i];

With the mask cond[i] = 1 0 1 0 1 0 1 1, three elements are suppressed: SIMD utilization = 5/8 = 62.5%.

AVX-512 mask registers
- 8 mask registers, 64 bits each.
- k1-k7 can be used for predication; k0 can be used as a destination or source for mask-manipulation operations.
- 4 different mask granularities. For instance, at 512 bits:
  - Packed integer byte uses mask bits [63:0]: VPADDB zmm1 {k1}, zmm2, zmm3
  - Packed integer word uses mask bits [31:0]: VPADDW zmm1 {k1}, zmm2, zmm3
  - Packed IEEE FP32 and integer dword use mask bits [15:0]: VADDPS zmm1 {k1}, zmm2, zmm3
  - Packed IEEE FP64 and integer qword use mask bits [7:0]: VADDPD zmm1 {k1}, zmm2, zmm3

Elements per register by vector length:
Vector length: 128   256   512
Byte:           16    32    64
Word:            8    16    32
Dword/SP:        4     8    16
Qword/DP:        2     4     8

Example: VADDPD zmm1 {k1}, zmm2, zmm3 with k1 = 1 0 1 1 1 1 0 0 writes b+c only into the lanes whose mask bit is set; the other destination lanes keep their previous values (result: b7+c7, a6, b5+c5, b4+c4, b3+c3, b2+c2, a1, a0).

Survey + FLOPs report on AVX-512: FLOP/s, bytes and AI, masks and efficiency

General efficiency (FLOPS) vs. VPU-centric efficiency (vector efficiency): the two metrics answer different questions, so a loop can show high vector efficiency with low FLOPS, or low vector efficiency with high FLOPS.

Cache-aware vs. classic Roofline: AI = #FLOP / #bytes

Classic Roofline: intensity defined as #FLOP / #bytes (cache <-> DRAM)
- DRAM-traffic-based (or MCDRAM-traffic-based).
- Variable for the same code/platform (varies with dataset size and trip count).
- Can be measured relative to different memory hierarchy levels: cache level, HBM, DRAM.

Cache-aware Roofline: intensity defined as #FLOP / #bytes (CPU <-> memory subsystem)
- Algorithmic: based on cumulative (L1+L2+LLC+DRAM) traffic.
- Invariant for a given code/platform combination.

Typically AI_CARM < AI_DRAM.

The cache-aware Roofline model (NERSC+Intel, Intel HPC Dev Conf '17)

The intensity uses the total volume of bytes transferred across all memory hierarchy levels to the core. The chart plots attainable performance (GFLOP/s) against operational intensity (FLOP/byte), with compute ceilings for scalar, scalar unrolled, FMA, and FMA+SIMD code, and one bandwidth roof per level (L1 to core, L2 to core, L3 to core, DRAM to core):

Attainable perf (GFLOP/s) = min( peak performance (GFLOP/s), bandwidth to core (GB/s) x operational intensity (FLOP/byte) )

[1] S. Williams et al., CACM (2009); crd.lbl.gov/departments/computer-science/par/research/roofline

Example 1: effect of L2 cache optimization (NERSC+Intel, Intel HPC Dev Conf '17)

[charts: classical roofline vs. cache-aware roofline, each showing the dot's movement after reducing the dataset to fit into L2]

We are L2 bandwidth bound, which is clearly shown by the cache-aware roofline.

Example 2: compute-bound application (NERSC+Intel, Intel HPC Dev Conf '17)

[charts: classical roofline vs. cache-aware roofline]

Effect of vectorization/FMA: a vertical increase in both models.

Interpreting Roofline data: advanced ROI analysis

Final limits (assuming perfect optimization) -> long-term ROI, optimization strategy:
- Finally memory bound: invest more into effective cache utilization.
- Finally compute bound: invest more into effective CPU/VPU (SIMD) optimization.

Current limits (what are my current bottlenecks?) -> next step, optimization tactics: check your Advisor Survey and MAP results.

Acknowledgments / references
- Roofline model proposed by Williams, Waterman, Patterson: http://www.eecs.berkeley.edu/~waterman/papers/roofline.pdf
- Cache-aware Roofline model: "Cache-aware Roofline model: Upgrading the loft" (Ilic, Pratas, Sousa; INESC-ID/IST, Technical University of Lisbon): http://www.inesc-id.pt/ficheiros/publicacoes/9068.pdf
- At Intel: Roman Belenov, Zakhar Matveev, Julia Fedorova, SSG product teams, Hugh Caffey; in collaboration with Philippe Thierry.

Roofline model: value proposition
1. A sense of absolute performance when optimizing applications: how do I know whether my performance is good? Why am I not getting the peak performance of the platform?
2. Choosing the optimization direction (ROI, where to invest first): which optimization should I apply? What is the limiting factor? How do I know when to stop?
3. A common language for performance experts and domain experts to talk to each other.
4. A lightweight projection and co-design tool (this is where the model originated).

Legal Disclaimer

INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS". NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Intel, Pentium, Xeon, Xeon Phi, Core, VTune, Cilk, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804