Alexei Katranov. IWOCL '16, April 21, 2016, Vienna, Austria


Hardware: customization, integration, heterogeneity

- Multicore CPU + integrated units for graphics, media and compute (Intel Processor Graphics)
- Discrete co-processors and accelerators
- FPGAs, fixed-function devices, domain-specific compute engines, etc.

Diverse and heterogeneous environments with multiple compute resources.

Intel Threading Building Blocks (Intel TBB)

- Widely used C++ template library
- Rich feature set for general-purpose parallelism
- For Windows*, Linux*, OS X*, Android*, etc.
- Both commercial and open-source licenses
- Commercial support for Intel Atom, Core, and Xeon processors, and for Intel Xeon Phi coprocessors
- Community contributions for non-Intel architectures

http://software.intel.com/intel-tbb
http://threadingbuildingblocks.org

Rich Feature Set for Parallelism

Parallel algorithms and data structures; threads and synchronization; memory allocation and task scheduling:

- Generic Parallel Algorithms: an efficient, scalable way to exploit the power of multi-core without having to start from scratch
- Flow Graph: a set of classes to express parallelism as a graph of compute dependencies and/or data flow
- Concurrent Containers: concurrent access, and a scalable alternative to serial containers with external locking
- Synchronization Primitives: atomic operations, a variety of mutexes with different properties, condition variables
- Task Scheduler: sophisticated work-scheduling engine that empowers parallel algorithms and the flow graph
- Thread Local Storage: unlimited number of thread-local variables
- Threads: OS API wrappers
- Memory Allocation: scalable memory manager and false-sharing-free allocators
- Miscellaneous: thread-safe timers and exception classes

Intel TBB Flow Graph at a glance

- An abstraction built on top of the TBB task scheduler API — like an additional programming model
- A graph object holds graph nodes connected by edges
- Control and data dependencies between computations are defined explicitly
- Parallelism is extracted automatically
- Intel TBB flow graph is targeted at multicore shared-memory systems

Hello World Example

Users create nodes and edges, interact with the graph, and wait for it to complete:

    tbb::flow::graph g;
    tbb::flow::continue_node< tbb::flow::continue_msg > h( g,
        []( const tbb::flow::continue_msg & ) { std::cout << "Hello "; } );
    tbb::flow::continue_node< tbb::flow::continue_msg > w( g,
        []( const tbb::flow::continue_msg & ) { std::cout << "World\n"; } );
    tbb::flow::make_edge( h, w );
    h.try_put( tbb::flow::continue_msg() );
    g.wait_for_all();

Idea of Heterogeneous Flow Graph

TBB flow graph as a coordination layer:
- Be the glue that connects heterogeneous hardware and software IP together
- Expose parallelism between blocks; simplify integration

OpenCL node

Core functionality:
- enumerate & query OpenCL devices
- select a device to be used for program execution
- transfer data to/from the device
- execute a given kernel there
- support efficient kernel chaining (no excessive data transfer)

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

Hello World example for OpenCL node

    // A graph with OpenCL support
    opencl_graph g;
    const char str[] = "Hello, World!";
    // OpenCL buffer for the string
    opencl_buffer<cl_char> b( g, sizeof(str) );
    // Copy the string to the buffer
    std::copy_n( str, sizeof(str), b.begin() );
    // A node that outputs the content of an incoming buffer
    opencl_node< tuple<opencl_buffer<cl_char>> > clprint( g, "hello_world.cl", "print" );
    clprint.set_ndranges( {1} );
    // Send the buffer as the node input
    input_port<0>(clprint).try_put( b );
    // Wait for work completion
    g.wait_for_all();

The kernel (hello_world.cl):

    kernel void print( global char *str ) {
        printf("OpenCL says '");
        for ( ; *str; ++str ) printf("%c", *str);
        printf("'\n");
    }

OpenCL node pipeline example

(Diagram: buffers b1 and b2 arrive from other nodes in the flow graph at the first OpenCL node; its result is chained into a second OpenCL node together with b3, and the final buffer flows on to other nodes in the graph.)

OpenCL node pipeline example

    typedef opencl_buffer<cl_int> cl_buffer_t;
    typedef opencl_node< tuple<cl_buffer_t, cl_buffer_t> > cl_node_t;

    // Create nodes
    cl_node_t cl_mul( g, "program.cl", "mul" );
    cl_node_t cl_add( g, "program.cl", "add" );
    function_node_t f( g, unlimited, []( const cl_buffer_t &t ) { ... } );

    // Create dependencies between nodes
    make_edge( cl_mul, cl_add );
    make_edge( cl_add, f );

    // Put buffers to the graph
    cl_buffer_t b1( g, N ), b2( g, N ), b3( g, N );
    input_port<0>( cl_mul ).try_put( b1 );
    input_port<1>( cl_mul ).try_put( b2 );
    input_port<1>( cl_add ).try_put( b3 );

Under the hood

A sequence of slides steps through what each line of user code triggers in terms of real work, all happening in parallel with the user's code:

- cl_node_t cl_mul — OpenCL initialization (query the available devices, create a context, create a queue), then kernel creation: prepare the list of devices, read the source file, prepare the program, build the program, print errors if observed, get the kernel.
- cl_node_t cl_add — create its kernel the same way (initialization is already done).
- cl_buffer_t b1, b2, b3 — create the three buffers.
- input_port<0>(cl_mul).try_put(b1) and input_port<1>(cl_mul).try_put(b2) — each sends a message and moves its data to the device; once both inputs have arrived, kernel cl_mul runs: set the arguments, enqueue the kernel, put the result buffer (b1) to cl_add.
- input_port<1>(cl_add).try_put(b3) — sends a message and moves data to the device; cl_add synchronizes with the previous kernel (cl_mul), then runs in the same way.
- Finally, the downstream node f synchronizes with cl_add, the data is moved back, and f runs.

The OpenCL boilerplate and the Intel TBB scheduling work are all hidden behind a few lines of graph setup.

OpenCL node overheads

A bar chart (not reproduced here) compares relative time (less is better) of the opencl_node version against the original for three workloads — gemm, tone_mapping and median_filter — on a desktop and a mobile system; the overheads stay in the low single-digit percent range (the chart's axis tops out at 7%).

Configuration info:
- Desktop system: Intel Core i7-6700K CPU @ 4.00 GHz, 16 GB RAM; Microsoft* Windows 10 Enterprise, Microsoft Visual Studio* Professional 2015 Update 1, Intel HD Graphics Driver for Windows 15.40
- Mobile system: Intel Core i5-4300U CPU @ 1.90 GHz, 8 GB RAM; Microsoft Windows 8.1 Enterprise, Microsoft Visual Studio Professional 2015 Update 2, Intel HD Graphics Driver for Windows 15.36

OpenCL node overheads in detail

                            original    opencl_node
    gemm (mobile)           12.7 s      13.4 s
    tone_mapping (mobile)   10.5 ms     11.0 ms
    tone_mapping (desktop)  4.40 ms     4.45 ms

Load balancing CPU and GPU

(Diagram, built up over three slides: a tile generator feeds a dispatcher that tracks the available processing units and routes each tile to either the OpenCL node for the CPU or the OpenCL node for the GPU.)

Generic support makes coordinating with any model easier and efficient.

Load balancing CPU and GPU: performance

A chart (not reproduced here) shows the relative time (less is better) of the token-based implementation of Fractal on the Intel Core i7-6700K alone, on Intel HD Graphics 530 alone, and on the two combined.

Summary

Intel TBB flow graph is a coordination layer for heterogeneous systems:
- First-class support for OpenCL (opencl_node overview: https://software.intel.com/en-us/blogs/2015/12/09/opencl-node-overview)
- Reasonable performance overheads (about 1% for a 4 ms workload on a desktop system)
- A declarative language to express unstructured parallelism, e.g. a token-based balancing scheme

Intel TBB is open source and freely available at https://www.threadingbuildingblocks.org/

Acknowledgments

Our thanks to:
- Alexey Kukanov for co-authoring and thorough review
- Michael Voss for material contribution to the features described in this presentation
- Robert Ioffe for evaluating our work and providing valuable feedback
- Others who helped in developing the functionality

Legal Disclaimer

INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS". NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO THIS INFORMATION, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Intel, Pentium, Xeon, Xeon Phi, Core, VTune, Cilk, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804

Backup: Motivation for data flow and graph parallelism

Serial implementation (perhaps vectorized):

    x = A();
    y = B(x);
    z = C(x);
    D(y, z);

The slide contrasts this with a loop-parallel implementation and a loop- and graph-parallel implementation.