GPU Performance Optimisation. Alan Gray EPCC The University of Edinburgh


Hardware An NVIDIA accelerated system (diagram showing host and device, each with its own memory).

GPU vs CPU: theoretical peak capabilities

                            NVIDIA Fermi       AMD Magny-Cours (6172)
Cores                       448 (1.15 GHz)     12 (2.1 GHz)
Operations/cycle            1                  4
DP performance (peak)       515 GFlops         101 GFlops
Memory bandwidth (peak)     144 GB/s           27.5 GB/s

For these products the GPU's theoretical advantage is roughly 5x for both compute and main memory bandwidth. Application performance depends very much on the application: it is typically a small fraction of peak, and depends on how well the application is suited to, and tuned for, the architecture.
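As a check, the peak DP figures follow directly from the table's own numbers (cores × clock × operations/cycle):

NVIDIA Fermi:     448 × 1.15 GHz × 1 op/cycle  ≈ 515 GFlops
AMD Magny-Cours:  12 × 2.1 GHz × 4 ops/cycle   ≈ 101 GFlops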

GPU performance inhibitors:
- Lack of available parallelism
- Copying data to/from the device
- Global memory latency
- Global memory bandwidth
- Code branching
- Device under-utilisation

This lecture addresses each of these and advises how to maximise performance. It concentrates on NVIDIA hardware, but many of the concepts transfer to other vendors, e.g. AMD.

CUDA Profiling
Simply set the COMPUTE_PROFILE environment variable to 1. A log file, e.g. cuda_profile_0.log, is created at runtime:

# CUDA_PROFILE_LOG_VERSION 2.0
# CUDA_DEVICE 0 Tesla M1060
# CUDA_CONTEXT 1
# TIMESTAMPFACTOR fffff6e2e9ee8858
method,gputime,cputime,occupancy
method=[ memcpyHtoD ] gputime=[ 37.952 ] cputime=[ 86.000 ]
method=[ memcpyHtoD ] gputime=[ 37.376 ] cputime=[ 71.000 ]
method=[ memcpyHtoD ] gputime=[ 37.184 ] cputime=[ 57.000 ]
method=[ _Z23inverseEdgeDetect1D_colPfS_S_ ] gputime=[ 253.536 ] cputime=[ 13.000 ] occupancy=[ 0.250 ]
...

This gives timing information for kernels and data transfers. It is possible to output more metrics (cache misses etc.); see the doc/compute_profiler.txt file in the main CUDA installation.

Exploiting parallelism
GPU performance relies on the parallel use of many threads. Amdahl's law for parallel speedup:

speedup = P / (αP + (1 − α))

where P is the number of parallel tasks and α is the fraction of the application which is serial. On a GPU, P is very large even within a single device, so α must be extremely small to achieve a good speedup. Effort must be made to exploit as much parallelism as possible within the application; this may involve rewriting/refactoring.
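For example (an illustrative figure, not from the slides): taking P = 448, the Fermi core count from the table above, and a serial fraction of only α = 0.01,

speedup = 448 / (0.01 × 448 + 0.99) ≈ 448 / 5.47 ≈ 82

so even 1% of serial code limits the speedup to well below the core count.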

Host-Device Data Copy
The CPU (host) and GPU (device) have separate memories, so all data read or written on the device must be copied to/from the device over the PCIe bus. This is very expensive, so copies must be minimised:
- Keep data resident on the device. This may involve porting more routines to the device, even if they are not computationally expensive.
- It might be quicker to calculate something from scratch on the device instead of copying it from the host.

Data copy optimisation example

Loop over timesteps
    inexpensive_routine_on_host(data_on_host)
    copy data from host to device
    expensive_routine_on_device(data_on_device)
    copy data from device to host
End loop over timesteps

Port the inexpensive routine to the device and move the data copies outside the loop:

copy data from host to device
Loop over timesteps
    inexpensive_routine_on_device(data_on_device)
    expensive_routine_on_device(data_on_device)
End loop over timesteps
copy data from device to host
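In CUDA C the optimised pattern might look like the sketch below (the kernel bodies, the float data type and the launch sizes are illustrative assumptions, not from the slides):

#include <cuda_runtime.h>

/* Hypothetical kernels standing in for the two routines above */
__global__ void inexpensive_routine_on_device(float *data, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;              /* placeholder work */
}

__global__ void expensive_routine_on_device(float *data, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i]*data[i];    /* placeholder work */
}

void run(float *data_on_host, int n, int nsteps) {
    float *data_on_device;
    cudaMalloc((void **)&data_on_device, n*sizeof(float));

    /* single host-to-device copy, before the timestep loop */
    cudaMemcpy(data_on_device, data_on_host, n*sizeof(float),
               cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1)/threads;
    for (int step = 0; step < nsteps; step++) {
        inexpensive_routine_on_device<<<blocks, threads>>>(data_on_device, n);
        expensive_routine_on_device<<<blocks, threads>>>(data_on_device, n);
    }

    /* single device-to-host copy, after the loop */
    cudaMemcpy(data_on_host, data_on_device, n*sizeof(float),
               cudaMemcpyDeviceToHost);
    cudaFree(data_on_device);
}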

Code Branching
On NVIDIA GPUs there are fewer instruction scheduling units than cores, and threads are scheduled in groups of 32, called a warp. (It may take multiple cycles to process all the threads in a warp, e.g. 4 cycles on a Tesla 10-series SM, which has only 8 cores.) Threads within a warp must execute the same instruction in lock-step, on different data elements. The CUDA programming model allows branching, but if threads within a warp diverge, all cores follow all branches, with only the required results saved. This is obviously suboptimal, so intra-warp branching must be avoided wherever possible, especially in key computational sections.

Branching example
E.g. you want to split your threads into 2 groups:

i = blockIdx.x*blockDim.x + threadIdx.x;
if (i%2 == 0)
    ...
else
    ...

Threads within a warp diverge.

i = blockIdx.x*blockDim.x + threadIdx.x;
if ((i/32)%2 == 0)
    ...
else
    ...

Threads within a warp follow the same path.
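The same idea as two complete kernels (a sketch; the per-group work is a placeholder and the launch is assumed to cover the array exactly):

#include <cuda_runtime.h>

/* Divergent version: even and odd threads sit in the same warp,
   so every warp executes both branches */
__global__ void divergent(float *out) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        out[i] = 1.0f;   /* placeholder work for group A */
    else
        out[i] = 2.0f;   /* placeholder work for group B */
}

/* Warp-aligned version: the condition is constant across each group
   of 32 threads, so no warp has to execute both branches */
__global__ void warp_aligned(float *out) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if ((i/32) % 2 == 0)
        out[i] = 1.0f;   /* placeholder work for group A */
    else
        out[i] = 2.0f;   /* placeholder work for group B */
}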

Occupancy and Memory Latency Hiding
The programmer decomposes the loops in the code into threads. Obviously there must be at least as many threads in total as there are cores, otherwise cores will be left idle; for best performance you actually want #threads >> #cores. Accesses to global memory have several hundred cycles of latency: when a thread stalls waiting for data, that latency can be hidden if another thread can switch in. NVIDIA GPUs have very fast thread switching and support many concurrent threads. Fermi supports up to 1536 concurrent threads per SM; e.g. with 256 threads per block it can run up to 6 blocks concurrently per SM, and remaining blocks queue. Note, however, that resources must be shared between threads: heavy use of on-chip memory and registers will limit the number of concurrent threads. It is best to experiment and measure performance; NVIDIA provide a CUDA Occupancy Calculator spreadsheet to help.

Occupancy example

Original code:
Loop over i from 1 to 512
    Loop over j from 1 to 512
        independent iteration

1D decomposition (512 threads):
Calc i from thread/block ID
Loop over j from 1 to 512
    independent iteration

2D decomposition (262,144 threads):
Calc i & j from thread/block ID
independent iteration
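In CUDA C the two decompositions might look like the sketch below (the 512x512 size comes from the slide; the block shapes, array name and placeholder work are illustrative assumptions):

#include <cuda_runtime.h>

#define N 512

/* 1D decomposition: one thread per i, each thread loops over j (512 threads) */
__global__ void kernel_1d(float *a) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < N)
        for (int j = 0; j < N; j++)
            a[i*N + j] = 2.0f*a[i*N + j];   /* placeholder independent work */
}

/* 2D decomposition: one thread per (i, j) pair (262,144 threads) */
__global__ void kernel_2d(float *a) {
    int i = blockIdx.y*blockDim.y + threadIdx.y;
    int j = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < N && j < N)
        a[i*N + j] = 2.0f*a[i*N + j];       /* placeholder independent work */
}

/* Possible launch configurations:
     kernel_1d<<<N/64, 64>>>(a_d);
     dim3 threads(16, 16), blocks(N/16, N/16);
     kernel_2d<<<blocks, threads>>>(a_d);                                   */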

Memory coalescing
Global memory bandwidth for the graphics memory on a GPU is high compared to a CPU, but there are many data-hungry cores, so memory bandwidth is a bottleneck. Maximum bandwidth is achieved when data is loaded for multiple threads in a single transaction: this is called coalescing. It happens when the data access pattern meets certain conditions: the 16 consecutive threads of a half-warp must access data from within the same memory segment. The condition is met, for example, when consecutive threads read consecutive memory addresses within a warp. Otherwise, memory accesses are serialised, significantly degrading performance. Adapting code to allow coalescing can dramatically improve performance.

Memory coalescing example
The condition is met when consecutive threads read consecutive memory addresses. In C, the last (rightmost) array index runs fastest through memory (row-major storage).

Not coalesced (consecutive threads access different rows):
row = blockIdx.x*blockDim.x + threadIdx.x;
for (col=0; col<N; col++)
    output[row][col] = 2*input[row][col];

Coalesced (consecutive threads access consecutive addresses):
col = blockIdx.x*blockDim.x + threadIdx.x;
for (row=0; row<N; row++)
    output[row][col] = 2*input[row][col];
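As complete kernels these might look like the following sketch (assuming, for illustration, that the 2D arrays are stored as flat row-major float arrays of size n*n):

#include <cuda_runtime.h>

/* Not coalesced: each thread owns a row, so the threads of a half-warp
   touch addresses that are n elements apart */
__global__ void scale_by_row(const float *input, float *output, int n) {
    int row = blockIdx.x*blockDim.x + threadIdx.x;
    if (row < n)
        for (int col = 0; col < n; col++)
            output[row*n + col] = 2.0f*input[row*n + col];
}

/* Coalesced: each thread owns a column, so for a given row iteration
   consecutive threads read consecutive addresses */
__global__ void scale_by_col(const float *input, float *output, int n) {
    int col = blockIdx.x*blockDim.x + threadIdx.x;
    if (col < n)
        for (int row = 0; row < n; row++)
            output[row*n + col] = 2.0f*input[row*n + col];
}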

Use of On-Chip Memory
Global memory accesses can be avoided altogether if the on-chip memory and registers can be utilised, but these are small.
- Shared memory is shared between the threads in a block: __shared__ qualifier (C), shared attribute (Fortran). Fermi has 64 kB per SM, configurable as shared memory or cache, partitioned as 16 kB/48 kB or 48 kB/16 kB. Fermi GPUs cache global memory accesses automatically, but you may get better results managing shared memory manually (which can be very involved).
- Constant memory is 64 kB of read-only memory: __constant__ qualifier (C), constant attribute (Fortran), with cudaMemcpyToSymbol used to copy from the host (see the programming guide).
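A minimal sketch of both in CUDA C (the kernel, the names and the sizes are illustrative assumptions; see the programming guide for the details):

#include <cuda_runtime.h>

#define TILE 256   /* assumes blocks of 256 threads */

/* A small slice of the 64 kB of constant memory */
__constant__ float coeffs[4];

__global__ void scale(const float *in, float *out, int n) {
    /* Shared memory: one tile per block, visible to all threads in the block */
    __shared__ float tile[TILE];

    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();                 /* make the whole tile visible */

    if (i < n)
        out[i] = coeffs[0]*tile[threadIdx.x];   /* placeholder use of both */
}

/* On the host, constant memory is filled with cudaMemcpyToSymbol:
     float h_coeffs[4] = {0.25f, 0.25f, 0.25f, 0.25f};
     cudaMemcpyToSymbol(coeffs, h_coeffs, sizeof(h_coeffs));              */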

Conclusions
The GPU architecture offers higher floating point and memory bandwidth performance than leading CPUs, e.g. roughly 5x peak in each for NVIDIA Fermi vs AMD Magny-Cours. A number of factors can inhibit application performance on the GPU, and a number of steps can be taken to circumvent these inhibitors; some may require significant development/tuning for real applications. It is important to have a good understanding of the application, the architecture and the programming model.