Introduction to GPU hardware and to CUDA


Philip Blakely
Laboratory for Scientific Computing, University of Cambridge

Course outline
- Introduction to GPU hardware structure and the CUDA programming model
- Writing and compiling a simple CUDA code
- Example application: 2D Euler equations
- Optimization strategies and performance tuning
- In-depth coverage of CUDA features
- Combining CUDA with MPI

Introduction to GPUs
- References
- Why use GPUs?
- Short history of GPUs
- Hardware structure
- CUDA language
- Installing CUDA

References
- http://www.nvidia.com/cuda - official site
- http://developer.nvidia.com/cuda-downloads - drivers, compilers, manuals
- /opt/cuda-8.0/doc/pdf/ or /lsc/opt/cuda-8.0/doc/pdf: CUDA C Programming Guide; CUDA C Best Practices Guide
- Programming Massively Parallel Processors, David B. Kirk and Wen-mei W. Hwu (2010)
- CUDA by Example, Jason Sanders and Edward Kandrot (2010)
- The CUDA Handbook, Nicholas Wilt (2013)

Course caveats
- Only applicable to NVIDIA GPUs
- Assumes a CUDA-enabled graphics card of compute capability 2.0 or higher, although the material is relevant to all CUDA-enabled GPUs
- Assumes good knowledge of C/C++, including simple optimization, and some knowledge of C++ (simple classes and templates)
- Assumes basic knowledge of the Linux command line; some knowledge of compilation is useful
- Experience of parallel algorithms and their issues is useful
- Linux is assumed (although CUDA is also available under Windows and Mac OS)

The power of GPUs
Most PCs have a dedicated graphics card, usually used for realistic rendering in games, visualization, etc. However, when programmed effectively, GPUs can provide a lot of computing power.
Live demos: fluidGL, particles, Mandelbrot

Why should we use GPUs?
- Can provide a large speed-up over CPU performance: for some applications a factor of 100 (although that compares a single Intel core with a full NVIDIA GPU); a more typical improvement is around 10x compared to an 8-core Intel processor
- Relatively cheap (a few hundred pounds), since NVIDIA sell into the gamers' market
- Widely available
- Easy to install

Improved GFLOPS
Theoretical GFLOPS of GPUs are higher than those of Intel's processors (though it is not entirely clear which processor is being compared).

Improved bandwidth
To get good performance out of any processor, we need high memory bandwidth as well, and GPU memory bandwidth is correspondingly higher than that of CPUs.

Why shouldn't we use GPUs?
- The speed-up in double precision is less good than in single precision: up to a factor of 32 difference (very GPU-model dependent), as opposed to a CPU difference of a factor of 2
- Not suitable for all problems; it depends on the algorithm
- Can require a lot of work to get very good performance
- It may be better to use multi-core general-purpose processors with MPI or OpenMP
- Current programming models do not protect the programmer from themselves
- For some applications, a GPU may only give a 2.5x advantage over a fully-used multi-core CPU
- A commercial-grade GPU will cost around £2,500, about the same as a 16-core Xeon server

Brief history of GPUs
- Graphics cards for home PCs were first developed in the 1990s to handle graphics rendering, leaving the CPU free for complex work; the CPU hands a scene over to the graphics card for rendering
- Early 2000s: graphics cards gained more complex graphics operations and became programmable, via APIs such as OpenGL and Microsoft's DirectX
- Scientific computing researchers started performing calculations using the built-in vector operations to do useful work
- Late 2000s: graphics card companies developed ways to program GPUs in a general fashion, leading to GPGPUs and general programming languages, e.g. CUDA (NVIDIA, 2006)
- Supercomputing with GPUs takes off
- 2010-2016: NVIDIA progressively improve their GPUs, adding extra instructions, memory caches, dynamic parallelism, better C++ support, etc.
- Meanwhile, Intel and others have been investing in many-core computing, producing the Intel Xeon Phi (including Knights Landing); the OpenCL consortium also started

Hardware structure - typical CPU
- A large number of transistors is devoted to flow control, cache, etc.
- Handles branch prediction, speculative execution, etc.

Hardware structure - typical GPU
- A larger proportion of transistors is devoted to floating-point operations
- No branch prediction or speculative execution on GPU cores
- This results in higher theoretical FLOPS, as long as we can get the data there fast enough

GPU programming
The GPU programming model is typically SIMD (or SIMT). Programming for GPUs requires rethinking algorithms and memory layout; the best algorithm for a CPU is not necessarily the best for a GPU, and knowledge of the GPU hardware is required for best performance. GPU programming works best if we:
- Perform the same operation simultaneously on multiple pieces of data
- Organise operations to be as independent as possible
- Arrange data in GPU memory to maximise the rate of data access
A minimal kernel illustrating this data-parallel style is sketched below.
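
As a concrete illustration (not part of the original slides), here is a minimal sketch of a CUDA vector addition; all names and sizes are arbitrary. Every thread executes the same code, differing only in the index it derives from its thread and block numbers:

// vecadd.cu -- minimal sketch of the SIMD/SIMT style.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global index
    if (i < n)                 // guard: total thread count may exceed n
        c[i] = a[i] + b[i];    // same operation on independent data
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hA = new float[n], *hB = new float[n], *hC = new float[n];
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;                    // device (global-memory) arrays
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // round up to cover n
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);           // expect 3.0
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] hA; delete[] hB; delete[] hC;
    return 0;
}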

CUDA language
- The CUDA language taught in this course is applicable to NVIDIA GPUs only; other GPU manufacturers exist, e.g. AMD, and OpenCL aims to be applicable to all GPUs
- CUDA (Compute Unified Device Architecture) is enhanced C++, containing extensions corresponding to hardware features of the GPU
- Easy to use (if you know C++), but contains much more potential for hard-to-trace errors than C++ does, due to the hardware

Alternatives to CUDA
Caveat: I have not studied any of these...
- Microsoft's DirectCompute (following DirectX)
- AMD Stream Computing SDK (AMD-hardware specific)
- PyCUDA (Python wrapper for CUDA)
- CLyther (Python wrapper for OpenCL)
- Jacket (MATLAB or C++ wrapper; commercial)
- MATLAB's Parallel Computing Toolbox

Installing CUDA
- Instructions for the latest driver can be found at https://developer.nvidia.com/cuda-downloads
- For Ubuntu, proprietary drivers are in the standard repositories but may not be the latest version
- The Toolkit from NVIDIA provides a compiler, libraries, documentation, and a wide range of examples
- CSC desktops and laptops have the NVIDIA drivers and Toolkit pre-installed

Compiling a CUDA program
- NVIDIA have their own compiler, nvcc (based on Clang), which uses either g++ or cl (Microsoft) to compile the CPU code
- nvcc is needed to compile any CUDA code, but CUDA API functions can be compiled with a normal compiler (gcc / cl)
- Compilation: nvcc helloworld.cu -o helloworld
A matching helloworld.cu is sketched below.
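
The slides give only the compilation command, so the following helloworld.cu is an illustrative sketch; device-side printf requires compute capability 2.0, which this course already assumes:

// helloworld.cu -- smallest useful CUDA program; compile with:
//   nvcc helloworld.cu -o helloworld
#include <cstdio>
#include <cuda_runtime.h>

__global__ void hello()
{
    printf("Hello from thread %d\n", threadIdx.x);
}

int main()
{
    hello<<<1, 4>>>();          // launch 1 block of 4 threads
    cudaDeviceSynchronize();    // wait for the kernel (and its output)
    return 0;
}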

Programming model
- CUDA works in terms of threads (Single Instruction Multiple Thread)
- Threads are separate processing elements within a program: each executes exactly the same set of instructions, typically differing only by their thread number, which can lead to differing data dependencies / execution paths
- Threads execute independently of each other unless explicitly synchronized (or part of the same warp)
- Typically, threads have access to the same memory block, as well as some data local to each thread, i.e. a shared-memory paradigm, with all the potential issues that implies

Conceptual diagram
- Each thread has internal registers and access to global and constant memory
- Threads arranged in blocks have access to a common block of shared memory; threads can communicate within blocks
- A block of threads is run entirely on a single streaming multiprocessor on the GPU

Conceptual diagram (continued)
- Blocks are arranged into a grid; separate blocks cannot (easily) communicate, as they may be run on separate streaming multiprocessors
- Communication with the CPU is only via global and constant memory
- The GPU cannot request data from the CPU; the CPU must send data to it

Conceptual elaboration
- Work for the GPU is written in separate functions called kernels, which are called from the main program
- When running a program, you launch one or more blocks of threads, specifying the number of threads in each block and the total number of blocks
- Each block of threads is allocated to a streaming multiprocessor (SM) by the hardware
- Each thread has a unique index, made up of a thread index and a block index, which are available inside your code; these are used to distinguish threads in a kernel
- Communication between threads in the same block is possible; communication between blocks is not
A sketch of a launch configuration and the index calculation follows.
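
A minimal sketch (not from the slides) of the launch syntax: <<<blocks, threadsPerBlock>>> sets the grid, and each thread combines blockIdx and threadIdx into a unique global index. The kernel and array names are arbitrary:

// launch.cu -- each thread records its own unique global index.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void writeIndex(int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // block index + thread index
    if (i < n)
        out[i] = i;
}

int main()
{
    const int n = 1000;
    int* d;
    cudaMalloc(&d, n * sizeof(int));

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // 4 blocks
    writeIndex<<<blocks, threadsPerBlock>>>(d, n);

    int h[10];
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 10; ++i)
        printf("%d ", h[i]);    // prints 0 1 2 ... 9
    printf("\n");
    cudaFree(d);
    return 0;
}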

Hardware specifics
- A GPU is made up of several streaming multiprocessors (4 for a Quadro K3100M, 13 for a Kepler K20c), each of which consists of 32, 48, 128, or 192 cores; the number of cores per SM is lower in older cards
- Each SM runs a warp of 32 threads at once
- Global memory access is relatively slow (400-800 clock cycles)
- An SM can have many thread blocks allocated at once, but only runs one at any one time; the SM can switch rapidly to running other threads to hide memory latency
- Thread registers and shared memory for a thread block remain on the multiprocessor until the thread block has finished

Streaming Multiprocessor
- When launching a kernel, each block of threads is assigned to a Streaming Multiprocessor
- All threads run exactly the same kernel code
- All threads in the block run potentially concurrently; do not assume any particular ordering
- Within a block, threads are split into warps of 32 threads which run in lock-step, all executing the same instruction at the same time
- Different warps within a block are not necessarily run in lock-step, but can be synchronized at any point
- All threads in a block have access to some common shared memory on the SM
- Multiple thread blocks can be allocated on an SM at once, assuming sufficient memory; execution can switch to other blocks if necessary, while holding the other running blocks in memory

Hardware limitations: Tesla K20 (compute capability 3.5)
- 13 multiprocessors, each with 192 cores = 2496 cores
- Per streaming multiprocessor (SM): 65,536 32-bit registers; 16-48kB shared memory; 16 active blocks
- Global: 64kB constant memory; 5GB global memory
- Maximum threads per block: 1024

Hardware limitations: Tesla M40 (compute capability 5.2)
- 24 multiprocessors, each with 128 cores = 3072 cores
- Per streaming multiprocessor (SM): 65,536 32-bit registers; 96kB shared memory; 32 active blocks
- Global: 64kB constant memory; 12GB global memory
- Maximum threads per block: 1024

Hardware limitations (continued)
- If you have a recent NVIDIA GPU in your own PC, it will have CUDA capability; if bought recently, this will probably be 5.0 or better
- The commercial-grade GPUs differ only in that they have more RAM, and that it is ECC
- The same applications demonstrated here will run on your own hardware, so you can often use a local machine for testing
- See http://www-internal.lsc.phy.cam.ac.uk/internal/systems.shtml for CUDA capability details for our machines
The limits listed above can also be queried at runtime, as sketched below.
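
A sketch (not from the slides) of querying these limits via the CUDA runtime API's cudaGetDeviceProperties:

// devicequery.cu -- print the hardware limits of each attached GPU.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev)
    {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, dev);
        printf("Device %d: %s\n", dev, p.name);
        printf("  Compute capability:    %d.%d\n", p.major, p.minor);
        printf("  Multiprocessors:       %d\n", p.multiProcessorCount);
        printf("  Registers per SM:      %d\n", p.regsPerMultiprocessor);
        printf("  Shared memory per SM:  %zu bytes\n", p.sharedMemPerMultiprocessor);
        printf("  Constant memory:       %zu bytes\n", p.totalConstMem);
        printf("  Global memory:         %zu bytes\n", p.totalGlobalMem);
        printf("  Max threads per block: %d\n", p.maxThreadsPerBlock);
    }
    return 0;
}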

Terminology
- Host: the PC's main CPU and/or RAM
- Device: a GPU being used for CUDA operations (usually the GPU global memory)
- Kernel: a block of code to be run on a GPU
- Thread: a single running instance of a kernel
- Block: an indexed collection of threads

Terminology continued
- Global memory: RAM on a GPU that can be read/written by any thread
- Constant memory: RAM on a GPU that can be read by any thread
- Shared memory: RAM on a GPU that can be read/written by any thread in a block
- Registers: memory local to each thread
- Latency: the time between an instruction being issued and the instruction completing

Terminology ctd.
- Compute capability: NVIDIA's label for what hardware features a graphics card has; most cards are 3.x, the latest cards 6.x
- Warp: a consecutive set of 32 threads within a block that are executed simultaneously (in lock-step); if branching occurs within a warp, the different code branches execute serially
A sketch of such warp divergence follows.
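
An illustrative sketch (not from the slides) of branching within a warp: odd and even lanes take different branches, so the two branches run serially, roughly halving this warp's throughput:

// divergence.cu -- odd/even branch inside each warp forces serialization.
#include <cuda_runtime.h>

__global__ void diverge(float* out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x % 2 == 0)     // even lanes run while odd lanes idle...
        out[i] = 2.0f * out[i];
    else                          // ...then odd lanes run while even lanes idle
        out[i] = out[i] + 1.0f;
}

// Had the condition depended only on the warp number, e.g.
//   if ((threadIdx.x / 32) % 2 == 0) ...
// every warp would take a single branch and no serialization would occur.

int main()
{
    float* d;
    cudaMalloc(&d, 64 * sizeof(float));
    cudaMemset(d, 0, 64 * sizeof(float));
    diverge<<<1, 64>>>(d);        // two warps of 32 threads
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}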

Importance of blocks
- All threads in a block have access to the same shared memory
- Threads in a block can communicate using shared memory and some special CUDA instructions: __syncthreads() waits until all threads in a block reach this point
- Blocks may be run in any order, and cannot (ordinarily/easily/safely) communicate with each other
A sketch of block-level communication through shared memory follows.
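
A sketch (not from the slides) of threads in one block communicating through shared memory, with __syncthreads() separating the writes from the reads; the array-reversal task is an arbitrary example:

// blockreverse.cu -- one block reverses an array via shared memory.
#include <cstdio>
#include <cuda_runtime.h>

#define N 64

__global__ void reverseBlock(int* data)
{
    __shared__ int tmp[N];           // shared by all threads in the block
    int t = threadIdx.x;

    tmp[t] = data[t];                // each thread writes one element
    __syncthreads();                 // all writes complete before any read

    data[t] = tmp[N - 1 - t];        // read an element another thread wrote
}

int main()
{
    int h[N], *d;
    for (int i = 0; i < N; ++i) h[i] = i;
    cudaMalloc(&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    reverseBlock<<<1, N>>>(d);       // one block of N threads
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("%d %d ... %d\n", h[0], h[1], h[N - 1]);  // 63 62 ... 0
    cudaFree(d);
    return 0;
}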

Parallel Correctness
- Remember that threads are essentially independent; you cannot make any assumptions about execution order
- Blocks may be run in any order within a grid, so the algorithm must be designed to work without block-ordering assumptions
- Any attempt to use global memory to pass information between blocks will probably fail (or at least give unpredictable results)...

Simple race-condition
The simple instruction i++ usually results in the operations:
- Read the value of i into a processor register
- Increment the register
- Write i back to memory
If the same instruction occurs on separate threads, and i refers to the same place in global memory, the outcome depends on how the threads interleave. Starting with i = 0 and two threads each executing i++:
- If thread 1 reads, increments, and writes before thread 2 starts, we end with i = 2
- If thread 2 reads i before thread 1 has written its result, both threads write 1, and we end with i = 1
A sketch demonstrating this lost-update race follows.
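
An illustrative sketch (not from the slides): every thread increments the same global counter with a plain ++, so most increments are lost and the printed result is unpredictable, almost certainly far below the thread count:

// race.cu -- many threads race on one int; increments are lost.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void racyIncrement(int* counter)
{
    (*counter)++;    // non-atomic read-modify-write on a shared location
}

int main()
{
    int h = 0, *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);

    racyIncrement<<<256, 256>>>(d);   // 65536 threads race on one int
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    printf("counter = %d (expected 65536 without the race)\n", h);
    cudaFree(d);
    return 0;
}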

Global memory read/write
Atomic instructions: using an atomic instruction will ensure that the reads and writes occur serially. This ensures correctness at the expense of performance.
From the Programming Guide: "If a non-atomic instruction executed by a warp writes to the same location in global or shared memory for more than one of the threads of the warp, the number of serialized writes that occur to that location varies depending on the compute capability of the device, and which thread performs the final write is undefined."
In other words: here be dragons!
- Each thread should write to its own global memory location
- Ideally, no global memory that needs to be read should also be written to by the same kernel
The atomic version of the racy increment above is sketched below.
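
A sketch (not from the slides) replacing the racy ++ with atomicAdd, which serializes the read-modify-write and gives the correct count:

// atomic.cu -- the race above, fixed with an atomic instruction.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void safeIncrement(int* counter)
{
    atomicAdd(counter, 1);   // hardware-serialized read-modify-write
}

int main()
{
    int h = 0, *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);

    safeIncrement<<<256, 256>>>(d);   // same 65536 threads as before
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    printf("counter = %d (now reliably 65536, but slower)\n", h);
    cudaFree(d);
    return 0;
}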

Summary
- Introduced GPUs: their history and advantages
- Described the hardware model and how it relates to the programming model
- Defined some terminology for later use