General Purpose GPU programming (GP-GPU) with Nvidia CUDA. Libby Shoop


General Purpose GPU programming (GP-GPU) with Nvidia CUDA Libby Shoop 3

What is (Historical) GPGPU?
General-purpose computation using the GPU and the graphics API in applications other than 3D graphics
GPU accelerates the critical path of the application
Data-parallel algorithms leverage GPU attributes:
Large data arrays, streaming throughput
Fine-grain SIMD parallelism (sometimes also referred to as SIMT, emphasizing the threads)
Low-latency floating point (FP) computation 4

What is (Historical) GPGPU? Applications
See GPGPU.org, GPU Gems
Game effects (FX), physics, image processing
Physical modeling, computational engineering, matrix algebra, convolution, correlation, sorting 5

CUDA: Compute Unified Device Architecture
General-purpose programming model
User kicks off batches of threads on the GPU
GPU = dedicated super-threaded, massively data-parallel co-processor
Targeted software stack: compute-oriented drivers, language, and tools
Driver for loading computation programs into the GPU
Standalone driver, optimized for computation
Interface designed for compute: graphics-free API
Data sharing with OpenGL buffer objects
Guaranteed maximum download & readback speeds
Explicit GPU memory management 7

Parallel Computing on a GPU: First Generation
8-series GPUs deliver 25 to 200+ GFLOPS on compiled parallel C applications
Available in laptops, desktops, and clusters
GPU parallelism is doubling every year
Programming model scales transparently
Programmable in C with CUDA tools
Multithreaded SPMD model uses application data parallelism and thread parallelism
GeForce 8800, Tesla D870, Tesla S870 9

New server: wulver
2 GPU cards:
GTX 770: Kepler architecture, compute capability 3.0
GTX 970: Maxwell architecture, compute capability 5.2 10

GTX 770 Kepler 11

GPU architecture evolution: Maxwell 12

Architecture: TK1 (Kepler)
192 cores on 1 SMX (streaming multiprocessor) 13

CUDA Devices and Threads
A compute device:
Is a coprocessor to the CPU or host
Has its own DRAM (device memory)
Runs many threads in parallel
Is typically a GPU but can also be another type of parallel processing device
Data-parallel portions of an application are expressed as device kernels which run on many threads
Differences between GPU and CPU threads:
GPU threads are extremely lightweight, with very little creation overhead
GPU needs 1000s of threads for full efficiency; a multi-core CPU needs only a few 14

CUDA C
Integrated host+device application in a single C program:
Serial or modestly parallel parts in host C code
Highly parallel parts in device SPMD kernel C code
Serial code (host) ... Parallel kernel (device): KernelA<<< nBlk, nTid >>>(args); ...
Serial code (host) ... Parallel kernel (device): KernelB<<< nBlk, nTid >>>(args); ... 16

Extended C
Type qualifiers: __global__, __device__, __shared__, __local__, __constant__
Keywords: threadIdx, blockIdx
Intrinsics: __syncthreads
Runtime API: memory, symbol, and execution management; function launch

__device__ float filter[N];

__global__ void convolve(float *image) {
    __shared__ float region[M];
    ...
    region[threadIdx.x] = image[i];
    __syncthreads();
    ...
    image[j] = result;
}

// Allocate GPU memory
void *myimage;
cudaMalloc(&myimage, bytes);

// 100 blocks, 10 threads per block
convolve<<<100, 10>>>(myimage);
19

nvcc compiler
Special compiler that creates both device code and host code:
/usr/local/cuda/bin/nvcc threadid.cu -o threadid -arch=sm_20
The -arch flag lets you generate code for a particular compute capability
See the NVIDIA developer documentation 20
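
The source of threadid.cu itself is not included in the slides; a minimal sketch of what such a file might contain (entirely hypothetical, written to match the -arch=sm_20 requirement for device-side printf):

// threadid.cu -- hypothetical minimal example, not the actual course file
#include <cstdio>

__global__ void printIds() {
    // Each thread reports its block and thread index
    printf("block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main() {
    printIds<<<2, 4>>>();        // 2 blocks of 4 threads each
    cudaDeviceSynchronize();     // wait for the kernel (and its printf output) to finish
    return 0;
}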

Arrays of Parallel Threads
A CUDA kernel is executed by an array of threads
All threads run the same code (SPMD)
Each thread has an ID that it uses to compute memory addresses and make control decisions

threadID: 0 1 2 3 4 5 6 7
float x = input[threadID];
float y = func(x);
output[threadID] = y;
22

Thread Blocks: Scalable Cooperation
Divide the monolithic thread array into multiple blocks
Threads within a block cooperate via shared memory, atomic operations, and barrier synchronization
Threads in different blocks cannot cooperate

Thread Block 0, Thread Block 1, ..., Thread Block N-1 (each with threadID 0..7); every block runs the same code:
float x = input[threadID];
float y = func(x);
output[threadID] = y;
23
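
A common way to turn the per-block thread ID into a grid-wide index is blockIdx.x * blockDim.x + threadIdx.x. A minimal sketch (kernel name, array names, and sizes are illustrative, not from the slides):

__global__ void scale(float *input, float *output, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)                                       // guard the extra threads in the last block
        output[i] = 2.0f * input[i];
}

// Launch enough blocks of 256 threads to cover n elements:
// scale<<<(n + 255) / 256, 256>>>(d_input, d_output, n);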

CUDA API Highlights: Easy and Lightweight
The API is an extension to the ANSI C programming language - low learning curve
The hardware is designed to enable a lightweight runtime and driver - high performance 24

Block IDs and Thread IDs (original architectures)
Each thread uses IDs to decide what data to work on:
Block ID: 1D or 2D
Thread ID: 1D, 2D, or 3D
Simplifies memory addressing when processing multidimensional data (image processing, solving PDEs on volumes)
[Figure: the host launches Kernel 1 on Grid 1 and Kernel 2 on Grid 2; each grid is a 2D array of blocks, and each block is a 3D array of threads. Courtesy: NVIDIA] 25

Assigning Grids and Blocks of Threads
Obtaining a thread's ID depends on the layout chosen
This is the complicated aspect of CUDA programming
[Figure: the same grid/block/thread hierarchy diagram as the previous slide. Courtesy: NVIDIA] 26
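
For a 2D layout, the usual indexing pattern is the following sketch (names and block dimensions are illustrative, not taken from the slides):

__global__ void kernel2D(float *data, int width, int height) {
    // 2D global coordinates from a 2D grid of 2D blocks
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < height && col < width)
        data[row * width + col] += 1.0f;   // row-major addressing
}

// dim3 block(16, 16);
// dim3 grid((width + 15) / 16, (height + 15) / 16);
// kernel2D<<<grid, block>>>(d_data, width, height);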

CUDA Memory Model Overview
Global memory:
Main means of communicating R/W data between host and device
Contents visible to all threads
Long-latency access
We will focus on global memory for now
[Figure: a grid of blocks; each block has per-thread registers and per-block shared memory, and all threads access global memory, which the host also reads and writes] 27

CUDA Device Memory Allocation
cudaMalloc()
Allocates an object in device global memory
Requires two parameters:
Address of a pointer to the allocated object
Size of the allocated object
cudaFree()
Frees an object from device global memory
Takes the pointer to the freed object
[Figure: the memory hierarchy diagram again, highlighting global memory] 28

CUDA Device Memory Allocation (cont.)
Code example:
Allocate a 64 * 64 single-precision float array
Attach the allocated storage to Md
"d" is often used to indicate a device data structure

int TILE_WIDTH = 64;
float* Md;
int size = TILE_WIDTH * TILE_WIDTH * sizeof(float);
cudaMalloc((void**)&Md, size);
...
cudaFree(Md);
29
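
cudaMalloc, like most CUDA runtime calls, returns a cudaError_t; a small sketch of checking it (fragment only, assumes <cstdio> and <cstdlib> are included):

cudaError_t err = cudaMalloc((void**)&Md, size);
if (err != cudaSuccess) {
    // cudaGetErrorString converts the status code into a readable message
    fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
    exit(EXIT_FAILURE);
}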

CUDA Host-Device Data Transfer
cudaMemcpy(): memory data transfer
Requires four parameters:
Pointer to destination
Pointer to source
Number of bytes copied
Type of transfer: host to host, host to device, device to host, device to device
Asynchronous transfer is also available
[Figure: the memory hierarchy diagram, showing the host reading and writing global memory] 30

CUDA Host-Device Data Transfer (cont.)
Code example:
Transfer a 64 * 64 single-precision float array
M is in host memory and Md is in device memory
cudaMemcpyHostToDevice and cudaMemcpyDeviceToHost are symbolic constants

cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost);
31
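
Putting the allocation and transfer slides together, a minimal host-side round trip might look like the following sketch (array names and the absence of error checking are illustrative; assumes <cstdlib> is included):

int TILE_WIDTH = 64;
int size = TILE_WIDTH * TILE_WIDTH * sizeof(float);
float *M = (float*)malloc(size);   // host array
float *Md;                         // device array

cudaMalloc((void**)&Md, size);
cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);   // host -> device
// ... launch kernels that read/write Md ...
cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost);   // device -> host
cudaFree(Md);
free(M);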

CUDA Keywords 32

CUDA Function Declarations

                                    Executed on the:   Only callable from the:
__device__ float DeviceFunc()       device             device
__global__ void  KernelFunc()       device             host
__host__   float HostFunc()         host               host

__global__ defines a kernel function; it must return void
__device__ and __host__ can be used together
33
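
A small illustration of how the qualifiers combine (the function names are illustrative, not from the slides):

__device__ float square(float x) {               // device-only helper, callable only from device code
    return x * x;
}

__host__ __device__ float clampToOne(float x) {  // compiled for both host and device
    return x > 1.0f ? 1.0f : x;
}

__global__ void transform(float *data, int n) {  // kernel: runs on the device, launched from the host
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = clampToOne(square(data[i]));
}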

Calling a Kernel Function - Thread Creation
A kernel function must be called with an execution configuration:

__global__ void KernelFunc(...);
dim3 DimGrid(100, 50);          // 5000 thread blocks
dim3 DimBlock(4, 8, 8);         // 256 threads per block
size_t SharedMemBytes = 64;     // 64 bytes of shared memory
KernelFunc<<< DimGrid, DimBlock, SharedMemBytes >>>(...);

Any call to a kernel function is asynchronous from CUDA 1.0 on; explicit synchronization is needed for blocking
35
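
A sketch of one way to block the host until a launched kernel has finished (cudaDeviceSynchronize is the current call; older CUDA code uses cudaThreadSynchronize; a blocking cudaMemcpy from the device has the same effect):

KernelFunc<<<DimGrid, DimBlock, SharedMemBytes>>>(args);
cudaError_t err = cudaDeviceSynchronize();   // blocks until all preceding device work completes
if (err != cudaSuccess)
    fprintf(stderr, "kernel failed: %s\n", cudaGetErrorString(err));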

Grids, Blocks, and Threads Separate presentation 36

A Simple Running Example: Matrix Multiplication
A simple matrix multiplication example that illustrates the basic features of memory and thread management in CUDA programs:
Leave shared memory usage until later
Local, register usage
Thread ID usage
Memory data transfer API between host and device
Assume square matrices for simplicity 37

Programming Model: Square Matrix Multiplication Example
P = M * N, all of size WIDTH x WIDTH
Without tiling:
One thread calculates one element of P
M and N are loaded WIDTH times from global memory
[Figure: matrices M, N, and P, each WIDTH x WIDTH] 38

Memory Layout of a Matrix in C
C stores matrices in row-major order: the 4 x 4 matrix

M0,0 M0,1 M0,2 M0,3
M1,0 M1,1 M1,2 M1,3
M2,0 M2,1 M2,2 M2,3
M3,0 M3,1 M3,2 M3,3

is laid out linearly in memory as

M0,0 M0,1 M0,2 M0,3 M1,0 M1,1 M1,2 M1,3 M2,0 M2,1 M2,2 M2,3 M3,0 M3,1 M3,2 M3,3
39
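
In other words, element (row, col) of a Width x Width matrix sits at linear index row * Width + col, which is the addressing used in the kernel below. A tiny helper illustrating the idiom (the function is illustrative, not from the slides):

__host__ __device__ float getElement(const float *M, int row, int col, int Width) {
    // element (row, col) of a row-major Width x Width matrix stored in a flat array
    return M[row * Width + col];
}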

Step 1: Matrix Multiplication - A Simple Host Version in C

// Matrix multiplication on the (CPU) host in double precision
void MatrixMulOnHost(float* M, float* N, float* P, int Width)
{
    for (int i = 0; i < Width; ++i)
        for (int j = 0; j < Width; ++j) {
            double sum = 0;
            for (int k = 0; k < Width; ++k) {
                double a = M[i * Width + k];
                double b = N[k * Width + j];
                sum += a * b;
            }
            P[i * Width + j] = sum;
        }
}
40

Step 2: Input Matrix Data Transfer (Host-side Code)

void MatrixMulOnDevice(float* M, float* N, float* P, int Width)
{
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;

    // 1. Allocate and load M, N into device memory
    cudaMalloc((void**)&Md, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&Nd, size);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

    // Allocate P on the device
    cudaMalloc((void**)&Pd, size);
41

Step 3: Output Matrix Data Transfer (Host-side Code)

    // 2. Kernel invocation code - to be shown later

    // 3. Read P from the device
    cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);

    // Free device matrices
    cudaFree(Md);
    cudaFree(Nd);
    cudaFree(Pd);
}
42

Step 4: Kernel Function

// Matrix multiplication kernel - per-thread code
__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Pvalue is used to store the element of the matrix
    // that is computed by the thread
    float Pvalue = 0;
43

Step 4: Kernel Function (cont.)

    for (int k = 0; k < Width; ++k) {
        float Melement = Md[threadIdx.y * Width + k];
        float Nelement = Nd[k * Width + threadIdx.x];
        Pvalue += Melement * Nelement;
    }

    Pd[threadIdx.y * Width + threadIdx.x] = Pvalue;
}

[Figure: thread (tx, ty) reads row ty of Md and column tx of Nd to produce one element of Pd]
44

Step 5: Kernel Invocation (Host-side Code)

    // Setup the execution configuration
    dim3 dimGrid(1, 1);
    dim3 dimBlock(Width, Width);

    // Launch the device computation threads
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
45

Only One Thread Block Used
One block of threads computes matrix Pd; each thread computes one element of Pd
Each thread:
Loads a row of matrix Md
Loads a column of matrix Nd
Performs one multiply and one addition for each pair of Md and Nd elements
Compute to off-chip memory access ratio is close to 1:1 (not very high)
Size of the matrix is limited by the number of threads allowed in a thread block
[Figure: a single block of threads; thread (2, 2) computes one element of Pd from a row of Md and a column of Nd]
46

Step 7: Handling Arbitrary-Sized Square Matrices
Have each 2D thread block compute a (TILE_WIDTH)² sub-matrix (tile) of the result matrix
Each block has (TILE_WIDTH)² threads
Generate a 2D grid of (WIDTH/TILE_WIDTH)² blocks
You still need to put a loop around the kernel call for cases where WIDTH/TILE_WIDTH is greater than the maximum grid size (64K)
[Figure: Pd divided into tiles; block (bx, by), thread (tx, ty) computes one element of one tile]
47
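
The slides for this step are not reproduced here; the usual pattern for such a kernel, combining block and thread indices with a bounds check, is sketched below (this is an assumption of what the course code looks like, not a copy of it):

__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Global row and column this thread is responsible for
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;

    if (row < Width && col < Width) {
        float Pvalue = 0;
        for (int k = 0; k < Width; ++k)
            Pvalue += Md[row * Width + k] * Nd[k * Width + col];
        Pd[row * Width + col] = Pvalue;
    }
}

// Host side (TILE_WIDTH is the block edge length, e.g. 16):
// dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
// dim3 dimGrid((Width + TILE_WIDTH - 1) / TILE_WIDTH,
//              (Width + TILE_WIDTH - 1) / TILE_WIDTH);
// MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);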

Some Useful Information on Tools 48

Compiling a CUDA Program
C/C++ CUDA application -> NVCC -> CPU code + PTX code (virtual) -> PTX-to-target compiler -> GPU target code (physical, e.g. G80)
Parallel Thread eXecution (PTX):
Virtual machine and ISA
Programming model
Execution resources and state

Example: the CUDA source
    float4 me = gx[gtid];
    me.x += me.y * me.z;
becomes the PTX
    ld.global.v4.f32 {$f1,$f3,$f5,$f7}, [$r9+0];
    mad.f32 $f1, $f5, $f3, $f1;
49

Compilation
Any source file containing CUDA language extensions must be compiled with NVCC
NVCC is a compiler driver:
Works by invoking all the necessary tools and compilers like cudacc, g++, cl, ...
NVCC outputs:
C code (host CPU code), which must then be compiled with the rest of the application using another tool
PTX: either object code directly, or PTX source interpreted at runtime
50
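
For illustration, a few standard nvcc invocations corresponding to these outputs (file names are illustrative):

nvcc -c matrixmul.cu -o matrixmul.o       # compile to object code, link later with g++/cl
nvcc -ptx matrixmul.cu -o matrixmul.ptx   # emit the intermediate PTX source
nvcc matrixmul.cu -o matrixmul            # compile and link in one step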

Linking
Any executable with CUDA code requires two dynamic libraries:
The CUDA runtime library (cudart)
The CUDA core library (cuda)
51
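
When the final link step is done with the host compiler rather than nvcc, these show up as explicit linker flags; a sketch (the library path varies by installation, and runtime-API-only programs typically need just -lcudart):

g++ main.o matrixmul.o -L/usr/local/cuda/lib64 -lcudart -lcuda -o matrixmul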

Debugging Using the Device Emulation Mode
An executable compiled in device emulation mode (nvcc -deviceemu) runs completely on the host using the CUDA runtime
No need for any device or CUDA driver
Each device thread is emulated with a host thread
Running in device emulation mode, one can:
Use host native debug support (breakpoints, inspection, etc.)
Access any device-specific data from host code and vice versa
Call any host function from device code (e.g. printf) and vice versa
Detect deadlock situations caused by improper usage of __syncthreads
52

Device Emulation Mode Pitfalls Emulated device threads execute sequentially, so simultaneous accesses of the same memory location by multiple threads could produce different results. Dereferencing device pointers on the host or host pointers on the device can produce correct results in device emulation mode, but will generate an error in device execution mode 53

Floating Point
Results of floating-point computations will differ slightly because of:
Different compiler outputs and instruction sets
Use of extended precision for intermediate results
There are various options to force strict single precision on the host 54