
GPU programming: CUDA Sylvain Collange Inria Rennes Bretagne Atlantique sylvain.collange@inria.fr

This lecture: CUDA programming We have seen some GPU architecture Now how to program it? 2

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 3

GPU development environments For general-purpose programming (not graphics) Multiple toolkits NVIDIA CUDA Khronos OpenCL Microsoft DirectCompute Google RenderScript Mostly syntactical variations Underlying principles are the same In this course, focus on NVIDIA CUDA 4

Higher-level programming... Directive-based OpenACC OpenMP 4.0 Language extensions / libraries Microsoft C++ AMP Intel Cilk+ NVIDIA Thrust, CUB Languages Intel ISPC Most corporations agree we need common standards... But only if their own product becomes the standard! 5

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 6

Hello World in CUDA CPU host code + GPU device code __global__ void hello() { } Device code int main() { hello<<<1,1>>>(); printf("Hello World!\n"); return 0; } Host code 7
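The transcription flattens the program; laid out readably (a minimal sketch of the same slide, with return 0; placed back inside main), it is:

    // hello.cu -- Hello World with an empty kernel (sketch of the slide)
    #include <cstdio>

    __global__ void hello() { }        // device code: empty kernel

    int main() {
        hello<<<1, 1>>>();             // launch 1 block of 1 thread on the GPU
        printf("Hello World!\n");      // host code
        return 0;
    }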

Compiling a CUDA program The executable contains both host and device code Device code is shipped as PTX and/or native binary; PTX can be recompiled on the fly (e.g. old program on new GPU) NVIDIA's compiler driver nvcc takes care of the process: it splits the .cu source files (cudafe), compiles host code with gcc/MSVC, compiles device code to the PTX intermediate language, assembles it with ptxas into GPU native binaries, and bundles host code, PTX and binaries for several GPUs into a fat binary (fatbin). Example: nvcc -o hello hello.cu 8

Control flow The program runs on the CPU and submits work to the GPU through the GPU driver Commands execute asynchronously (Figure: the application, a fat binary containing GPU code, calls the CUDA Runtime API (cudaXxx functions) or the CUDA Driver API (cuYyy functions); these run in user mode on top of the GPU driver, which in kernel mode pushes commands into a command queue that the GPU pops.) 9

Data flow The main program runs on the host: it manages memory transfers and initiates work on the GPU Typical flow: 1. Allocate GPU memory 2. Copy inputs from CPU memory to GPU memory 3. Run computation on GPU 4. Copy results back to CPU memory (Figure, built up over slides 10 to 14: CPU with its CPU memory on one side, GPU with its GPU memory on the other, with arrows numbered 1 to 4 for the steps.) 14

Example: a + b Our Hello World example did not involve the GPU Let's add up 2 numbers on the GPU Start from host code int main() { float a = 1515, b = 159; // Inputs, in host mem float c; // c = a + b, to be computed on the GPU printf("c = %f\n", c); } vectorAdd example: cuda/samples/0_Simple/vectorAdd 15

Step 1: allocate GPU memory int main() { float a = 1515, b = 159; // Inputs, in host mem // Allocate GPU memory size_t size = sizeof(float); float *d_A, *d_B, *d_C; cudaMalloc((void **)&d_A, size); cudaMalloc((void **)&d_B, size); cudaMalloc((void **)&d_C, size); // Free GPU memory cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); } Notes: we pass a pointer to the pointer to be overwritten; we allocate space for a, b and the result in GPU memory; at the end, we free it 16

Step 2, 4: copy data to/from GPU memory int main() { float a = 1515, b = 159; // Inputs, CPU mem // Allocate GPU memory size_t size = sizeof(float); float *d_A, *d_B, *d_C; cudaMalloc((void **)&d_A, size); cudaMalloc((void **)&d_B, size); cudaMalloc((void **)&d_C, size); // Copy from CPU mem to GPU mem cudaMemcpy(d_A, &a, size, cudaMemcpyHostToDevice); cudaMemcpy(d_B, &b, size, cudaMemcpyHostToDevice); // Copy results back to CPU mem float c; cudaMemcpy(&c, d_C, size, cudaMemcpyDeviceToHost); printf("c = %f\n", c); // Free GPU memory cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); } 17

Step 3: launch kernel __global__ void addOnGPU(float *a, float *b, float *c) { *c = *a + *b; } int main() { float a = 1515, b = 159; // Inputs, CPU mem // Allocate GPU memory size_t size = sizeof(float); float *d_A, *d_B, *d_C; cudaMalloc((void **)&d_A, size); cudaMalloc((void **)&d_B, size); cudaMalloc((void **)&d_C, size); // Copy from CPU mem to GPU mem cudaMemcpy(d_A, &a, size, cudaMemcpyHostToDevice); cudaMemcpy(d_B, &b, size, cudaMemcpyHostToDevice); // Launch computation on GPU addOnGPU<<<1, 1>>>(d_A, d_B, d_C); float c; // Result on CPU // Copy results back to CPU mem cudaMemcpy(&c, d_C, size, cudaMemcpyDeviceToHost); printf("c = %f\n", c); // Free GPU memory cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); } A kernel is a function prefixed by __global__: it runs on the GPU and is invoked from CPU code with the <<<>>> syntax What is inside the <<<>>>? 18
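Before moving on, here is the complete a + b program assembled from steps 1 to 4 and laid out readably (a sketch following the slides, without error checking):

    // addOnGPU.cu -- a + b on the GPU (sketch assembled from the slides)
    #include <cstdio>

    __global__ void addOnGPU(float *a, float *b, float *c) {
        *c = *a + *b;
    }

    int main() {
        float a = 1515, b = 159;                 // inputs, in host memory
        float c;                                 // result, in host memory

        // Step 1: allocate GPU memory
        size_t size = sizeof(float);
        float *d_A, *d_B, *d_C;
        cudaMalloc((void **)&d_A, size);
        cudaMalloc((void **)&d_B, size);
        cudaMalloc((void **)&d_C, size);

        // Step 2: copy inputs from CPU memory to GPU memory
        cudaMemcpy(d_A, &a, size, cudaMemcpyHostToDevice);
        cudaMemcpy(d_B, &b, size, cudaMemcpyHostToDevice);

        // Step 3: launch the computation on the GPU
        addOnGPU<<<1, 1>>>(d_A, d_B, d_C);

        // Step 4: copy the result back to CPU memory
        cudaMemcpy(&c, d_C, size, cudaMemcpyDeviceToHost);
        printf("c = %f\n", c);

        // Free GPU memory
        cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
        return 0;
    }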

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 19

Granularity of a GPU task Results from last Thursday lab work Total latency of transfer, compute, transfer back: ~5 µs CPU-GPU transfer latency: 0.3 µs GPU kernel call: ~4 µs CPU performance: 100 Gflop/s how many flops in 5 µs? 20

Granularity of a GPU task Results from last Thursday lab work Total latency of transfer, compute, transfer back: ~5 µs CPU-GPU transfer latency: 0.3 µs GPU kernel call: ~4 µs CPU performance: 100 Gflop/s 500 000 flops in 5 µs For < 500k operations, computing on the CPU will always be faster! Millions of operations are needed to amortize the data transfer time Only worth offloading large parallel tasks to the GPU 21

GPU physical organization (Figure: each SM (SM 1, SM 2, ...) contains execution units, registers, shared memory and an L1 cache; threads are grouped into warps; the SMs connect to the L2 cache / external memory.) 22

Workload: logical organization A kernel is launched on a grid: my_kernel<<<n, m>>>(...) Two nested levels: n blocks / grid, m threads / block (Figure: a grid of blocks 0 to n-1, each block containing threads 0 to m-1.) 23

Outer level: grid of blocks Blocks are also named Cooperative Thread Arrays (CTAs) Virtually unlimited number of blocks No communication between blocks of the same grid (Figure: grid 1 containing blocks 1, 2, 3, ...) 24

Inner level: threads Blocks contain threads All threads in a block run on the same SM (they can communicate) and run in parallel (they can synchronize) Constraints: max number of threads/block (512 or 1024 depending on arch) Recommended: at least 64 threads for good performance, a multiple of the warp size (Figure: the threads of a block meeting at a barrier.) 25

Multi-BSP model: recap Modern parallel platforms are hierarchical: threads, cores, nodes... Remember the memory wall and the speed of light Multi-BSP: a generalization of BSP with multiple nested levels Higher level: more expensive synchronization (Figure: level-1 supersteps separated by level-1 barriers, nested inside level-2 supersteps separated by level-2 barriers.) 26

Multi-BSP and CUDA (Figure: the CPU thread issues kernel launch 1 on grid 1, containing blocks 1 and 2; barriers inside a block act as L1 barriers, and the end of the grid together with the global memory write acts as the L2 barrier before kernel launch 2 on grid 2; block-level supersteps are L1 supersteps, kernel launches delimit L2 supersteps.) Minor difference: BSP is based on message passing, CUDA on shared memory 27

Mapping blocks to hardware resources SM resources (registers, shared memory) are partitioned across the blocks dispatched to that SM (Figure: blocks 1 and 2 mapped onto SM 1, blocks 3 and 4 onto SM 2; each SM contains execution units, registers, shared memory and an L1 cache, connected to the L2 cache / external memory.) 28

Block scheduling Blocks may run serially or in parallel, on the same or a different SM, in order or out of order You should not assume anything about the execution order of blocks (Figures, slides 29 to 32: the same 12 blocks dispatched to SM 1 and SM 2 in several different orders and placements.) 32

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 33

Example: vector addition The a + b example used only 1 thread Now let's run a parallel computation Start with multiple blocks, 1 thread/block Independent computations in each block No communication/synchronization needed 34

Host code: initialization A and B are now arrays: just change the allocation size int main() { int numElements = 50000; size_t size = numElements * sizeof(float); float *h_A = (float *)malloc(size); float *h_B = (float *)malloc(size); float *h_C = (float *)malloc(size); Initialize(h_A, h_B); // Allocate device memory float *d_A, *d_B, *d_C; cudaMalloc((void **)&d_A, size); cudaMalloc((void **)&d_B, size); cudaMalloc((void **)&d_C, size); cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice); cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);... } 35

Host code: kernel and kernel launch __global__ void vectorAdd2(float *A, float *B, float *C) { int i = blockIdx.x; C[i] = A[i] + B[i]; } Launch n blocks of 1 thread each (for now): int blocks = numElements; vectorAdd2<<<blocks, 1>>>(d_A, d_B, d_C); 36

Device code __global__ void vectorAdd2(float *A, float *B, float *C) { int i = blockIdx.x; // Built-in CUDA variable: in device code only C[i] = A[i] + B[i]; } Block number i processes element i The grid of blocks may have up to 3 dimensions (blockIdx.x, blockIdx.y, blockIdx.z) For programmer convenience only: no effect on scheduling 37

Multiple blocks, multiple threads/block Fixed number of threads / block: here 64 Host code (numElements is not necessarily a multiple of the block size!): int threads = 64; int blocks = (numElements + threads - 1) / threads; // Round up vectorAdd3<<<blocks, threads>>>(d_A, d_B, d_C, numElements); Device code: __global__ void vectorAdd3(const float *A, const float *B, float *C, int n) { int i = blockIdx.x * blockDim.x + threadIdx.x; // Global index if(i < n) { C[i] = A[i] + B[i]; } } The last block may have less work to do Thread blocks may also have up to 3 dimensions: threadIdx.{x,y,z} 38
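Laid out readably (a sketch; host allocation, copies and frees are as in the earlier slides):

    // Device code: one thread per element, guarded against the last partial block
    __global__ void vectorAdd3(const float *A, const float *B, float *C, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global index
        if (i < n) {                                     // the last block may have less work
            C[i] = A[i] + B[i];
        }
    }

    // Host code: fixed block size, round the block count up
    int threads = 64;
    int blocks = (numElements + threads - 1) / threads;
    vectorAdd3<<<blocks, threads>>>(d_A, d_B, d_C, numElements);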

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 39

Barriers Threads can synchronize inside one block In C for CUDA: __syncthreads(); It needs to be called at the same place for all threads of the block OK: if(tid < 5) {... } else {... } __syncthreads(); OK: if(a[0] == 17) { __syncthreads(); } else { __syncthreads(); } (same condition for all threads in the block) Wrong: if(tid < 5) { __syncthreads(); } else { __syncthreads(); } (divergent condition) 40

Shared memory Fast, software-managed memory Faster than global memory Valid only inside one block: each block sees its own copy Used to exchange data between threads Concurrent writes: one thread wins, but we do not know which one (Figure: shared memory inside SM 1, alongside the execution units, registers and L1 cache.) 41

Thread communication: common pattern Each thread writes to its own location: no write conflict Barrier: wait until all threads have written Then read data from other threads Flow: compute, write to smem[tid], barrier, read from smem[f(tid)], barrier, compute 42
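As a concrete illustration of this pattern, a hypothetical kernel (not from the slides) in which each block reverses its own chunk of the array through shared memory:

    // Hypothetical example of the write / barrier / read pattern:
    // each block reverses its BLOCK elements through shared memory.
    // Assumes a launch with BLOCK threads per block and a data length multiple of BLOCK.
    #define BLOCK 64

    __global__ void reverseBlock(float *data) {
        __shared__ float smem[BLOCK];
        int tid = threadIdx.x;
        int base = blockIdx.x * blockDim.x;

        smem[tid] = data[base + tid];                   // each thread writes its own location
        __syncthreads();                                // wait until all threads have written
        data[base + tid] = smem[blockDim.x - 1 - tid];  // read another thread's data
    }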

Example: parallel reduction Algorithm for a 2-level multi-BSP model (Figure: processors P0 to P7 hold a0 to a7; pairs are summed in a tree between L1 barriers during the level-1 reduction, then the partial sums are combined in the level-2 reduction after an L2 barrier, producing the result r.) 43

Reduction in CUDA: level 1 __global__ void reduce1(float *g_idata, float *g_odata, unsigned int n) { extern __shared__ float sdata[]; // Dynamic shared memory allocation: will specify size later unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; // Load from global to shared mem sdata[tid] = (i < n) ? g_idata[i] : 0; __syncthreads(); for(unsigned int s = 1; s < blockDim.x; s *= 2) { int index = 2 * s * tid; if(index < blockDim.x) { sdata[index] += sdata[index + s]; } __syncthreads(); } // Write result for this block to global mem if (tid == 0) g_odata[blockIdx.x] = sdata[0]; } cuda/samples/6_Advanced/reduction 44

Reduction: host code int smemSize = threads * sizeof(float); reduce1<<<blocks, threads, smemSize>>>(d_idata, d_odata, size); The optional third parameter is the size of dynamic shared memory per block Level 2: run the reduction kernel again on the partial sums, until only 1 block is left By the way, is our reduction operator associative? 45
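Laid out readably, the level-1 kernel and its launch (a sketch following the slide; the full version is in cuda/samples/6_Advanced/reduction):

    __global__ void reduce1(float *g_idata, float *g_odata, unsigned int n) {
        extern __shared__ float sdata[];       // dynamic shared memory, sized at launch
        unsigned int tid = threadIdx.x;
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

        sdata[tid] = (i < n) ? g_idata[i] : 0; // load from global to shared memory
        __syncthreads();

        for (unsigned int s = 1; s < blockDim.x; s *= 2) {
            int index = 2 * s * tid;
            if (index < blockDim.x) {
                sdata[index] += sdata[index + s];
            }
            __syncthreads();
        }

        if (tid == 0) g_odata[blockIdx.x] = sdata[0];   // one partial sum per block
    }

    // Host side: the third <<<>>> parameter is the dynamic shared memory size per block
    int smemSize = threads * sizeof(float);
    reduce1<<<blocks, threads, smemSize>>>(d_idata, d_odata, size);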

A word on floating-point Parallel reduction requires the operator to be associative Is addition associative? On reals: yes On floating-point numbers: no With 4 decimal digits: (1.234 + 123.4) - 123.4 = 124.6 - 123.4 = 1.200, whereas 1.234 + (123.4 - 123.4) = 1.234 Consequence: different results depending on thread count 46
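A small host-only demonstration of this (not from the slides): single-precision addition gives different results depending on evaluation order.

    #include <cstdio>

    int main() {
        float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
        printf("(a + b) + c = %f\n", (a + b) + c);  // 1.000000
        printf("a + (b + c) = %f\n", a + (b + c));  // 0.000000: b + c rounds back to -1e8
        return 0;
    }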

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example 47

Arithmetic intensity Ratio of computation to memory pressure High arithmetic intensity: compute bound algorithm Low arithmetic intensity: memory bound algorithm Thursday lab work NVIDIA GTX 470 needs 65 flops / word to be compute-bound How to reach enough arithmetic intensity? Need to reuse values loaded from memory 48

Classic example: matrix multiplication Naive algorithm: for i = 0 to n-1 for j = 0 to n-1 for k = 0 to n-1 C[i,j] += A[i,k]*B[k,j] (Figure: C[i,j] accumulates the product of row i of A with column j of B.) Arithmetic intensity: 1:1 :( 49

Reusing inputs Move the loop on k up: for k = 0 to n-1 for i = 0 to n-1 for j = 0 to n-1 C[i,j] += A[i,k]*B[k,j] This enables data reuse on the inputs, but needs to keep all of matrix C in registers! 50

With tiling Block the loops on i and j: for i = 0 to n-1 step 16 for j = 0 to n-1 step 16 for k = 0 to n-1 for i2 = i to i+15 for j2 = j to j+15 C[i2,j2] += A[i2,k]*B[k,j2] For one block: product between a horizontal panel of A and a vertical panel of B (Figure: the constant-size 16x16 tile of C at (i,j) is the product of a panel of A and a panel of B.) 51

With more tiling Block the loop on k: for i = 0 to n-1 step 16 for j = 0 to n-1 step 16 for k = 0 to n-1 step 16 for k2 = k to k+15 for i2 = i to i+15 for j2 = j to j+15 C[i2,j2] += A[i2,k2]*B[k2,j2] (Figure: constant-size 16x16 tiles of A, B and C.) 52

Pre-loading data for i = 0 to n-1 step 16 for j = 0 to n-1 step 16 c = {0} for k = 0 to n-1 step 16 a = A[i..i+15,k..k+15] b = B[k..k+15,j..j+15] // Load submatrices a and b for k2 = 0 to 15 for i2 = 0 to 15 for j2 = 0 to 15 c[i2,j2] += a[i2,k2]*b[k2,j2] // Multiply submatrices c = a b C[i..i+15,j..j+15] = c // Store submatrix c (Figure: tiles a of A, b of B and c of C.) 53
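Written out as plain C (a sketch of this pseudo-code, assuming row-major n x n matrices with n a multiple of 16):

    // Tiled matrix multiplication with pre-loaded 16x16 submatrices
    // (sketch of the pseudo-code above; assumes n is a multiple of 16)
    void matmulTiled(const float *A, const float *B, float *C, int n) {
        for (int i = 0; i < n; i += 16)
            for (int j = 0; j < n; j += 16) {
                float c[16][16] = {{0}};                     // c = {0}
                for (int k = 0; k < n; k += 16) {
                    float a[16][16], b[16][16];
                    for (int i2 = 0; i2 < 16; i2++)          // a = A[i..i+15, k..k+15]
                        for (int k2 = 0; k2 < 16; k2++)
                            a[i2][k2] = A[(i + i2) * n + (k + k2)];
                    for (int k2 = 0; k2 < 16; k2++)          // b = B[k..k+15, j..j+15]
                        for (int j2 = 0; j2 < 16; j2++)
                            b[k2][j2] = B[(k + k2) * n + (j + j2)];
                    for (int k2 = 0; k2 < 16; k2++)          // c += a * b
                        for (int i2 = 0; i2 < 16; i2++)
                            for (int j2 = 0; j2 < 16; j2++)
                                c[i2][j2] += a[i2][k2] * b[k2][j2];
                }
                for (int i2 = 0; i2 < 16; i2++)              // C[i..i+15, j..j+15] = c
                    for (int j2 = 0; j2 < 16; j2++)
                        C[(i + i2) * n + (j + j2)] = c[i2][j2];
            }
    }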

Breaking into two levels Run the loops on i, j, i2, j2 in parallel: for// i = 0 to n-1 step 16 for// j = 0 to n-1 step 16 (level 2: blocks) c = {0} for k = 0 to n-1 step 16 a = A[i..i+15,k..k+15] b = B[k..k+15,j..j+15] for k2 = 0 to 15 for// i2 = 0 to 15 for// j2 = 0 to 15 (level 1: threads) c[i2,j2] += a[i2,k2]*b[k2,j2] C[i..i+15,j..j+15] = c Let's focus on threads 54

Level 1: SIMD (PRAM-style) version Each processor has ID (x,y) Loops on i2, j2 are implicit c[x,y] = 0 for k = 0 to n-1 step 16 a[x,y] = A[i+x,k+y] b[x,y] = B[k+x,j+y] for k2 = 0 to 15 c[x,y]+=a[x,k2]*b[k2,y] C[i+x,j+y] = c[x,y] Load submatrices a and b Multiply submatrices c = a b Store submatrix c Private writes: no conflict Read from other processors How to translate to SPMD (BSP-style)? 55

SPMD version Place synchronization barriers c[x,y] = 0 for k = 0 to n-1 step 16 a[x,y] = A[i+x,k+y] b[x,y] = B[k+x,j+y] Barrier for k2 = 0 to 15 c[x,y]+=a[x,k2]*b[k2,y] Barrier C[i+x,j+y] = c[x,y] Why do we need the second barrier? 56

Data allocation 3 memory spaces: Global, Shared, Local Where should we put: A, B, C, a, b, c? c[x,y] = 0 for k = 0 to n-1 step 16 a[x,y] = A[i+x,k+y] b[x,y] = B[k+x,j+y] Barrier for k2 = 0 to 15 c[x,y] += a[x,k2]*b[k2,y] Barrier C[i+x,j+y] = c[x,y] 57

Data allocation Memory spaces: Global, Shared, Local As local as possible c = 0 for k = 0 to n-1 step 16 a[x,y] = A[i+x,k+y] b[x,y] = B[k+x,j+y] Barrier for k2 = 0 to 15 c += a[x,k2]*b[k2,y] Barrier C[i+x,j+y] = c Local: private to each thread (indices are implicit) Global: shared between blocks / inputs and outputs Shared: shared between threads, private to block 58

CUDA version Straightforward translation: float Csub = 0; for(int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep) { // Precomputed base addresses __shared__ float As[BLOCK_SIZE][BLOCK_SIZE]; // Declare shared memory __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE]; As[ty][tx] = A[a + wA * ty + tx]; Bs[ty][tx] = B[b + wB * ty + tx]; // Linearized arrays __syncthreads(); for(int k = 0; k < BLOCK_SIZE; ++k) { Csub += As[ty][k] * Bs[k][tx]; } __syncthreads(); } int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx; C[c + wB * ty + tx] = Csub; matrixMul example: cuda/samples/0_Simple/matrixMul 59
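For reference, the same kernel laid out readably, close to the SDK matrixMul sample it comes from (a sketch: the index setup before the loop is reconstructed, with wA and wB the widths of A and B):

    #define BLOCK_SIZE 16

    __global__ void matrixMul(float *C, float *A, float *B, int wA, int wB) {
        int bx = blockIdx.x, by = blockIdx.y;
        int tx = threadIdx.x, ty = threadIdx.y;

        // Precomputed base addresses: horizontal panel of A, vertical panel of B
        int aBegin = wA * BLOCK_SIZE * by;
        int aEnd   = aBegin + wA - 1;
        int aStep  = BLOCK_SIZE;
        int bBegin = BLOCK_SIZE * bx;
        int bStep  = BLOCK_SIZE * wB;

        float Csub = 0;
        for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep) {
            __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];   // declare shared memory
            __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

            As[ty][tx] = A[a + wA * ty + tx];              // linearized arrays
            Bs[ty][tx] = B[b + wB * ty + tx];
            __syncthreads();                               // wait until both tiles are loaded

            for (int k = 0; k < BLOCK_SIZE; ++k)
                Csub += As[ty][k] * Bs[k][tx];
            __syncthreads();                               // before overwriting the tiles
        }

        int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
        C[c + wB * ty + tx] = Csub;
    }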

Outline GPU programming environments CUDA basics Host side Device side: threads, blocks, grids Expressing parallelism Vector add example Managing communications Parallel reduction example Re-using data Matrix multiplication example Extra features 60

Multiple grid/block dimensions Grid and block sizes are of type dim3, which supports up to 3 dimensions: dim3 dimBlock(tx, ty, tz); dim3 dimGrid(bx, by, bz); my_kernel<<<dimGrid, dimBlock>>>(arguments); Implicit cast from int to dim3: y and z sizes are 1 by default On the device side, threadIdx, blockDim, blockIdx, gridDim are also of type dim3 Access members with .x, .y, .z Some early GPUs (CC 1.x) only support 2-D grids 61
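For instance, a hypothetical 2-D launch over an n x m image (not from the slides; d_img, n and m are assumed to be set up by the host):

    // Hypothetical 2-D launch: one thread per pixel of an n x m image
    __global__ void invert(unsigned char *img, int n, int m) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < n && y < m)
            img[y * n + x] = 255 - img[y * n + x];
    }

    dim3 dimBlock(16, 16);                        // 256 threads per block, z = 1 by default
    dim3 dimGrid((n + 15) / 16, (m + 15) / 16);   // round up in both dimensions
    invert<<<dimGrid, dimBlock>>>(d_img, n, m);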

Local memory Registers are fast but Limited in size Not addressable Local memory used for Local variables that do not fit in registers (register spilling) Local arrays accessed with indirection int a[17]; b = a[i]; Warning: local is a misnomer! Physically, local memory usually goes off-chip 62

Device functions A kernel can call functions, which need to be marked for GPU compilation: __device__ int foo(int i) { } A function can be compiled for both host and device: __host__ __device__ int bar(int i) { } Device functions can call device functions Older GPUs do not support recursion 63
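A small sketch (not from the slides) combining the qualifiers:

    __host__ __device__ float square(float x) {        // compiled for both CPU and GPU
        return x * x;
    }

    __device__ float sumOfSquares(float a, float b) {  // device function calling a device function
        return square(a) + square(b);
    }

    __global__ void kernel(float *out, const float *in) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = sumOfSquares(in[i], 2.0f);
    }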

Asynchronous execution By default, GPU calls are asynchronous: they return immediately to CPU code GPU commands are still executed in order: queuing Some commands are synchronous by default: cudaMemcpy(..., cudaMemcpyDeviceToHost) Use cudaMemcpyAsync for the asynchronous version Keep it in mind when checking for errors! The error returned by a command may be caused by an earlier command To force synchronization: cudaThreadSynchronize() 64
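A sketch of the consequence for error checking (assumptions: my_kernel, blocks, threads and d_data are defined elsewhere; on recent CUDA versions cudaDeviceSynchronize() replaces cudaThreadSynchronize()):

    // Kernel launches are asynchronous: errors may only surface later.
    my_kernel<<<blocks, threads>>>(d_data);    // returns immediately to CPU code
    cudaThreadSynchronize();                   // force synchronization
    cudaError_t err = cudaGetLastError();      // now reports errors from the kernel itself
    if (err != cudaSuccess)
        printf("CUDA error: %s\n", cudaGetErrorString(err));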

Recap Memory management: Host code and memory / Device code and memory Writing GPU Kernels Dimensions of parallelism: grids, blocks, threads Memory spaces: global, local, shared memory Next time: advanced features and optimization techniques 65

References and further reading CUDA C Programming Guide. Mark Harris. Introduction to CUDA C. http://developer.nvidia.com/cuda-education David Luebke, John Owens. Intro to parallel programming. Online course. https://www.udacity.com/course/cs344 Paulius Micikevicius. GPU Performance Analysis and Optimization. GTC 2012. http://on-demand.gputechconf.com/gtc/2012/presentations/S0514-GTC2012-GPU-Performance-Analysis.pdf 66