
GPU programming: CUDA basics Sylvain Collange Inria Rennes Bretagne Atlantique sylvain.collange@inria.fr

This lecture: CUDA programming. We have seen some GPU architecture. Now, how do we program it?

Outline
- GPU programming environments
- CUDA basics: host side; device side: threads, blocks, grids
- Expressing parallelism: vector add example
- Managing communications: parallel reduction example

GPU development environments
For general-purpose programming (not graphics), multiple toolkits exist: NVIDIA CUDA, Khronos OpenCL, Microsoft DirectCompute, Google RenderScript. These are mostly syntactic variations; the underlying principles are the same. In this course, we focus on NVIDIA CUDA.

Higher-level programming...
- Directive-based: OpenACC, OpenMP 4.0
- Language extensions / libraries: Microsoft C++ AMP, Intel Cilk+, NVIDIA Thrust, CUB
- Languages: Intel ISPC
Most corporations agree we need common standards... but only if their own product becomes the standard!

Outline
- GPU programming environments
- CUDA basics: host side; device side: threads, blocks, grids
- Expressing parallelism: vector add example
- Managing communications: parallel reduction example
- Re-using data: matrix multiplication example

Hello World in CUDA
A CUDA program mixes CPU host code and GPU device code.

    #include <cstdio>

    // Device code
    __global__ void hello() { }

    // Host code
    int main() {
        hello<<<1, 1>>>();
        printf("Hello World!\n");
        return 0;
    }

Compiling a CUDA program
The executable contains both host and device code. Device code is stored as PTX and/or native GPU binary; PTX can be recompiled on the fly (e.g. to run an old program on a new GPU). NVIDIA's compiler driver nvcc takes care of the whole process: cudafe splits the .cu source files into host code (compiled by gcc/msvc) and device code, ptxas compiles the PTX intermediate language into a GPU native binary, and fatbin bundles the binaries for the different GPUs into a fat binary.

    nvcc -o hello hello.cu
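As an aside, a minimal sketch (not from the slides) of how one might target several GPU generations explicitly; the architecture values are illustrative:

    nvcc -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=compute_70 -o hello hello.cu

This embeds native code for sm_60 plus compute_70 PTX, which the driver can recompile on the fly for newer GPUs.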

Control flow
The program runs on the CPU and submits work to the GPU through the GPU driver; commands execute asynchronously. Software stack (from the diagram): the application (a fat binary containing the GPU code) calls either the CUDA Runtime API (cudaXxx functions) or the lower-level CUDA Driver API (cuYyy functions) in user mode; in kernel mode, the GPU driver pushes commands into a command queue that the GPU pops and executes.

External memory: discrete GPU
Classical CPU-GPU model with split memory spaces. The highest bandwidth is from GPU memory; transfers to main memory are slower. Example figures (Intel Core i7 4770 + NVIDIA GeForce GTX 780): CPU to main memory (8 GB) ~26 GB/s, GPU to graphics memory (3 GB) ~290 GB/s, PCI Express link between them ~16 GB/s.

External memory: embedded GPU
Most GPUs sold today. CPU and GPU share the same memory, and may support coherent memory: the GPU can read directly from the CPU caches. The drawback is more contention on external memory. Example figures from the diagram: CPU and GPU share one ~26 GB/s link to 8 GB of main memory.

Data flow
The main program runs on the host: it manages memory transfers and initiates work on the GPU. Typical flow:
1. Allocate GPU memory
2. Copy inputs from CPU memory to GPU memory
3. Run the computation on the GPU
4. Copy results back to CPU memory

Example: a + b
Our Hello World example did not actually involve the GPU. Let's add up two numbers on the GPU, starting from the host code:

    int main() {
        float ab[] = {1515, 159}; // Inputs, in host mem
        float c;                  // Result, in host mem
        // c = ab[0] + ab[1];     // To be computed on the GPU
        printf("c = %f\n", c);
    }

See also the vectorAdd example: cuda/samples/0_Simple/vectorAdd

Step 1: allocate GPU memory
Allocate space for a, b and c in GPU memory; at the end, free it. Note that cudaMalloc takes a pointer to the pointer to be overwritten.

    int main() {
        float ab[] = {1515, 159}; // Inputs, in host mem

        // Allocate GPU memory
        float *d_AB, *d_C;
        cudaMalloc((void **)&d_AB, 2 * sizeof(float));
        cudaMalloc((void **)&d_C, sizeof(float));

        ...

        // Free GPU memory
        cudaFree(d_AB);
        cudaFree(d_C);
    }

Step 2, 4: copy data to/from GPU memory

    int main() {
        float ab[] = {1515, 159}; // Inputs, in host mem

        // Allocate GPU memory
        float *d_AB, *d_C;
        cudaMalloc((void **)&d_AB, 2 * sizeof(float));
        cudaMalloc((void **)&d_C, sizeof(float));

        // Copy from CPU mem to GPU mem
        cudaMemcpy(d_AB, ab, 2 * sizeof(float), cudaMemcpyHostToDevice);

        ...

        float c; // Result, in host mem
        // Copy results back to CPU mem
        cudaMemcpy(&c, d_C, sizeof(float), cudaMemcpyDeviceToHost);
        printf("c = %f\n", c);

        // Free GPU memory
        cudaFree(d_AB);
        cudaFree(d_C);
    }

Step 3: launch the kernel
A kernel is a function prefixed by __global__. It runs on the GPU and is invoked from CPU code with the <<< >>> syntax. Note: we could have passed a and b directly as kernel parameters. What is inside the <<< >>>?

    __global__ void addOnGPU(float *ab, float *c) {
        *c = ab[0] + ab[1];
    }

    int main() {
        float ab[] = {1515, 159}; // Inputs, in host mem

        // Allocate GPU memory
        float *d_AB, *d_C;
        cudaMalloc((void **)&d_AB, 2 * sizeof(float));
        cudaMalloc((void **)&d_C, sizeof(float));

        // Copy from CPU mem to GPU mem
        cudaMemcpy(d_AB, ab, 2 * sizeof(float), cudaMemcpyHostToDevice);

        // Launch computation on GPU
        addOnGPU<<<1, 1>>>(d_AB, d_C);

        float c; // Result, in host mem
        // Copy results back to CPU mem
        cudaMemcpy(&c, d_C, sizeof(float), cudaMemcpyDeviceToHost);
        printf("c = %f\n", c);

        // Free GPU memory
        cudaFree(d_AB);
        cudaFree(d_C);
    }
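A minimal sketch of the pass-by-value variant mentioned above (the kernel name addOnGPU2 is made up for illustration); only the result still needs GPU memory:

    __global__ void addOnGPU2(float a, float b, float *c) {
        *c = a + b;
    }

    // Host side: no d_AB allocation or input copy needed
    addOnGPU2<<<1, 1>>>(1515.f, 159.f, d_C);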

Asynchronous execution
By default, GPU calls are asynchronous: they return immediately to the CPU code, while GPU commands are still executed in order (queuing). Some commands are synchronous by default, e.g. cudaMemcpy(..., cudaMemcpyDeviceToHost); use cudaMemcpyAsync for the asynchronous version. Keep this in mind when checking for errors: the error returned by a command may have been caused by an earlier command. To force synchronization: cudaDeviceSynchronize().
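As an illustration (not from the slides), a common error-checking pattern that accounts for this asynchrony, using only standard CUDA runtime calls; the helper name checkCudaError is made up:

    #include <cstdio>

    // Check errors from preceding API calls and from already-launched kernels.
    void checkCudaError(const char *where) {
        cudaError_t err = cudaGetLastError();   // error from the last API call / kernel launch
        if (err == cudaSuccess)
            err = cudaDeviceSynchronize();      // wait for queued commands, catching late errors
        if (err != cudaSuccess)
            printf("CUDA error at %s: %s\n", where, cudaGetErrorString(err));
    }

    // Usage: my_kernel<<<blocks, threads>>>(...); checkCudaError("my_kernel");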

Outline
- GPU programming environments
- CUDA basics: host side; device side: threads, blocks, grids
- Expressing parallelism: vector add example
- Managing communications: parallel reduction example

Granularity of a GPU task
Results from last Thursday's lab work: total latency of transfer, compute, transfer back: ~5 µs; CPU-GPU transfer latency: ~0.3 µs; GPU kernel call: ~4 µs. With a CPU performance of 100 Gflop/s, how many flops fit in 5 µs?

Granularity of a GPU task
Answer: 100 Gflop/s × 5 µs = 500,000 floating-point operations. For fewer than ~500k operations, computing on the CPU will always be faster; millions of operations are needed to amortize the data transfer time. It is only worth offloading large parallel tasks to the GPU.

GPU physical organization
(Diagram: threads are grouped into warps and mapped onto an SM's execution units; each SM, e.g. SM 1 and SM 2, has its own registers, shared memory and L1 cache, and connects to the L2 cache / external memory.)

Workload: logical organization
A kernel is launched on a grid: my_kernel<<<blocks, threads>>>(...). There are two nested levels: the grid is made of blocks (Block 0, Block 1, Block 2, ...), and each block is made of threads (Thread 0, Thread 1, Thread 2, ...).

Outer level: grid of blocks
Blocks are also called Cooperative Thread Arrays (CTAs). There is no communication between blocks of the same grid, and no practical limit on the number of blocks.

Inner level: threads
Blocks contain threads. All threads in a block run on the same SM, so they can communicate, and run in parallel, so they can synchronize (barrier). Constraints: there is a maximum number of threads per block (512 or 1024 depending on the architecture); at least 64 threads per block are recommended for good performance, preferably a multiple of the warp size.
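As an aside, a minimal sketch (not from the slides) of how these limits can be queried at run time with the CUDA runtime API:

    #include <cstdio>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);  // properties of device 0
        printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
        printf("Warp size: %d\n", prop.warpSize);
        printf("Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
        return 0;
    }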

Multi-BSP model: recap
Modern parallel platforms are hierarchical: threads, cores, nodes... (remember the memory wall and the speed of light). Multi-BSP is a generalization of BSP with multiple nested levels: level-1 supersteps, separated by level-1 barriers, are nested inside level-2 supersteps, separated by level-2 barriers. The higher the level, the more expensive the synchronization.

Multi-BSP and CUDA
(Diagram: a CPU thread issues kernel launch 1 executing grid 1, then kernel launch 2 executing grid 2. Barriers inside blocks 1 and 2 are L1 barriers delimiting L1 supersteps; the global memory write visible across kernel launches acts as the L2 barrier delimiting L2 supersteps.) Minor difference: BSP is based on message passing, CUDA on shared memory.

Mapping blocks to hardware resources
(Diagram: blocks 1-4 are distributed over SM 1 and SM 2; each SM's execution units, registers, shared memory and L1 cache serve the blocks it hosts, and connect to the L2 cache / external memory.) SM resources are partitioned across blocks.

Block scheduling
Blocks may run serially or in parallel, on the same or on different SMs, and in order or out of order. (The diagrams show several valid distributions of blocks 0-11 over SM 1 and SM 2.) You should not assume anything about the execution order of blocks.

Outline
- GPU programming environments
- CUDA basics: host side; device side: threads, blocks, grids
- Expressing parallelism: vector add example
- Managing communications: parallel reduction example

Example: vector addition
The a + b example used only one thread. Now let's run a parallel computation: start with multiple blocks, one thread per block. The computations in each block are independent, so no communication or synchronization is needed.

Host code: initialization
A and B are now arrays; we just change the allocation size.

    int main() {
        int numElements = 50000;
        size_t size = numElements * sizeof(float);
        float *h_A = (float *)malloc(size);
        float *h_B = (float *)malloc(size);
        float *h_C = (float *)malloc(size);
        Initialize(h_A, h_B);

        // Allocate device memory
        float *d_A, *d_B, *d_C;
        cudaMalloc((void **)&d_A, size);
        cudaMalloc((void **)&d_B, size);
        cudaMalloc((void **)&d_C, size);

        // Copy inputs to device memory
        cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
        cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);
        ...
    }

Host code: kernel and kernel launch

    __global__ void vectorAdd2(float *A, float *B, float *C) {
        int i = blockIdx.x;
        C[i] = A[i] + B[i];
    }

Launch n blocks of 1 thread each (for now):

    int blocks = numElements;
    vectorAdd2<<<blocks, 1>>>(d_A, d_B, d_C);

Device code

    __global__ void vectorAdd2(float *A, float *B, float *C) {
        int i = blockIdx.x;   // Built-in CUDA variable, available in device code only
        C[i] = A[i] + B[i];
    }

Block number i processes element i. A grid of blocks may have up to 3 dimensions (blockIdx.x, blockIdx.y, blockIdx.z); this is for programmer convenience only and has no effect on scheduling.

Multiple blocks, multiple threads per block
Fixed number of threads per block: here, 64. The number of elements is not necessarily a multiple of the block size!

Host code:

    int threads = 64;
    int blocks = (numElements + threads - 1) / threads;  // Round up
    vectorAdd3<<<blocks, threads>>>(d_A, d_B, d_C, numElements);

Device code:

    __global__ void vectorAdd3(const float *A, const float *B, float *C, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // Global index
        if (i < n) {                                    // Last block may have less work to do
            C[i] = A[i] + B[i];
        }
    }

A thread block may also have up to 3 dimensions: threadIdx.{x,y,z}.
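As an illustration (not from the slides), a minimal sketch of a 2D launch using dim3 for both grid and block dimensions, e.g. to add two n×n matrices stored row-major; the kernel name matrixAdd and the 16×16 block shape are made up for the example:

    __global__ void matrixAdd(const float *A, const float *B, float *C, int n) {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        if (row < n && col < n) {
            C[row * n + col] = A[row * n + col] + B[row * n + col];
        }
    }

    // Host side: 16x16 threads per block, enough blocks to cover the matrix
    dim3 threads(16, 16);
    dim3 blocks((n + threads.x - 1) / threads.x, (n + threads.y - 1) / threads.y);
    matrixAdd<<<blocks, threads>>>(d_A, d_B, d_C, n);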

Outline
- GPU programming environments
- CUDA basics: host side; device side: threads, blocks, grids
- Expressing parallelism: vector add example
- Managing communications: parallel reduction example

Barriers
Threads can synchronize inside one block. In C for CUDA: __syncthreads(); It needs to be reached at the same place by all threads of the block.

    // OK: barrier outside the divergent code
    if (tid < 5) { ... } else { ... }
    __syncthreads();

    // OK: same condition for all threads in the block
    if (a[0] == 17) { __syncthreads(); } else { __syncthreads(); }

    // Wrong: threads of the same block take different paths
    if (tid < 5) { __syncthreads(); } else { __syncthreads(); }

Shared memory
Fast, software-managed memory, faster than global memory. It is valid only inside one block: each block sees its own copy. It is used to exchange data between threads. On concurrent writes to the same location, one thread wins, but we do not know which one. (Diagram: shared memory sits inside the SM, next to the execution units, registers and L1 cache.)

Thread communication: common pattern
Each thread writes to its own location, so there is no write conflict; a barrier waits until all threads have written; then each thread reads data written by other threads. Schematically: compute, write to smem[tid], barrier, read from smem[f(tid)], barrier, compute...
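As an illustration of this pattern (not from the slides), a minimal kernel that reverses a block-sized array through shared memory; the name reverseBlock and the size BLOCK_SIZE are made up for the example:

    #define BLOCK_SIZE 64

    __global__ void reverseBlock(float *data) {
        __shared__ float smem[BLOCK_SIZE];
        unsigned int tid = threadIdx.x;

        smem[tid] = data[tid];                  // each thread writes its own location
        __syncthreads();                        // wait until all threads have written
        data[tid] = smem[BLOCK_SIZE - 1 - tid]; // read a location written by another thread
    }

    // Launch: reverseBlock<<<1, BLOCK_SIZE>>>(d_data);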

Example: parallel reduction
Algorithm for a 2-level multi-BSP model. (Diagram: processors P0-P7 hold elements a0-a7; pairwise sums separated by L1 barriers perform the level-1 reduction within each group, then an L2 barrier and a level-2 reduction combine the partial sums into the result r.)

Reduction in CUDA: level 1

    __global__ void reduce1(float *g_idata, float *g_odata, unsigned int n) {
        extern __shared__ float sdata[];  // Dynamic shared memory allocation: size will be specified at launch
        unsigned int tid = threadIdx.x;
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

        // Load from global to shared mem
        sdata[tid] = (i < n) ? g_idata[i] : 0;
        __syncthreads();

        for (unsigned int s = 1; s < blockDim.x; s *= 2) {
            int index = 2 * s * tid;
            if (index < blockDim.x) {
                sdata[index] += sdata[index + s];
            }
            __syncthreads();
        }

        // Write the result for this block to global mem
        if (tid == 0) g_odata[blockIdx.x] = sdata[0];
    }

See also: cuda/samples/6_Advanced/reduction

Reduction: host code

    int smemSize = threads * sizeof(float);
    reduce1<<<blocks, threads, smemSize>>>(d_idata, d_odata, size);

The third, optional launch parameter is the size of dynamic shared memory per block. Level 2: run the reduction kernel again on the partial results, until only one block is left. By the way, is our reduction operator associative?
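A minimal sketch (not from the slides) of the level-2 host loop just described, assuming d_idata and d_odata can simply be swapped between passes:

    unsigned int n = size;
    while (n > 1) {
        int blocks = (n + threads - 1) / threads;              // one partial sum per block
        reduce1<<<blocks, threads, smemSize>>>(d_idata, d_odata, n);
        // The partial sums become the input of the next pass
        float *tmp = d_idata; d_idata = d_odata; d_odata = tmp;
        n = blocks;
    }
    // Final result is in d_idata[0]; copy it back with cudaMemcpy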

A word on floating-point
Parallel reduction requires the operator to be associative. Is addition associative? On reals, yes; on floating-point numbers, no. With 4 decimal digits of precision: (1.234 + 123.4) - 123.4 = 124.6 - 123.4 = 1.200, instead of 1.234. Consequence: the result may differ depending on the thread count.
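As an illustration (not from the slides), the same effect in single-precision C code; the values are chosen so that the small term is absorbed by the large one:

    #include <cstdio>

    int main() {
        float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
        printf("(a + b) + c = %f\n", (a + b) + c);  // prints 1.000000
        printf("a + (b + c) = %f\n", a + (b + c));  // prints 0.000000: c is lost when added to b
        return 0;
    }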

Recap
Memory management: host code and memory vs. device code and memory. Writing GPU kernels. Dimensions of parallelism: grids, blocks, threads. Memory spaces: global, local, shared memory. Next time: code optimization techniques.

References and further reading
- CUDA C Programming Guide.
- Mark Harris. Introduction to CUDA C. http://developer.nvidia.com/cuda-education
- David Luebke, John Owens. Intro to Parallel Programming. Online course. https://www.udacity.com/course/cs344
- Paulius Micikevicius. GPU Performance Analysis and Optimization. GTC 2012. http://on-demand.gputechconf.com/gtc/2012/presentations/S0514-GTC2012-GPU-Performance-Analysis.pdf