CIS 665: GPU Programming. Lecture 2: The CUDA Programming Model


Slide References: NVIDIA (Kitchen), David Kirk and Wen-mei Hwu (UIUC), Gary Katz and Joe Kider 2

The Fixed-Function Graphics Pipeline (diagram): a 3D application or game issues 3D API commands (OpenGL or Direct3D) across the CPU-GPU boundary (AGP/PCIe). The GPU front end consumes the GPU command and data stream; primitive assembly turns the vertex index stream into assembled primitives; rasterization and interpolation generate the pixel location stream; raster operations apply pixel updates to the frame buffer. A programmable vertex processor turns pre-transformed vertices into transformed vertices, and a programmable fragment processor turns pre-transformed fragments into transformed fragments. 3

Why Use the GPU for Computing? The GPU has evolved into a very flexible and powerful processor: it is programmable using high-level languages, it supports 32-bit floating point precision, and it offers lots of GFLOPS. There is a GPU in every PC and workstation. (Chart legend: G80 = GeForce 8800 GTX, G71 = GeForce 7900 GTX, G70 = GeForce 7800 GTX, NV40 = GeForce 6800 Ultra, NV35 = GeForce FX 5950 Ultra, NV30 = GeForce FX 5800.) 4

What is Behind such an Evolution? The GPU is specialized for compute-intensive, highly data-parallel computation (exactly what graphics rendering is about), so more transistors can be devoted to data processing rather than data caching and flow control. (Diagram: the CPU spends much of its die on control logic and cache next to a few ALUs and DRAM, while the GPU devotes most of its die to ALUs, with small caches and its own DRAM.) The fast-growing video game industry exerts strong economic pressure that forces constant innovation. 5

What is (Historical) GPGPU? General-purpose computation using the GPU and graphics APIs in applications other than 3D graphics. The GPU accelerates the critical path of the application. Data-parallel algorithms leverage GPU attributes: large data arrays and streaming throughput, fine-grain SIMD parallelism, low-latency floating point (FP) computation. Applications (see GPGPU.org): game effects (FX) physics, image processing, physical modeling, computational engineering, matrix algebra, convolution, correlation, sorting. 6

Previous GPGPU Constraints: dealing with the graphics API meant working with its corner cases. Addressing modes: limited texture size/dimension. Shader capabilities: limited outputs. Instruction sets: lack of integer and bit operations. Communication limited: no communication between pixels, no scatter (a[i] = p). (Diagram: a fragment program reads per-thread input registers, per-shader texture and temp registers, and per-context constants, and writes output registers to FB memory.) 7

CUDA (Compute Unified Device Architecture): a general-purpose programming model in which the user kicks off batches of threads on the GPU; the GPU is a dedicated, super-threaded, massively data-parallel co-processor. Targeted software stack: compute-oriented drivers, language, and tools. Driver for loading computation programs into the GPU: a standalone driver optimized for computation; an interface designed for compute (graphics-free API); data sharing with OpenGL buffer objects; guaranteed maximum download and readback speeds; explicit GPU memory management. 8

An Example of Physical Reality Behind CUDA (diagram): a CPU (the host) connected to a GPU with its own local DRAM (the device). 9

Parallel Computing on a GPU: 8-series GPUs deliver 25 to 200+ GFLOPS on compiled parallel C applications, and are available in laptops, desktops, and clusters (GeForce 8800, Tesla D870, Tesla S870). GPU parallelism is doubling every year, and the programming model scales transparently. The GPU is programmable in C with CUDA tools; the multithreaded SPMD model uses both application data parallelism and thread parallelism. 10

Overview: CUDA programming model (basic concepts and data types); CUDA application programming interface (basics); simple examples to illustrate basic concepts and functionality. Performance features will be covered later. 11

CUDA: C with no shader limitations! An integrated host+device application is a single C program: serial or modestly parallel parts run in host C code, and highly parallel parts run in device SPMD kernel C code. Execution alternates: serial code (host); parallel kernel on the device, KernelA<<< nBlk, nTid >>>(args); more serial code (host); another parallel kernel, KernelB<<< nBlk, nTid >>>(args); and so on. 12

CUDA Devices and Threads: A compute device is a coprocessor to the CPU (the host), has its own DRAM (device memory), runs many threads in parallel, and is typically a GPU but can also be another type of parallel processing device. Data-parallel portions of an application are expressed as device kernels, which run on many threads. Differences between GPU and CPU threads: GPU threads are extremely lightweight, with very little creation overhead; the GPU needs 1000s of threads for full efficiency, while a multi-core CPU needs only a few. 13

G80 Graphics Mode: the future of GPUs is programmable processing, so the architecture is built around the processor. (Diagram: the host feeds an input assembler and setup/raster/ZCull stages; vertex, geometry, and pixel thread issue units dispatch work to an array of streaming processors (SP) with texture fetch (TF) units and L1 caches, backed by L2 caches and frame buffer (FB) partitions.) 14

G80 CUDA Mode: A Device Example. Processors execute computing threads; this is a new operating mode and hardware interface for computing. (Diagram: the host feeds an input assembler and a thread execution manager, which dispatch threads to processor clusters, each with a parallel data cache and a texture unit; load/store units connect them to global memory.) 15

Extended C. Declspecs: __global__, __device__, __shared__, __local__, __constant__. Keywords: threadIdx, blockIdx. Intrinsics: __syncthreads(). Runtime API: memory, symbol, and execution management; function launch. Example:

__device__ float filter[N];

__global__ void convolve(float *image) {
    __shared__ float region[M];
    ...
    region[threadIdx.x] = image[i];
    __syncthreads();
    ...
    image[j] = result;
}

// Allocate GPU memory
float *myimage;
cudaMalloc((void**)&myimage, bytes);

// 100 blocks, 10 threads per block
convolve<<<100, 10>>>(myimage); 16

Extended C compilation flow (diagram): the integrated source (foo.cu) goes through cudacc (EDG C/C++ frontend, Open64 global optimizer), which splits it into GPU assembly (foo.s), compiled by OCG into G80 SASS (foo.sass), and CPU host code (foo.cpp), compiled by gcc / cl. Reference: Mark Murphy, "NVIDIA's Experience with Open64", www.capsl.udel.edu/conferences/open64/2008/Papers/101.doc 17

Arrays of Parallel Threads: A CUDA kernel is executed by an array of threads. All threads run the same code (SPMD). Each thread has an ID that it uses to compute memory addresses and make control decisions. (Diagram: threads with threadID 0..7 each execute: float x = input[threadID]; float y = func(x); output[threadID] = y;) 18

Thread Blocks: Scalable Cooperation. Divide the monolithic thread array into multiple blocks. Threads within a block cooperate via shared memory, atomic operations, and barrier synchronization; threads in different blocks cannot cooperate. (Diagram: Block 0, Block 1, ..., Block N-1, each holding threads with threadID 0..7 executing the same code: float x = input[threadID]; float y = func(x); output[threadID] = y;) 19
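In this per-block arrangement each thread typically combines its block ID and its thread ID to form a global index into the input and output arrays. A minimal sketch of that pattern (the kernel name, array names, and the scaling operation are illustrative, not from the slides):

// Each thread computes one element, addressed by a global index
// built from its block ID and its thread ID within the block.
__global__ void scale(const float *input, float *output, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // guard for a partial last block
        float x = input[i];
        float y = 2.0f * x;                         // stand-in for func(x)
        output[i] = y;
    }
}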

Batching: Grids and Thread Blocks. A kernel is executed as a grid of thread blocks, and all threads share the same global data memory space. A thread block is a batch of threads that can cooperate with each other by synchronizing their execution (for hazard-free shared memory accesses) and by efficiently sharing data through a low-latency shared memory. Two threads from two different blocks cannot cooperate. (Diagram: the host launches Kernel 1 on Grid 1 and Kernel 2 on Grid 2 on the device; each grid is a 2D array of blocks such as Block (0,0) through Block (2,1), and each block, e.g. Block (1,1), is an array of threads (0,0) through (4,2). Courtesy: NVIDIA.) 20
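A host-side sketch of such a batched launch, assuming a hypothetical kernel named myKernel and grid/block dimensions chosen only to match the diagram above:

__global__ void myKernel(float *data) { /* ... device code ... */ }

// inside a host function, with d_data a previously allocated device pointer:
dim3 dimGrid(3, 2);                       // a 3 x 2 grid of thread blocks
dim3 dimBlock(5, 3);                      // each block is 5 x 3 threads
myKernel<<<dimGrid, dimBlock>>>(d_data);  // one kernel launch = one grid of blocks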

Block and Thread IDs: Threads and blocks have IDs, so each thread can decide what data to work on. Block ID: 1D or 2D. Thread ID: 1D, 2D, or 3D. This simplifies memory addressing when processing multidimensional data, e.g. image processing and solving PDEs on volumes. (Diagram: a grid of blocks (0,0) through (2,1) on the device, with the threads (0,0) through (4,2) of Block (1,1) expanded. Courtesy: NVIDIA.) 21

CUDA Device Memory Space Overview. Each thread can: read/write per-thread registers; read/write per-thread local memory; read/write per-block shared memory; read/write per-grid global memory; read per-grid constant memory (read-only); read per-grid texture memory (read-only). The host can read/write the global, constant, and texture memories. (Diagram: the device grid contains blocks, each with shared memory and per-thread registers and local memory; global, constant, and texture memory are shared across the grid and accessible from the host.) 22
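As a rough illustration of how these spaces appear in CUDA C (the variable names and sizes are made up for the example): __constant__ and __shared__ declarations place data in the corresponding memories, kernel pointer arguments refer to global memory, and ordinary automatic variables normally live in registers.

__constant__ float coeffs[16];           // per-grid constant memory, set by the host

__global__ void example(float *gdata)    // gdata points into per-grid global memory
{
    __shared__ float tile[64];           // per-block shared memory (assumes <= 64 threads/block)
    float t = gdata[threadIdx.x];        // t lives in a per-thread register
    tile[threadIdx.x] = t * coeffs[0];
    __syncthreads();
    gdata[threadIdx.x] = tile[threadIdx.x];
}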

Global, Constant, and Texture Memories (Long-Latency Accesses). Global memory is the main means of communicating R/W data between host and device; its contents are visible to all threads. Texture and constant memories are initialized by the host, and their contents are visible to all threads. (Diagram: the same memory hierarchy as on the previous slide. Courtesy: NVIDIA.) 23

Block IDs and Thread IDs. Each thread uses its IDs to decide what data to work on. Block ID: 1D or 2D. Thread ID: 1D, 2D, or 3D. This simplifies memory addressing when processing multidimensional data, e.g. image processing and solving PDEs on volumes. (Diagram: the host launches Kernel 1 on Grid 1 and Kernel 2 on Grid 2; Grid 1 holds blocks (0,0) through (1,1), and Block (1,1) is expanded into a 4 x 2 x 2 array of threads (0,0,0) through (3,1,1). Courtesy: NVIDIA.) 24
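For multidimensional data such as an image, a hedged sketch of the usual 2D index computation (the kernel and the row-major image layout are illustrative, not from the slides):

// Each thread handles one pixel of a width x height image stored row-major.
__global__ void invert(unsigned char *img, int width, int height)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // x from block ID + thread ID
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // y from block ID + thread ID
    if (col < width && row < height)
        img[row * width + col] = 255 - img[row * width + col];
}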

CUDA Memory Model Overview. Global memory is the main means of communicating R/W data between host and device; its contents are visible to all threads, and accesses have long latency. We will focus on global memory for now; constant and texture memory will come later. (Diagram: two blocks, each with shared memory and per-thread registers, sit above a global memory that is also accessible from the host.) 25

CUDA API Highlights: Easy and Lightweight. The API is an extension to the ANSI C programming language, giving a low learning curve. The hardware is designed to enable a lightweight runtime and driver, giving high performance. 26

CUDA Device Memory Allocation. cudaMalloc() allocates an object in device global memory; it requires two parameters: the address of a pointer to the allocated object, and the size of the allocated object. cudaFree() frees an object from device global memory; it takes the pointer to the freed object. (Diagram: the host next to the device memory hierarchy, highlighting global memory.) 27

CUDA Device Memory Allocation (cont.) Code example: allocate a 64 * 64 single-precision float array and attach the allocated storage to Md ("d" is often used to indicate a device data structure).

int TILE_WIDTH = 64;
float* Md;
int size = TILE_WIDTH * TILE_WIDTH * sizeof(float);

cudaMalloc((void**)&Md, size);
cudaFree(Md); 28

CUDA Host-Device Data Transfer. cudaMemcpy() performs memory data transfer; it requires four parameters: a pointer to the destination, a pointer to the source, the number of bytes copied, and the type of transfer (host to host, host to device, device to host, or device to device). Asynchronous transfer is also available. (Diagram: the host and the device global memory between which the copies move data.) 29

CUDA Host-Device Data Transfer (cont.) Code example: transfer a 64 * 64 single-precision float array. M is in host memory and Md is in device memory; cudaMemcpyHostToDevice and cudaMemcpyDeviceToHost are symbolic constants.

cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost); 30
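These runtime calls return a cudaError_t status. A small hedged sketch of the usual checking pattern (not shown on the slides; assumes <stdio.h> and the CUDA runtime header are included):

cudaError_t err = cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
if (err != cudaSuccess) {
    // cudaGetErrorString turns the status code into a readable message
    fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(err));
}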

GeForce 8800 Series Technical Specs. Maximum number of threads per block: 512. Maximum size of each dimension of a grid: 65,535. Number of streaming multiprocessors (SMs): GeForce 8800 GTX: 16 @ 675 MHz; GeForce 8800 GTS: 12 @ 600 MHz. Device memory: GeForce 8800 GTX: 768 MB; GeForce 8800 GTS: 640 MB. Shared memory per multiprocessor: 16 KB, divided into 16 banks. Constant memory: 64 KB. Warp size: 32 threads (16 warps/block). 31

CUDA Keywords 32

CUDA Function Declarations (executed on / only callable from):

__device__ float DeviceFunc()   device / device
__global__ void  KernelFunc()   device / host
__host__   float HostFunc()     host / host

__global__ defines a kernel function and must return void. __device__ and __host__ can be used together. 33
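A short hedged sketch of how the three qualifiers combine in practice (the function names are illustrative, not from the slides):

__device__ float square(float x) { return x * x; }         // device-only helper

__host__ __device__ float twice(float x) { return 2.0f * x; } // compiled for both host and device

__global__ void applySquare(float *data, int n)             // kernel: must return void
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = square(twice(data[i]));
}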

CUDA Function Declarations (cont.) __device__ functions cannot have their address taken. For functions executed on the device: no recursion, no static variable declarations inside the function, no variable number of arguments. 34

Calling a Kernel Function: Thread Creation. A kernel function must be called with an execution configuration:

__global__ void KernelFunc(...);
dim3 DimGrid(100, 50);        // 5000 thread blocks
dim3 DimBlock(4, 8, 8);       // 256 threads per block
size_t SharedMemBytes = 64;   // 64 bytes of shared memory
KernelFunc<<< DimGrid, DimBlock, SharedMemBytes >>>(...);

Any call to a kernel function is asynchronous from CUDA 1.0 on; explicit synchronization is needed for blocking. 35
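Because the launch returns immediately, host code that needs the results usually blocks explicitly. A minimal sketch, assuming the CUDA 1.x-era API (cudaThreadSynchronize was the call of that generation; later toolkits use cudaDeviceSynchronize) and an illustrative device argument d_args:

KernelFunc<<<DimGrid, DimBlock, SharedMemBytes>>>(d_args); // launch returns immediately
cudaThreadSynchronize();  // block the host until the kernel has finished
// (a cudaMemcpy from device to host would also implicitly wait for the kernel)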

Technical details: NVIDIA's CUDA development tools provide three key components to help you get started: the latest CUDA driver, a complete CUDA Toolkit, and CUDA SDK code samples. 1. CUDA Driver 2. CUDA Toolkit 3. CUDA SDK code samples 36


A Simple Running Example: Matrix Multiplication. A simple matrix multiplication example that illustrates the basic features of memory and thread management in CUDA programs: local and register usage, thread ID usage, and the memory data transfer API between host and device. Shared memory usage is left until later. Assume square matrices for simplicity. 52

Programming Model: Square Matrix Multiplication Example. P = M * N, all of size WIDTH x WIDTH. Without tiling: one thread calculates one element of P, and M and N are loaded WIDTH times from global memory. (Diagram: M, N, and P drawn as WIDTH x WIDTH matrices, with a row of M and a column of N combining into one element of P.) 53

Memory Layout of a Matrix in C. A 4 x 4 matrix M with elements Mx,y:

M0,0 M1,0 M2,0 M3,0
M0,1 M1,1 M2,1 M3,1
M0,2 M1,2 M2,2 M3,2
M0,3 M1,3 M2,3 M3,3

is laid out linearly in memory, row after row:

M0,0 M1,0 M2,0 M3,0 M0,1 M1,1 M2,1 M3,1 M0,2 M1,2 M2,2 M3,2 M0,3 M1,3 M2,3 M3,3 54
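In other words, for a Width x Width matrix stored this way, the element at row Row and column Col lives at linear offset Row * Width + Col. A tiny sketch of the access pattern used throughout the rest of the example (the helper function is illustrative):

// Row-major access into a flat array: element at row 'Row', column 'Col'
float get(const float *M, int Row, int Col, int Width)
{
    return M[Row * Width + Col];
}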

Step 1: Matrix Multiplication, A Simple Host Version in C

// Matrix multiplication on the (CPU) host in double precision
void MatrixMulOnHost(float* M, float* N, float* P, int Width)
{
    for (int i = 0; i < Width; ++i)
        for (int j = 0; j < Width; ++j) {
            double sum = 0;
            for (int k = 0; k < Width; ++k) {
                double a = M[i * Width + k];
                double b = N[k * Width + j];
                sum += a * b;
            }
            P[i * Width + j] = sum;
        }
}

(Diagram: row i of M and column j of N combine into element (i, j) of P.) 55

Step 2: Input Matrix Data Transfer (Host-side Code)

void MatrixMulOnDevice(float* M, float* N, float* P, int Width)
{
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;

    // 1. Allocate and load M, N to device memory
    cudaMalloc((void**)&Md, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&Nd, size);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);

    // Allocate P on the device
    cudaMalloc((void**)&Pd, size); 56

Step 3: Output Matrix Data Transfer (Host-side Code)

    // 2. Kernel invocation code - to be shown later

    // 3. Read P from the device
    cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);

    // Free device matrices
    cudaFree(Md);
    cudaFree(Nd);
    cudaFree(Pd);
} 57

Step 4: Kernel Function

// Matrix multiplication kernel - per-thread code
__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Pvalue is used to store the element of the matrix
    // that is computed by the thread
    float Pvalue = 0; 58

Step 4: Kernel Function (cont.)

    for (int k = 0; k < Width; ++k) {
        float Melement = Md[threadIdx.y * Width + k];
        float Nelement = Nd[k * Width + threadIdx.x];
        Pvalue += Melement * Nelement;
    }

    Pd[threadIdx.y * Width + threadIdx.x] = Pvalue;
}

(Diagram: thread (tx, ty) walks across row ty of Md and down column tx of Nd to produce element (tx, ty) of Pd.) 59

Step 5: Kernel Invocation (Host-side Code)

    // Setup the execution configuration
    dim3 dimGrid(1, 1);
    dim3 dimBlock(Width, Width);

    // Launch the device computation threads!
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width); 60
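For completeness, a hedged sketch of how a caller might exercise MatrixMulOnDevice once the pieces above are assembled (the main function, fill values, and matrix size are illustrative, not from the slides; assumes <stdio.h> and <stdlib.h> are included):

int main(void)
{
    const int Width = 16;                        // small enough to fit in one block
    int size = Width * Width * sizeof(float);
    float *M = (float*)malloc(size);
    float *N = (float*)malloc(size);
    float *P = (float*)malloc(size);
    for (int i = 0; i < Width * Width; ++i) { M[i] = 1.0f; N[i] = 2.0f; }

    MatrixMulOnDevice(M, N, P, Width);           // each P[i] should equal 2 * Width

    printf("P[0] = %f\n", P[0]);
    free(M); free(N); free(P);
    return 0;
}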

Only One Block Used. One block of threads computes matrix Pd; each thread computes one element of Pd. Each thread loads a row of matrix Md, loads a column of matrix Nd, and performs one multiply and one addition for each pair of Md and Nd elements. The compute to off-chip memory access ratio is close to 1:1 (not very high), and the size of the matrix is limited by the number of threads allowed in a thread block. (Diagram: Grid 1 holds a single block; thread (2, 2) combines a row of Md, e.g. 3, 2, 5, 4, with a column of Nd, e.g. 2, 4, 2, 6, to produce the Pd entry 48.) 61

Step 7: Handling Arbitrary-Sized Square Matrices. Have each 2D thread block compute a (TILE_WIDTH)^2 sub-matrix (tile) of the result matrix; each block has (TILE_WIDTH)^2 threads. Generate a 2D grid of (WIDTH/TILE_WIDTH)^2 blocks. You still need to put a loop around the kernel call for cases where WIDTH/TILE_WIDTH is greater than the maximum grid size (64K)! (Diagram: block (bx, by), and thread (tx, ty) within it, pick out one TILE_WIDTH-wide tile of Pd, built from a row band of Md and a column band of Nd.) 62
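A minimal sketch of the index arithmetic this implies, combining block and thread IDs so the kernel is no longer limited to a single block (this is a straightforward extension of the Step 4 kernel, not the shared-memory tiled version, which comes later; it assumes Width is a multiple of TILE_WIDTH):

__global__ void MatrixMulKernelMultiBlock(float* Md, float* Nd, float* Pd, int Width)
{
    // Global row and column of the Pd element this thread computes
    int Row = blockIdx.y * blockDim.y + threadIdx.y;
    int Col = blockIdx.x * blockDim.x + threadIdx.x;

    float Pvalue = 0;
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row * Width + k] * Nd[k * Width + Col];

    Pd[Row * Width + Col] = Pvalue;
}

// Launch: dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
//         dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);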

Some Useful Information on Tools 63

Compiling a CUDA Program (diagram): a C/C++ CUDA application is fed to NVCC, which emits CPU code plus virtual PTX code; a PTX-to-target compiler then produces physical target code for a specific GPU such as the G80. PTX (Parallel Thread Execution) is a virtual machine and ISA: it defines the programming model and the execution resources and state. For example, the source line

float4 me = gx[gtid]; me.x += me.y * me.z;

becomes PTX such as

ld.global.v4.f32 {$f1,$f3,$f5,$f7}, [$r9+0];
mad.f32 $f1, $f5, $f3, $f1; 64

Compilation. Any source file containing CUDA language extensions must be compiled with NVCC. NVCC is a compiler driver: it works by invoking all the necessary tools and compilers, such as cudacc, g++, cl, ... NVCC outputs: C code (host CPU code), which must then be compiled with the rest of the application using another tool, and PTX, either as object code directly or as PTX source interpreted at runtime. 65
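As a hedged example of driving the compiler from the command line (the file names are illustrative; only standard nvcc usage is assumed):

# Compile a .cu file (host code + kernels) into an executable in one step
nvcc -o matrixmul matrixmul.cu

# Or produce an object file and link it into a larger C/C++ application
nvcc -c matrixmul.cu -o matrixmul.o
g++ matrixmul.o -o app -lcudart   # may also need -L<path to the CUDA libraries>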

Linking. Any executable with CUDA code requires two dynamic libraries: the CUDA runtime library (cudart) and the CUDA core library (cuda). 66

Debugging Using the Device Emulation Mode. An executable compiled in device emulation mode (nvcc -deviceemu) runs completely on the host using the CUDA runtime: no device or CUDA driver is needed, and each device thread is emulated with a host thread. Running in device emulation mode, one can: use host native debug support (breakpoints, inspection, etc.); access any device-specific data from host code and vice versa; call any host function from device code (e.g. printf) and vice versa; detect deadlock situations caused by improper usage of __syncthreads(). 67

Device Emulation Mode Pitfalls. Emulated device threads execute sequentially, so simultaneous accesses of the same memory location by multiple threads could produce different results. Dereferencing device pointers on the host, or host pointers on the device, can produce correct results in device emulation mode but will generate an error in device execution mode. 68

Floating Point. Results of floating-point computations will differ slightly between device execution and host emulation because of: different compiler outputs and instruction sets, and the use of extended precision for intermediate results on the host. There are various options to force strict single precision on the host. 69