GPU Programming Introduction

GPU Programming Introduction DR. CHRISTOPH ANGERER, NVIDIA

AGENDA: Introduction to Heterogeneous Computing, Using Accelerated Libraries, GPU Programming Languages, Introduction to CUDA, Lunch

What is Heterogeneous Computing? Application execution is split between two processors: the CPU, which delivers high serial performance, and the GPU, which delivers high data parallelism.

Low Latency or High Throughput?

Latency vs. Throughput. F-22 Raptor: 1500 mph, Knoxville to San Jose in 1:25, seats 1. Boeing 737: 485 mph, Knoxville to San Jose in 4:20, seats 200.

Latency vs. Throughput. F-22 Raptor: latency 1:25, throughput 1 / 1.42 hours = 0.7 people/hr. Boeing 737: latency 4:20, throughput 200 / 4.33 hours = 46.2 people/hr.

Low Latency or High Throughput? CPU architecture must minimize latency within each thread; GPU architecture hides latency with computation from other threads. A CPU core is a low-latency processor that keeps a few threads (T1..T4) moving with minimal stalls, while a GPU streaming multiprocessor is a high-throughput processor that context-switches among many warps (W1..W4): while one warp is waiting for data, another that is ready to be processed runs.

Anatomy of a GPU-Accelerated Application. Serial code executes in a Host (CPU) thread. Parallel code executes in many Device (GPU) threads across multiple processing elements. Over the life of the application, execution alternates: serial code on the host (CPU), parallel code on the device (GPU), serial code on the host, and so on.

Simple Processing Flow (over the PCI bus): 1. Copy input data from CPU memory to GPU memory. 2. Load the GPU code and execute it. 3. Copy the results from GPU memory back to CPU memory.

3 APPROACHES TO GPU PROGRAMMING. Applications can be accelerated in three ways: Libraries (easy to use, most performance), Compiler Directives (easy to use, portable code), and Programming Languages (most performance, most flexibility).

SIMPLICITY & PERFORMANCE. Accelerated libraries: little or no code change for standard libraries, with high performance, but limited by which libraries are available. Compiler directives: based on existing programming languages, so they are simple and familiar, but performance may not be optimal because directives often do not expose low-level architectural details. Parallel programming languages: expose low-level details for maximum performance, but are often more difficult to learn and more time-consuming to implement.

Libraries

Libraries: Easy, High-Quality Acceleration. Ease of use: using libraries enables GPU acceleration without in-depth knowledge of GPU programming. Drop-in: many GPU-accelerated libraries follow standard APIs, so acceleration requires minimal code changes. Quality: libraries offer high-quality implementations of functions encountered in a broad range of applications. Performance: NVIDIA libraries are tuned by experts.

GPU-Accelerated Libraries: providing drop-in acceleration. Linear algebra (FFT, BLAS, SPARSE, matrix): NVIDIA cuFFT, cuBLAS, cuSPARSE. Numerical & math (RAND, statistics): NVIDIA Math Lib, NVIDIA cuRAND. Data structures & AI (sort, scan, zero sum): GPU AI board games, GPU AI path finding. Visual processing (image & video): NVIDIA NPP, NVIDIA Video Encode.

3 Steps to a CUDA-Accelerated Application. Step 1: Substitute library calls with equivalent CUDA library calls, e.g. saxpy(...) becomes cublasSaxpy(...). Step 2: Manage data locality: with CUDA, cudaMalloc(), cudaMemcpy(), etc.; with cuBLAS, cublasAlloc(), cublasSetVector(), etc. Step 3: Rebuild and link against the CUDA-accelerated library: nvcc myobj.o -lcublas.
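To make the three steps concrete, here is a minimal sketch (not from the slides) that replaces a CPU saxpy with the cuBLAS v2 API; the wrapper name gpu_saxpy and the vector handling are illustrative, and error checking is omitted for brevity.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    // Sketch: y = alpha*x + y computed on the GPU via cuBLAS.
    void gpu_saxpy(int n, float alpha, const float *x, float *y)
    {
        float *d_x, *d_y;
        cudaMalloc((void **)&d_x, n * sizeof(float));
        cudaMalloc((void **)&d_y, n * sizeof(float));

        cublasHandle_t handle;
        cublasCreate(&handle);

        // Step 2: manage data locality (host -> device)
        cublasSetVector(n, sizeof(float), x, 1, d_x, 1);
        cublasSetVector(n, sizeof(float), y, 1, d_y, 1);

        // Step 1: substitute the library call
        cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

        // Copy the result back (device -> host)
        cublasGetVector(n, sizeof(float), d_y, 1, y, 1);

        cublasDestroy(handle);
        cudaFree(d_x);
        cudaFree(d_y);
    }

Step 3 then amounts to compiling and linking with nvcc and -lcublas, as on the slide.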

Programming Languages (CUDA)

GPU Programming Languages. Numerical analytics: MATLAB, Mathematica, LabVIEW. Fortran: CUDA Fortran. C: CUDA C. C++: Thrust, CUDA C++. Python: PyCUDA, Copperhead. C#: GPU.NET.

What is CUDA? CUDA Platform and Programming Model: exposes GPU computing for general-purpose use; a model for how work is offloaded to the GPU and how that work is executed on the GPU. CUDA C/C++: based on industry-standard C/C++; a small set of extensions to enable heterogeneous programming; straightforward APIs to manage devices, memory, etc.

Anatomy of a CUDA Application. Serial code executes in a Host (CPU) thread (functions). Parallel code executes in many Device (GPU) threads across multiple processing elements (kernels). Over the life of a CUDA application, execution alternates between serial code on the host (CPU) and parallel code on the device (GPU).

CUDA Kernels: Parallel Threads. A kernel is a function executed on the GPU as an array of threads in parallel. All threads execute the same code; they can take different paths, but the less divergence between neighboring threads, the better. Each thread has an ID, which it uses to select input/output data and to make control decisions.

    __global__ void mykernel(float *input, float *output)
    {
        float x = input[threadIdx.x];
        float y = func(x);
        output[threadIdx.x] = y;
    }

CUDA Kernels: Blocks. Threads are grouped into blocks; blocks are grouped into a grid.

CUDA Kernels: Grids. A grid is a 1D, 2D or 3D index space of blocks. Threads query their position inside the index space using blockIdx.{x,y,z} and threadIdx.{x,y,z} and decide on the work they have to do. Only threads in the same block can communicate directly (via shared memory).
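To make the multi-dimensional index space concrete, here is a small self-contained sketch (not from the slides) that launches a 2D grid of 2D blocks; the kernel name zero2D and the dimensions are illustrative.

    #include <cuda_runtime.h>

    // Each thread computes its (x, y) position in the 2D index space
    // and touches one element of a width x height array.
    __global__ void zero2D(float *data, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;   // column
        int y = blockIdx.y * blockDim.y + threadIdx.y;   // row
        if (x < width && y < height)
            data[y * width + x] = 0.0f;
    }

    int main(void)
    {
        int width = 1024, height = 768;
        float *data;
        cudaMallocManaged(&data, width * height * sizeof(float));

        dim3 threads(16, 16);                            // 256 threads per block
        dim3 blocks((width  + threads.x - 1) / threads.x,
                    (height + threads.y - 1) / threads.y);
        zero2D<<<blocks, threads>>>(data, width, height);

        cudaDeviceSynchronize();
        cudaFree(data);
        return 0;
    }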

HELLO WORLD!

Hello World!

    int main(void)
    {
        printf("Hello World!\n");
        return 0;
    }

Standard C that runs on the host. The NVIDIA compiler (nvcc) can be used to compile programs with no device code. Output:

    $ module load cudatoolkit
    $ nvcc hello_world.cu
    $ ./a.out
    Hello World!

Hello World! with Device Code

    __global__ void mykernel(void)
    {
    }

    int main(void)
    {
        mykernel<<<1,1>>>();
        printf("Hello World!\n");
        return 0;
    }

Two new syntactic elements.

Hello World! with Device Code

    __global__ void mykernel(void)
    {
    }

The CUDA C/C++ keyword __global__ indicates a function that: runs on the device; is called from host code. nvcc separates source code into host and device components: device functions (e.g. mykernel()) are processed by the NVIDIA compiler; host functions (e.g. main()) are processed by the standard host compiler (gcc, cl.exe).

Hello World! with Device Code

    mykernel<<<1,1>>>();

Triple angle brackets mark a call from host code to device code, also called a kernel launch. We'll return to the parameters (1,1) in a moment. That's all that is required to execute a function on the GPU!

Parallel Programming in CUDA C/C++. But wait... GPU computing is about massive parallelism! We need a more interesting example. We'll start by adding two integers and build up to vector addition (c = a + b).

Addition on the Device. A simple kernel to add two integers:

    __global__ void add(int a, int b, int *c)
    {
        *c = a + b;
    }

As before, __global__ is a CUDA C/C++ keyword meaning: add() will execute on the device; add() will be called from the host. Note that we are passing pointers; this is necessary for c because we want the result back.

Memory Management. Host and device memory are separate entities. Device pointers point to GPU memory: they may be passed to/from host code, but may not be dereferenced in host code. Host pointers point to CPU memory: they may be passed to/from device code, but may not be dereferenced in device code. There is a simple CUDA API for handling device memory: cudaMalloc(), cudaFree(), cudaMemcpy(), similar to the C equivalents malloc(), free(), memcpy(). NEW (CUDA 6.0): Unified Memory can do the memory management for you!
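Below is a minimal sketch (not from the slides) of the basic device-memory round trip with cudaMalloc(), cudaMemcpy() and cudaFree(); the array size and contents are illustrative.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        const int n = 16;
        int h_in[16], h_out[16];                       // host arrays (CPU memory)
        for (int i = 0; i < n; i++) h_in[i] = i;

        int *d_buf;                                    // device pointer (GPU memory)
        cudaMalloc((void **)&d_buf, n * sizeof(int));  // d_buf must NOT be dereferenced on the host

        cudaMemcpy(d_buf, h_in, n * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(h_out, d_buf, n * sizeof(int), cudaMemcpyDeviceToHost);

        printf("h_out[5] = %d\n", h_out[5]);           // prints 5: the data made a GPU round trip
        cudaFree(d_buf);
        return 0;
    }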

Addition on the Device: add(). Returning to our add() kernel:

    __global__ void add(int a, int b, int *c)
    {
        *c = a + b;
    }

Let's take a look at main().

Addition on the Device: main()

    int main(void) {
        int a, b, c;              // host copies of a, b, c
        int *d_c;                 // device copy of c
        int size = sizeof(int);

        // Allocate space for device copy of c
        cudaMalloc((void **)&d_c, size);

        // Setup input values
        a = 2;
        b = 7;

Addition on the Device: main()

        // Launch add() kernel on GPU
        add<<<1,1>>>(a, b, d_c);

        // Copy result back to host
        cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);

        // Cleanup
        cudaFree(d_c);
        return 0;
    }

Unified Memory Simplifies Data Management

    int main(void) {
        int a, b;
        int *c;
        int size = sizeof(int);

        // Allocate managed space for c (accessible from host and device)
        cudaMallocManaged(&c, size);

        // Setup input values
        a = 2;
        b = 7;

        // Launch add() kernel on GPU
        add<<<1,1>>>(a, b, c);

        // Use result on host, no explicit copy needed!
        cudaDeviceSynchronize();
        printf("Result is %d\n", *c);

        // Cleanup
        cudaFree(c);
        return 0;
    }

Review. Difference between host and device: Host = CPU, Device = GPU. Using __global__ to declare a function as device code (a kernel): it executes on the device and is called from the host. Passing parameters from host code to a device kernel.

RUNNING IN PARALLEL

Moving to Parallel. GPU computing is about massive parallelism. So how do we run code in parallel on the device? Instead of executing add() once with add<<<1, 1>>>(), launch add<<<1, N>>>() to execute N threads in parallel.

CUDA Threads. With add() running in parallel we can do vector addition. Terminology: a block can be split into parallel threads. Let's change add() to use parallel threads instead of a single thread, indexing the arrays with threadIdx.x:

    __global__ void add(int *a, int *b, int *c)
    {
        c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
    }

Let's have a look at main().

Unified Memory

    #define N 512

    int main(void) {
        int *a, *b, *c;
        int size = N * sizeof(int);

        // Allocate managed space for N-element vectors a, b, c
        cudaMallocManaged(&a, size);
        cudaMallocManaged(&b, size);
        cudaMallocManaged(&c, size);

        // Setup input values on host
        fillRandom(N, a, b);

        // Launch one add() kernel block on GPU with N threads
        add<<<1, N>>>(a, b, c);

        // Use result on host, no explicit copy needed
        cudaDeviceSynchronize();
        printVector(N, c);

        // Cleanup
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Review. Launching parallel kernels: kernels are launched with a given grid (1D/2D/3D index space) and block (1D/2D/3D index subspace) through the <<<grid, block>>> notation. Launch N blocks with M threads each with add<<<N, M>>>(...). Use blockIdx.x to access the block index and threadIdx.x to access the thread index. However, the number of threads per block is limited (max 1024). Also, a block always executes on a single streaming multiprocessor (SM); with a single block, the other ~14 SMs (on a K40) sit unused. The grid provides a much bigger index space with which to fill the GPU.

WELCOME TO THE GRID

Indexing Arrays with Blocks and Threads. A thread calculates its position in the 1D/2D/3D grid using blockIdx.x and threadIdx.x. Consider indexing an array with one element per thread (8 threads per block): within each block, threadIdx.x runs from 0 to 7, while blockIdx.x identifies the block (0, 1, 2, 3, ...). With M threads per block, a unique index for each thread is given by:

    int index = blockIdx.x * M + threadIdx.x;

Indexing Arrays: Example. Which thread will operate on element 21 of a 32-element array? With M = 8 threads per block, that element lies in the block with blockIdx.x = 2, at threadIdx.x = 5:

    int index = blockIdx.x * M + threadIdx.x
              = 2 * 8 + 5
              = 21;

Vector Addition with Blocks and Threads. Use the built-in variable blockDim.x for the number of threads per block:

    int index = blockIdx.x * blockDim.x + threadIdx.x;

Combined version of add() using parallel threads and parallel blocks:

    __global__ void add(int *a, int *b, int *c)
    {
        int index = blockIdx.x * blockDim.x + threadIdx.x;
        c[index] = a[index] + b[index];
    }

What changes need to be made in main()?

Unified Memory

    #define N (2048*2048)
    #define THREADS_PER_BLOCK 512

    int main(void) {
        int *a, *b, *c;
        int size = N * sizeof(int);

        // Allocate managed space for vectors a, b, c
        cudaMallocManaged(&a, size);
        cudaMallocManaged(&b, size);
        cudaMallocManaged(&c, size);

        // Setup input values on host
        fillRandom(N, a, b);

        // Launch add() kernel on GPU with N/THREADS_PER_BLOCK blocks
        add<<<N/THREADS_PER_BLOCK, THREADS_PER_BLOCK>>>(a, b, c);

        // Use result on host
        cudaDeviceSynchronize();
        printVector(N, c);

        // Cleanup
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Handling Arbitrary Vector Sizes. Typical problems are not even multiples of blockDim.x. Avoid accessing beyond the end of the arrays:

    __global__ void add(int *a, int *b, int *c, int n)
    {
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        if (index < n)
            c[index] = a[index] + b[index];
    }

Update the kernel launch, rounding the number of blocks up:

    add<<<(N + M - 1) / M, M>>>(d_a, d_b, d_c, N);
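Putting the guarded kernel and the rounded-up launch together, here is one possible complete program (not from the slides), using managed memory and an N that is deliberately not a multiple of the block size; the sizes and fill values are illustrative.

    #include <stdio.h>
    #include <cuda_runtime.h>

    #define N 100000                 // not a multiple of the block size
    #define M 512                    // threads per block

    __global__ void add(int *a, int *b, int *c, int n)
    {
        int index = threadIdx.x + blockIdx.x * blockDim.x;
        if (index < n)
            c[index] = a[index] + b[index];
    }

    int main(void)
    {
        int *a, *b, *c;
        int size = N * sizeof(int);

        cudaMallocManaged(&a, size);
        cudaMallocManaged(&b, size);
        cudaMallocManaged(&c, size);
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

        // Round the grid size up so every element is covered; the guard in the
        // kernel keeps the extra threads in the last block from running off the end.
        add<<<(N + M - 1) / M, M>>>(a, b, c, N);
        cudaDeviceSynchronize();

        printf("c[N-1] = %d (expected %d)\n", c[N - 1], 3 * (N - 1));

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }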

ADVANCED CONCEPTS: COOPERATING THREADS

CONCEPTS: Heterogeneous computing, blocks, threads, indexing, shared memory, __syncthreads(), asynchronous operation, handling errors, managing devices.

1D Stencil. Consider applying a 1D stencil to a 1D array of elements: each output element is the sum of the input elements within a radius. If the radius is 3, then each output element is the sum of 7 input elements (the element itself plus 3 neighbors on each side).

Implementing Within a Block. Each thread processes one output element, blockDim.x elements per block. Input elements are read several times: with radius 3, each input element is read seven times (once by each thread whose stencil window covers it).

Sharing Data Between Threads. Terminology: within a block, threads share data via shared memory, an extremely fast on-chip memory. In contrast to device memory, which is referred to as global memory, shared memory acts like a user-managed cache. Declare it using __shared__; it is allocated per block, and the data is not visible to threads in other blocks.

Implementing With Shared Memory. Cache data in shared memory: read (blockDim.x + 2 * radius) input elements from global memory into shared memory, compute blockDim.x output elements, and write blockDim.x output elements back to global memory. Each block needs a halo of radius elements at each boundary (a halo on the left and a halo on the right of the blockDim.x output elements).

Stencil Kernel (1 of 2)

    __global__ void stencil_1d(int *in, int *out)
    {
        __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
        int gindex = threadIdx.x + blockIdx.x * blockDim.x;
        int lindex = threadIdx.x + RADIUS;

        // Read input elements into shared memory
        temp[lindex] = in[gindex];
        if (threadIdx.x < RADIUS) {
            temp[lindex - RADIUS] = in[gindex - RADIUS];
            temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
        }

Stencil Kernel (2 of 2)

        // Apply the stencil
        int result = 0;
        for (int offset = -RADIUS; offset <= RADIUS; offset++)
            result += temp[lindex + offset];

        // Store the result
        out[gindex] = result;
    }

Data Race! The stencil example will not work as written. Suppose thread 15 reads the halo before thread 0 has fetched it: thread 15 stores its own element at temp[18], skips the halo loads (since its threadIdx.x is not < RADIUS), and then in the stencil loop loads from temp[19], a halo element that thread 0 is responsible for fetching.

    temp[lindex] = in[gindex];                      // thread 15 stores at temp[18]
    if (threadIdx.x < RADIUS) {                     // skipped by thread 15
        temp[lindex - RADIUS] = in[gindex - RADIUS];
        temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
    }
    int result = 0;
    for (int offset = -RADIUS; offset <= RADIUS; offset++)
        result += temp[lindex + offset];            // thread 15 loads from temp[19]

__syncthreads()

    void __syncthreads();

Synchronizes all threads within a block. Used to prevent RAW / WAR / WAW hazards. All threads must reach the barrier; in conditional code, the condition must be uniform across the block.

Stencil Kernel

    __global__ void stencil_1d(int *in, int *out)
    {
        __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
        int gindex = threadIdx.x + blockIdx.x * blockDim.x;
        int lindex = threadIdx.x + RADIUS;

        // Read input elements into shared memory
        temp[lindex] = in[gindex];
        if (threadIdx.x < RADIUS) {
            temp[lindex - RADIUS] = in[gindex - RADIUS];
            temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
        }

        // Synchronize (ensure all the data is available)
        __syncthreads();

Stencil Kernel

        // Apply the stencil
        int result = 0;
        for (int offset = -RADIUS; offset <= RADIUS; offset++)
            result += temp[lindex + offset];

        // Store the result
        out[gindex] = result;
    }
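Note that the kernel reads RADIUS elements beyond each end of the block's range, so at the edges of the grid it would run past the input array unless the host provides a halo. One possible launch, not shown in the slides, pads the input allocation and offsets the pointer; the macro values and fill are illustrative.

    #define RADIUS 3
    #define BLOCK_SIZE 256
    #define N (BLOCK_SIZE * 64)

    int main(void)
    {
        int *in, *out;
        cudaMallocManaged(&in,  (N + 2 * RADIUS) * sizeof(int));   // padded input, halo included
        cudaMallocManaged(&out, N * sizeof(int));
        for (int i = 0; i < N + 2 * RADIUS; i++) in[i] = 1;

        // Offset the input pointer by RADIUS so gindex 0 maps to the first real element
        stencil_1d<<<N / BLOCK_SIZE, BLOCK_SIZE>>>(in + RADIUS, out);
        cudaDeviceSynchronize();
        // Every output element is now 2*RADIUS + 1 = 7

        cudaFree(in); cudaFree(out);
        return 0;
    }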

Review (1 of 2). Launching parallel threads: launch N blocks with M threads per block with kernel<<<N, M>>>(...); use blockIdx.x to access the block index within the grid and threadIdx.x to access the thread index within the block. Assign elements to threads:

    int index = blockIdx.x * blockDim.x + threadIdx.x;

Review (2 of 2). Use __shared__ to declare a variable/array in shared memory: data is shared between threads in a block and not visible to threads in other blocks. Use __syncthreads() as a barrier to prevent data hazards.

MANAGING THE DEVICE

CONCEPTS: Heterogeneous computing, blocks, threads, indexing, shared memory, __syncthreads(), asynchronous operation, handling errors, managing devices.

Coordinating Host & Device. Kernel launches are asynchronous: control returns to the CPU immediately, so the CPU needs to synchronize before consuming the results. cudaMemcpy(): blocks the CPU until the copy is complete; the copy begins when all preceding CUDA calls have completed. cudaMemcpyAsync(): asynchronous, does not block the CPU. cudaDeviceSynchronize(): blocks the CPU until all preceding CUDA calls have completed.
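As an illustration (not from the slides), an asynchronous copy can be issued into a stream and the host synchronizes on that stream before touching the data; cudaMallocHost() is used here so the host buffer is pinned and the copy can actually overlap with CPU work. The buffer size is illustrative.

    #include <cuda_runtime.h>

    int main(void)
    {
        const int n = 1 << 20;
        float *h_data, *d_data;
        cudaMallocHost((void **)&h_data, n * sizeof(float));   // pinned host memory
        cudaMalloc((void **)&d_data, n * sizeof(float));

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Asynchronous copy: returns immediately, the CPU is free to do other work
        cudaMemcpyAsync(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice, stream);

        // ... independent CPU work can run here ...

        cudaStreamSynchronize(stream);                         // wait for the copy to finish
        cudaStreamDestroy(stream);
        cudaFree(d_data);
        cudaFreeHost(h_data);
        return 0;
    }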

Reporting Errors. All CUDA API calls return an error code (cudaError_t), reporting either an error in the API call itself or an error in an earlier asynchronous operation (e.g. a kernel). Get the error code for the last error: cudaError_t cudaGetLastError(void). Get a string describing the error: const char *cudaGetErrorString(cudaError_t). For example:

    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("%s\n", cudaGetErrorString(err));
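A common convention, not shown in the slides, is to wrap API calls in a small checking macro so errors are reported with the file and line where they occurred; the macro name CUDA_CHECK is illustrative.

    #include <stdio.h>
    #include <cuda_runtime.h>

    #define CUDA_CHECK(call)                                              \
        do {                                                              \
            cudaError_t err = (call);                                     \
            if (err != cudaSuccess) {                                     \
                fprintf(stderr, "CUDA error at %s:%d: %s\n",              \
                        __FILE__, __LINE__, cudaGetErrorString(err));     \
            }                                                             \
        } while (0)

    // Usage:
    //   CUDA_CHECK(cudaMalloc((void **)&d_c, size));
    //   mykernel<<<1,1>>>();
    //   CUDA_CHECK(cudaGetLastError());          // catches launch errors
    //   CUDA_CHECK(cudaDeviceSynchronize());     // catches errors from the kernel itself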

Device Management. Applications can query and select GPUs: cudaGetDeviceCount(int *count), cudaSetDevice(int device), cudaGetDevice(int *device), cudaGetDeviceProperties(cudaDeviceProp *prop, int device). Multiple host threads can share a device, and a single host thread can manage multiple devices: use cudaSetDevice(i) to select the current device and cudaMemcpy(...) for peer-to-peer copies.
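A minimal sketch (not from the slides) of querying and selecting devices with the calls listed above; the printed fields are illustrative.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; i++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %d SMs, %.1f GB\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / 1073741824.0);
        }
        if (count > 0)
            cudaSetDevice(0);   // select the first GPU as the current device
        return 0;
    }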

Next Steps. We have just scratched the surface, but you can already do a lot of useful things with what we have learned. Next topics: thread cooperation via shared memory and __syncthreads(); asynchronous kernel launches and memory operations; device management (cudaSetDevice, error handling, ...); atomic operations; launching kernels from the GPU; MPI.

Links and References. These languages are supported on all CUDA-capable GPUs.
CUDA C/C++ and Fortran: http://developer.nvidia.com/cuda-toolkit
Parallel Forall blog: http://devblogs.nvidia.com/parallelforall/
Thrust C++ Template Library: http://developer.nvidia.com/thrust
Accelerated Libraries: https://developer.nvidia.com/gpu-accelerated-libraries
CUDA Programming Guide: http://docs.nvidia.com/cuda/
PyCUDA (Python): http://mathema.tician.de/software/pycuda
GTC On Demand: http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.php

LUNCH