Introduction to GPU Computing. Design and Analysis of Parallel Algorithms


1 Introduction to GPU Computing Design and Analysis of Parallel Algorithms

2 Sources CUDA Programming Guide (3.2) CUDA Best Practices Guide (3.2) CUDA Toolkit Reference Manual (3.2) CUDA SDK Examples

3 Part I Introduction

4 Multi-core versus many-core. Multi-core (CPU) vs. many-core (GPU). [Diagram contrasting the two dies, labeled Control, ALU, and Cache.] Many-core processors: 1. More and simpler cores 2. Massive parallelism

5 The evolution of CPUs and GPUs. CPUs: general purpose, sequential, advanced core features, recent growth in the number of cores. GPUs: special purpose (graphics), massively parallel, large memory bandwidth, simple core features, recent move toward general-purpose programming.

6 Part II Overview of the CUDA programming model

7 hello.cu 1. Source code:
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
__global__ void hello(void) {}
int main(int argc, char *argv[]) {
  hello<<<1,1>>>();
  printf("%s\n", cudaGetErrorString(cudaGetLastError()));
  return 0;
}
2. Output: > no error

8 Get started 1. Log in to hamrinsberget 2. Load the CUDA module: module load cudatoolkit/ 3. Type in the code on the previous slide in a file called hello.cu 4. Compile and link by typing nvcc -o hello hello.cu 5. Run the program by typing ./hello

9 Kernels, grids, and thread blocks. [Figure: a kernel launches a grid of thread blocks; an older GPU runs fewer blocks at a time and needs more time steps, a newer GPU runs more blocks concurrently and finishes sooner.]

10 Kernel A function written in CUDA C. Run by multiple threads in parallel on the device. Each thread is given a unique ID. The number of threads is set when the kernel is launched.

11 Kernel Example Kernel code: __global__ void kernel(void) { return; } Host code that launches the kernel with 8 × 4 = 32 threads: kernel<<<8,4>>>();

12 Thread A sequential scalar thread of execution. Very, very cheap (almost free). Managed in hardware by the device.

13 Thread block A (small) group of threads. Typically around 128 or 256 threads per block. The size of the thread block is determined at kernel launch. Threads within a thread block share a fast memory.

14 Grid A group of thread blocks. The size of the grid is determined at kernel launch. Threads in different thread blocks cannot synchronize.

15 Warp A (small) subset of a thread block. Typically 32 threads per warp. The size is determined by the hardware. Threads within a warp execute in SIMD fashion.

16 Part III Overview of the CUDA architecture

17 Memory hierarchy Host mem: accessible by the host (and also by the GPU via mapped memory). GPU mem: accessible by the GPU directly and by the CPU via an API. Shared mem: private memory associated with each thread block. Registers: private memory associated with each thread. There are also other types of memory that we don't go into.
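For example, a minimal host-side sketch of reaching GPU memory through the runtime API (the size n and the names h_data/d_data are illustrative):
// Hypothetical host code: move an array to GPU global memory and back.
#include <cuda_runtime.h>
#include <stdlib.h>

int main(void) {
  const int n = 1024;
  float *h_data = (float *)malloc(n * sizeof(float));   // host (CPU) memory
  float *d_data = NULL;                                  // GPU global memory

  cudaMalloc((void **)&d_data, n * sizeof(float));       // allocate on the device
  cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);
  // ... launch kernels that read and write d_data ...
  cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);

  cudaFree(d_data);
  free(h_data);
  return 0;
}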

18 SIMT architecture SIMT = Single Instruction, Multiple Thread. Device: a collection of streaming multiprocessors. Streaming multiprocessor: a collection of streaming processors with a shared memory and shared control logic. Streaming processor: a sequential scalar core. Hundreds or thousands of streaming processors per device.

19 Problems associated with SIMT Global memory accesses: threads within a warp can access memory independently. How do we minimize the number of memory transactions? Shared memory accesses: shared memory is divided into separate banks for increased bandwidth. How do we exploit this parallelism? Branches: threads within a warp can take different branches, yet warps execute in SIMD fashion. How do we exploit the SIMD parallelism?

20 Coalescing of global memory accesses Accesses to nearby data are coalesced into a single memory transaction. If all accesses are nearby, then only one memory transaction is issued. If all accesses are scattered, then roughly one memory transaction per thread is issued.
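As a hedged illustration (not from the slides), two copy kernels: in the first, consecutive threads of a warp read consecutive elements, so the accesses coalesce; in the second, an assumed stride parameter scatters the warp's accesses and many more transactions are issued.
// Coalesced: thread i reads element i; a warp touches one contiguous chunk.
__global__ void copy_coalesced(float *out, const float *in, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i];
}

// Non-coalesced: thread i reads element i*stride; a warp touches scattered locations.
__global__ void copy_strided(float *out, const float *in, int n, int stride) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[(i * stride) % n];
}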

21 Shared memory bank conflicts Shared memory is divided into separate (e.g., 16) banks for increased bandwidth. Successive 32-bit words are interleaved across the banks. Simultaneous accesses to the same bank are serialized.
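A common remedy, sketched below under the assumption of a 16 × 16 tile, is to pad a two-dimensional shared-memory array by one extra column so that threads walking down a column land in different banks; the transpose kernel is only an illustration and assumes the matrix width is a multiple of the tile size.
#define TILE 16   // assumed tile width

// Without padding, tile[threadIdx.x][j] for fixed j maps the threads of a
// half-warp to the same bank; the extra column shifts each row by one bank.
__global__ void transpose_tile(float *out, const float *in, int width) {
  __shared__ float tile[TILE][TILE + 1];   // +1 column avoids bank conflicts
  int x = blockIdx.x * TILE + threadIdx.x;
  int y = blockIdx.y * TILE + threadIdx.y;
  tile[threadIdx.y][threadIdx.x] = in[y * width + x];
  __syncthreads();
  x = blockIdx.y * TILE + threadIdx.x;     // transposed block coordinates
  y = blockIdx.x * TILE + threadIdx.y;
  out[y * width + x] = tile[threadIdx.x][threadIdx.y];
}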

22 SIMT/warp scheduler With re-convergence. [Control-flow graph with basic blocks A to G: A branches to B and F, B branches to C and D; C and D re-converge at E, and E and F re-converge at G.]

23 SIMT/warp scheduler Without re-convergence: serialization. [The same control-flow graph executed without re-convergence: the divergent paths are serialized, so blocks such as E and G are executed several times, once per path.]

24 SIMT/warp scheduler Stack One way to support re-convergence in hardware is via a stack Each stack entry consists of three fields: 1. Next program counter 2. Active threads mask 3. Re-convergence program counter At a branch, the following actions are taken: 1. Update the next program counter of the top entry to the re-convergence point of the branch 2. Add a new stack entry for the branch not taken threads 3. Add a new stack entry for the branch taken threads

25 SIMT/warp scheduler Stack Initial configuration: A 1111

26 SIMT/warp scheduler Stack After branch at A (entries listed bottom to top; fields are next PC, active mask, re-convergence PC): G 1111; F 0001 G; B 1110 G

27 SIMT/warp scheduler Stack After branch at B: G 1111; F 0001 G; E 1110 G; D 0110 E; C 1000 E

28 SIMT/warp scheduler Stack After C: G 1111; F 0001 G; E 1110 G; D 0110 E; E 1000 E

29 SIMT/warp scheduler Stack After D: G 1111; F 0001 G; E 1110 G; E 0110 E

30 SIMT/warp scheduler Stack After E: G 1111; F 0001 G; G 1110 G

31 SIMT/warp scheduler Stack After F: G 1111; G 0001 G

32 SIMT/warp scheduler Stack After complete re-convergence: G 1111

33 Occupancy Need many warps per streaming multiprocessor to hide latencies. However, warps consume resources such as registers and shared memory. Occupancy measures the percentage of the maximum number of warps that can be assigned to a streaming multiprocessor. Heuristics: the number of threads per block should be a multiple of the warp size; at least 64 threads per block should be used; between 128 and 256 threads per block is a good starting point; it is better to have several small blocks than only one large block per multiprocessor if latency is a problem.
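One way to act on these heuristics is to query the device at run time; the sketch below is an assumption (device 0, a starting point of 128 threads), not something prescribed by the slides.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0);               // assume device 0

  int threads = 128;                                      // suggested starting point
  threads = (threads / prop.warpSize) * prop.warpSize;    // multiple of the warp size
  if (threads < 64) threads = 64;                         // at least 64 threads per block

  printf("warpSize=%d maxThreadsPerBlock=%d -> using %d threads/block\n",
         prop.warpSize, prop.maxThreadsPerBlock, threads);
  return 0;
}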

34 Summary of optimization hints Coalesce global memory accesses. Resolve shared memory bank conflicts. Avoid branch divergence within warps.
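To make the last hint concrete, a hypothetical pair of kernels (names and operations are illustrative): branching on the lane index within a warp forces both paths to execute, while a warp-uniform condition does not.
// Divergent: even and odd lanes of the same warp take different paths,
// so the warp executes both branches serially.
__global__ void divergent(float *x) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (threadIdx.x % 2 == 0) x[i] *= 2.0f;
  else                      x[i] += 1.0f;
}

// Non-divergent: the condition is uniform across each warp (warpSize-aligned),
// so every warp takes exactly one path.
__global__ void uniform(float *x) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if ((threadIdx.x / warpSize) % 2 == 0) x[i] *= 2.0f;
  else                                   x[i] += 1.0f;
}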

35 Part IV Overview of the CUDA C/C++ extensions

36 Function type qualifiers __device__ Compiles the function for the device. Callable only from the device. __global__ Compiles the function for the device. Callable only from the host. __host__ Compiles the function for the host. Callable only from the host. Default if nothing else is specified. Used mainly together with __device__ in order to compile the function for both the device and the host. Some restrictions: 1. Neither __device__ nor __global__ functions support recursion, static variables, or a variable number of arguments. 2. __global__ functions must have void return type. 3. A call to a __global__ function is asynchronous (it returns before the kernel has completed).
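A small sketch combining the qualifiers (the function names are illustrative): a helper compiled for both host and device, a device-only helper, and a kernel with void return type that calls them.
#include <stdio.h>

__host__ __device__ float square(float x) { return x * x; }  // both host and device

__device__ float plus_one(float x) { return x + 1.0f; }      // device only

__global__ void apply(float *data, int n) {                  // kernel, void return type
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] = plus_one(square(data[i]));
}

int main(void) {
  printf("%f\n", square(3.0f));   // the __host__ version runs on the CPU
  return 0;
}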

37 Variable type qualifiers __device__ Places the variable in the device's global memory. __shared__ Places the variable in a thread block's shared memory. Can also be dynamically allocated at kernel invocation time, with some restrictions.

38 Vector types charX, shortX, intX, longX, longlongY, floatX, doubleY, where X is 1, 2, 3, or 4 and Y is 1 or 2

39 Vector types Access of elements: .x (1st), .y (2nd), .z (3rd), .w (4th) Construction of vectors: make_<type>(x, ..., w) Example: float4 f4 = make_float4(0.f, 1.f, 2.f, 3.f);

40 Execution configuration 1. General syntax: kernel<<<G,B,N,S>>>(...) 2. G: size of the grid (type dim3 or int) 3. B: size of the thread block (type dim3 or int) 4. N: size of dynamically allocated shared memory (in bytes per block) 5. S: stream associated with the kernel invocation 6. N and S default to 0 if omitted

41 Execution configuration Example dim3 block(16,16); dim3 grid(columns/block.x, rows/block.y); kernel<<<grid,block,sizeof(float)*block.x>>>(); Launches a grid with thread blocks of size 16 × 16. Allocates sizeof(float)*block.x bytes of shared memory per thread block.

42 Dynamically allocated shared memory Often, the amount of shared memory required depends on the size of a thread block. Therefore, CUDA allows shared memory to be allocated dynamically as follows: a declaration of the form extern __shared__ sh[]; indicates a dynamically allocated array in shared memory. The size of the array is specified when the kernel is launched.
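A minimal sketch of the mechanism, assuming one float of shared memory per thread (the kernel name reverse_block is illustrative): the kernel declares the unsized extern array, and the launch supplies the byte count as the third execution-configuration argument.
// Each block reverses its portion of the input using dynamically sized shared memory.
__global__ void reverse_block(float *data) {
  extern __shared__ float sh[];              // size fixed at launch time
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  sh[threadIdx.x] = data[i];
  __syncthreads();
  data[i] = sh[blockDim.x - 1 - threadIdx.x];
}

// Host side: one float of shared memory per thread in the block.
// reverse_block<<<gridSize, blockSize, blockSize * sizeof(float)>>>(d_data);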

43 Built-in variables in __global__ functions threadIdx (type uint3) Thread ID within the thread block blockDim (type dim3) Thread block size blockIdx (type uint3) Thread block ID within the grid gridDim (type dim3) Grid size warpSize (type int) Number of threads in a warp

44 Part V Examples

45 Vector addition Add two n-vectors A and B. One thread for each component: n-way parallelism. Kernel code:
__global__ void vectorAdd(float *A, float *B, int n) {
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) A[i] += B[i];
}
Execution configuration: vectorAdd<<<(n+127)/128,128>>>(A, B, n);
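A possible host-side driver for this kernel, not shown on the slide, reusing the error-reporting idiom from hello.cu (the vector length is illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

int main(void) {
  const int n = 1 << 20;                               // assumed problem size
  size_t bytes = n * sizeof(float);
  float *A = (float *)malloc(bytes), *B = (float *)malloc(bytes);
  for (int i = 0; i < n; i++) { A[i] = 1.0f; B[i] = 2.0f; }

  float *dA, *dB;
  cudaMalloc((void **)&dA, bytes);
  cudaMalloc((void **)&dB, bytes);
  cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
  cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

  vectorAdd<<<(n + 127) / 128, 128>>>(dA, dB, n);      // one thread per element
  cudaMemcpy(A, dA, bytes, cudaMemcpyDeviceToHost);    // A now holds A + B

  printf("%s\n", cudaGetErrorString(cudaGetLastError()));
  cudaFree(dA); cudaFree(dB); free(A); free(B);
  return 0;
}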

46 Fractal image generation Mandelbrot set centered at (0.4, 0.2) with width 0.05

47 Fractal image generation Sequential algorithm: for each point c ∈ C do: z ← 0; iter ← 0; while iter < maxiter and |z|² < 4 do: z ← z² + c; iter ← iter + 1; end while; determine the color of c as a function of iter; end for

48 Fractal image generation 1. Source code:
__global__ void mandelbrot_gpu_kernel(...) {
  int y = blockIdx.y*blockDim.y + threadIdx.y;
  int x = blockIdx.x*blockDim.x + threadIdx.x;
  float X0 = xmin + ((xmax-xmin)/(xres-1))*x;
  float Y0 = ymin + ((ymax-ymin)/(yres-1))*y;
  int iter;
  float X = 0, Y = 0;
  for (iter = 0; iter < maxiter && X*X + Y*Y <= 4; iter++) {
    float Z = X*X - Y*Y + X0;
    Y = 2*X*Y + Y0;
    X = Z;
  }
  image[y*xres+x] = (1.f) - iter / (float) maxiter;
}
2. Result: 80 times speedup over the (unoptimized) CPU version.

49 Edge detection Using the Sobel operator

50 Edge detection Sequential algorithm: for each pixel A(y, x) of the input image do: A_s ← A(y−1 : y+1, x−1 : x+1); X ← S_x ∘ A_s; Y ← S_y ∘ A_s; L_x ← Σ_{i,j} X(i, j); L_y ← Σ_{i,j} Y(i, j); B(y, x) ← sqrt(L_x² + L_y²); end for (∘ denotes element-wise multiplication; S_x and S_y are the horizontal and vertical Sobel kernels)

51 Edge detection 1. Source code:
__global__ void edge_kernel(float *A, float *B, int width, int height) {
  int y = blockIdx.y*blockDim.y + threadIdx.y;
  int x = blockIdx.x*blockDim.x + threadIdx.x;
  if (y >= 1 && y < height-1 && x >= 1 && x < width-1) {
    float a00 = A[(y-1)*width+(x-1)];
    //... snip...
    float a22 = A[(y+1)*width+(x+1)];
    float Lx, Ly;
    Lx = -a00 - 2*a10 - a20 + a02 + 2*a12 + a22;
    Ly = -a00 - 2*a01 - a02 + a20 + 2*a21 + a22;
    float res = sqrtf(Lx*Lx + Ly*Ly);
    if (res < 0.4f) res = 0.0f; else res = 1.0f;
    B[y*width+x] = res;
  }
}

52 Matrix addition Add two n × n matrices A and B. One thread for each component: n²-way parallelism. Kernel code:
__global__ void matrixAdd(float *A, float *B, int n) {
  int row = blockIdx.y*blockDim.y + threadIdx.y;
  int column = blockIdx.x*blockDim.x + threadIdx.x;
  if (row < n && column < n) A[row*n+column] += B[row*n+column];
}
Execution configuration:
dim3 block(16,16);
dim3 grid((n+block.x-1)/block.x, (n+block.y-1)/block.y);
matrixAdd<<<grid,block>>>(A, B, n);

53 Effect of non-coalesced memory accesses Matrix addition. Good: warps run across the rows of the matrix. Bad: warps run across the columns of the matrix. [Plot: time and the Bad/Good speedup as a function of n.]

54 Matrix multiplication: C = AB [Figure: block row ib of A, block column jb of B, and block (ib, jb) of C.] 1. Recall: C(i, j) = Σ_{k=1}^{K} A(i, k) B(k, j). 2. Partition C into blocks. 3. One thread block computes one block of C. 4. One thread computes one entry of C.

55 Matrix multiplication Simple
__global__ void gemm_kernel(Matrix A, Matrix B, Matrix C) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  float Cval = 0;
  for (int j = 0; j < A.n; j++)
    Cval += A.mtx[row*A.n+j] * B.mtx[j*B.n+col];
  C.mtx[row*C.n+col] = Cval;
}
1. Compute the thread's row and col indices. 2. Initialize the accumulator Cval to zero. 3. Compute a dot product using a loop. 4. Store the dot product in C.

56 Matrix multiplication Using shared memory
__global__ void gemm_kernel(Matrix A, Matrix B, Matrix C) {
  int ib = blockIdx.y, jb = blockIdx.x;
  int i = threadIdx.y, j = threadIdx.x;
  __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
  __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];
  float Cval = 0;
  for (int kb = 0; kb < A.n/BLOCK_SIZE; kb++) {
    As[i][j] = A.mtx[(ib*BLOCK_SIZE+i)*A.n+(kb*BLOCK_SIZE+j)];
    Bs[i][j] = B.mtx[(kb*BLOCK_SIZE+i)*B.n+(jb*BLOCK_SIZE+j)];
    __syncthreads();
    for (int k = 0; k < BLOCK_SIZE; k++)
      Cval = Cval + As[i][k] * Bs[k][j];
    __syncthreads();
  }
  C.mtx[(ib*BLOCK_SIZE+i)*C.n+(jb*BLOCK_SIZE+j)] = Cval;
}
1. Load submatrices of A and B into shared memory. 2. Compute the dot product using the cached submatrices. 3. Store the result in C.

57 Effect of bank conflicts Matrix multiplication. Good: warps run across the rows of the cached B. Bad: warps run across the columns of the cached B. [Plot: Gflops/s as a function of n for the good and bad variants.]

58 Part VI CUDA runtime API overview

59 Memory management API (Host-side) Functions: cudaHostAlloc(ptr, sz, flags) cudaMallocHost(ptr, sz) cudaFreeHost(ptr) (and more)

60 Memory management API (Device-side) Functions: cudaMalloc(ptr, sz) cudaMallocPitch(ptr, pitch, width, height) cudaMalloc3D(pitched, extent) cudaFree(ptr) (and more)

61 Data transfers API Functions: cudaMemcpy(dst, src, cnt, kind) cudaMemcpyAsync(dst, src, cnt, kind, str) cudaMemcpy2D(d, dp, s, sp, w, h, kind) cudaMemcpy2DAsync(d, dp, s, sp, w, h, kind, str) cudaMemcpy3D(p) cudaMemcpy3DAsync(p, str) (and more)
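A sketch tying the transfer functions to the host-side allocation functions above (the buffer size and single stream are illustrative): asynchronous copies require page-locked host memory, e.g. from cudaMallocHost.
#include <cuda_runtime.h>

int main(void) {
  const size_t bytes = 1 << 20;                    // assumed buffer size
  float *h_buf, *d_buf;
  cudaStream_t stream;

  cudaMallocHost((void **)&h_buf, bytes);          // pinned (page-locked) host memory
  cudaMalloc((void **)&d_buf, bytes);
  cudaStreamCreate(&stream);

  // Asynchronous copy: returns immediately and can overlap with host work.
  cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
  // ... kernel launches on the same stream would go here ...
  cudaStreamSynchronize(stream);                   // wait for the copy to finish

  cudaStreamDestroy(stream);
  cudaFree(d_buf);
  cudaFreeHost(h_buf);
  return 0;
}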

62 Thread management API Functions: cudaThreadSynchronize() (and more)

63 Error handling API Most CUDA functions return a cudaError_t object. No error is signaled by returning cudaSuccess. Data type: cudaError_t Functions: err = cudaGetLastError() cudaGetErrorString(err) (and more)
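A common convenience, sketched here as an assumption rather than anything from the slides, is a checking macro wrapped around every runtime call:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Hypothetical helper: abort with file/line information if a CUDA call
// does not return cudaSuccess.
#define CUDA_CHECK(call)                                                  \
  do {                                                                    \
    cudaError_t err = (call);                                             \
    if (err != cudaSuccess) {                                             \
      fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,                  \
              cudaGetErrorString(err));                                   \
      exit(EXIT_FAILURE);                                                 \
    }                                                                     \
  } while (0)

// Usage: CUDA_CHECK(cudaMalloc((void **)&ptr, bytes));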

64 Streams CUDA manages parallelism using streams. Each command is assigned to a stream (default: 0). Commands within a stream execute sequentially. Commands from different streams may execute in parallel.

65 Streams API Data type: cudaStream_t Functions: cudaStreamCreate(str) cudaStreamQuery(str) cudaStreamSynchronize(str) cudaStreamWaitEvent(str, ev, flags) cudaStreamDestroy(str) (and more)

66 Streams Example
cudaStream_t str1, str2;
cudaEvent_t ev1, ev2;
cudaStreamCreate(&str1); cudaStreamCreate(&str2);
cudaEventCreate(&ev1); cudaEventCreate(&ev2);
for (int i = 0; i < 2; i++)
  //... asynchronous transfer on stream i...
for (int i = 0; i < 2; i++)
  //... kernel launch on stream i...
cudaEventRecord(ev1, str1); cudaEventRecord(ev2, str2);
cudaStreamWaitEvent(NULL, ev1, 0); cudaStreamWaitEvent(NULL, ev2, 0);
//... transfer from device to host...
cudaStreamDestroy(str1); cudaStreamDestroy(str2);
cudaEventDestroy(ev1); cudaEventDestroy(ev2);

67 Events Record events on the device. Query the elapsed time between two events. Allows one to measure transfer times and kernel times. Allows one to get progress information from the device.

68 Events API Data type: cudaEvent_t Functions: cudaEventCreate(ev) cudaEventRecord(ev, str) cudaEventQuery(ev) cudaEventSynchronize(ev) cudaEventElapsedTime(ms, start, end) cudaEventDestroy(ev) (and more)

69 Events Example
cudaEvent_t ev1, ev2;
cudaEventCreate(&ev1); cudaEventCreate(&ev2);
cudaEventRecord(ev1, 0);
//... memcpy and/or kernels...
cudaEventRecord(ev2, 0);
cudaEventSynchronize(ev2);
float ms;
cudaEventElapsedTime(&ms, ev1, ev2);
cudaEventDestroy(ev1); cudaEventDestroy(ev2);
