Zero Copy Memory and Multiple GPUs


1  Zero Copy Memory and Multiple GPUs

Goals

Zero copy memory
- Pinned, mapped memory on the host can be read and written directly from a GPU kernel (if the device supports mapping host memory).
- This may yield performance gains, particularly when the data is read and written only once.
- Results differ between discrete GPUs and integrated GPUs (see slide 8).

Portable pinned memory
- To keep the performance benefit, all host threads must view the memory as pinned, not just the thread that allocated it.
- The cudaHostAllocPortable flag makes pinned memory portable across host threads.
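Before the timed program on the following slides, here is a minimal self-contained sketch of the zero-copy pattern. This toy example is ours, not from the slides; it assumes a device that can map host memory, and error checking is omitted for brevity:

#include <cstdio>

__global__ void scale( float *v, int n ) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n) v[i] *= 2.0f;   // reads and writes host memory across the bus
}

int main( void ) {
    const int n = 1024;
    float *host, *dev;

    // must be set before any CUDA work that creates a context
    cudaSetDeviceFlags( cudaDeviceMapHost );
    cudaHostAlloc( (void**)&host, n*sizeof(float), cudaHostAllocMapped );
    for (int i=0; i<n; i++) host[i] = i;

    // the device-side alias of the same physical pages
    cudaHostGetDevicePointer( &dev, host, 0 );
    scale<<<(n+255)/256, 256>>>( dev, n );
    cudaDeviceSynchronize();   // kernel writes are visible in host memory after this

    printf( "host[3] = %f\n", host[3] );   // prints 6.0
    cudaFreeHost( host );
    return 0;
}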

2  Timezero - 1

The kernel
- We use our dot product application once again; the kernel code is unchanged from previous visits.
- A shared-memory reduction generates one partial result per block; the CPU completes the calculation.
- With N = 33*1024*1024 elements and only 32 blocks of 256 threads, each of the 8192 threads accumulates roughly 4224 products in its grid-stride loop before the reduction begins.

#include "../common/book.h"

#define imin(a,b) (a<b?a:b)

const int N = 33 * 1024 * 1024;
const int threadsPerBlock = 256;
const int blocksPerGrid = imin( 32, (N+threadsPerBlock-1) / threadsPerBlock );

__global__ void dot( int size, float *a, float *b, float *c ) {
    __shared__ float cache[threadsPerBlock];
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;

    float temp = 0;
    while (tid < size) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }

    // set the cache values
    cache[cacheIndex] = temp;

    // synchronize threads in this block
    __syncthreads();

    // for reductions, threadsPerBlock must be a power of 2
    // because of the following code
    int i = blockDim.x/2;
    while (i != 0) {
        if (cacheIndex < i)
            cache[cacheIndex] += cache[cacheIndex + i];
        __syncthreads();
        i /= 2;
    }

    if (cacheIndex == 0)
        c[blockIdx.x] = cache[0];
}

3  Timezero - 2

Here is our original version
- a, b, and partial_c are allocated on the host; the device copies are allocated with cudaMalloc.
- The data is initialized on the host and the timer is started.
- Vectors a and b are copied to the GPU, and the kernel is launched.

float malloc_test( int size ) {
    cudaEvent_t start, stop;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;
    float elapsedTime;

    HANDLE_ERROR( cudaEventCreate( &start ) );
    HANDLE_ERROR( cudaEventCreate( &stop ) );

    // allocate memory on the CPU side
    a = (float*)malloc( size*sizeof(float) );
    b = (float*)malloc( size*sizeof(float) );
    partial_c = (float*)malloc( blocksPerGrid*sizeof(float) );

    // allocate the memory on the GPU
    HANDLE_ERROR( cudaMalloc( (void**)&dev_a, size*sizeof(float) ) );
    HANDLE_ERROR( cudaMalloc( (void**)&dev_b, size*sizeof(float) ) );
    HANDLE_ERROR( cudaMalloc( (void**)&dev_partial_c, blocksPerGrid*sizeof(float) ) );

    // fill in the host memory with data
    for (int i=0; i<size; i++) {
        a[i] = i;
        b[i] = i*2;
    }

    HANDLE_ERROR( cudaEventRecord( start, 0 ) );

    // copy the arrays 'a' and 'b' to the GPU
    HANDLE_ERROR( cudaMemcpy( dev_a, a, size*sizeof(float), cudaMemcpyHostToDevice ) );
    HANDLE_ERROR( cudaMemcpy( dev_b, b, size*sizeof(float), cudaMemcpyHostToDevice ) );

    dot<<<blocksPerGrid,threadsPerBlock>>>( size, dev_a, dev_b, dev_partial_c );

4  Timezero - 3

Original version (continued)
- The partial results are copied back and the elapsed time is recorded.
- The final result is accumulated on the CPU.
- Device memory, host memory, and the timer events are released; the result is printed and the elapsed time returned.

    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR( cudaMemcpy( partial_c, dev_partial_c, blocksPerGrid*sizeof(float), cudaMemcpyDeviceToHost ) );

    HANDLE_ERROR( cudaEventRecord( stop, 0 ) );
    HANDLE_ERROR( cudaEventSynchronize( stop ) );
    HANDLE_ERROR( cudaEventElapsedTime( &elapsedTime, start, stop ) );

    // finish up on the CPU side
    c = 0;
    for (int i=0; i<blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR( cudaFree( dev_a ) );
    HANDLE_ERROR( cudaFree( dev_b ) );
    HANDLE_ERROR( cudaFree( dev_partial_c ) );

    // free memory on the CPU side
    free( a );
    free( b );
    free( partial_c );

    // free events
    HANDLE_ERROR( cudaEventDestroy( start ) );
    HANDLE_ERROR( cudaEventDestroy( stop ) );

    printf( "Value calculated: %f\n", c );
    return elapsedTime;
}
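Because a[i] = i and b[i] = 2i, the dot product has a closed form, so the printed value can be sanity-checked. A small check in the spirit of the book's earlier dot product chapter (reusing its sum_squares idea; placing it here is our addition). Define the macro near the top of the file, then compare after the result is computed:

    // sum of i*2i for i = 0..size-1 is twice the sum of squares
    #define sum_squares(x)  ( (x) * ((x)+1) * (2*(x)+1) / 6 )

    printf( "Expected value: %.6g\n", 2 * sum_squares( (float)(size - 1) ) );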

5  Timezero - 4

The zero copy version
- Create the timers as before.
- Call cudaHostAlloc with the added flag cudaHostAllocMapped; this means we will read and write host memory from the GPU.
- a and b also carry cudaHostAllocWriteCombined: write-combined memory is fast for the GPU to read but slow for the CPU to read, which is why partial_c (which the CPU reads back) omits that flag.
- Since the host and device address spaces differ, we call cudaHostGetDevicePointer for each vector; there are no data copies.
- We initialize the data the same as before.

float cuda_host_alloc_test( int size ) {
    cudaEvent_t start, stop;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;
    float elapsedTime;

    HANDLE_ERROR( cudaEventCreate( &start ) );
    HANDLE_ERROR( cudaEventCreate( &stop ) );

    // allocate the memory on the CPU
    HANDLE_ERROR( cudaHostAlloc( (void**)&a, size*sizeof(float),
                                 cudaHostAllocWriteCombined | cudaHostAllocMapped ) );
    HANDLE_ERROR( cudaHostAlloc( (void**)&b, size*sizeof(float),
                                 cudaHostAllocWriteCombined | cudaHostAllocMapped ) );
    HANDLE_ERROR( cudaHostAlloc( (void**)&partial_c, blocksPerGrid*sizeof(float),
                                 cudaHostAllocMapped ) );

    // find out the GPU pointers
    HANDLE_ERROR( cudaHostGetDevicePointer( &dev_a, a, 0 ) );
    HANDLE_ERROR( cudaHostGetDevicePointer( &dev_b, b, 0 ) );
    HANDLE_ERROR( cudaHostGetDevicePointer( &dev_partial_c, partial_c, 0 ) );

    // fill in the host memory with data
    for (int i=0; i<size; i++) {
        a[i] = i;
        b[i] = i*2;
    }

6  Timezero - 5

Zero copy (continued)
- Start the timer and call the kernel; there is no copy back of results!
- Because the kernel writes host memory directly and there is no blocking memcpy, we must synchronize with the device before reading partial_c and stopping the timer. (cudaThreadSynchronize is deprecated in current CUDA releases; cudaDeviceSynchronize is the modern equivalent.)
- The final result is calculated on the host; memory and timer events are released, and the result and elapsed time are reported.

    HANDLE_ERROR( cudaEventRecord( start, 0 ) );

    dot<<<blocksPerGrid,threadsPerBlock>>>( size, dev_a, dev_b, dev_partial_c );
    HANDLE_ERROR( cudaThreadSynchronize() );

    HANDLE_ERROR( cudaEventRecord( stop, 0 ) );
    HANDLE_ERROR( cudaEventSynchronize( stop ) );
    HANDLE_ERROR( cudaEventElapsedTime( &elapsedTime, start, stop ) );

    // finish up on the CPU side
    c = 0;
    for (int i=0; i<blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR( cudaFreeHost( a ) );
    HANDLE_ERROR( cudaFreeHost( b ) );
    HANDLE_ERROR( cudaFreeHost( partial_c ) );

    // free events
    HANDLE_ERROR( cudaEventDestroy( start ) );
    HANDLE_ERROR( cudaEventDestroy( stop ) );

    printf( "Value calculated: %f\n", c );
    return elapsedTime;
}

7  Timezero - 6

The main program
- Make sure mapped (zero copy) memory is supported.
- Set the cudaDeviceMapHost flag to indicate we will use mapped memory.
- Run each version of the program. For the GPUs used by the authors, zero copy gave roughly a 45% speedup.

int main( void ) {
    cudaDeviceProp prop;
    int whichDevice;
    HANDLE_ERROR( cudaGetDevice( &whichDevice ) );
    HANDLE_ERROR( cudaGetDeviceProperties( &prop, whichDevice ) );
    if (prop.canMapHostMemory != 1) {
        printf( "Device can not map memory.\n" );
        return 0;
    }

    float elapsedTime;
    HANDLE_ERROR( cudaSetDeviceFlags( cudaDeviceMapHost ) );

    // try it with malloc
    elapsedTime = malloc_test( N );
    printf( "Time using cudaMalloc: %3.1f ms\n", elapsedTime );

    // now try it with cudaHostAlloc
    elapsedTime = cuda_host_alloc_test( N );
    printf( "Time using cudaHostAlloc: %3.1f ms\n", elapsedTime );

    return 0;
}

8  When to Use Zero Copy Memory

Discrete GPU
- Has dedicated DRAM, usually on a separate circuit board.
- Zero copy will usually improve performance if the data is read and written only once.
- If the data is read multiple times, expect a significant performance penalty, because zero copy data is NOT cached on the GPU.

Integrated GPU
- Built into the system's chipset; shares regular CPU memory.
- Zero copy memory is always a win here, since host and device memory are physically the same, but beware of consuming too much of it.
- The cudaDeviceProp structure has a boolean field, integrated, that tells you whether the GPU is integrated; a sketch of using it follows below.

For the dot product program, performance gains from zero copy were in the 30-40% range.
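A minimal sketch of using that field to decide at run time whether zero copy is likely to pay off; the decision logic is our own illustration, not from the slides:

    cudaDeviceProp prop;
    int dev;
    HANDLE_ERROR( cudaGetDevice( &dev ) );
    HANDLE_ERROR( cudaGetDeviceProperties( &prop, dev ) );

    // integrated GPUs share physical memory with the CPU, so mapped
    // (zero copy) buffers avoid a redundant copy; on a discrete GPU,
    // map only data that is touched once
    bool preferZeroCopy = prop.integrated && prop.canMapHostMemory;
    if (preferZeroCopy) {
        HANDLE_ERROR( cudaSetDeviceFlags( cudaDeviceMapHost ) );
        // allocate buffers with cudaHostAlloc( ..., cudaHostAllocMapped )
    } else {
        // allocate with cudaMalloc and use explicit cudaMemcpy
    }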

9  Multidevice - 1

Why multiple devices?
- You may have a built-in GPU plus another GPU on a separate card.
- NVIDIA supports multiple GPUs using SLI (Scalable Link Interface).

We use our dot product program yet again. The kernel is unchanged, so its body is not repeated here (see slide 2); the only differences are that N is now a macro and blocksPerGrid is computed from N/2, because each GPU will work on half of the data.

#include "../common/book.h"

#define imin(a,b) (a<b?a:b)
#define N (33*1024*1024)

const int threadsPerBlock = 256;
const int blocksPerGrid = imin( 32, (N/2+threadsPerBlock-1) / threadsPerBlock );

__global__ void dot( int size, float *a, float *b, float *c ) {
    // ... identical to the kernel on slide 2 ...
}

10  Multidevice - 2

DataStruct stores the device ID and the size of the vector slice it works on.

The routine function
- Selects its device, declares its data objects, and allocates host and CUDA memory.
- Copies vectors a and b to the GPU.
- Calls the kernel to do the calculation.

struct DataStruct {
    int deviceID;
    int size;
    float *a;
    float *b;
    float returnValue;
};

void* routine( void *pvoidData ) {
    DataStruct *data = (DataStruct*)pvoidData;
    HANDLE_ERROR( cudaSetDevice( data->deviceID ) );

    int size = data->size;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;

    // allocate memory on the CPU side
    a = data->a;
    b = data->b;
    partial_c = (float*)malloc( blocksPerGrid*sizeof(float) );

    // allocate the memory on the GPU
    HANDLE_ERROR( cudaMalloc( (void**)&dev_a, size*sizeof(float) ) );
    HANDLE_ERROR( cudaMalloc( (void**)&dev_b, size*sizeof(float) ) );
    HANDLE_ERROR( cudaMalloc( (void**)&dev_partial_c, blocksPerGrid*sizeof(float) ) );

    // copy the arrays 'a' and 'b' to the GPU
    HANDLE_ERROR( cudaMemcpy( dev_a, a, size*sizeof(float), cudaMemcpyHostToDevice ) );
    HANDLE_ERROR( cudaMemcpy( dev_b, b, size*sizeof(float), cudaMemcpyHostToDevice ) );

    dot<<<blocksPerGrid,threadsPerBlock>>>( size, dev_a, dev_b, dev_partial_c );

11  Multidevice - 3

routine (continued)
- Copy the partial results back and combine them, then do the usual cleanup.
- The combined result is passed back through data->returnValue.

    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR( cudaMemcpy( partial_c, dev_partial_c, blocksPerGrid*sizeof(float), cudaMemcpyDeviceToHost ) );

    // finish up on the CPU side
    c = 0;
    for (int i=0; i<blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR( cudaFree( dev_a ) );
    HANDLE_ERROR( cudaFree( dev_b ) );
    HANDLE_ERROR( cudaFree( dev_partial_c ) );

    // free memory on the CPU side
    free( partial_c );

    data->returnValue = c;
    return 0;
}

Main program
- Ensure at least two devices are present.
- Allocate memory on the host for a and b.

int main( void ) {
    int deviceCount;
    HANDLE_ERROR( cudaGetDeviceCount( &deviceCount ) );
    if (deviceCount < 2) {
        printf( "We need at least two compute 1.0 or greater "
                "devices, but only found %d\n", deviceCount );
        return 0;
    }

    float *a = (float*)malloc( sizeof(float) * N );
    HANDLE_NULL( a );
    float *b = (float*)malloc( sizeof(float) * N );
    HANDLE_NULL( b );

12  Multidevice - 4

Main program (continued)
- We initialize the data as before.
- Assuming two devices, we divide the data in half and call routine once for each half.
- data[0] is handled in a new thread; main handles data[1] in its own thread.
- When finished, we join the thread handling data[0], clean up, and print the combined result. (A sketch using standard C++ threads instead of the book's helpers follows below.)

    // fill in the host memory with data
    for (int i=0; i<N; i++) {
        a[i] = i;
        b[i] = i*2;
    }

    // prepare for multithread
    DataStruct data[2];
    data[0].deviceID = 0;
    data[0].size = N/2;
    data[0].a = a;
    data[0].b = b;

    data[1].deviceID = 1;
    data[1].size = N/2;
    data[1].a = a + N/2;
    data[1].b = b + N/2;

    CUTThread thread = start_thread( routine, &(data[0]) );
    routine( &(data[1]) );
    end_thread( thread );

    // free memory on the CPU side
    free( a );
    free( b );

    printf( "Value calculated: %f\n",
            data[0].returnValue + data[1].returnValue );
    return 0;
}
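CUTThread, start_thread, and end_thread come from the book's common/book.h helpers. A minimal sketch of the same split using standard C++ threads instead (our own substitution, not the book's code):

    #include <thread>

    // one host thread per GPU; each call to routine() binds its own
    // device via cudaSetDevice before doing any CUDA work
    std::thread worker( routine, (void*)&data[0] );
    routine( &data[1] );   // main thread handles the second half
    worker.join();         // wait for the helper thread to finish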

13  Potential Problems with Pinned Memory

- We learned in Chapter 10 how to pin memory on the host.
- With the techniques discussed so far, the memory appears page-locked only to the host thread that allocated it.
- If the pointer is shared between threads, the other threads see the allocation as ordinary pageable memory, and the performance benefit is lost.

Remedy: allocate pinned memory as portable
- When we call cudaHostAlloc, we specify the flag cudaHostAllocPortable.
- As the program examples show, multiple flags can be combined so that host memory is portable, zero copy (mapped), and write-combined; a sketch follows below.
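A minimal sketch of such a combined allocation; these are the same flags the program starting on slide 14 uses:

    float *a;
    HANDLE_ERROR( cudaHostAlloc( (void**)&a, N*sizeof(float),
                                 cudaHostAllocWriteCombined |   // fast GPU reads, slow CPU reads
                                 cudaHostAllocPortable |        // pinned for every host thread
                                 cudaHostAllocMapped ) );       // mappable into the device address space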

14  Portable PM - 1

We use our dot product program one last time. The kernel and constants are unchanged from the multi-GPU version on slide 9, so the kernel body is not repeated here:

#include "../common/book.h"

#define imin(a,b) (a<b?a:b)
#define N (33*1024*1024)

const int threadsPerBlock = 256;
const int blocksPerGrid = imin( 32, (N/2+threadsPerBlock-1) / threadsPerBlock );

__global__ void dot( int size, float *a, float *b, float *c ) {
    // ... identical to the kernel on slide 2 ...
}

15  Portable PM - 2

- DataStruct now carries an offset field in addition to the fields used before.
- main has already called cudaSetDevice( 0 ) and set the device flags, and calling cudaSetDevice a second time from the same thread was an error in the CUDA version the book targets, so routine guards those calls with an if on the device ID.
- Since we use zero copy memory there is no cudaMemcpy; instead, cudaHostGetDevicePointer produces the device-side pointers.
- The device pointers are offset to the half of the data this GPU owns, and the kernel is called as before.

struct DataStruct {
    int deviceID;
    int size;
    int offset;
    float *a;
    float *b;
    float returnValue;
};

void* routine( void *pvoidData ) {
    DataStruct *data = (DataStruct*)pvoidData;
    if (data->deviceID != 0) {
        HANDLE_ERROR( cudaSetDevice( data->deviceID ) );
        HANDLE_ERROR( cudaSetDeviceFlags( cudaDeviceMapHost ) );
    }

    int size = data->size;
    float *a, *b, c, *partial_c;
    float *dev_a, *dev_b, *dev_partial_c;

    // allocate memory on the CPU side
    a = data->a;
    b = data->b;
    partial_c = (float*)malloc( blocksPerGrid*sizeof(float) );

    // get device-side pointers for the mapped host memory
    HANDLE_ERROR( cudaHostGetDevicePointer( &dev_a, a, 0 ) );
    HANDLE_ERROR( cudaHostGetDevicePointer( &dev_b, b, 0 ) );
    HANDLE_ERROR( cudaMalloc( (void**)&dev_partial_c, blocksPerGrid*sizeof(float) ) );

    // offset 'a' and 'b' to where this GPU gets its data
    dev_a += data->offset;
    dev_b += data->offset;

    dot<<<blocksPerGrid,threadsPerBlock>>>( size, dev_a, dev_b, dev_partial_c );

16  Portable PM - 3

routine (continued)
- Copy the partial results back, combine them, and clean up. Only dev_partial_c was allocated with cudaMalloc; the mapped host buffers are freed by main.

    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR( cudaMemcpy( partial_c, dev_partial_c, blocksPerGrid*sizeof(float), cudaMemcpyDeviceToHost ) );

    // finish up on the CPU side
    c = 0;
    for (int i=0; i<blocksPerGrid; i++) {
        c += partial_c[i];
    }

    HANDLE_ERROR( cudaFree( dev_partial_c ) );

    // free memory on the CPU side
    free( partial_c );

    data->returnValue = c;
    return 0;
}

Main program
- Make sure there are at least two GPU devices.

int main( void ) {
    int deviceCount;
    HANDLE_ERROR( cudaGetDeviceCount( &deviceCount ) );
    if (deviceCount < 2) {
        printf( "We need at least two compute 1.0 or greater "
                "devices, but only found %d\n", deviceCount );
        return 0;
    }

17  Portable PM - 4

- Make sure both devices can map host memory.
- Select device 0 and set the cudaDeviceMapHost flag.
- Combine three flags in cudaHostAlloc: write-combined, portable, and mapped.
- Initialize the data as before.

    cudaDeviceProp prop;
    for (int i=0; i<2; i++) {
        HANDLE_ERROR( cudaGetDeviceProperties( &prop, i ) );
        if (prop.canMapHostMemory != 1) {
            printf( "Device %d can not map memory.\n", i );
            return 0;
        }
    }

    float *a, *b;
    HANDLE_ERROR( cudaSetDevice( 0 ) );
    HANDLE_ERROR( cudaSetDeviceFlags( cudaDeviceMapHost ) );
    HANDLE_ERROR( cudaHostAlloc( (void**)&a, N*sizeof(float),
                  cudaHostAllocWriteCombined | cudaHostAllocPortable | cudaHostAllocMapped ) );
    HANDLE_ERROR( cudaHostAlloc( (void**)&b, N*sizeof(float),
                  cudaHostAllocWriteCombined | cudaHostAllocPortable | cudaHostAllocMapped ) );

    // fill in the host memory with data
    for (int i=0; i<N; i++) {
        a[i] = i;
        b[i] = i*2;
    }

18  Portable PM - 5

- Assuming two devices, we divide the data in half and call routine once for each half; each half is described by its offset into the shared buffers.
- data[1] (device 1) is handled in a new thread; main handles data[0] in its own thread, since it has already selected device 0.
- When finished, we join the helper thread, free the pinned buffers with cudaFreeHost, and print the combined result.

    // prepare for multithread
    DataStruct data[2];
    data[0].deviceID = 0;
    data[0].offset = 0;
    data[0].size = N/2;
    data[0].a = a;
    data[0].b = b;

    data[1].deviceID = 1;
    data[1].offset = N/2;
    data[1].size = N/2;
    data[1].a = a;
    data[1].b = b;

    CUTThread thread = start_thread( routine, &(data[1]) );
    routine( &(data[0]) );
    end_thread( thread );

    // free memory on the CPU side
    HANDLE_ERROR( cudaFreeHost( a ) );
    HANDLE_ERROR( cudaFreeHost( b ) );

    printf( "Value calculated: %f\n",
            data[0].returnValue + data[1].returnValue );
    return 0;
}
