CUDA by Example
The University of Mississippi Computer Science Seminar Series
Martin.Lilleeng.Satra@sintef.no
SINTEF ICT, Department of Applied Mathematics
April 28, 2010

Outline
1. The GPU
2. cudaPi
3. CUDA MJPEG-encoding
4. Summary

Moore's Law
Moore's Law states that the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years.

Why GPUs?
Three problems for serial computing:
- The power wall
- The von Neumann bottleneck
- The Instruction Level Parallelism wall
We need more ops, and serial computing is no longer capable of delivering them. The clock frequency of new processors is not increasing anymore. One possible solution: compute on multiple cores!

GPU history - 1981

GPU history - 1992

GPU history - 2001

GPU history - 2009

The GPU today
The graphics card:
- GPU: massively parallel processor (480 cores)
- Memory: 1.5 GB (177.4 GB/s)
- Connected to the host through a PCIe x16 Gen2 bus (8 GB/s)
- Processor clock @ 1401 MHz
Figure: The NVIDIA GeForce GTX 480 card.

The GPU vs. the CPU
A lot more transistors for floating point operations! But: algorithms cannot simply be translated 1-to-1 from serial CPU code to GPU code. You have to parallelize all or parts of your algorithm to achieve effective GPU code.

A heterogeneous architecture
On a heterogeneous architecture several types of processors work together, each one solving the task for which it is best suited, e.g., the CPU in cooperation with the GPU. The CPU runs the show, and the GPU runs programs on many cores in parallel, as requested by the CPU program. Do data-parallel processing on the GPU, and high-level logic on the CPU.

CUDA - Compute Unified Device Architecture
- Small set of extensions to the C language
- Made to expose the GPU to the programmer, and allow easy access to the computational resources of the GPU
- Does support some C++ features, like templates, and more are being added
- A Fortran compiler is also available, developed by NVIDIA in cooperation with The Portland Group
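To give a feel for these extensions, here is a minimal sketch (not from the slides; the kernel name and sizes are made up for illustration): __global__ marks a function as a kernel that runs on the GPU, and the <<<grid, block>>> syntax launches it from the host.

#include <cuda_runtime.h>

// kernel: each thread scales one element
__global__ void scale(float *data, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= factor;
}

int main()
{
    const int n = 256;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    scale<<<n / 64, 64>>>(d_data, 2.0f); // 4 blocks of 64 threads each
    cudaFree(d_data);
    return 0;
}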

The CUDA tools
What you need to get started:
1. CUDA driver (device driver for the graphics card)
2. CUDA toolkit (CUDA compiler, runtime library, etc.)
3. CUDA SDK (software development kit, with code examples)
All this can be found at http://nvidia.com/cuda

OpenCL - Open Computing Language
- Initially developed by Apple
- Has a lot in common with CUDA
- Released in December 2008 (version 1.0)

Our first example
A HelloWorld-type of application using CUDA. Calculate decimals of π by using geometric observations. This is not the best way to calculate π, but it serves well as an example of parallel programming with CUDA.

p = πr² / (4r²) = π/4   (1)
π = 4p   (2)
We can calculate the ratio p to get an estimate of π:
1. By placing n dots within the square, some number k of these dots will lie inside the circle.
2. This gives us p ≈ k/n and π ≈ 4k/n.

The CUDA grid

The CUDA grid
We set our block size to 16x16. The block size is a difficult parameter to set, and it can have a big impact on performance.

The CUDA grid
Then we set our grid to be 8x8. This gives us (16·8) × (16·8) = 128 × 128 = 16384 threads in total. If we use a larger grid we get a finer resolution, and a better estimate of π.

The thread workload
We could let each thread represent one pixel, but this would not give each thread much of a workload. Therefore, we let each thread calculate several points.

Setup on the CPU (host)
The dim3 variables are used later when we call our CUDA program.

const int width = 128;
const int height = 128;

// set the block size
const dim3 blocksize(16, 16);

// determine the size of the grid
const size_t gridwidth = width / blocksize.x;
const size_t gridheight = height / blocksize.y;
const dim3 gridsize(gridwidth, gridheight);

// number of points per thread (= m^2)
const int m = 10;

Allocating device memory
First we allocate memory on the device (the GPU), which we will use to save our intermediate results to. cudaMallocPitch allocates (at least) width*height*sizeof(float) bytes of linear memory on the device, padding each row to ensure efficient memory access when the address is updated from row to row.

float *d_results;
size_t d_pitch;

// allocate device memory
CUDA_SAFE_CALL(cudaMallocPitch((void **)&d_results, &d_pitch,
                               width * sizeof(float), height));
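As a side note (a sketch, not from the slides): the pitch returned by cudaMallocPitch is the width of each padded row in bytes, so the address of element (i, j) is computed by stepping j whole rows from the base pointer:

// computing the address of element (i, j) in pitched memory
float *row = (float *)((char *)d_results + j * d_pitch); // start of row j
float value = row[i];                                    // element i of row j

The kernels below use exactly this pattern when reading and writing their results.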

Allocating host memory
We also need to allocate memory on the host, so we have somewhere to download our intermediate results to.

// allocate host memory
float *h_results = new float[width * height];

The CUDA kernel
The __global__ keyword defines a CUDA kernel.

__global__ void cudaPiKernel(float *results, size_t pitch)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    float dx = 1.0f / (float)(gridDim.x * blockDim.x);
    float dy = 1.0f / (float)(gridDim.y * blockDim.y);
    ...
    int inside = 0;

The CUDA kernel cont'd
...calculate whether the points (x, y) are inside or outside the unit circle.

    for (int l = 0; l < m; ++l) {
        float y = j / (float)(gridDim.y * blockDim.y);
        y += dy * (l / (float)m);
        for (int k = 0; k < m; ++k) {
            float x = i / (float)(gridDim.x * blockDim.x);
            x += dx * (k / (float)m);
            if (x * x + y * y < 1.0f) {
                // point is inside circle!
                ++inside;
            }
        }
    }

    float *elementPtr = (float *)((char *)results + j * pitch);
    elementPtr[i] = inside;
}

Calling the CUDA kernel
Now we know enough to write our call to the CUDA kernel.

// run kernel
cudaPiKernel<<<gridsize, blocksize>>>(d_results, d_pitch);
CUT_CHECK_ERROR("cudaPiKernel");

Device to host copy
Now we copy the intermediate results from the GPU to the CPU.

// fetch results from device
CUDA_SAFE_CALL(cudaMemcpy2D(h_results, width * sizeof(float),
                            d_results, d_pitch,
                            width * sizeof(float), height,
                            cudaMemcpyDeviceToHost));

The host code
Lastly, we calculate π from the intermediate results we fetched from the device.

// sum up results from each CUDA thread
float k = 0.0f;
for (int i = 0; i < width * height; ++i) {
    k += h_results[i];
}

// calculate final result
int n = (width * m) * (height * m);
float pi = 4 * ((float)k / n);
cout << "PI is (approximately): " << pi << endl;

Optimization: Allocating host memory using the CUDA API
We should also allocate memory on the host using the CUDA API. This ensures fast memory transfers between the device and the host. cudaMallocHost allocates page-locked memory that is accessible to the device.

float *h_results;

// allocate host memory
CUDA_SAFE_CALL(cudaMallocHost((void **)&h_results,
                              width * height * sizeof(float)));
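For completeness (a sketch, not shown on the slides), both kinds of memory have matching free calls when the application shuts down:

// freeing the allocations from this example
CUDA_SAFE_CALL(cudaFreeHost(h_results)); // page-locked host memory
CUDA_SAFE_CALL(cudaFree(d_results));     // device memory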

Demo

JPEG
A lossy compression method for photographic images

Motion-JPEG
- A video format where each frame is individually compressed as a JPEG image
- Uses no inter-frame compression (motion compensation)
- Used by many portable devices, e.g., webcams and other devices that stream video

CPU version of the algorithm
We already have a CPU version of the algorithm for M-JPEG encoding. It is often the case that you want to accelerate some already existing algorithm. Understanding the algorithm in detail is important.

M-JPEG encoding
Pipeline: RGB → YUV, split up image into macroblocks, DCT, quantize, zigzag, truncate and do Huffman.

Split the image into macroblocks of size 8x8.

Subtract 128 to shift the values so that they are centered around zero. Perform a two-dimensional DCT on each macroblock, converting them into a frequency-domain representation.

Reduce the amount of information in the high-frequency components by simply dividing each component by a predefined constant. The human eye is not good at distinguishing the exact strength of a high-frequency brightness variation.

Go through the macroblock in a zigzag pattern. Remove the trailing zeros. Huffman encoding is a lossless compression algorithm which further compresses the result.

Mapping an algorithm to the GPU
1. Identify which parts are most computationally demanding (by profiling)
2. Investigate if all or some of these parts can be done in a parallel fashion
3. Write CUDA kernels for these parts, which are called from the host code
4. Repeat

Profiling the CPU version
By running a profiler like gprof (remember to compile your code with -pg), we can determine which functions dominate our program's runtime. We find that the DCT accounts for well over 90% of the runtime.

Overview
CPU: read frame from file, split up image into macroblocks, zigzag, truncate and Huffman, write to file.
GPU: DCT, quantize.
Figure: Overview CPU/GPU.

DCT on the CPU
Let us take a look at the CPU version's DCT. For each macroblock, the following code is executed:

for (int u = 0; u < 8; ++u) {
    for (int v = 0; v < 8; ++v) {
        float dct = 0;
        for (int j = 0; j < 8; ++j) {
            for (int i = 0; i < 8; ++i) {
                float coeff = in_data[(y + j) * width + (x + i)] - 128.0f;
                dct += coeff * (float)(cos((2 * i + 1) * u * PI / 16.0f)
                              * cos((2 * j + 1) * v * PI / 16.0f));
            }
        }
        float a1 = !u ? ISQRT2 : 1.0f;
        float a2 = !v ? ISQRT2 : 1.0f;

        /* Scale according to normalizing function */
        dct *= a1 * a2 / 4.0f;
        ...
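In equation form, this loop nest computes the standard two-dimensional DCT-II of the level-shifted macroblock f, which is what the cos product and the a1·a2/4 scaling implement:

F(u, v) = (C(u)·C(v)/4) · Σ_{j=0..7} Σ_{i=0..7} (f(i, j) − 128) · cos((2i+1)uπ/16) · cos((2j+1)vπ/16)

where C(0) = 1/√2 (the ISQRT2 constant in the code) and C(k) = 1 for k > 0.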

Quantization on the CPU
...and the quantization:

        /* Quantize */
        out_data[(y + v) * width + (x + u)] =
            (int16_t)(floor(0.5f + dct / (float)(quantization[v * 8 + u])));
    }
}

The GPU compute grid
Video frame ↔ GPU compute grid; macroblock ↔ GPU compute block; pixel ↔ thread.
Figure: One video frame and the GPU compute grid.

Quantization matrices
We store the three quantization matrices (for Y, U and V) in constant memory on the device.

/**
 * Quantization matrices
 */
__constant__ float yquanttbl[] = {
    16, 11, 12, 14, 12, 10, 16, 14,
    13, 14, 18, 17, 16, 19, 24, 40,
    26, 24, 22, 22, 24, 49, 35, 37,
    29, 40, 58, 51, 61, 60, 57, 51,
    56, 55, 64, 72, 92, 78, 64, 68,
    87, 69, 55, 56, 80, 109, 81, 87,
    95, 98, 103, 104, 103, 62, 77, 113,
    121, 112, 100, 120, 92, 101, 103, 99
};
...

The CUDA kernel
Reads one macroblock into shared memory. __syncthreads() is a barrier, which makes threads wait until all threads have reached this point in the kernel.

__global__ void cudaDCT(float *output, size_t output_pitch,
                        float *input, size_t input_pitch)
{
    const int bx = blockIdx.x;
    const int by = blockIdx.y;
    const int tx = threadIdx.x;
    const int ty = threadIdx.y;

    const int x = (bx * 8) + tx;
    const int y = (by * 8) + ty;

    __shared__ float macroblock[8][8];

    float *inputPtr = (float *)((char *)input + y * input_pitch);
    macroblock[ty][tx] = inputPtr[x];

    __syncthreads();

The CUDA kernel cont'd
__cosf is an intrinsic function.

    float dct = 0;
    for (int j = 0; j < 8; ++j) {
        for (int i = 0; i < 8; ++i) {
            float coeff = macroblock[j][i] - 128.0f;
            dct += coeff * __cosf((2 * i + 1) * tx * PI / 16.0f)
                         * __cosf((2 * j + 1) * ty * PI / 16.0f);
        }
    }

    float a1 = !tx ? ISQRT2 : 1.0f;
    float a2 = !ty ? ISQRT2 : 1.0f;

    /* Scale according to normalizing function */
    dct *= a1 * a2 / 4.0f;

    /* Quantize */
    float *outputPtr = (float *)((char *)output + y * output_pitch);
    outputPtr[x] = dct / yquanttbl[ty * 8 + tx];
}

Moving data between host and device
Using cudaMemcpy2D, similar to the cudaPi example (note that the destination pointer comes first):

// copy image to device
CUDA_SAFE_CALL(cudaMemcpy2D(d_in_data, d_pitch_in,
                            h_in_data, h_pitch_in,
                            width * sizeof(float), height,
                            cudaMemcpyHostToDevice));

// launch kernel
cudaDCT<<<gridsize, blocksize>>>(d_out_data, d_pitch_out,
                                 d_in_data, d_pitch_in);
CUT_CHECK_ERROR("cudaDCT");

// copy image back to host
CUDA_SAFE_CALL(cudaMemcpy2D(h_out_data, h_pitch_out,
                            d_out_data, d_pitch_out,
                            width * sizeof(float), height,
                            cudaMemcpyDeviceToHost));

Optimizations
Run the kernel on four consecutive macroblocks to achieve coalescing (a sketch of the idea follows below).
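A minimal sketch of this idea, under stated assumptions (the names and exact layout are hypothetical, not the slides' actual code): a 32x8 thread block covers four horizontally adjacent 8x8 macroblocks, so each row of 32 threads reads 32 consecutive floats, which the hardware can coalesce into wide memory transactions.

// one 32x8 thread block processes four adjacent 8x8 macroblocks
const dim3 blocksize(32, 8);
const dim3 gridsize(width / 32, height / 8);

// inside the kernel:
const int x = blockIdx.x * 32 + threadIdx.x; // threadIdx.x in [0, 32)
const int y = blockIdx.y * 8 + threadIdx.y;
__shared__ float macroblocks[8][32];         // four 8x8 blocks side by side
float *inputPtr = (float *)((char *)input + y * input_pitch);
macroblocks[threadIdx.y][threadIdx.x] = inputPtr[x]; // 32 consecutive floats per row
// each thread then does the DCT for macroblock threadIdx.x / 8,
// column threadIdx.x % 8, exactly as in the single-macroblock kernel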

Optimizations cont'd - Threaded I/O on host
- Double-buffering input and output buffers on the CPU
- Makes it possible to do file I/O and GPU memory transfers simultaneously
- Can be implemented using threading on the CPU, e.g., boost::thread
- Could also run the Huffman encoding in a separate thread, since this is a compute-intensive algorithm

Optimizations cont'd - Streams
- Utilizing CUDA streams
- Uses asynchronous memory transfers
- Allows overlapping of uploading, kernel invocation, and downloading (see the sketch below)
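A minimal sketch of the streams idea (not from the slides; the double-buffered names like h_in_data[2] and d_in_data[2] are hypothetical, and host-side filling of the input buffers needs its own synchronization): operations issued to the same stream run in order, while operations in different streams may overlap, so one stream's kernel can run while the other stream is transferring.

cudaStream_t streams[2];
for (int s = 0; s < 2; ++s)
    cudaStreamCreate(&streams[s]);

for (int frame = 0; frame < numframes; ++frame) {
    const int b = frame % 2; // double-buffer index
    // upload frame, run the DCT kernel, download the result - all asynchronous
    cudaMemcpy2DAsync(d_in_data[b], d_pitch_in, h_in_data[b], h_pitch_in,
                      width * sizeof(float), height,
                      cudaMemcpyHostToDevice, streams[b]);
    cudaDCT<<<gridsize, blocksize, 0, streams[b]>>>(d_out_data[b], d_pitch_out,
                                                    d_in_data[b], d_pitch_in);
    cudaMemcpy2DAsync(h_out_data[b], h_pitch_out, d_out_data[b], d_pitch_out,
                      width * sizeof(float), height,
                      cudaMemcpyDeviceToHost, streams[b]);
}
for (int s = 0; s < 2; ++s)
    cudaStreamSynchronize(streams[s]);

The asynchronous copies require page-locked host memory (cudaMallocHost), which is one more reason to allocate host buffers through the CUDA API.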

Is GPU computing worth the effort?
It takes time to learn a new technology and a new platform. It will take time to rewrite your existing algorithms. But: you will get significant speedups (5-100x) of algorithms that benefit from parallelism. Once you have your parallel algorithms, they will scale on newer generations of parallel hardware.

CUDA Documentation
- Programming Guide
- Reference Manual
- Best Practices Guide
Pay special attention to the Performance Guidelines section of the Programming Guide to get the most performance out of your CUDA apps.
http://gpgpu.org

Good luck with your CUDA programming, and thank you :-)
Contact: Martin.Lilleeng.Satra@sintef.no
http://martinsa.at.i.uio.no