Intro to GPUs for Parallel Computing


Intro to GPUs for Parallel Computing

Goals for Rest of Course
- Learn how to program massively parallel processors and achieve high performance, functionality and maintainability, and scalability across future generations
- Acquire the technical knowledge required to achieve the above goals: principles and patterns of parallel programming; processor architecture features and constraints; programming API, tools, and techniques
- Overview of the architecture first, then more architectural detail introduced as we go

Equipment
- Your own machine, if CUDA-enabled; we will use the CUDA SDK in C
  - CUDA = Compute Unified Device Architecture
  - NVIDIA G80 or newer; the G80 emulator won't quite work
- Lab machine: uaa-csetesla.duckdns.org
  - Ubuntu; two Intel Xeon E5-2609s @ 2.4 GHz, four cores each; 128 GB memory
  - Two NVIDIA Quadro 4000s: 256 CUDA cores, 1 GHz clock, 2 GB memory each

Why Massively Parallel Processors
- A quiet revolution and potential build-up (2006)
  - Calculation: 367 GFLOPS (G80) vs. 32 GFLOPS (CPU)
  - Memory bandwidth: 86.4 GB/s vs. 8.4 GB/s
  - Until recently, programmed through a graphics API
- GPU in every PC and workstation: massive volume and potential impact

CPUs and GPUs have fundamentally different design philosophies
[Figure: a CPU die dominated by control logic and cache with a few large ALUs, vs. a GPU die dominated by many small ALUs; each attached to its own DRAM]

Architecture of a CUDA-capable GPU
[Figure: Host feeds an Input Assembler and Thread Execution Manager, which dispatch work to an array of Streaming Multiprocessors with texture units and load/store paths into Global Memory]
- Building block: the Streaming Multiprocessor (SM), made up of Streaming Processors (SPs)
- 8 SMs, each with 32 SPs (256 CUDA cores total), on one Quadro 4000
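The CUDA runtime can report these per-device numbers directly. A minimal sketch using the standard cudaGetDeviceProperties call (the output formatting here is our own):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);               // number of CUDA devices
        for (int d = 0; d < count; d++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);    // fill in device properties
            printf("Device %d: %s\n", d, prop.name);
            printf("  SMs: %d  clock: %.2f GHz  global memory: %zu MB\n",
                   prop.multiProcessorCount,
                   prop.clockRate / 1e6,          // clockRate is reported in kHz
                   prop.totalGlobalMem >> 20);    // bytes -> MB
        }
        return 0;
    }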

GT200 Characteristics
- 1 TFLOPS peak performance (25-50 times that of current high-end microprocessors)
- 265 GFLOPS sustained for applications such as Visual Molecular Dynamics (VMD)
- Massively parallel: 128 cores, 90 W
- Massively threaded: sustains thousands of threads per application
- 30-100 times speedup over high-end microprocessors on scientific and media applications: medical imaging, molecular dynamics

"I think they're right on the money, but the huge performance differential (currently 3 GPUs ~= 300 SGI Altix Itanium2s) will invite close scrutiny so I have to be careful what I say publicly until I triple check those numbers." - John Stone, VMD group, Physics, UIUC

Future Apps Reflect a Concurrent World
- Exciting applications in the future mass-computing market have traditionally been considered supercomputing applications: molecular dynamics simulation, video and audio coding and manipulation, 3D imaging and visualization, consumer game physics, and virtual reality products
- These "super-apps" represent and model a physical, concurrent world
- Various granularities of parallelism exist, but the programming model must not hinder parallel implementation, and data delivery needs careful management

Sample of Previous GPU Projects

Application  Description                                                       Source  Kernel  % time
H.264        SPEC '06 version, change in guess vector                          34,811     194     35%
LBM          SPEC '06 version, change to single precision, fewer reports        1,481     285    >99%
RC5-72       Distributed.net RC5-72 challenge client code                       1,979     218    >99%
FEM          Finite element modeling, simulation of 3D graded materials         1,874     146     99%
RPES         Rye Polynomial Equation Solver, quantum chem, 2-e repulsion        1,104     281     99%
PNS          Petri Net simulation of a distributed system                         322     160    >99%
SAXPY        Single-precision saxpy, used in Linpack's Gaussian elim. routine     952      31    >99%
TPACF        Two Point Angular Correlation Function                               536      98     96%
FDTD         Finite-Difference Time Domain analysis of 2D EM wave propagation   1,365      93     16%
MRI-Q        Computing a matrix Q, a scanner's configuration, in MRI reconstr.    490      33    >99%

(Source and Kernel are lines of code; % time is the fraction of execution time spent in the kernel.)

[Figure: Speedup of Applications, kernel vs. whole application, GeForce 8800 GTX vs. 2.2 GHz Opteron 248, for H.264, LBM, RC5-72, FEM, RPES, PNS, SAXPY, TPACF, FDTD, MRI-Q, MRI-FHD; kernel speedups reach as high as 457x]

- A 10x speedup in a kernel is typical, as long as the kernel can occupy enough parallel threads
- 25x to 400x speedup if the function's data requirements and control flow suit the GPU and the application is optimized
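SAXPY in the table is small enough (31 kernel lines) to show whole. A hedged sketch of a CUDA SAXPY kernel, our own minimal version rather than the benchmarked code:

    // y = a*x + y, one element per thread
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique global index
        if (i < n)                                       // guard the array bound
            y[i] = a * x[i] + y[i];
    }

    // Launch with enough 256-thread blocks to cover n elements:
    // saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);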

GPU History: Graphics Pipeline Elements
1. A scene description: vertices, triangles, colors, lighting
2. Transformations that map the scene to a camera viewpoint
3. "Effects": texturing, shadow mapping, lighting calculations
4. Rasterizing: converting geometry into pixels
5. Pixel processing: depth tests, stencil tests, and other per-pixel operations

A Fixed-Function GPU Pipeline
[Figure: Host CPU -> Host Interface -> Vertex Control -> VS/T&L -> Triangle Setup -> Raster -> Shader -> ROP -> FBI, with vertex cache, texture cache, and frame buffer memory on the GPU]

Texture Mapping Example
[Figure: painting a world map texture image onto a globe object]

Anti-Aliasing Example
[Figure: triangle geometry rendered aliased vs. anti-aliased]

Programmable Vertex and Pixel Processors
[Figure: a 3D application or game issues 3D API commands (OpenGL or Direct3D) on the CPU; across the CPU-GPU boundary, the GPU front end consumes the command and data stream; primitive assembly builds polygons, lines, and points from the vertex index stream; a programmable vertex processor transforms pre-transformed vertices; rasterization and interpolation emit pre-transformed fragments along the pixel location stream; a programmable fragment processor transforms them; raster ops apply pixel updates to the framebuffer]
An example of a separate vertex processor and fragment processor in a programmable graphics pipeline.

GeForce 8800 GPU (2006)
- Mapped the separate programmable graphics stages to an array of unified processors
- The logical graphics pipeline visits the processors three times, with fixed-function graphics logic between visits
- Load balancing is possible: different rendering algorithms present different loads among the programmable stages, which are dynamically allocated from the unified processors
- Functionality of vertex and pixel shaders is identical to the programmer; a geometry shader processes all vertices of a primitive instead of vertices in isolation

Unified Graphics Pipeline: GeForce 8800
[Figure: Host -> Data Assembler -> Setup/Raster/ZCull; vertex, geometry, and pixel thread issue units dispatch to an array of streaming processors with texture fetch units and L1 caches, backed by L2 caches and framebuffer partitions]

What is (Historical) GPGPU?
- General-purpose computation using a GPU and graphics API in applications other than 3D graphics
- GPU accelerates the critical path of the application
- Data-parallel algorithms leverage GPU attributes: large data arrays, streaming throughput; fine-grain SIMD parallelism; low-latency floating-point (FP) computation
- Applications (see http://gpgpu.org): game effects (FX), physics, image processing; physical modeling, computational engineering, matrix algebra, convolution, correlation, sorting

Previous GPGPU Constraints
- Dealing with the graphics API: working with the corner cases of the graphics API
- Addressing modes: limited texture size/dimension
- Shader capabilities: limited outputs
- Instruction sets: lack of integer & bit ops
- Communication limited: between pixels; no scatter (a[i] = p)
[Figure: a fragment program reads per-thread input registers and per-shader/per-context texture, constants, and temp registers, but can write only its output registers into FB memory]

Tesla GPU
- NVIDIA developed a more general-purpose GPU
- Can program it like a regular processor
- Must explicitly declare the data-parallel parts of the workload
- Shader processors became fully programmable processors with instruction memory, cache, and sequencing logic
- Memory load/store instructions with random byte-addressing capability
- Parallel programming model primitives: threads, barrier synchronization, atomic operations

CUDA: Compute Unified Device Architecture
- General-purpose programming model: the user kicks off batches of threads on the GPU; GPU = dedicated super-threaded, massively data-parallel co-processor
- Targeted software stack: compute-oriented drivers, language, and tools
- Driver for loading computation programs into the GPU: standalone driver optimized for computation; interface designed for compute, a graphics-free API; data sharing with OpenGL buffer objects; guaranteed maximum download & readback speeds; explicit GPU memory management
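As a concrete (hypothetical) use of those primitives, the sketch below builds a per-block histogram with shared memory, a barrier, and atomic operations; the global bins array is assumed zeroed beforehand:

    __global__ void histogram(const unsigned char *in, int n, unsigned int *bins) {
        __shared__ unsigned int local[256];          // visible to all threads in the block
        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            local[i] = 0;
        __syncthreads();                             // barrier: bins zeroed for everyone

        int g = blockIdx.x * blockDim.x + threadIdx.x;
        if (g < n)
            atomicAdd(&local[in[g]], 1u);            // atomics resolve write conflicts
        __syncthreads();

        for (int i = threadIdx.x; i < 256; i += blockDim.x)
            atomicAdd(&bins[i], local[i]);           // merge block totals into global bins
    }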

Parallel Computing on a GPU
- 8-series GPUs deliver 25 to 200+ GFLOPS on compiled parallel C applications; available in laptops, desktops, and clusters (GeForce 8800, Tesla D870, Tesla S870)
- GPU parallelism is doubling every year; the programming model scales transparently
- Programmable in C with the CUDA tools
- Multithreaded SPMD model uses application data parallelism and thread parallelism

Overview
- CUDA programming model: basic concepts and data types
- CUDA application programming interface: the basics
- Simple examples to illustrate basic concepts and functionality
- Performance features will be covered later

CUDA: C with no shader limitations!
- Integrated host + device application C program
  - Serial or modestly parallel parts in host C code
  - Highly parallel parts in device SPMD/SIMT kernel C code

Serial Code (host)
Parallel Kernel (device): KernelA<<< nBlk, nTid >>>(args); ...
Serial Code (host)
Parallel Kernel (device): KernelB<<< nBlk, nTid >>>(args); ...

CUDA Devices and Threads
- A compute device: is a coprocessor to the CPU, or host; has its own DRAM (device memory); runs many threads in parallel; is typically a GPU but can also be another type of parallel processing device
- Data-parallel portions of an application are expressed as device kernels, which run across many threads
- Differences between GPU and CPU threads: GPU threads are extremely lightweight, with very little creation overhead; the GPU needs thousands of threads for full efficiency, while a multi-core CPU needs only a few
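A minimal compilable sketch of that integrated host + device structure, with hypothetical KernelA/KernelB bodies:

    #include <cuda_runtime.h>

    __global__ void KernelA(float *data) { /* highly parallel part A */ }
    __global__ void KernelB(float *data) { /* highly parallel part B */ }

    int main(void) {
        float *d_data;
        cudaMalloc((void**)&d_data, 1024 * sizeof(float));

        /* ... serial host code ... */
        KernelA<<<100, 128>>>(d_data);   // 100 blocks, 128 threads per block
        /* ... serial host code ... */
        KernelB<<<100, 128>>>(d_data);

        cudaDeviceSynchronize();         // wait for the device to finish
        cudaFree(d_data);
        return 0;
    }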

G80 CUDA Mode: A Device Example
- Processors execute computing threads
- New operating mode / HW interface for computing
[Figure: Host -> Input Assembler -> Thread Execution Manager dispatching to an array of SMs with texture units and load/store paths into Global Memory]

Extended C
- Type qualifiers: __global__, __device__, __shared__, __local__, __host__
- Keywords: threadIdx, blockIdx
- Intrinsics: __syncthreads
- Runtime API: memory, symbol, and execution management; function launch

    __device__ float filter[N];

    __global__ void convolve(float *image) {
        __shared__ float region[M];
        ...
        region[threadIdx.x] = image[i];
        __syncthreads();
        ...
        image[j] = result;
    }

    // Allocate GPU memory
    void *myimage = cudaMalloc(bytes);

    // 100 blocks, 10 threads per block
    convolve<<<100, 10>>>(myimage);

CUDA Platform
[Figure: the CUDA platform stack]

Arrays of Parallel Threads
- A CUDA kernel is executed by an array of threads
- All threads run the same code (SPMD)
- Each thread has an ID that it uses to compute memory addresses and make control decisions

threadID: 0 1 2 3 4 5 6 7

    float x = input[threadID];
    float y = func(x);
    output[threadID] = y;

Thread Blocks: Scalable Cooperation
- Divide the monolithic thread array into multiple blocks
- Threads within a block cooperate via shared memory, atomic operations, and barrier synchronization
- Threads in different blocks cannot cooperate
- Up to 65,535 blocks and 512 threads/block

[Figure: Thread Block 0, Thread Block 1, ..., Thread Block N-1, each running threadIDs 0..7 over the same three-line snippet above]
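Because thread IDs restart at 0 in every block, a kernel normally combines its block ID and thread ID into one unique global index. A small sketch mirroring the input/func/output snippet above (a stand-in replaces func):

    __global__ void apply_func(const float *input, float *output, int n) {
        // global index = block ID * block size + thread ID within the block
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid < n) {                   // the last block may overshoot n
            float x = input[tid];
            float y = 2.0f * x + 1.0f;   // stand-in for func(x)
            output[tid] = y;
        }
    }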

Block IDs and Thread IDs
- We launch a grid of blocks of threads
- Each thread uses its IDs to decide what data to work on
  - Block ID: 1D, 2D, or 3D (usually 1D or 2D)
  - Thread ID: 1D, 2D, or 3D
- Simplifies memory addressing when processing multidimensional data: image processing, solving PDEs on volumes

[Figure: the host launches Kernel 1 on Grid 1 and Kernel 2 on Grid 2; Grid 1 contains 2D blocks (0,0)..(1,1); Block (1,1) contains a 3D arrangement of threads (0,0,0)..(3,0,1). Courtesy: NVIDIA, Figure 3.2, "An Example of CUDA Thread Organization"]

CUDA Memory Model Overview
- Global memory: the main means of communicating R/W data between host and device; contents visible to all threads; long-latency access
- We will focus on global memory for now; constant and texture memory will come later

[Figure: each block in the grid has its own shared memory and per-thread registers; all threads, and the host, can reach Global Memory]
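For the image-processing case, 2D block and thread IDs map directly onto pixel coordinates. A hedged sketch with a hypothetical kernel:

    __global__ void brighten(float *img, int width, int height) {
        // 2D pixel coordinates from 2D block and thread IDs
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            img[y * width + x] += 0.1f;  // row-major address into global memory
    }

    // Launch: dim3 block(16, 16);
    //         dim3 grid((width + 15) / 16, (height + 15) / 16);
    //         brighten<<<grid, block>>>(d_img, width, height);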

CUDA Device Memory Allocation
- cudaMalloc(): allocates an object in device global memory; requires two parameters: the address of a pointer to the allocated object, and the size of the allocated object
- cudaFree(): frees an object from device global memory; takes the pointer to the freed object
- DON'T use a CPU pointer in a GPU function!

CUDA Device Memory Allocation (cont.)
Code example: allocate a 64 x 64 single-precision float array and attach the allocated storage to Md ("d" is often used to indicate a device data structure):

    int TILE_WIDTH = 64;
    float* Md;
    int size = TILE_WIDTH * TILE_WIDTH * sizeof(float);

    cudaMalloc((void**)&Md, size);
    cudaFree(Md);
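cudaMalloc and cudaFree return a cudaError_t, so failures can be caught at the call site. A sketch of the common checking idiom (the CHECK macro is our own, not part of the CUDA API):

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Wrap a runtime call; abort with a readable message on failure
    #define CHECK(call)                                              \
        do {                                                         \
            cudaError_t err = (call);                                \
            if (err != cudaSuccess) {                                \
                fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,   \
                        cudaGetErrorString(err));                    \
                exit(EXIT_FAILURE);                                  \
            }                                                        \
        } while (0)

    // Usage:
    // CHECK(cudaMalloc((void**)&Md, size));
    // CHECK(cudaFree(Md));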

CUDA Host-Device Data Transfer
- cudaMemcpy(): memory data transfer; requires four parameters: pointer to destination, pointer to source, number of bytes copied, and type of transfer (host to host, host to device, device to host, device to device)
- Non-blocking/asynchronous transfer also exists

CUDA Host-Device Data Transfer (cont.)
Code example: transfer a 64 x 64 single-precision float array; M is in host memory and Md is in device memory; cudaMemcpyHostToDevice and cudaMemcpyDeviceToHost are symbolic constants:

    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost);
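Putting allocation, transfers, and a launch together, a minimal host-side round trip might look like this (hypothetical kernel; error checking omitted):

    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    void run(float *M, int n) {          // M: host array of n floats
        float *Md;
        int size = n * sizeof(float);
        cudaMalloc((void**)&Md, size);                     // device allocation
        cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);   // host -> device
        scale<<<(n + 255) / 256, 256>>>(Md, n);            // kernel launch
        cudaMemcpy(M, Md, size, cudaMemcpyDeviceToHost);   // device -> host
        cudaFree(Md);                                      // release device memory
    }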

CUDA Function Declarations

                                     Executed on the:   Only callable from the:
    __device__ float DeviceFunc()    device             device
    __global__ void  KernelFunc()    device             host
    __host__   float HostFunc()      host               host

- __global__ defines a kernel function; it must return void
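A short sketch showing the three qualifiers side by side (hypothetical functions):

    __device__ float square(float x) {    // device-only helper, callable from kernels
        return x * x;
    }

    __global__ void squareAll(float *v, int n) {  // kernel: runs on device, launched from host
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] = square(v[i]);
    }

    __host__ float squareOnCPU(float x) { // ordinary host function (__host__ is the default)
        return x * x;
    }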

CUDA Function Declarations (cont.)
- __device__ functions cannot have their address taken
- For functions executed on the device: no recursion; no static variable declarations inside the function; no variable number of arguments

Calling a Kernel Function: Thread Creation
A kernel function must be called with an execution configuration:

    __global__ void KernelFunc(...);
    dim3 DimGrid(100, 50);          // 5000 thread blocks
    dim3 DimBlock(4, 8, 8);         // 256 threads per block
    size_t SharedMemBytes = 64;     // 64 bytes of shared memory
    KernelFunc<<< DimGrid, DimBlock, SharedMemBytes >>>(...);

- Any call to a kernel function is asynchronous from CUDA 1.0 on; explicit synchronization is needed for blocking
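Since a launch returns immediately, the host must synchronize before consuming results or timing the kernel. A small sketch of the usual pattern (the d_data argument is an assumption):

    KernelFunc<<<DimGrid, DimBlock, SharedMemBytes>>>(d_data);  // returns at once

    /* ... independent CPU work can overlap with GPU execution here ... */

    cudaError_t err = cudaDeviceSynchronize();  // block until the kernel finishes
    if (err != cudaSuccess)
        printf("kernel failed: %s\n", cudaGetErrorString(err));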

Next Time
- Code example