CUDA 6.5 Performance Report


1 CUDA 6.5 Performance Report

2 CUDA 6.5 Performance Report
- CUDART: CUDA Runtime Library
- cuFFT: Fast Fourier Transforms Library
- cuBLAS: Complete BLAS Library
- cuSPARSE: Sparse Matrix Library
- cuRAND: Random Number Generation (RNG) Library
- NPP: Performance Primitives for Image & Video Processing
- Thrust: Templated Parallel Algorithms & Data Structures
- math.h: C99 floating-point Library
- cuDNN: Deep Neural Net building blocks
Included in the CUDA Toolkit (free download): developer.nvidia.com/cuda-toolkit
For more information on CUDA libraries: developer.nvidia.com/gpu-accelerated-libraries

3 cuFFT: Multi-dimensional FFTs
- Real and complex transforms
- Single- and double-precision data types
- 1D, 2D and 3D batched transforms
- Flexible input and output data layouts
- XT interface supports dual-GPU cards
- New in CUDA 6.5: device callbacks optimize use cases such as FFT + data-type conversion

4 cuFFT in 4 easy steps
Step 1: Allocate space in GPU memory.
Step 2: Create a plan specifying the transform configuration, such as the size and type (real, complex, 1D, 2D and so on).
Step 3: Execute the plan as many times as required, providing the pointer to the GPU data created in Step 1.
Step 4: Destroy the plan and free the GPU memory.

5 Example cuFFT Program

#include <stdlib.h>
#include <stdio.h>
#include <cufft.h>

#define NX 256
#define NY 128

int main() {
    cufftHandle plan;
    cufftComplex *idata, *odata;

    cudaMalloc((void**)&idata, sizeof(cufftComplex)*NX*NY);
    cudaMalloc((void**)&odata, sizeof(cufftComplex)*NX*NY);

    /* initialize idata */

    /* Create a 2D FFT plan */
    cufftPlan2d(&plan, NX, NY, CUFFT_C2C);

    /* Use the CUFFT plan to transform the signal out of place.
     * Note: idata != odata indicates an out-of-place
     * transformation to CUFFT at execution time. */
    cufftExecC2C(plan, idata, odata, CUFFT_FORWARD);

    /* Inverse transform the signal in place */
    cufftExecC2C(plan, odata, odata, CUFFT_INVERSE);

    /* Destroy the CUFFT plan and free GPU memory */
    cufftDestroy(plan);
    cudaFree(idata);
    cudaFree(odata);

    return 0;
}

Check out the CUDA SDK for more examples.

6 cuFFT: up to 700 GFLOPS
[Charts: single- and double-precision GFLOPS vs. log2(transform size)]
1D complex, batched FFTs, used in audio processing and as a foundation for 2D and 3D FFTs.
Performance may vary based on OS version and motherboard configuration.
cuFFT 6.5 on K40c, ECC ON, 32M elements, input and output data on device. Excludes time to create cuFFT plans.

7 cuFFT: Consistently High Performance
[Charts: single-precision (y-axis to 800 GFLOPS) and double-precision (y-axis to 350 GFLOPS) performance vs. transform size (1E+0 to 1E+9), for transform sizes that are powers of 2, 3, 5 and 7]
1D complex, batched FFTs, used in audio processing and as a foundation for 2D and 3D FFTs.
Performance may vary based on OS version and motherboard configuration.
cuFFT 6.5 on K40c, ECC ON, 28M-33M elements, input and output data on device. Excludes time to create cuFFT plans.

8 cuFFT-XT on K10: 30% Faster on Large 3D FFTs
[Charts: execution time (ms) of cuFFT vs. cuFFT-XT for 256x256x256 and 512x512x512 transforms]
Performance may vary based on OS version and motherboard configuration.
cuFFT 6.5 on K10, ECC ON, input and output data on device. Excludes time to create cuFFT plans.

9 cuFFT User-Defined Callbacks
Without callbacks: 3 kernels, 3 memory round trips. A user-defined kernel reads the 8-bit input, converts it to 32-bit and writes it back; the cuFFT internal kernel reads that data, performs the FFT and writes the result; a second user-defined kernel reads the FFT output, converts it to 8-bit and writes the 8-bit data.
With callbacks: 1 kernel, 1 memory round trip. Reading the 8-bit input, converting to 32-bit, performing the FFT, converting back to 8-bit and writing the output are merged into a single kernel.
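
As a rough illustration of the callback flow described above, the following sketch (not taken from the report) registers a load callback that converts 8-bit complex samples to cufftComplex on the fly. It assumes an existing single-precision C2C plan; the names load_8bit_as_complex and attach_callback are placeholders. Callbacks require the static cuFFT library and relocatable device code (nvcc -dc ... -lcufft_static -lculibos).

#include <cuda_runtime.h>
#include <cufft.h>
#include <cufftXt.h>

/* Load callback: called by cuFFT for every input element it reads.
 * Here the input buffer holds 8-bit complex samples (interleaved real/imag),
 * which are converted to single-precision complex on the fly. */
__device__ cufftComplex load_8bit_as_complex(void *dataIn, size_t offset,
                                             void *callerInfo, void *sharedPtr)
{
    char2 v = ((char2 *)dataIn)[offset];
    cufftComplex c;
    c.x = (float)v.x;
    c.y = (float)v.y;
    return c;
}

/* Device-side pointer to the callback, copied to the host before registration. */
__device__ cufftCallbackLoadC d_loadCallbackPtr = load_8bit_as_complex;

void attach_callback(cufftHandle plan)
{
    cufftCallbackLoadC h_loadCallbackPtr;
    cudaMemcpyFromSymbol(&h_loadCallbackPtr, d_loadCallbackPtr,
                         sizeof(h_loadCallbackPtr));

    /* Register the load callback; cuFFT will call it instead of reading
     * cufftComplex input directly. */
    cufftXtSetCallback(plan, (void **)&h_loadCallbackPtr,
                       CUFFT_CB_LD_COMPLEX, 0);
}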

10 Performance of cuFFT Callbacks
[Chart: normalized performance of cuFFT with separate kernels for data conversion vs. cuFFT with callbacks for data conversion]
Performance of single-precision complex cuFFT on 8-bit complex input and output datasets, including data conversion time.
cuFFT 6.5 on K40, ECC ON, 512 1D C2C forward transforms, 32M total elements. Input and output data on device, excludes time to create cuFFT plans.

11 cuBLAS: Dense Linear Algebra on GPUs
- Complete BLAS implementation plus useful extensions
- Supports all 152 standard routines for single, double, complex, and double complex
- Host- and device-callable interface
- XT interface for Level 3 BLAS: distributed computations across multiple GPUs; out-of-core streaming to the GPU with no upper limit on matrix size
- Drop-in BLAS intercepts CPU BLAS calls and streams them to the GPU
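
A minimal sketch (not from the report) of a single-precision GEMM through the cuBLAS host API; it assumes column-major matrices A (m x k), B (k x n) and C (m x n) already resident in device memory, and the name sgemm_example is a placeholder.

#include <cublas_v2.h>

void sgemm_example(const float *d_A, const float *d_B, float *d_C,
                   int m, int n, int k)
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    /* C = alpha * A * B + beta * C (no transposes, column-major layout) */
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k,
                &alpha, d_A, m,   /* lda = m */
                        d_B, k,   /* ldb = k */
                &beta,  d_C, m);  /* ldc = m */

    cublasDestroy(handle);
}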

12 cuBLAS: >3 TFLOPS single-precision, >1 TFLOPS double-precision
[Chart: GFLOPS for GEMM, SYMM, TRSM and SYRK in single, single complex, double and double complex precision]
Performance may vary based on OS version and motherboard configuration.
cuBLAS 6.5 on K40m, ECC ON, input and output data on device. m=n=k=4096, transpose=no, side=right, fill=lower.

13 cuBLAS: ZGEMM 6x Faster than MKL
[Chart: GFLOPS of cuBLAS vs. MKL ZGEMM as a function of matrix dimension (m=n=k)]
Performance may vary based on OS version and motherboard configuration.
cuBLAS 6.5 on K40m, ECC ON, input and output data on device. MKL on a single-socket 12-core Intel Ivy Bridge CPU.

14 cuBLAS-XT: Automatic Performance Boost, >10 TF on a single node
[Chart: GFLOPS for GEMM, SYRK and TRSM in single, single complex, double and double complex precision, comparing 1x GPU with 4x GPUs]
Performance may vary based on OS version and motherboard configuration.
cuBLAS 6.5 on K40, double-precision mode. Input and output data on device, m=n=k=16384, transpose=no.
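
A minimal sketch (not from the report) of a multi-GPU SGEMM through the cuBLAS-XT interface. Unlike regular cuBLAS, cublasXt takes host pointers and tiles the matrices across the selected devices itself; the two-GPU device list and the name xt_sgemm_example are assumptions for illustration.

#include <cublasXt.h>

void xt_sgemm_example(const float *h_A, const float *h_B, float *h_C,
                      int m, int n, int k)
{
    cublasXtHandle_t handle;
    cublasXtCreate(&handle);

    int devices[2] = {0, 1};               /* assumption: two GPUs present */
    cublasXtDeviceSelect(handle, 2, devices);

    const float alpha = 1.0f, beta = 0.0f;
    /* C = alpha * A * B + beta * C, column-major, operands in host memory */
    cublasXtSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                  m, n, k,
                  &alpha, h_A, m,
                          h_B, k,
                  &beta,  h_C, m);

    cublasXtDestroy(handle);
}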

15 cuSPARSE: Sparse Linear Algebra Routines
- Optimized sparse linear algebra BLAS routines: matrix-vector, matrix-matrix, triangular solve
- Support for a variety of formats (CSR, COO, block variants)
- Incomplete-LU and Cholesky preconditioners
[Figure: sparse matrix-vector multiply, y = α·A·x + β·y]
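
A minimal sketch (not from the report) of the CSR sparse matrix-vector product y = alpha*A*x + beta*y, using the cusparse<t>csrmv routine of the CUDA 6.5-era API (later toolkits replaced it with the generic SpMV API). It assumes the CSR arrays and the vectors are already in device memory; spmv_example is a placeholder name.

#include <cusparse_v2.h>

void spmv_example(int m, int n, int nnz,
                  const float *d_csrVal, const int *d_csrRowPtr,
                  const int *d_csrColInd, const float *d_x, float *d_y)
{
    cusparseHandle_t handle;
    cusparseCreate(&handle);

    cusparseMatDescr_t descr;
    cusparseCreateMatDescr(&descr);
    cusparseSetMatType(descr, CUSPARSE_MATRIX_TYPE_GENERAL);
    cusparseSetMatIndexBase(descr, CUSPARSE_INDEX_BASE_ZERO);

    const float alpha = 1.0f, beta = 0.0f;
    /* y = alpha * A * x + beta * y, A stored in CSR format */
    cusparseScsrmv(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                   m, n, nnz, &alpha, descr,
                   d_csrVal, d_csrRowPtr, d_csrColInd,
                   d_x, &beta, d_y);

    cusparseDestroyMatDescr(descr);
    cusparseDestroy(handle);
}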

16 cuSPARSE: 5x Faster than MKL
[Chart: speedup over MKL for sparse matrix x dense vector (SpMV) across test matrices]
Performance may vary based on OS version and motherboard configuration.
Average of s/c/d/z routines. cuSPARSE 6.5 on K40m, ECC ON, input and output data on device. MKL on a single-socket 12-core Intel Ivy Bridge CPU. Matrices obtained from:

17 cuRAND: Random Number Generation
- Generating high-quality random numbers in parallel is hard; don't do it yourself, use a library!
- Pseudo- and quasi-RNGs, including Mersenne Twister
- Supports several output distributions
- Statistical test results in the documentation
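
A minimal sketch (not from the report) of the cuRAND host API: creating an MRG32k3a generator and filling a device buffer with uniform doubles. The seed value and the name generate_uniform are assumptions for illustration.

#include <cuda_runtime.h>
#include <curand.h>

void generate_uniform(double *d_data, size_t n)
{
    curandGenerator_t gen;

    /* Create a pseudo-random generator and seed it */
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_MRG32K3A);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

    /* Fill n doubles in device memory with uniform samples in (0, 1] */
    curandGenerateUniformDouble(gen, d_data, n);

    curandDestroyGenerator(gen);
}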

18 cuRAND: Up to 70x Faster vs. Intel MKL
[Chart: GSamples/sec for cuRAND vs. MKL with Sobol32 and MRG32k3a generators, for uniform, normal and log-normal distributions]
Performance may vary based on OS version and motherboard configuration.
cuRAND 6.5 on K40c, ECC ON, double-precision input and output data on device. MKL on a single-socket 12-core Intel Ivy Bridge CPU.

19 cuRAND: High-Performance RNGs
[Chart: GSamples/sec for the pseudo-random generators XORWOW, Philox, MRG32k3a and MTGP32 and the quasi-random generators Sobol32 and Sobol64 (plain and scrambled), for uniform, normal and log-normal distributions]
Performance may vary based on OS version and motherboard configuration.
cuRAND 6.5 on K40m, ECC ON, double-precision input and output data on device.

20 NPP: NVIDIA Performance Primitives
Over 5000 image and signal processing routines: color transforms, geometric transforms, move operations, linear filters, image & signal statistics, image & signal arithmetic, JPEG building blocks, image segmentation, median filter, BGR/YUV conversion, 3D LUT color conversion.
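
A minimal sketch (not from the report) of NPP's signal-arithmetic routines: element-wise addition of two single-precision signals, assuming the nppsMalloc/nppsAdd support and arithmetic functions from the npps part of the library; npp_add_example is a placeholder name.

#include <npp.h>

void npp_add_example(int n)
{
    /* Device allocations sized for n single-precision samples */
    Npp32f *d_a = nppsMalloc_32f(n);
    Npp32f *d_b = nppsMalloc_32f(n);
    Npp32f *d_c = nppsMalloc_32f(n);

    /* ... fill d_a and d_b (e.g. with cudaMemcpy or a kernel) ... */

    /* d_c[i] = d_a[i] + d_b[i] for i = 0 .. n-1 */
    nppsAdd_32f(d_a, d_b, d_c, n);

    nppsFree(d_a);
    nppsFree(d_b);
    nppsFree(d_c);
}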

21 NPP Speedup vs. Intel IPP
[Chart: per-operation speedup of NPP over IPP]
Image Set (8-bit RGB): 5.7x
Image Set Channel (8-bit RGB): 20.4x
Image Resize (8-bit RGB): 22.2x
Image Gaussian Filter (32-bit float): 6.4x
Color Conversion 8-bit YUV422 to 8-bit RGB: 15.2x
JPEG 8x8 Forward DCT: 28.9x
Performance may vary based on OS version and motherboard configuration.
NPP 6.5 on K40m, input and output data on device. IPP 7.0 on a single-socket 12-core Intel Ivy Bridge CPU.

22 NPP Speedup vs. Intel IPP by Image Processing Function Group
[Chart: average speedup over all routines in each function group; groups are Filter, Statistics, Geometry Transforms, JPEG, Morphological, Linear Transform, Color Processing, Color Conversion, Threshold And Compare, Alpha Composition, Logical, Arithmetic, and Data Exchange And Initialization, with per-group speedups ranging from roughly 4x to 110x]
Performance may vary based on OS version and motherboard configuration.
NPP 6.5 on K40m, input and output data on device. Each bar represents the average speedup over all routines in the function group. IPP 7.0 on a single-socket 12-core Intel Ivy Bridge CPU.

23 Thrust: CUDA C++ Template Library
- Template library for CUDA C++
- Host and device containers that mimic the C++ STL
- Optimized algorithms for sort, reduce, scan, etc.
- OpenMP backend for portability
- Also available on GitHub: thrust.github.com
- Allows applications and prototypes to be built quickly
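
A minimal sketch (not from the report) of Thrust's containers and algorithms: copy a host vector to the device, sort it, and reduce it on the GPU.

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdlib>
#include <iostream>

int main()
{
    /* Generate 1M random integers on the host */
    thrust::host_vector<int> h_vec(1 << 20);
    for (size_t i = 0; i < h_vec.size(); ++i)
        h_vec[i] = std::rand();

    /* Copy to the device (a single assignment, as with STL containers) */
    thrust::device_vector<int> d_vec = h_vec;

    /* Sort on the GPU */
    thrust::sort(d_vec.begin(), d_vec.end());

    /* Sum on the GPU */
    long long sum = thrust::reduce(d_vec.begin(), d_vec.end(), 0LL);

    std::cout << "sum = " << sum << std::endl;
    return 0;
}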

24 Thrust Performance vs. Intel TBB
[Charts: speedup of Thrust over TBB on 32M integers for reduce, transform, scan and sort (y-axis to 14x), and speedup of Thrust sort over TBB on 32M samples for char, short, int, long, float and double (y-axis to 50x)]
Performance may vary based on OS version and motherboard configuration.
Thrust v1.7.2 on K40m, ECC ON, input and output data on device. TBB 4.2 on a single-socket 12-core Intel Ivy Bridge CPU.

25 math.h: C99 floating-point library + extras
CUDA math.h is industry proven, high performance, and accurate.
- Basic: +, *, /, 1/, sqrt, FMA (all IEEE-754 accurate for float, double, all rounding modes)
- Exponentials: exp, exp2, log, log2, log10, ...
- Trigonometry: sin, cos, tan, asin, acos, atan2, sinh, cosh, asinh, acosh, ...
- Special functions: lgamma, tgamma, erf, erfc
- Utility: fmod, remquo, modf, trunc, round, ceil, floor, fabs, ...
- Bessel: j0, j1, jn, y0, y1, yn, cyl_bessel_i0, cyl_bessel_i1
- Vector SIMD: vadd, vsub, vavrg, vabsdiff, vmin, vmax, vset
- Extras: rsqrt, rhypot, rcbrt, exp10, sinpi, sincos[pi], cospi, erf[c]inv, normcdf[inv]
- New in CUDA 6.5: performance improvements in [r]hypot[f], cbrt, sqrt, rsqrt, atan[f], exp[10][m1]f
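
A minimal sketch (not from the report) of a kernel that exercises a few of the double-precision extras listed above (rsqrt, sinpi, normcdf); math_demo and the arbitrary arithmetic are purely illustrative.

#include <cuda_runtime.h>
#include <math.h>

__global__ void math_demo(const double *in, double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        double x = in[i];
        double r = rsqrt(x + 1.0);   /* 1 / sqrt(x + 1) */
        double s = sinpi(x);         /* sin(pi * x) without multiplying by pi explicitly */
        double p = normcdf(x);       /* standard normal CDF */
        out[i] = r + s + p;
    }
}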

26 Math Performance Improvements
RSQRT improvement increases N-body performance by 15%.
- Significantly improved double-precision functions: [r]sqrt(), [r]hypot(), cbrt(), atan(), acosh()
- Significantly improved single-precision functions: [r]hypot(), atanf(), expf(), exp10f(), expm1f()
[Chart: double-precision GFLOPS of the N-body sample code on Tesla K40, CUDA 6.0 vs. CUDA 6.5, reaching 801 GFLOPS with CUDA 6.5]
CUDA 6.0 and CUDA 6.5 on K40m. Sample code:

27 Introducing NVIDIA cuDNN
- Forward and backward convolution routines tuned for NVIDIA GPUs
- Will be optimized for all future NVIDIA GPU generations
- Arbitrary dimension ordering, striding, and subregions for 4D tensors means easy integration into any neural net implementation
- Forward and backward paths for common layer types: ReLU, Sigmoid, Tanh, Pooling, Softmax
Download:
Contact:

28 Using Caffe with cuDNN
- Accelerates Caffe layer types by 1.2-3x
- Example: AlexNet Layer 2 forward: 1.9x faster convolution, 2.7x faster pooling
- Integrated into the Caffe dev branch today! (targeting official release with Caffe 1.0)
Overall AlexNet training time: Caffe (CPU*) 1x, Caffe (GPU) 11x, Caffe (cuDNN) 14x
*CPU is 24-core 2.4GHz with Intel MKL. Baseline Caffe compared to Caffe accelerated by cuDNN on K40.
