A TALENTED CPU-TO-GPU MEMORY MAPPING TECHNIQUE

Abu Asaduzzaman, Deepthi Gummadi, and Chok M. Yip
Department of Electrical Engineering and Computer Science
Wichita State University, Wichita, Kansas, USA

Abstract -- High performance computing (HPC) is essential for fast, effective analysis of large systems. The NVIDIA Compute Unified Device Architecture (CUDA)-assisted central processing unit (CPU) and graphics processing unit (GPU) computing platform has proven its potential for HPC support. In CPU/GPU computing, the original data and instructions are copied from CPU main memory to GPU global memory. Inside the GPU, it is beneficial to keep the data in shared memory (shared only by the threads of one block) rather than in global memory (shared by all threads). However, GPU shared memory is much smaller than GPU global memory (for the Fermi Tesla C2075, total shared memory per block is 48 KB while total global memory is 5.6 GB). In this paper, we introduce a CPU-main-memory to GPU-global-memory mapping technique that improves GPU and overall system performance by increasing the effectiveness of GPU shared memory. Experimental results from solving Laplace's equation for a 512x512 matrix using Fermi and Kepler cards show that the proposed CPU-to-GPU memory mapping technique helps decrease the overall execution time by more than 75%.

Index Terms -- Cache memory organization; CUDA architecture; electric charge distribution; GPU memory; high performance computing

I. INTRODUCTION

A modern CPU consists of a number of cores optimized for sequential serial processing, while a GPU consists of hundreds of smaller, more efficient cores designed to handle multiple tasks simultaneously. GPUs assist CPUs as powerful, energy-efficient accelerators in the data centers of many small and medium businesses around the world. GPU-accelerated computing is the use of a GPU together with a CPU to accelerate scientific, engineering, and enterprise applications. The CUDA Version 5.5 toolkit helps developers obtain the best performance; its parallelism and optimization techniques simplify programming for the CUDA-capable GPU architecture. NVIDIA has also announced CUDA 6, the latest version of its GPU programming platform, which adds a Unified Memory capability, as shown in Figure 1. Unified Memory relieves programmers from the trials and tribulations of manually copying data back and forth between separate CPU and GPU memory spaces [1].

Fig. 1. CUDA memory model: (a) unified and (b) actual.

Data in GPU global memory takes more time to process than data in GPU shared memory. In CUDA-assisted multithreaded programming, a thread usually processes data that are not in consecutive CPU-memory locations. Such CPU data may not automatically qualify for placement in GPU shared memory; as a result, overall system performance may decrease significantly.

From the introduction of dual-core netbook machines (in 2005) to today's 16-core workstation computers, parallel processing has become a reality. Today, command-prompt machines are almost out; multithreaded CPU/GPU computers are in [2]. To take advantage of multicore systems, software engineers are developing parallel applications that also meet the requirements of growing high-performance computation. NVIDIA CUDA/GPU technology provides multithreading without context switching [3].
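To make the Figure 1 distinction concrete, the following minimal sketch (ours, not from the paper) contrasts the explicit copy model that a CUDA 5.5 program uses with a CUDA 6 managed allocation; the kernel and buffer names are illustrative.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;                 /* same work in both models */
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    /* (b) Actual model: separate CPU and GPU spaces, explicit copies. */
    float *h = (float *)malloc(bytes), *d = NULL;
    for (int i = 0; i < n; i++) h[i] = 1.0f;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);  /* CPU -> GPU */
    scale<<<(n + 255) / 256, 256>>>(d, n);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);  /* GPU -> CPU */
    cudaFree(d);
    free(h);

    /* (a) Unified model (CUDA 6): one pointer visible to both sides. */
    float *u = NULL;
    cudaMallocManaged(&u, bytes);
    for (int i = 0; i < n; i++) u[i] = 1.0f;          /* CPU writes directly */
    scale<<<(n + 255) / 256, 256>>>(u, n);
    cudaDeviceSynchronize();                          /* before the CPU reads u */
    printf("u[0] = %f\n", u[0]);
    cudaFree(u);
    return 0;
}
```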

However, without a proper mapping from CPU memory to GPU memory, GPU shared memory may not be used efficiently. Therefore, a smart memory mapping technique is needed to improve GPU performance as well as overall system performance. This work aims to develop a methodology for rearranging data while copying it from the CPU to the GPU, so that the data associated with the threads of a GPU block resides together and fits in GPU shared memory, thereby improving performance.

The rest of the paper is organized as follows. Section II motivates the work by presenting some important related articles. Section III introduces the proposed CPU-to-GPU data mapping technique. Experimental details are described in Section IV. Experimental results are discussed in Section V. Finally, this work is concluded in Section VI.

II. BACKGROUND AND MOTIVATION

In this section, we briefly discuss the CPU cache memory hierarchy, GPU memory organization, data-level parallelism, and the traditional CPU-to-GPU memory/data mapping.

A. CPU Memory Organization

Most contemporary CPUs (from Intel, AMD, and IBM) have a multicore architecture in which each core has its own private level-1 cache (CL1). The cache memory organization of such a multicore system also includes a private or shared level-2 cache (CL2) and main memory. CL1 is usually split into an instruction cache (I1) and a data cache (D1), whereas CL2 is usually unified. The cache memory organization of an Intel-like 4-core CPU system is illustrated in Figure 2 [4].

Fig. 2. Intel-like CPU cache memory organization.

B. GPU Memory Organization

In a multicore-CPU/manycore-GPU platform that supports CUDA applications, the user starts the application on the CPU. The initialization and serial parts are executed on the CPU; the data and code for the parallel parts are sent to the GPU card. Figure 3 illustrates a typical CPU-GPU organization. For each parallel part, multiple threads are generated and executed concurrently on the GPU cores. A GPU provides different types of memory: global memory is the largest, is available to all computational blocks, and is visible to each and every thread in the same compute grid; shared memory belongs to a computational block and is visible only to the threads running within that block. Shared memory is very fast to access but much smaller in capacity than global memory. GPU shared memory helps improve performance mainly because (i) it is dedicated to a CUDA block and (ii) it is closer to the processing cores (see Figure 3). The results from the GPU are sent back to the CPU.

Fig. 3. GPU memory organization [2].

C. Data Level Parallelism

Data parallelism is an important parallel processing technique because it can take advantage of the locality principle. In data parallelism, a program is decomposed into concurrent units that execute the same instructions on distinct data [5], [6]. Massachusetts Institute of Technology (MIT) researchers introduced two data-parallelism strategies for concurrent execution: spatial data partitioning (SDP) and temporal data partitioning (TDP) [7].

In the SDP strategy, data is divided among processes by spatial index (Figure 4). This strategy is applicable when the spatial data dimensions are large and there are few dependencies; some additional instructions are needed to enable communication and synchronization. Parallelizing with SDP decreases the latency of the application, and load balancing tends to be easy because the application performs the same amount of work on all spatial indices.

Fig. 4. Two parallelization strategies [7].

In the TDP strategy, data is divided among processes by temporal index: each process performs the computation on all spatial indices associated with its assigned temporal index, as illustrated in Figure 4. In a typical TDP implementation, a process executes all instructions on the data starting from its assigned temporal index, and the communication in the parallel implementation is application dependent. This strategy is applicable when the temporal data dimension is large with few dependences. Parallelizing with TDP increases the throughput of the application while its latency remains the same, and load balancing is easy even when the computation varies tremendously between inputs. Experimental results show that a pure TDP implementation achieves the best throughput, while a pure SDP implementation achieves the best latency, although with a loss of quality [7].
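To make the contrast concrete, the following sketch (our own illustration, not code from [7]) shows the two decompositions for a workload of F temporal frames, each holding N spatial points; the worker functions and the process() placeholder are hypothetical names.

```cuda
/* Hypothetical per-element computation (placeholder). */
static float process(float x) { return 0.25f * x; }

/* SDP: worker w of W owns spatial points [w*N/W, (w+1)*N/W) and walks
 * through every temporal frame. Latency drops, and load balance is easy
 * because every spatial slice receives the same amount of work. */
void sdp_worker(int w, int W, int F, int N, float *data) {
    for (int f = 0; f < F; f++)
        for (int i = w * N / W; i < (w + 1) * N / W; i++)
            data[f * N + i] = process(data[f * N + i]);
}

/* TDP: worker w owns whole frames w, w+W, w+2W, ... and executes all
 * instructions on each assigned frame. Throughput rises while the
 * per-frame latency stays the same. */
void tdp_worker(int w, int W, int F, int N, float *data) {
    for (int f = w; f < F; f += W)
        for (int i = 0; i < N; i++)
            data[f * N + i] = process(data[f * N + i]);
}
```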

III. COPYING CPU-DATA TO GPU-MEMORY

First we discuss the traditional method of copying CPU data to GPU global memory. Then we present our proposed technique of moving CPU data to GPU global memory in a way that increases performance.

A. Traditional CPU-to-GPU Memory Mapping

In traditional GPU computing, data (and instructions) from CPU memory are copied into GPU memory as shown in Figure 5. The data for a single block/thread is directly copied from CPU memory to GPU global memory. As a result, the data may end up stored in different GPU-global-memory blocks, which makes it difficult (if not impossible) to stage that data into GPU shared memory. Therefore, a new CPU-to-GPU memory mapping is needed to improve GPU shared memory performance.

Fig. 5. Traditional CPU to GPU global memory mapping.

B. Proposed CPU-to-GPU Memory Mapping

In this work, we propose a novel CPU-main-memory to GPU-global-memory mapping technique to increase system performance. As shown in Figure 6, CPU data should be regrouped in such a way that the data associated with the same thread is stored in consecutive memory locations. This data regrouping and mapping is done on the CPU at run-time.

Fig. 6. Proposed CPU memory to GPU shared memory mapping.

According to this mapping strategy, data X1, X2, etc. from different CPU-memory locations are stored together in GPU global memory. Unlike the traditional method, this organization allows the data to be kept in GPU shared memory, which increases performance.
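A minimal host-side sketch of this regrouping step follows (our illustration of Figure 6, not the authors' code; the index table idx and all names are hypothetical). The host gathers the scattered elements that each CUDA block will process into a contiguous staging buffer and then issues one bulk copy, so each block's slice of GPU global memory is consecutive and small enough to load into its shared memory.

```cuda
#include <cuda_runtime.h>

/* Gather the CPU-resident elements needed by each CUDA block into a
 * contiguous staging buffer, then copy the staged buffer to the GPU.
 * idx[b * per_block + t] is the CPU-memory index of the t-th element
 * that block b will process (built once, at run-time, on the CPU). */
void copy_regrouped(const float *cpu_data, float *staged, float *gpu_global,
                    const int *idx, int blocks, int per_block) {
    for (int b = 0; b < blocks; b++)          /* regroup on the CPU */
        for (int t = 0; t < per_block; t++)
            staged[b * per_block + t] = cpu_data[idx[b * per_block + t]];

    /* One bulk CPU-to-GPU copy; block b's data now occupies the
     * consecutive range [b*per_block, (b+1)*per_block). */
    cudaMemcpy(gpu_global, staged,
               (size_t)blocks * per_block * sizeof(float),
               cudaMemcpyHostToDevice);
}
```

Inside the kernel, block b can then read gpu_global[b * per_block + t] with consecutive, coalesced accesses straight into __shared__ storage.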

IV. EXPERIMENTAL DETAILS

In this section, we discuss the CPU/GPU system parameters, the 2D electric charge distribution problem, and the developed CUDA/C code for the GPU with and without shared memory.

A. CPU/GPU System Parameters

We use two popular GPU cards (Fermi and Kepler) with a multicore CPU. The system configuration parameters for the workstation are summarized in Table I. The dual-processor (quad-core per processor) workstation runs at 2.13 GHz. The Fermi card has 14 streaming multiprocessors (SMs), each with 32 CUDA cores; the Kepler card has 13 SMs, each with 192 CUDA cores. The operating system used is Debian 6.0.

TABLE I
SYSTEM PARAMETERS

Parameter               Description
CPU                     Intel Xeon
CPU Cores               8
CPU RAM                 6 GB
Fermi GPU Card          NVIDIA Tesla C2075
Fermi GPU Cores         448
Fermi Clock Speed       1.15 GHz
Fermi Global Memory     5.4 GB
Fermi Shared Memory     49 KB/Block
Kepler GPU Card         NVIDIA Tesla K20m
Kepler GPU Cores        2496
Kepler Clock Speed      0.71 GHz
Kepler Global Memory    4.8 GB
Kepler Shared Memory    49 KB/Block
Operating System        Linux Debian 6.0

B. 2D Electric Charge Distribution

In many cases, when the charge distribution is not known, Poisson's equation can be used to solve electrostatic problems. For materials with electric potential φ and medium permittivity ε, based on finite-difference approximations, Laplace's equation (a specialized form of Poisson's equation) for a 2D problem can be written as Equation 1:

\epsilon_{x(i,j)} (\phi_{i+1,j} - \phi_{i,j})/dx + \epsilon_{y(i,j)} (\phi_{i,j+1} - \phi_{i,j})/dy - \epsilon_{x(i-1,j)} (\phi_{i,j} - \phi_{i-1,j})/dx - \epsilon_{y(i,j-1)} (\phi_{i,j} - \phi_{i,j-1})/dy = 0    (1)

where dx and dy are the spatial grid sizes, \phi_{i,j} is the electric potential defined at lattice point (i, j), and \epsilon_{x(i,j)} and \epsilon_{y(i,j)} are the effective x- and y-direction permittivities defined at the edges of element cell (i, j). For a uniform material, the permittivity can be considered the same in all directions; Equation 1 then reduces to the 2D problem shown in Equation 2, which can be solved using the discrete approach:

(\phi_{i+1,j} - \phi_{i,j})/dx + (\phi_{i,j+1} - \phi_{i,j})/dy - (\phi_{i,j} - \phi_{i-1,j})/dx - (\phi_{i,j} - \phi_{i,j-1})/dy = 0    (2)

The multithreaded CUDA/C shared-memory implementation of the 2D Laplace's equation for charge distribution is shown in Figure 7. Here, the right values of i (i.e., the current threadIdx.x) and j (i.e., threadIdx.y) for each thread, together with the shared variables As[i][j], are used for memory-latency-hiding optimization. Thread executions are synchronized to ensure correctness.

Fig. 7. Main loop in CUDA/C to solve Laplace's equation for charge distribution.
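Because Figure 7 is reproduced only as an image, the following is a sketch in its spirit rather than the authors' listing. With dx = dy, solving Equation 2 for \phi_{i,j} gives the four-neighbor average computed below; each 16x16 thread block stages its tile in a shared array As[][] before computing (halo handling at the tile borders is omitted for brevity).

```cuda
#define TILE 16

__global__ void laplace_step(const float *phi, float *phi_new, int n) {
    __shared__ float As[TILE][TILE];           /* tile in fast shared memory */
    int i = blockIdx.x * TILE + threadIdx.x;   /* global column */
    int j = blockIdx.y * TILE + threadIdx.y;   /* global row */

    if (i < n && j < n)
        As[threadIdx.y][threadIdx.x] = phi[j * n + i];
    __syncthreads();                           /* whole tile loaded */

    /* Update interior points whose four neighbors sit in the tile. */
    if (threadIdx.x > 0 && threadIdx.x < TILE - 1 &&
        threadIdx.y > 0 && threadIdx.y < TILE - 1 &&
        i < n - 1 && j < n - 1)
        phi_new[j * n + i] = 0.25f *
            (As[threadIdx.y][threadIdx.x - 1] + As[threadIdx.y][threadIdx.x + 1] +
             As[threadIdx.y - 1][threadIdx.x] + As[threadIdx.y + 1][threadIdx.x]);
}
```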

V. RESULTS AND DISCUSSION

We conduct the electric charge distribution experiment (see Equation 2) using the code sample shown in Figure 7. We implement three versions of the program: (i) CPU-only, (ii) GPU without shared memory, and (iii) GPU with shared memory. While copying data from CPU main memory to GPU global memory, we apply our proposed technique so that the shared memory can be used efficiently.

A. Validation of the CUDA/C Programs

To validate the developed CUDA/C programs, we consider an 8x8 matrix. As shown in Figure 8, Node(4,4), Node(4,5), Node(5,4), and Node(5,5) are initially set to a high value and all other nodes are set to a low value of 0 (zero). The nodes immediately outside the 8x8 matrix are also set to 0 (as a boundary condition).

Fig. 8. An 8x8 matrix with boundary condition.

Using the CPU/C and CUDA/C (without GPU shared memory) codes, we calculate the new values of all nodes of the matrix as stated in Equation 3, where 1 \le n \le 8 and 1 \le m \le 8:

N_{n,m} = \frac{1}{5} (N_{n,m-1} + N_{n,m+1} + N_{n,m} + N_{n-1,m} + N_{n+1,m})    (3)

The program stops when each and every node has a value less than 1. Figure 9 shows the values of Node(1,1), Node(3,4), Node(5,5), and Node(8,8) after iterations 1, 10, 50, and 100. As expected, the CPU/C and CUDA/C versions produce exactly the same value for each node after any number of iterations.

Fig. 9. Validation of the developed CUDA/C code.
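A minimal CPU/C reference for this validation loop follows (our reconstruction of the setup in Figures 8 and 9; the exact high seed value is not recoverable from the text, so HIGH is a placeholder).

```cuda
#include <stdio.h>
#include <string.h>

#define N 8
#define HIGH 1000.0f  /* placeholder: the paper's high seed value is elided */

int main(void) {
    /* (N+2)x(N+2) storage provides the zero boundary ring of Figure 8. */
    float a[N + 2][N + 2] = {{0.0f}}, b[N + 2][N + 2] = {{0.0f}};
    a[4][4] = a[4][5] = a[5][4] = a[5][5] = HIGH;

    for (int it = 1; it <= 100; it++) {
        for (int n = 1; n <= N; n++)
            for (int m = 1; m <= N; m++)   /* Equation 3 */
                b[n][m] = (a[n][m - 1] + a[n][m + 1] + a[n][m] +
                           a[n - 1][m] + a[n + 1][m]) / 5.0f;
        memcpy(a, b, sizeof(a));
        if (it == 1 || it == 10 || it == 50 || it == 100)
            printf("iter %3d: N(1,1)=%.4f N(5,5)=%.4f\n",
                   it, a[1][1], a[5][5]);
    }
    return 0;
}
```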

B. Impact of the Number of Threads

In the experiments, execution time decreases as the number of threads increases, as illustrated in Figure 10. Results show that for a small number of threads (fewer than 8), Kepler takes more time than Fermi, but for a large number of threads (more than 16), Fermi takes more time than Kepler. Kepler is slower than Fermi below 16 threads because Fermi runs at a faster clock rate (Fermi at 1.15 GHz, Kepler at 0.71 GHz); Fermi is slower than Kepler above 16 threads because Fermi has fewer load/store units than Kepler (Fermi has 16 units, Kepler has 32 units).

Fig. 10. GPU time vs. number of threads.

C. Impact of GPU Shared Memory

For 16x16 threads, both execution times decrease as the amount of GPU shared memory used increases (as shown in Figure 11). It should be noted that Fermi takes less time than Kepler. This is probably because Fermi runs at a faster clock speed and has a larger memory bus width than Kepler (Fermi bus width 384-bit, Kepler bus width 320-bit).

Fig. 11. GPU time vs. shared memory used.

D. Impact of CPU-to-GPU Memory Mapping

Finally, we evaluate the impact of the CPU-to-GPU memory mapping technique. Figure 12 shows the execution times on the Fermi card while solving Laplace's equation for electric charge distribution on a 512x512 thin surface. For more than 9x9 threads, GPU shared memory shows improvement. Beyond 16x16 threads, execution time increases; this is probably due to the limit of 16 load/store units. The experimental results indicate that the proposed CPU-to-GPU memory mapping with GPU shared memory provides the best performance.

Fig. 12. Impact of data regrouping.

VI. CONCLUSION

NVIDIA CUDA-accelerated GPU computing has the potential to provide fast and inexpensive solutions to massively large/complex problems. In CPU/GPU computing, CPU data is first copied into GPU global memory, and it is beneficial to keep that data in GPU shared memory rather than in GPU global memory. Because shared memory is much smaller than global memory, efficient CPU-memory to GPU-global-memory mapping algorithms are required to improve performance. In this paper, we present a CPU-to-GPU memory mapping technique that enhances GPU (as well as overall system) performance. We implement three solutions (CPU-only, CPU/GPU without shared memory, and CPU/GPU with shared memory) to solve Laplace's equation for electric charge distribution on a 2D thin surface using NVIDIA Fermi (448 cores) and Kepler (2496 cores) GPU cards. Experimental results clearly support the usefulness of GPU shared memory for both GPU cards. Results also show that properly regrouping CPU data while copying it into GPU global memory helps improve performance. Based on the experimental results, the proposed CPU-to-GPU memory mapping technique is capable of decreasing the overall execution time by more than 75%.

In much research, including the computational analysis of composite materials, where the modeling and simulation of nanocomposites (which requires a large number of computations) is the primary challenge, high performance computing is a must. We plan to extend this CPU-to-GPU memory mapping technique to study composite materials for aircraft applications in our next endeavor.

REFERENCES

[1] M. Harris,
[2] B. Chapman, G. Jost, and R. D. Pas, Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation). The MIT Press, Oct. 2007.
[3] T. Edison, "GPU memory system,"
[4] A. Asaduzzaman, "A power-aware multi-level cache organization effective for multi-core embedded systems," in JCP.
[5] K. S. McKinley, S. Carr, and C. Tseng, "Improving data locality with loop transformations," ACM Transactions on Programming Languages and Systems, vol. 18, no. 4, p. 424, 1996.
[6] P. J. Denning, "The locality principle," in Communication Networks and Computer Systems, 2006.
[7] H. Hoffmann, A. Agarwal, and S. Devadas, "Partitioning strategies for concurrent programming," Massachusetts Institute of Technology (MIT), CSAIL.
