Integrating GPUs as fast co-processors into the existing parallel FE package FEAST

1 Integrating GPUs as fast co-processors into the existing parallel FE package FEAST. Dipl.-Inform. Dominik Göddeke, Mathematics III: Applied Mathematics and Numerics / Computer Science VII: Computer Graphics, University of Dortmund. ASIM 2006, 19th Symposium on Simulation Technique, Workshop on Implementational Issues in Scientific Computing, Hannover, Germany, September 14, 2006.

2 Acknowledgements This work is a collaboration with Christian Becker, Stefan Turek and the FEAST group in Dortmund; Robert Strzodka, Stanford University, Max Planck Center; and Patrick McCormick, Los Alamos National Laboratory.

3 Overview 1 Motivation and Background 2 Integration into FEAST 3 Preliminary Results 4 Summary and Conclusions

4 Overview 1 Motivation and Background 2 Integration into FEAST 3 Preliminary Results 4 Summary and Conclusions

5 Motivation We want to solve large systems arising from FEM discretisations quickly on commodity clusters. CPUs are general-purpose and only achieve close-to-peak performance in-cache; they devote most of their die area to memory (hierarchies) rather than to processing elements (PEs). Emerging parallel specialised chips are PE-dominated and potentially provide lots of FLOPS and huge memory bandwidth. Goal: Investigate how such designs can be used as numerical co-processors in scientific computing.

6 GPU Characteristics We focus on graphics processors as an example. High-level view of the GPU: a parallel array processor with up to 24 cores, deeply pipelined. Peak performance > 200 GFLOP/s, but very hard to achieve in practice; more importantly, sustained memory bandwidth > 20 GB/s for streaming access patterns despite tiny caches. Up to 1 GB onboard DDR4 memory clocked at > 1 GHz. High-end models cost about EUR 500 and consume Watts.

7 GPU Programming and Limitations Challenge: Reformulate algorithms for the data-stream based programming paradigm! GPUs are currently only programmable through graphics APIs, so some graphics background is required. Incoherent branches and incoherent memory access patterns are expensive. Full gather support, but very limited scatter and no read-modify-write! The hardware is only saturated with lots of parallel threads in flight. Most important limitations: The PCIe bus between the host system and the GPU delivers only up to 2 GB/s, and GPUs only provide quasi-IEEE 32-bit floating-point storage and arithmetic. No double precision!

8 Mathematical Background Test problem: Poisson equation in 2D, −Δu = f in some domain Ω ⊂ R², with Dirichlet BCs. Bilinear conforming finite elements (Q1) for increasing levels of refinement of the underlying quadrilateral mesh. The resulting linear system matrices comprise nine bands. Example: Unit square, multigrid in single and double precision: [Table: cycles, error and reduction rate per refinement level, for single and double precision.]
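
The nine bands arise because the Q1 stencil couples each node to its eight neighbours on the structured quadrilateral mesh. As an illustration only (this is not FEAST's actual data layout; the band storage convention below is a hypothetical one), a banded matrix-vector product on an nx-by-ny grid can be written as:

```python
import numpy as np

def matvec_9band(bands, x, nx, ny):
    """Multiply a nine-band matrix (Q1 stencil on a structured nx-by-ny grid)
    by a vector. 'bands' is assumed to hold one array of length nx*ny per band,
    ordered lower-left ... upper-right, with row-wise node numbering."""
    offsets = [-nx - 1, -nx, -nx + 1, -1, 0, 1, nx - 1, nx, nx + 1]
    n = nx * ny
    y = np.zeros_like(x)
    for band, off in zip(bands, offsets):
        if off >= 0:
            y[:n - off] += band[:n - off] * x[off:]
        else:
            y[-off:] += band[-off:] * x[:n + off]
    return y
```

Such banded storage is what makes the local problems amenable to streaming architectures: every band is a contiguous array that can be traversed with purely sequential memory access.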

9 Mixed Precision Iterative Refinement Single precision computation is insufficient for the required result accuracy, but high precision is only necessary at a few crucial stages! Mixed precision iterative refinement approach to solve Ax = b: Compute d = b − Ax in high precision. Solve Ac = d approximately in low precision. Update x = x + c in high precision and iterate. Arbitrary iterative inner solvers can be used, run until a few digits are gained locally. This fits naturally on the target hardware: few, high-precision updates on the CPU and the expensive low-precision iterative solution on the GPU. Exhaustive experimental and theoretical foundation: very robust with respect to solvers, degrees of anisotropy in the discretisation and matrix condition. The combined GPU-CPU scheme is up to five times faster than, and as accurate as, computing entirely on the CPU in double precision or emulating double precision on the GPU.
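
A minimal sketch of this scheme, using NumPy for illustration: the defect and the update are kept in float64 while the approximate inner solve runs in float32, standing in for the GPU solver; inner_solve is a hypothetical callback supplied by the caller.

```python
import numpy as np

def mixed_precision_refine(A, b, inner_solve, tol=1e-10, max_iter=50):
    """Iterative refinement: high-precision defect and update,
    low-precision (approximate) inner solve."""
    x = np.zeros_like(b, dtype=np.float64)
    A32 = A.astype(np.float32)
    for _ in range(max_iter):
        d = b - A @ x                                # defect in high precision
        if np.linalg.norm(d) <= tol * np.linalg.norm(b):
            break
        c = inner_solve(A32, d.astype(np.float32))   # approximate solve in low precision
        x += c.astype(np.float64)                    # update in high precision
    return x
```

Any inner solver that gains a digit or two per call works here; in the setting of this talk it is the low-precision multigrid running on the GPU.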

10 FEAST Solution Strategy ScaRC approach: Combine the advantages of (parallel) domain decomposition and multigrid methods. Exploit structured subdomains for high efficiency. Hide anisotropies locally to increase robustness. Globally unstructured, locally structured. Recursive solution: Smooth the outer, global multigrid with local multigrid on the refined macros. Low communication overhead.

11 Overview 1 Motivation and Background 2 Integration into FEAST 3 Preliminary Results 4 Summary and Conclusions

12 Integration into FEAST FEAST: Under development since 1999, 100K+ lines of code, tuned data structures, adaptations for clusters (MPI) and NEC vector machines. Consequence: A full rewrite to incorporate GPUs is out of the question! Goal: Minimally invasive integration. Some observations: The local sub-problems are all highly structured, and smoothing of the outer multigrid is performed locally with small communication overhead. The GPU backend therefore adds a new smoother, while FEAST maintains all global data structures. Data flow example: The outer MG calls the smoother, matrix and current defect are duplicated into GPU memory, smoothing is performed independently, and the correction term is read back to the CPU.
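
A minimal sketch of this call pattern, with NumPy stand-ins for the GPU (upload, download and the Jacobi smoother below are placeholders for illustration, not FEAST routines; the real backend goes through the graphics API):

```python
import numpy as np

def upload(a):       # stand-in for the host-to-GPU transfer (single precision)
    return np.asarray(a, dtype=np.float32)

def download(a):     # stand-in for the GPU-to-host read-back
    return np.asarray(a, dtype=np.float64)

def run_jacobi(A, d, sweeps):
    """Stand-in for the GPU multigrid/Jacobi smoother on one local problem."""
    c = np.zeros_like(d)
    diag = A.diagonal()
    for _ in range(sweeps):
        c += (d - A @ c) / diag
    return c

def gpu_local_smoother(local_matrix, local_defect, sweeps=2):
    """Minimally invasive backend call: FEAST keeps all global data structures,
    the backend only sees one local sub-problem and returns its correction."""
    dev_A = upload(local_matrix)     # duplicate matrix into GPU memory
    dev_d = upload(local_defect)     # duplicate current defect
    dev_c = run_jacobi(dev_A, dev_d, sweeps)
    return download(dev_c)           # correction term read back to the CPU
```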

13 Integration Issues The first prototype was straightforward to assemble. Expected a 5x speedup based on the performance of the standalone GPU-CPU iterative refinement multigrid (not MG-MG!) solver, but observed a disappointing break-even. Identified and addressed two main bottlenecks: Poor performance for small problem sizes (not enough parallel threads in flight to saturate the PEs), and transfers to and from on-chip memory ("manual prefetching"), CPU and GPU computations all being done sequentially. Resulting performance for MG-MG: 3.5x compared to the CPU-only solution on a single node (Athlon X, GeForce 7800 GTX). The GPU smoother only provides multigrid with local Jacobi; anisotropic macros require more powerful smoothers.

14 Performance Improvements Poor performance for small problem sizes: Outer MG with F or W cycles results in smoothing of (too) many (too) small problems; GPUs are inappropriate for small problems, plus there is an additional transfer overhead/penalty. Solution: Dynamic CPU-GPU switch based on problem size; small problems are rescheduled to a single precision CPU smoother. Overlapping transfers and computation: The CPU is idle while the GPU computes and vice versa, but some CPU support is required to orchestrate the GPU. Solution: Streaming compute model: smooth problem i on the GPU while transferring data for problems i + 1 and i − 1 and updating the defect for problem i − 1.
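
A sketch of the resulting scheduling logic (illustration only; problems are assumed to expose a size attribute, the 20K threshold is the one quoted with the test cases below, and the four callbacks are hypothetical wrappers around the CPU smoother and the GPU transfer/smoothing routines):

```python
THRESHOLD = 20_000  # switch criterion: problems below this size stay on the CPU

def schedule_smoothing(problems, cpu_smooth, upload_async, gpu_smooth_async,
                       download_and_update):
    """Dynamic CPU/GPU switch plus streaming pipeline: while the GPU smooths
    problem i, data for problem i+1 is transferred and the defect of
    problem i-1 is updated on the CPU."""
    small = [p for p in problems if p.size < THRESHOLD]
    gpu_jobs = [p for p in problems if p.size >= THRESHOLD]
    for p in small:
        cpu_smooth(p)                      # too small to saturate the GPU's PEs
    if gpu_jobs:
        upload_async(gpu_jobs[0])          # prime the pipeline
    for i, p in enumerate(gpu_jobs):
        gpu_smooth_async(p)                # smooth problem i on the GPU
        if i + 1 < len(gpu_jobs):
            upload_async(gpu_jobs[i + 1])  # overlap: transfer data for problem i+1
        if i > 0:
            download_and_update(gpu_jobs[i - 1])  # overlap: read back and update defect of i-1
    if gpu_jobs:
        download_and_update(gpu_jobs[-1])  # drain the pipeline
```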

15 Coarsely Adapted Grids The GPU offers MG with a Jacobi smoother; the CPU offers a wide range of MG smoothers, especially for anisotropic generalised tensor-product meshes. Goal: Many easy sub-problems are scheduled on the GPU, while the CPU smooths the few hard ones with a more powerful numerical scheme in the meantime. This is a hard dynamic scheduling problem; all tests so far are based on (suboptimal) static partitionings of the domain.
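
In the experiments below this reduces to a purely static split of the macro list by a geometric criterion; a hypothetical sketch (is_cartesian is an assumed, caller-supplied predicate):

```python
def static_partition(subdomains, is_cartesian):
    """Static (suboptimal) partitioning: isotropic, Cartesian macros are sent
    to the GPU Jacobi multigrid, anisotropic ones stay on the CPU with a
    stronger smoother (e.g. ADI-TriGS)."""
    gpu_jobs = [d for d in subdomains if is_cartesian(d)]
    cpu_jobs = [d for d in subdomains if not is_cartesian(d)]
    return gpu_jobs, cpu_jobs
```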

16 Overview 1 Motivation and Background 2 Integration into FEAST 3 Preliminary Results 4 Summary and Conclusions

17 Test Environment Cluster with 32 compute nodes and 1 master node: dual Intel EM64T 3.4 GHz, NVIDIA Quadro FX1400 PCIe mid-range graphics card, fully connected via Infiniband. Two test cases: A: Fully Cartesian case, static 3:1 scheduling GPU:CPU, both using MG-Jacobi as the local smoother for the outer (parallel) MG. B: Coarsely adapted grids, where some sub-problems require a more powerful smoother (CPU) while Cartesian sub-problems are scheduled to the GPU. Level 10 computations are missing for all test cases, and the results are preliminary since we did not have time to adapt FEAST to the Xeon architecture.

18 Test Case A CPU vs. CPU-GPU, one or two jobs per dual node. [Figure: CPU/GPU performance study for 1x16p and 2x16p (threshold = 20K); absolute time (seconds) and normalised time (seconds per macro grid node) for solution, plotted against refinement level.] Configurations: 1x16p CPU: 16 nodes, one CPU each, one CPU process. 1x16p GPU: 16 nodes, one GPU each, one GPU process. 2x16p CPU: 16 nodes, two CPUs each, two CPU processes. 2x16p GPU: 16 nodes, two CPUs and one GPU each, one CPU and one GPU process.

19 Test Case A CPU vs. CPU-GPU, one or two jobs per dual node. [Figure: same performance study as the previous slide, absolute and normalised time for solution vs. refinement level.] The second CPU job per node gains only 20% performance (shared FSB). The CPU-GPU configurations jiggle because of the CPU switch in the GPU module at small levels. For large problem sizes, the GPU outperforms the CPU jobs; 1x16 GPU is even faster than 2x16 CPU!

20 Test Case A CPU vs. CPU-GPU scalability test. [Figure: CPU/GPU performance study for 1x32p and 2x16p (threshold = 20K); absolute time (seconds) and normalised time (seconds per macro grid node) for solution, plotted against refinement level.] Configurations: 2x16p CPU: 16 nodes, two CPUs each, two CPU processes. 2x16p GPU: 16 nodes, two CPUs and one GPU each, one CPU and one GPU process. 1x32p CPU: 32 nodes, one CPU each, one CPU process. 1x32p GPU: 32 nodes, one CPU and one GPU each, one GPU process.

21 Test Case A CPU vs. CPU-GPU scalability test. [Figure: same performance study as the previous slide, absolute and normalised time for solution vs. refinement level.] The significant gain of 1x32 shows the importance of memory bandwidth for the Xeons. The CPU-GPU configuration wins by a smaller margin than before. Tendency: Increasing the problem size leads to increasing time per grid node on the CPU, but not on the GPU.

22 Test Case B Coarsely adapted grids. [Figure: CPU/GPU performance study for 2x16p (threshold = 20K); absolute time (seconds) and normalised time (seconds per macro grid node) for solution, plotted against refinement level.] Configurations: 2x16p CPU: 16 nodes, CPU-MG-ADITRIGS and CPU-MG-JACOBI. 2x16p GPU: 16 nodes, CPU-MG-ADITRIGS and GPU-MG-JACOBI.

23 Test Case B Coarsely adapted grids. [Figure: same performance study as the previous slide, absolute and normalised time for solution vs. refinement level.] The results are consistent with the previous graphs. Additional advantage of the CPU-GPU configuration: less strain on the memory subsystem.

24 Overview 1 Motivation and Background 2 Integration into FEAST 3 Preliminary Results 4 Summary and Conclusions

25 Summary and Conclusions Clearly work in progress in its current state, but with interesting perspectives: an inexpensive upgrade of commodity clusters with respect to TCO, and the potential to accelerate production codes. But: We maintain two code lines on the solver and data structure level, not on the application level. Paradigm shift to data parallelism: multicores, Cell BE etc., so start learning now. The first honest attempt at petascale computing, the IBM Roadrunner at LANL, will contain multi-GPUs, Cells and Opterons and will in general be a massively parallel hybrid machine.

26 Further Reading Owens et al., A Survey of General-Purpose Computation on Graphics Hardware, Eurographics 2005 State of the Art Report. Papers, demos, forums, FAQs. Tutorials: goeddeke/gpgpu.
