Improving NAMD Performance on Volta GPUs


1 Improving NAMD Performance on Volta GPUs David Hardy - Research Programmer, University of Illinois at Urbana-Champaign Ke Li - HPC Developer Technology Engineer, NVIDIA John Stone - Senior Research Programmer, University of Illinois at Urbana-Champaign

2 Journey of a Legacy HPC Application NAMD has been developed for more than 20 years. Parallel scaling of large systems on supercomputers - Blue Waters (NCSA), Titan (ORNL), Stampede (TACC), Summit (ORNL). First full-featured molecular dynamics code to adopt CUDA - Stone, et al., J Comput Chem, 28:2618-2640, 2007. Why were certain design choices made? What lessons have we learned? Where does development need to go in the Age of Volta?

3 NAMD & VMD: Computational Microscope Enable researchers to investigate systems described at the atomic scale. NAMD - molecular dynamics simulation. VMD - visualization, system preparation, and analysis. Example systems: neuron, ribosome, virus capsid.

4 Simulated System Sizes Have Increased Exponentially [Plot: number of atoms (log scale, up to 10^8) over time, from lysozyme through ApoA1, ATP synthase, ribosome, and STMV to the HIV capsid]

5 Parallelism in Molecular Dynamics Limited to Each Timestep Computational workflow of MD: initialize coordinates, then repeat each timestep: force calculation (about 99% of computational work) followed by coordinate update (about 1% of computational work), with forces and coordinates exchanged between the two stages. Occasional output of reduced quantities (energy, temperature, pressure) and occasional output of coordinates (trajectory snapshot).

6 Work Dominated by Nonbonded Forces 90% non-bonded forces, short-range cutoff; 5% long-range electrostatics, gridded (e.g. PME); 2% bonded forces (bonds, angles, etc.); 2% correction for excluded interactions; 1% integration, constraints, thermostat, barostat. Apply GPU acceleration first to the most expensive part.

7 Parallelize by Spatial Decomposition of Atoms Data parallelism is common to many codes

8 NAMD Also Decomposes Compute Objects Kale et al., J. Comp. Phys. 151:283-312, 1999. Spatially decompose data and communication; compute objects form a separate but related work decomposition. The compute objects create a much greater amount of parallelism and facilitate iterative, measurement-based load balancing, all through the use of Charm++.

9 Overlap Calculations, Offload Nonbonded Forces Phillips et al., SC2002. Nonbonded force computation is offloaded to the GPU; compute objects are assigned to processors and queued as data arrives.

10 Early Nonbonded Forces Kernel Used All Memory Systems Start with the most expensive calculation: direct nonbonded interactions. Decompose work into pairs of patches, identical to NAMD's structure. GPU hardware assigns patch pairs to multiprocessors dynamically. Force computation on a single multiprocessor (the GeForce 8800 GTX has 16): texture unit for force-table interpolation (8 kB cache); 16 kB shared memory holding patch A coordinates and parameters; 32-way SIMD multiprocessor with multiplexed threads and 32 kB of registers holding patch B coordinates, parameters, and forces; constant memory for exclusions (8 kB cache); 768 MB main memory with no cache and 300+ cycle latency.

11 NAMD Performance Improved Using Early GPUs ApoA1 performance, for full NAMD, not a test harness, on a 2.67 GHz Core 2 Quad Extreme + GeForce 8800 GTX: a useful performance boost, with 8x speedup for nonbonded forces, 5x speedup overall without PME, and 3.5x speedup overall with PME (GPU = quad-core CPU). Plans for better performance: overlap GPU and CPU work; tune or port the remaining work (PME, bonded, integration, etc.). [Chart: seconds per step for CPU vs. GPU, broken into nonbonded, PME, and other]

12 Reduce Communication Latency by Separating Work Units Phillips et al., SC2008. [Diagram: one timestep; coordinates x go to the GPU, which computes remote forces f first and local forces second, so the CPU can exchange remote forces with other nodes/processes while local work proceeds, then performs the local update of x]

13 Early GPU Fits Into Parallel NAMD as Coprocessor Offload the most expensive calculation: non-bonded forces. Fits into the existing parallelization and extends the existing code without modifying core data structures. Requires work aggregation and kernel scheduling considerations to optimize remote communication. The GPU is treated as a coprocessor.

14 NAMD Scales Well on Kepler-Based Computers [Plot (GTC 2017, 2 fs timestep): performance (ns per day) vs. number of nodes for a 21M-atom system, comparing the Kepler-based Blue Waters XK7 and Titan XK7 (GTC16) with Edison XC30 and Blue Waters XE6 (SC14)]

15 Large Rate Difference Between Pascal and CPU A 20x FLOP-rate difference between GPU and CPU requires full use of CPU cores and vectorization! The balance between GPU and CPU capability keeps shifting towards the GPU; NVIDIA's plots show only through Pascal, and Volta widens the performance gap! The difference is made worse by multiple GPUs per CPU (e.g. DGX, Summit). Past efforts to balance work between GPU and CPU are now CPU bound.

16 Balancing Work Between GPU and CPU Case Study: Multilevel Summation CUDA Kernels in VMD Hardy, et al., J. Parallel Computing, 35:164-177, 2009. An effort to balance computational work between GPUs and the CPU: the GPU gets only straightforward data-parallel algorithms with regularized work units; the CPU gets algorithms less well suited to the GPU, plus overflow work. Electrostatic field of a chromatophore model (partial model: ~10M atoms; Le, et al., PLoS Comput Bio, 6:e , 2010) computed with the multilevel summation method on 3 GPUs (G80) in ~90 seconds, 46x faster than a single CPU socket.

17 Time-Averaged Electrostatics Analysis on NCSA Blue Waters Preliminary performance for VMD time-averaged electrostatics with the Multilevel Summation Method on the NCSA Blue Waters Early Science System. Node types: Cray XE6 compute node with 32 CPU cores (2x AMD 6200 CPUs); Cray XK6 GPU-accelerated compute node with 16 CPU cores + NVIDIA X2090 (Fermi) GPU. Measured in seconds per trajectory frame for one compute node, GPU-accelerated XK6 nodes are 4.15x faster overall than CPU-only XE6 nodes. Tests on XK7 nodes indicate MSM is CPU-bound with the Kepler K20X GPU: performance is not much faster (yet) than with the Fermi X2090, with XK7 nodes 4.3x faster overall. Need to move spatial hashing, prolongation, and interpolation onto the GPU (in progress).

18 Multilevel Summation on the GPU Accelerate the short-range cutoff and lattice cutoff parts. [Table: performance profile for a 0.5 Å map of potential for 1.5 M atoms on an Intel QX6700 CPU with an NVIDIA GTX 280; columns are CPU time (s), time with GPU (s), and speedup for each computational step: short-range cutoff, long-range anterpolation (0.18 s), restriction (0.16 s), lattice cutoff, prolongation (0.17 s), interpolation (3.47 s), and total] Multilevel summation of electrostatic potentials using graphics processing units. D. Hardy, J. Stone, K. Schulten. J. Parallel Computing, 35:164-177, 2009.

19 Balancing Work Between GPU and CPU Case Study: Multilevel Summation CUDA Kernels in VMD Conclusions: successful (at the time) exploitation of task- and data-level parallelism; work was reasonably well balanced between CPUs and GPUs for Tesla and Fermi; for Kepler and later, the approach became CPU bound.

20 Reduce Latency, Offload All Force Computation Overlapped GPU communication and computation (2012). Offload atom-based work for PME (2013) - use higher-order interpolation with a coarser grid; emphasis on improving communication latency by reducing parallel FFT communication. Faster nonbonded force kernels (2016). Offload entire PME using cuFFT, for single-node use (2016); see the sketch below. Offload remaining force terms (2017) - includes bonds, angles, dihedrals, impropers, crossterms, and exclusions. Emphasis on using GPUs more effectively.
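As a rough illustration of the single-node PME offload, the sketch below pushes the reciprocal-space FFTs of a charge grid through cuFFT. The grid size, function and variable names, and the omitted k-space convolution are placeholders, not NAMD's actual code.

#include <cufft.h>

// Hypothetical single-node PME reciprocal-space step: forward 3D R2C FFT of
// the gridded charges, convolution in k-space, inverse C2R FFT back to a
// potential grid. d_kGrid must hold N*N*(N/2+1) cufftComplex values.
void pmeReciprocal(cufftReal* d_chargeGrid, cufftComplex* d_kGrid) {
    const int N = 128;                          // placeholder grid dimension
    cufftHandle fwd, bwd;
    cufftPlan3d(&fwd, N, N, N, CUFFT_R2C);
    cufftPlan3d(&bwd, N, N, N, CUFFT_C2R);
    cufftExecR2C(fwd, d_chargeGrid, d_kGrid);   // charges -> k-space
    // ... scale d_kGrid by the PME Green's function / B-spline factors ...
    cufftExecC2R(bwd, d_kGrid, d_chargeGrid);   // k-space -> potential grid
    cufftDestroy(fwd);
    cufftDestroy(bwd);
}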

21 Overlapped GPU Communication and Computation Allows incremental results from a single grid to be processed on the CPU before the grid finishes on the GPU, and allows merging and prioritizing of remote and local work. GPU side: - Write results to host-mapped memory (also works without streaming) - __threadfence_system() and __syncthreads() - Atomic increment for next output queue location - Write result index to output queue. CPU side: - Poll end of output queue (int array) in host memory. A minimal sketch of this recipe follows.
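The CUDA sketch below follows the recipe above, assuming resultBuf and outputQueue were allocated with cudaHostAlloc(..., cudaHostAllocMapped); all names (streamResults, computeForce, queueTail) are illustrative, not NAMD identifiers.

__device__ float computeForce();   // stand-in for the real per-thread work

__global__ void streamResults(float* resultBuf,        // host-mapped results
                              int* outputQueue,        // host-mapped, pre-filled with -1
                              unsigned int* queueTail) // device-side counter
{
    // 1. write this block's results to host-mapped memory
    resultBuf[blockIdx.x * blockDim.x + threadIdx.x] = computeForce();
    __threadfence_system();   // make each thread's writes visible to the host
    __syncthreads();          // the whole block has now written and fenced
    if (threadIdx.x == 0) {
        // 2. atomically claim the next output queue location
        unsigned int slot = atomicInc(queueTail, 0xffffffffu);
        // 3. publish the result index; the polling CPU thread sees it appear
        outputQueue[slot] = blockIdx.x;
    }
}

// CPU side: spin on the next entry of outputQueue until it changes from -1,
// then process that block's results while the kernel is still running.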

22 Non-overlapped Kernel Communication Integration unable to start until GPU kernel finishes

23 Overlapped Kernel Communication GPU kernel communicates results while running; patches begin integration as soon as data arrives

24 Faster Non-bonded Force Computation in NAMD Two levels of spatial sorting: the simulation box is divided into patches, and within each patch, atoms are sorted spatially into groups of 32 using an orthogonal recursive bisection method (sketched below).
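A host-side sketch of what such an orthogonal recursive bisection might look like, assuming a bare-bones Atom type; NAMD's actual implementation differs.

#include <algorithm>
#include <vector>

struct Atom { double x, y, z; };

// Recursively split [lo,hi) at the median of the longest bounding-box axis
// until each group holds at most 32 atoms.
void orbSort(std::vector<Atom>& atoms, int lo, int hi) {
    if (hi - lo <= 32) return;
    double mn[3] = { 1e300, 1e300, 1e300 }, mx[3] = { -1e300, -1e300, -1e300 };
    for (int i = lo; i < hi; ++i) {
        const double p[3] = { atoms[i].x, atoms[i].y, atoms[i].z };
        for (int d = 0; d < 3; ++d) {
            mn[d] = std::min(mn[d], p[d]);
            mx[d] = std::max(mx[d], p[d]);
        }
    }
    int axis = 0;                                  // longest axis of the box
    for (int d = 1; d < 3; ++d)
        if (mx[d] - mn[d] > mx[axis] - mn[axis]) axis = d;
    int mid = lo + (hi - lo) / 2;                  // median split
    std::nth_element(atoms.begin() + lo, atoms.begin() + mid, atoms.begin() + hi,
        [axis](const Atom& a, const Atom& b) {
            const double pa[3] = { a.x, a.y, a.z }, pb[3] = { b.x, b.y, b.z };
            return pa[axis] < pb[axis];
        });
    orbSort(atoms, lo, mid);
    orbSort(atoms, mid, hi);
}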

25 Faster Non-bonded Force Computation A compute handles all pairwise interactions between two patches (e.g. Compute 1 for patches 1 and 2, Compute 2 for patches 2 and 3). For the GPU, each compute is split into tiles of 32x32 atoms.

26 Faster Non-bonded Force Computation One warp per tile: the warp loops through its 32x32 tile diagonally, which avoids race conditions when storing the forces F_i and F_j on the atoms of patches i and j, and a bitmask is used for exclusion lookup. A sketch of the diagonal traversal follows.
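A hedged sketch of the diagonal traversal using warp shuffles; pairForce is a stand-in for the real interaction, and the cutoff test and exclusion bitmask lookup are omitted.

__device__ float3 pairForce(float4 xi, float4 xj);  // hypothetical interaction

// One warp, one 32x32 tile: thread t keeps i-atom t, while the j-atoms (and
// their force accumulators) rotate around the warp. On step k, thread t
// handles pair (i = t, j = (t + k) % 32), so every pair is visited exactly
// once and no two threads ever hold the same i- or j-atom: no races on F_i
// or F_j, and no atomics needed inside the tile.
__device__ void tileForces(const float4* xqi, const float4* xqj,
                           float3* fi_out, float3* fj_out) {
    const unsigned FULL = 0xffffffffu;
    int lane = threadIdx.x & 31;
    float4 xi = xqi[lane];                      // i-atom stays on this thread
    float4 xj = xqj[lane];                      // j-atom travels around the warp
    float3 fi = { 0, 0, 0 }, fj = { 0, 0, 0 };  // fj travels with xj
    for (int k = 0; k < 32; ++k) {
        float3 f = pairForce(xi, xj);
        fi.x += f.x; fi.y += f.y; fi.z += f.z;
        fj.x -= f.x; fj.y -= f.y; fj.z -= f.z;
        int src = (lane + 1) & 31;              // rotate j data one lane over
        xj.x = __shfl_sync(FULL, xj.x, src);
        xj.y = __shfl_sync(FULL, xj.y, src);
        xj.z = __shfl_sync(FULL, xj.z, src);
        xj.w = __shfl_sync(FULL, xj.w, src);
        fj.x = __shfl_sync(FULL, fj.x, src);
        fj.y = __shfl_sync(FULL, fj.y, src);
        fj.z = __shfl_sync(FULL, fj.z, src);
    }
    fi_out[lane] = fi;
    fj_out[lane] = fj;   // after 32 rotations, fj is back at its home lane
}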

27 Neighbor List Construction on GPU [Diagram: a coarse bounding-box neighbor list and a refined atom-based neighbor list, each built within the cutoff R and sorted before the force computation]

28 Neighbor List Sorting Tile lists executed in the same thread block should have approximately the same workload; otherwise some warps in the block sit idle (load imbalance). A simple solution is to sort the tile lists according to length, which also minimizes tail effects at the end of kernel execution. See the sketch below.
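Assuming the tile-list lengths and a permutation array already live in device memory, the sort itself can be a single Thrust call; the names here are illustrative.

#include <thrust/device_vector.h>
#include <thrust/functional.h>
#include <thrust/sort.h>

// Reorder tile lists by descending length so that thread blocks scheduled
// together do similar amounts of work and the longest lists launch first.
void sortTileLists(thrust::device_vector<int>& tileListLength,
                   thrust::device_vector<int>& tileListOrder) {
    thrust::sort_by_key(tileListLength.begin(), tileListLength.end(),
                        tileListOrder.begin(), thrust::greater<int>());
}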

29 Single-Node GPU Performance Competitive on Maxwell New kernels by Antti-Pekka Hynninen, NVIDIA Stone, Hynninen, et al., International Workshop on OpenPOWER for HPC (IWOPH'16), 2016

30 More Improvement from Offloading Bonded Forces GPU offloading for bonds, angles, dihedrals, impropers, exclusions, and crossterms. Computation is in single precision; forces and virials are accumulated in fixed point (sketched below), and a code path exists for double-precision accumulation on Pascal and newer GPUs. Reduces the CPU workload and hence improves performance on GPU-heavy systems. [Chart: speedup on DGX-1 for apoa1, f1atpase, and stmv] New kernels by Antti-Pekka Hynninen, NVIDIA.
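A sketch of the fixed-point accumulation idea; the scale factor is an assumption, not NAMD's actual constant.

// Integer atomics are associative, so summing forces into a 64-bit
// fixed-point accumulator gives a bitwise-reproducible total regardless of
// the order in which threads contribute (unlike floating-point atomics).
#define FORCE_SCALE ((double)(1ll << 40))   // assumed scaling, not NAMD's value

__device__ void atomicAddFixed(long long* acc, float f) {
    long long fixed = (long long)((double)f * FORCE_SCALE);
    // two's-complement arithmetic makes signed addition through the
    // unsigned 64-bit atomic correct
    atomicAdd((unsigned long long*)acc, (unsigned long long)fixed);
}

__host__ double fixedToDouble(long long acc) {  // convert back after the kernel
    return (double)acc / FORCE_SCALE;
}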

31 Supercomputers Increasing GPU to CPU Ratio Blue Waters and Titan, with Cray XK7 nodes: 1 K20 per 16-core AMD Opteron. Summit nodes: 6 Voltas per 42 IBM POWER9 cores. Only 7 cores supporting each Volta!

32 Limited Scaling Even After Offloading All Forces Results on NVIDIA DGX-1 (28-core Intel Haswell host with Volta V100 GPUs). [Plot: STMV (1 million atoms) performance (ns per day) vs. number of NVIDIA Voltas, comparing NAMD 2.13 offloading all forces against offloading nonbonded forces only]

33 CPU Integrator Calculation (1%) Causing Bottleneck Nsight Systems profiling of NAMD running STMV (1M atoms) on 1 Volta and 28 CPU cores shows too much CPU work (2200 patches across 28 cores, with patches running sequentially within each core) and too much communication; the GPU is not being kept busy between the nonbonded, bonded, and PME kernels.

34 CPU integrator work is mostly data parallel, but Uses double precision for positions, velocities, and forces - data layout is array of structures (AOS), not well suited to vectorization Each NAMD patch runs its integrator in a separate user-level thread to make the source code more accessible - the benefit from vectorization is reduced, since the loop over atoms is per patch Too many exceptional cases handled within the same code path - e.g. fixed atoms, pseudo-atom particles (Drude and lone pairs) - conditionals for simulation options and rare events (e.g. trajectory output) are tested every timestep

35 CPU integrator work is mostly data parallel, but Additional communication is required - send reductions for kinetic energies and the virial - receive broadcast of periodic cell rescaling when simulating constant pressure A few core algorithms have sequential parts with reduced parallelism, not suitable for CPU vectorization - rigid bond constraints: successive relaxation over each hydrogen group - random number generation: Gaussian numbers using the Box-Muller transform with rejection (see the sketch after this list) - reductions calculated over hydrogen groups: irregular data access patterns
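For reference, here is a minimal version of such a rejection-based Gaussian sampler (the polar variant of the Box-Muller transform); std::mt19937 stands in for NAMD's actual RNG. The data-dependent retry loop is exactly what defeats vectorization: each SIMD lane would finish after a different number of attempts.

#include <cmath>
#include <random>

double gaussian(std::mt19937& gen) {
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    double x, y, s;
    do {                                  // rejection: retry until the point
        x = u(gen);                       // falls inside the unit disk
        y = u(gen);
        s = x * x + y * y;
    } while (s >= 1.0 || s == 0.0);
    return x * std::sqrt(-2.0 * std::log(s) / s);
}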

36 Strategies for Overcoming Bottleneck Data structures for CPU vectorization - convert atom data storage from AOS (array of structures) form into vector-friendly SOA (structure of arrays) form, as sketched after this list Algorithms for CPU vectorization - replace non-vectorizing RNG code with a vectorized version - replace the sequential rigid bond constraints algorithm with one capable of fine-grained parallelism (maybe LINCS or Matrix-SHAKE) Offload integrator to GPU - main challenge is aggregating patch data - use vectorized algorithms, adapt cuRAND for Gaussian random numbers
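A small sketch of the AOS-to-SOA conversion and the kind of loop it enables; the struct and field names are illustrative, not NAMD's.

#include <vector>

// AOS: one struct per atom. A loop over one field strides through memory,
// which compilers struggle to turn into packed SIMD operations.
struct AtomAOS { double x, y, z, vx, vy, vz; };

// SOA: one contiguous array per field. The update below is unit-stride and
// trivially vectorizable.
struct AtomsSOA {
    std::vector<double> x, y, z, vx, vy, vz;
};

void updatePositions(AtomsSOA& a, double dt) {
    const std::size_t n = a.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        a.x[i] += dt * a.vx[i];
        a.y[i] += dt * a.vy[i];
        a.z[i] += dt * a.vz[i];
    }
}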

37 Goal: Developing GPU-based NAMD CPU primarily manages GPU kernel launching - CPU prepares and aggregates data structures for the GPU and handles Charm++ communication Reduces overhead of host-to-device memory transfers - data lives on the GPU as clusters of patches - communicate edge patches for force calculation and atom migration Design new data structures capable of GPU or CPU vectorization - major refactor of the code to reveal more data parallelism Kernel-based design with consistent interfaces across GPU and CPU - need fallback kernels for the CPU (e.g. to continue supporting Blue Waters and Titan)

38 Some Conclusions For HPC apps in general and NAMD in particular: The balance between CPU and GPU computational capability continues to shift in favor of the GPU - the 1% workload from 10 years ago is now 60% or more of wall-clock time Any past code that has attempted to load-balance work between CPU and GPU is today likely to be CPU bound Best utilization of the GPU might require keeping all data on the GPU - this motivates turning a previously CPU-based code using the GPU as an accelerator into a GPU-based code using the CPU as a kernel-management and communication coprocessor Volta + CUDA 9.x + CUDA Toolkit libraries provide enough general-purpose support to allow moving an application entirely to the GPU

39 Related Talks S ORNL Summit: Petascale Molecular Dynamics Simulations on the Summit POWER9/Volta Supercomputer - James Phillips S Accelerating Molecular Modeling Tasks on Desktop and Pre-Exascale Supercomputers - John Stone S Optimizing HPC Simulation and Visualization Codes Using NVIDIA Nsight Systems - Robert Knight, Daniel Horowitz, John Stone S VMD: Biomolecular Visualization from Atoms to Cells Using Ray Tracing, Rasterization, and VR - John Stone SE OpenACC User Group Meeting - Sunita Chandrasekaran, Guido Juckeland, John Stone, Randy Allen

40 Acknowledgments Special thanks to: - NVIDIA CUDA Team - NVIDIA Nsight Systems Team - James Phillips, NCSA, UIUC - In memoriam: Antti-Pekka Hynninen, NVIDIA - Theoretical Biophysics Research Group, Beckman Institute, UIUC - Parallel Programming Laboratory, Dept. of Computer Science, UIUC Grants: - NIH P41-GM (Center for Macromolecular Modeling and Bioinformatics) - Summit Center for Accelerated Application Readiness, OLCF, ORNL
