
1 Advanced MD performance tuning. High Performance Molecular Dynamics, Bologna, 15/09/2017

2 General strategy for improving performance
- Request CPU time from an HPC centre and investigate what resources are available. Try to understand what these resources are and how they can be applied to your simulations. Can I use GPUs? Multi-core processors?
- Choose an MD program taking into account performance on the available computer system, and not just functionality or scientific relevance. Read the manual, ask for technical help, etc. before even setting up the simulation.
- Run a few sample simulations and read the program output for any hints on how to improve performance.
- When the parameters are ready, perform scaling tests to determine the optimum number of nodes to use (see the sketch after this list). CPU budgets are limited, so the best option may not be the one with the highest performance.
- As the project progresses, be prepared to modify and test new options according to the results. For example, the system may become inhomogeneous, or you may need to apply restraints, which could affect performance and the parallelisation options.
- Make sure the results are still correct! Be careful about modifying cut-offs or other options which may affect the correctness of the simulation.
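A minimal sketch of such a scaling test, not taken from the slides: the same input is submitted at increasing node counts and the ns/day reported at the end of each log is compared. The scheduler style (SLURM), 36-core nodes, the binary name (mdrun) and the input file (topol.tpr) are all assumptions to be adapted.

# Scaling test sketch: submit the same run on 1-32 nodes
for nodes in 1 2 4 8 16 32; do
    sbatch -N $nodes --ntasks-per-node=36 --job-name=scale_$nodes \
           --wrap="mpirun mdrun -v -s topol.tpr -resethway -noconfout -deffnm scale_$nodes"
done
# Afterwards, compare the 'Performance:' lines (ns/day) in scale_*.log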

3 Some performance options to consider - GROMACS (mdrun)
-npme       number of cores dedicated to the PME calculation; for n total cores a common starting point is n/4 for n > 16 (but check the manual; see also tune_pme)
-gcom       frequency of the exchange of energies
-nstlist    neighbour-list update frequency (default: 10)
-resethway  reset the timers halfway through the run (for benchmarks)
-noconfout  switches off the final configuration output (benchmarks only)
-dlb        dynamic load balancing (default: auto)
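As an illustration only (not from the slide), these flags might be combined in a single mdrun invocation as follows; the rank count and the values chosen for -npme, -gcom and -nstlist are hypothetical and need to be tuned for the actual system:

# -npme 32    : ~1/4 of the 128 ranks dedicated to PME
# -gcom 20    : exchange energies/global data only every 20 steps
# -nstlist 40 : less frequent neighbour-list updates
# -dlb yes    : force dynamic load balancing on
# -resethway -noconfout : benchmark-only options
mpirun -n 128 mdrun -v -s topol.tpr -npme 32 -gcom 20 -nstlist 40 -dlb yes -resethway -noconfout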

4 Some performance options to consider - NAMD
fullElectFrequency   number of timesteps between full electrostatic evaluations (default: the non-bonded frequency), e.g. calculate the long-range electrostatics every 4 fs.
rigidBonds           controls how SHAKE is used (default: none). With 'water', the O-H and H-H distances in water are constrained.
outputEnergies       frequency of the energy output; very frequent energy evaluation will slow the simulation (especially on GPUs).
PMEProcessors        number of processors for the FFT and reciprocal sum.
numinputprocs, numoutputprocs   parallel I/O options for very large simulations.
useCompressedPsf     use compressed .psf files with the memory-optimised build of NAMD for very large simulations.
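A hypothetical NAMD configuration-file excerpt (not from the slide) illustrating these keywords; all values are examples only and must be chosen for the actual system:

timestep            2.0     ;# 2 fs timestep
rigidBonds          water   ;# constrain water geometry (SHAKE/SETTLE)
nonbondedFreq       1       ;# short-range non-bonded interactions every step
fullElectFrequency  2       ;# full PME electrostatics every 2 steps = 4 fs
outputEnergies      500     ;# do not print energies too often (important on GPUs)
PMEProcessors       64      ;# processors used for the FFT/reciprocal sum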

5 Test case 1: performance and scaling of lignocellulose with GROMACS. It was reported at a conference that GROMACS on an Omni-Path network gave much worse performance than on an InfiniBand network. Can this be true? If so, why, given that Omni-Path should be very highly optimised for programs like GROMACS? Time to investigate...

6 Simulations of LignoCellulose on Marconi/Broadwell. The LignoCellulose-rf benchmark, used in the study and in the PRACE benchmark suite, is relatively large (~3M atoms) and uses Reaction Field instead of PME electrostatics. The .tpr input is available from the PRACE website as a tar.gz archive. It is not clear from the publication how the simulations were run, only that optimised parameters were used.

7 Simulations of LignoCellulose on Marconi/Broadwell. First attempts using the default GROMACS options and no OpenMP threads. The results do not look so bad, but comparing with the published data we see they are really poor: a maximum performance of 12 ns/day instead of the published figure.
[Figure: GROMACS LignoCellulose on Marconi A1, performance (ns/day) vs #nodes, default ('std') settings.]

8 Simulations of LignoCellulose on Marconi/Broadwell. Perhaps the output from GROMACS can help:

NOTE: 74 % of the run time was spent communicating energies,
you might want to use the -gcom option of mdrun
...
Finished mdrun on rank 0 Thu Apr 27 18:04:

A big clue here: the simulation is very large, so communication is going to be important. An online search reveals other possible options for large simulations: -resethway (reset the time counters), -noconfout (don't output the final configuration), -nstlist (neighbour-list size).

9 Simulations of LignoCellulose on Marconi/Broadwell. Try again with these options:

mpirun -n <tasks> mdrun -v -s topol.tpr -gcom 20 -resethway -noconfout -nstlist <n>

Much better, although still slightly lower performance than that reported for InfiniBand.
[Figure: GROMACS LignoCellulose on Marconi A1, performance (ns/day) vs #nodes, 'std' vs 'optimised'.]

10 Simulations of LignoCellulose on Marconi/Broadwell. We might be able to do better, but it makes sense to measure the performance with some performance tools. GROMACS has been compiled with Intel MPI on Marconi, so we can use the Intel performance tools to profile the program. Intel Trace Analyser and Collector (ITAC) is easy to use because the original program does not need to be recompiled:

source $INTEL_HOME/itac_2017/bin/itacvars.sh
export OMP_NUM_THREADS=1
mpirun -trace -n 32 mdrun -v -s topol.tpr -resethway -noconfout -gcom 20
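After the traced run, the collected data can be opened in the Trace Analyzer GUI; the trace file name used here is an assumption (by default it follows the name of the traced executable):

# Open the trace produced by the -trace run (file name assumed to be mdrun.stf)
traceanalyzer ./mdrun.stf &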

11 Simulations of LignoCellulose on Marconi/Broadwell. Performance analysis, 1 node: very little MPI use; this is good because it means the cores are not wasting time communicating. Mainly point-to-point communications (MPI_Sendrecv), but also some collectives (MPI_Bcast).

12 Simulations of LignoCellulose on Marconi/Broadwell. Performance analysis, 32 nodes.

13 Simulations of LignoCellulose on Marconi/Broadwell. Performance analysis, 128 nodes: the program time is very heavily dominated by MPI calls, particularly collective calls (MPI_Bcast).

14 Simulations of LignoCellulose on Marconi/Broadwell. Performance analysis: the ITAC results show not only that communication is, as expected, important, but also which MPI calls are involved (MPI_Bcast). Intel MPI gives the possibility of changing the algorithm used for particular MPI calls; for MPI_Bcast this is controlled by I_MPI_ADJUST_BCAST:
1. Binomial
2. Recursive doubling
3. Ring
4. Topology-aware binomial
5. Topology-aware recursive doubling
6. Topology-aware ring
7. Shumilin's
8. Knomial
9. Topology-aware SHM-based flat
10. Topology-aware SHM-based Knomial
11. Topology-aware SHM-based Knary
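One way to choose among these, sketched here as an assumption rather than taken from the slides, is simply to sweep the algorithm number and compare the ns/day reported by GROMACS; the rank count (120 Broadwell nodes x 36 cores) and the file names are hypothetical:

# Try each MPI_Bcast algorithm in turn (0 = library default) and record a separate log
for alg in $(seq 0 11); do
    export I_MPI_ADJUST_BCAST=$alg
    mpirun -n 4320 mdrun -v -s topol.tpr -gcom 20 -resethway -noconfout -deffnm bcast_$alg
done
grep Performance bcast_*.log    # ns/day for each algorithm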

15 GROMACS performance as a function of the MPI broadcast algorithm: export I_MPI_ADJUST_BCAST=<algorithm> (0 = default). In this example I_MPI_ADJUST_BCAST=3 gives a small performance boost, but the default (0) is still OK.
[Figure: LignoCellulose, 120 nodes: GROMACS performance (ns/day) as a function of the Intel MPI_Bcast algorithm.]

16 Test case 2: NAMD on KNL. NAMD, as well as offering similar functionality to GROMACS, is also highly optimised and shows good parallel scalability. The programming model, though, is rather particular: it is not based directly on MPI but on a library called Charm++ (not to be confused with the force-field/MD program CHARMM). For Intel KNL processors, NAMD and Intel suggest the SMP (Symmetric Multi-Processor) build of NAMD, based on a mixed task/thread-like parallelisation (similar to, but not the same as, MPI/OpenMP). For multi-node runs it is essential to allocate cores for communication. Unfortunately, the NAMD-SMP launch syntax is complicated, e.g.:

mpirun -n $nodes -perhost 1 namd2.smp +ppn 134 +pemap <map> +commap 67 stmv.namd

where -n is the number of KNL nodes, -perhost the tasks per node, +ppn the worker threads per node, +pemap the thread-to-core mapping and +commap the core dedicated to communications; for many nodes the number of communication cores needs to be increased.
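A hypothetical job-script fragment putting this together; the node count, the log file name and the assumption that logical CPUs 0-133 cover two hardware threads on each of the 67 compute cores are not from the slide and must be adapted to the node's actual CPU numbering:

# NAMD-SMP launch sketch for 68-core KNL nodes
NODES=8
PPN=134         # worker (PE) threads per node: 2 per core on 67 compute cores
COMM_CORE=67    # physical core left free to drive communication
PEMAP="0-133"   # hypothetical mapping of the 134 PE threads to logical CPUs
mpirun -n $NODES -perhost 1 namd2.smp \
    +ppn $PPN +pemap $PEMAP +commap $COMM_CORE stmv.namd > stmv_${NODES}nodes.log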

17 NAMD KNL results. For these benchmarks we used the same options throughout, so especially for few nodes the performance may not be the optimum.
[Figures: performance (ns/day) vs #nodes on Marconi for STMV (NAMD 2.12, Broadwell vs KNL) and APOA1 (BDW vs KNL).]

18 NAMD KNL - Summary. For Intel KNL you need to use the NAMD-SMP build. Our results show that for NAMD, using KNL instead of Broadwell or similar only makes sense for very large systems (e.g. millions of atoms), although careful tuning of the Charm++ options may give better results. There is no advantage to using KNL flat mode instead of cache mode.
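For reference, a hedged sketch (not from the slides) of how a flat-mode run might be steered towards MCDRAM: in flat mode the MCDRAM usually appears as a separate NUMA node (often node 1, but check with numactl -H), so the per-node NAMD process can be wrapped with numactl; all values here are assumptions:

# Flat-mode test: prefer MCDRAM (assumed to be NUMA node 1) for NAMD's allocations
mpirun -n 8 -perhost 1 numactl --preferred=1 namd2.smp +ppn 134 +pemap 0-133 +commap 67 stmv.namd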

19 Case study 3: GROMACS DPPC on Intel Skylake. For an Omni-Path network (e.g. Marconi) Intel recommends reserving some cores per node to drive the network. On Broadwell only a slight difference is observed, but for Skylake the difference is significant! (No data for 46 cores/node since GROMACS cannot perform the domain decomposition, as 23 is a prime factor.)
[Figure: GROMACS DPPC on Skylake, performance (ns/day) vs #nodes, 46 cores/node vs 48 cores/node.]
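An illustrative way (an assumption, not from the slide) of reserving two of the 48 Skylake cores per node for the Omni-Path driver is simply to place only 46 MPI ranks per node; the node count and binary name are hypothetical:

# 16 nodes, 46 of 48 cores per node used for GROMACS, 2 left free for the network
mpirun -n $((16*46)) -perhost 46 mdrun -v -s topol.tpr -resethway -noconfout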

20 Potassium channel Kir3.2 Marconi A1

21 Potassium Channel (Kir 3.2), Galileo

22 Potassium Channel (Kir 3.2), Galileo

23 Summary of GROMACS benchmarks on K80 GPUs.
[Table: run | MPI ranks | OMP threads | GPU | ns/day]
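A hypothetical example (not from the slide) of the kind of run the table columns refer to, i.e. setting the number of MPI ranks, OpenMP threads per rank and the GPU mapping explicitly; the specific values are assumptions:

# 4 MPI ranks, 4 OpenMP threads each, two ranks pinned to each of the 2 K80 GPUs
mpirun -n 4 mdrun -v -s topol.tpr -ntomp 4 -gpu_id 0011 -resethway -noconfout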

24 Lipid Bilayer (DPPC)

25 Lipid Bilayer (DPPC)

26 Marconi A3
