Clustering Optimizations How to achieve optimal performance? Pak Lui

1 Clustering Optimizations How to achieve optimal performance? Pak Lui

2 130 Applications Best Practices Published Abaqus CPMD LS-DYNA MILC AcuSolve Dacapo minife OpenMX Amber Desmond MILC PARATEC AMG DL-POLY MSC Nastran PFA AMR Eclipse MR Bayes PFLOTRAN ABySS FLOW-3D MM5 Quantum ESPRESSO ANSYS CFX GADGET-2 MPQC RADIOSS ANSYS FLUENT GROMACS NAMD SPECFEM3D ANSYS Mechanics Himeno Nekbone WRF BQCD HOOMD-blue NEMO CCSM HYCOM NWChem CESM ICON Octopus COSMO Lattice QCD OpenAtom CP2K LAMMPS OpenFOAM For more information, visit: 2

3 Agenda Overview Of HPC Application Performance Ways To Inspect/Profile/Optimize HPC Applications CPU/Memory, File I/O, Network System Configurations and Tuning Case Studies, Performance Optimization and Highlights STAR-CCM+ ANSYS Fluent Conclusions 3

4 HPC Application Performance Overview To achieve scalable performance on HPC applications: Involves understanding the workload by performing profile analysis Tune for where the most time is spent (CPU, network, I/O, etc.) Underlying implicit requirement: each node should perform similarly Run CPU/memory/network tests or a cluster checker to identify bad node(s) (a per-node check sketch follows below) Comparing behavior when using different HW components helps pinpoint bottlenecks in different areas of the HPC cluster A selection of HPC applications will be shown To demonstrate the method of profiling and analysis To determine the bottleneck in SW/HW To determine the effectiveness of tuning to improve performance 4
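A minimal per-node uniformity check, sketched here under assumptions not taken from the slides (hostnames, the STREAM binary path and the thread count are placeholders): run STREAM on each node and flag any node whose Triad bandwidth is noticeably lower than its peers.

    # run STREAM on every compute node and report the Triad bandwidth per node
    for node in node01 node02 node03 node04; do
        bw=$(ssh "$node" "OMP_NUM_THREADS=20 ./stream_c.exe" | awk '/^Triad/ {print $2}')
        echo "$node Triad ${bw} MB/s"
    done
    # nodes reporting clearly lower Triad numbers are candidates for closer
    # inspection (BIOS settings, DIMM population, failing hardware)

The same loop pattern applies to a per-node latency or single-node HPL check; cluster checker tools automate these comparisons.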

5 Ways To Inspect and Profile Applications Computation (CPU/Accelerators) Tools: top, htop, perf top, pstack, Visual Profiler, etc Tests and Benchmarks: HPL, STREAM File I/O Bandwidth and Block Size: iostat, collectl, darshan, etc Characterization Tools and Benchmarks: iozone, ior, etc Network Interconnect Tools and Profilers: perfquery, MPI profilers (IPM, TAU, etc) Characterization Tools and Benchmarks: Latency and Bandwidth: OSU benchmarks, IMB 5
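As a concrete example of the network characterization step, a two-node latency/bandwidth run with the OSU micro-benchmarks or IMB might look as follows (Open MPI style mpirun, placeholder hostnames and binary paths; not taken from the slides):

    # point-to-point latency and bandwidth between two nodes
    mpirun -np 2 --host node01,node02 ./osu_latency
    mpirun -np 2 --host node01,node02 ./osu_bw
    # the Intel MPI Benchmarks cover similar ground
    mpirun -np 2 --host node01,node02 ./IMB-MPI1 PingPong

Comparing these numbers against the fabric's expected figures for every suspect node pair quickly separates a mis-cabled or mis-configured link from an application-level issue.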

6 Case Study: STAR-CCM+ 6

7 STAR-CCM+ STAR-CCM+ An engineering process-oriented CFD tool Client-server architecture, object-oriented programming Delivers the entire CFD process in a single integrated software environment Developed by CD-adapco 7

8 Objectives The presented research was done to provide best practices: CD-adapco performance benchmarking; Interconnect performance comparisons; Ways to increase CD-adapco productivity; Power-efficient simulations The presented results will demonstrate: The scalability of the compute environment; The scalability of the compute environment/application; Considerations for higher productivity and efficiency 8

9 Test Cluster Configuration Dell PowerEdge R720xd 32-node (640-core) Jupiter cluster Dual-Socket 10-Core Intel Xeon E5-2680 V2 @ 2.80 GHz CPUs (Static Max Perf in BIOS) Memory: 64GB DDR3 memory per node OS: RHEL 6.2, OFED InfiniBand SW stack Hard Drives: 24x 250GB 7.2K RPM SATA 2.5" drives on RAID 0 Intel Cluster Ready certified cluster Mellanox Connect-IB FDR InfiniBand and ConnectX-3 Ethernet adapters Mellanox SwitchX 6036 VPI InfiniBand and Ethernet switches MPI: Mellanox HPC-X v1.0.0 (based on OMPI), Platform MPI, Intel MPI Application: STAR-CCM+ (version as noted, unless specified otherwise) Benchmarks: Lemans_Poly_17M (Epsilon Euskadi Le Mans car external aerodynamics) 9

10 STAR-CCM+ Performance Network FDR InfiniBand delivers the best network scalability performance Provides up to 208% higher performance than 10GbE at 32 nodes Provides up to 191% higher performance than 40GbE at 32 nodes FDR IB scales linearly while 10/40GbE shows scalability limitations beyond 16 nodes 208% 191% Higher is better 20 Processes/Node 10

11 STAR-CCM+ Profiling Network InfiniBand reduces network overhead, resulting in higher CPU utilization Reducing MPI communication overhead with an efficient network interconnect As less time is spent on the network, overall application runtime improves Ethernet solutions consume more time in communication: 73%-95% of overall time is spent in the network due to Ethernet congestion While FDR IB spends about 38% of overall runtime in the network Higher is better 20 Processes/Node 11

12 STAR-CCM+ Profiling MPI Comm. Time Identified MPI overheads by profiling communication time Dealt with communication in collective, point-to-point and non-blocking operations 10/40GbE vs FDR IB: spends longer time in MPI_Allreduce, MPI_Bcast, MPI_Recv, MPI_Waitany 12
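The per-call communication breakdown on this and the following slides is the kind of data a lightweight MPI profiler such as IPM (listed on slide 5) produces. A hypothetical way to collect it, assuming a dynamically linked application, an Open MPI style launcher and a placeholder library path:

    export IPM_REPORT=full                       # print the per-call summary at MPI_Finalize
    export LD_PRELOAD=/opt/ipm/lib/libipm.so     # interpose IPM on the MPI calls
    mpirun -np 640 --hostfile hosts -x LD_PRELOAD -x IPM_REPORT ./mpi_application

STAR-CCM+ launches MPI through its own wrapper, so in practice the environment variables have to be propagated through that wrapper rather than through a plain mpirun line.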

13 STAR-CCM+ Profiling MPI Comm. Time Observed MPI time spent by different network hardware FDR IB: MPI_Test(46%), MPI_Waitany(13%), MPI_Alltoall (13%), MPI_Reduce(13%) 10GbE: MPI_Recv(29%), MPI_Waitany(25%), MPI_Allreduce(15%), MPI_Wait(10%) 13

14 STAR-CCM+ Performance Software Versions Improvement in the latest STAR-CCM+ results in higher performance at scale The newer version demonstrated a 28% gain compared to the previous version on a 32-node run A slight change in communication pattern helps to improve scalability The improvement gap is expected to widen at scale See subsequent slides on MPI profiling for the differences 28% 7% 14% Higher is better 20 Processes/Node 14

15 STAR-CCM+ Profiling MPI Time Spent Communication time has dropped with the latest STAR-CCM+ version Less time is spent in MPI, although the communication pattern is roughly the same MPI_Barrier time is reduced significantly between the two releases Higher is better 20 Processes/Node 15

16 STAR-CCM+ Performance-MPI Implementations STAR-CCM+ makes various MPI implementations available to run The default MPI implementation used in STAR-CCM+ is Platform MPI MPI implementations start to differentiate beyond 8 nodes Optimization flags have already been set in the vendor's startup scripts Support for HPC-X is based on the existing Open MPI support in STAR-CCM+ HPC-X provides 21% higher scalability than the alternatives 21% Higher is better
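Selecting the MPI implementation is done through the STAR-CCM+ launcher. A sketch of such a launch line is shown below; the exact option name has varied across releases (-mpi vs. -mpidriver) and the core count, host file and simulation file are placeholders, so it should be checked against the release notes of the version in use:

    starccm+ -batch run -np 640 -machinefile hostfile.txt -mpi openmpi lemans_poly_17m.sim

With the vendor startup scripts already setting the tuned flags, switching MPI implementations is typically a matter of changing this one option.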

17 STAR-CCM+ Performance Single/Dual Port The benefit of deploying dual-port InfiniBand is demonstrated at scale Running with dual ports provides up to 11% higher performance at 32 nodes Connect-IB sits in a PCIe Gen3 x16 slot, which can provide additional throughput across 2 links 11% Higher is better 20 Processes/Node 17

18 STAR-CCM+ Performance Turbo Mode Enabling Turbo mode results in higher application performance Up to 17% improvement seen by enabling Turbo mode Higher performance gain seen with higher node count Turbo boosts the base frequency, which consequently results in higher power consumption The kernel tool msr-tools can be used to adjust Turbo mode dynamically, allowing Turbo to be turned on/off at the OS level (a sketch follows below) 13% 17% Higher is better 20 Processes/Node 18
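A sketch of toggling Turbo from the OS with msr-tools, as the slide describes. On the Intel CPUs of this era, bit 38 of IA32_MISC_ENABLE (MSR 0x1A0) disables Turbo; the core range and the bit position should be verified for the specific CPU before writing, the commands require root, and they must be repeated on every compute node:

    modprobe msr                     # expose /dev/cpu/*/msr
    rdmsr -p 0 0x1a0                 # read the current value on core 0
    # disable Turbo by setting bit 38 on every core (20 cores per node here)
    for c in $(seq 0 19); do
        v=0x$(rdmsr -p "$c" 0x1a0)
        wrmsr -p "$c" 0x1a0 $(( v | (1 << 38) ))
    done
    # re-enable Turbo by clearing the bit instead: wrmsr -p "$c" 0x1a0 $(( v & ~(1 << 38) ))

Because the change takes effect immediately, the Turbo on/off comparison can be made without rebooting into a different BIOS profile.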

19 STAR-CCM+ Performance File IO Advantages of staging data to a temporary file system in memory A data write of ~8GB occurs at the end of the run for this benchmark Staging on a local FS avoids all processes accessing shared storage (vs NFS) Staging on local /dev/shm shows an even higher performance gain (~11%) Using temporary storage is not recommended for production environments While /dev/shm outperforms the local FS, it is not recommended for production If available, a parallel file system is the preferred solution versus local disk or /dev/shm 8% 11% Higher is better
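A minimal staging sketch for the in-memory case described on the slide (paths, file names and the launch line are placeholders, not taken from the slides); note that /dev/shm consumes node RAM and its contents are lost if the node fails, which is why the slide cautions against it for production:

    cp /nfs/cases/lemans_poly_17m.sim /dev/shm/                 # stage input into memory
    starccm+ -batch run -np 640 -machinefile hostfile.txt /dev/shm/lemans_poly_17m.sim
    cp /dev/shm/lemans_poly_17m.sim /nfs/results/               # copy the ~8GB result back
    rm -f /dev/shm/lemans_poly_17m.sim                          # release the memory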

20 STAR-CCM+ Performance - System Generations New generations of software and hardware provide performance gains Performance gain demonstrated through variables in HW and SW The latest stack provides ~55% higher performance versus 1 generation behind The latest stack provides ~2.6x higher performance versus 2 generations behind System components used: WSM: X5670@2.93GHz, DDR3, ConnectX-2 QDR IB, 1 disk SNB: E5-2680@2.7GHz, DDR3, ConnectX-3 FDR IB, 24 disks IVB: E5-2680v2@2.8GHz, DDR3, Connect-IB FDR IB, 24 disks 55% Higher is better 20

21 STAR-CCM+ Profiling User/MPI Time Ratio STAR-CCM+ spends more time in computation than communication The time ratio for the network gradually increases with more nodes in the job Improvements in network efficiency would be reflected in the overall runtime FDR InfiniBand 21

22 STAR-CCM+ Profiling Message Sizes Majority of messages are small messages Messages are concentrated below 64KB Number of messages increases with the number of nodes 22

23 STAR-CCM+ Profiling MPI Data Transfer As the cluster grows, less data is transferred between MPI processes Drops from ~20GB per rank at 1 node to ~3GB per rank at 32 nodes Some node imbalance is visible in the amount of data transferred Rank 0 shows significantly higher network activity than other ranks 23

24 STAR-CCM+ Profiling Aggregated Transfer Aggregated data transfer refers to: the total amount of data being transferred in the network between all MPI ranks collectively Very large data transfer takes place in STAR-CCM+ High network throughput is required to deliver this traffic 1.5TB of data transfer takes place between the MPI processes at 32 nodes

25 STAR-CCM+ Summary Performance STAR-CCM+ improved on scalability over the previous version by 28% at 32 nodes The performance gap is expected to widen at higher node counts FDR InfiniBand delivers the highest network performance for STAR-CCM+ to scale FDR IB provides higher performance against other networks FDR IB delivers ~191% higher performance compared to 40GbE and ~208% vs 10GbE on a 32-node run Deploying a dual-port Connect-IB HCA provides 11% higher performance at 32 nodes Performance improvement seen compared to older hardware/software generations Approximately 55% higher performance versus 1 generation back and 2.6x versus 2 generations back Enabling Turbo mode results in higher application performance Up to 17% improvement seen by enabling Turbo mode Mellanox HPC-X provides better performance than the alternatives MPI Profiling Communication time is reduced with the newer version, which improves overall performance Ethernet solutions consume more time in communication Spent 73%-95% of overall time in the network due to Ethernet congestion, while IB spent ~38% 25

26 Case Study: ANSYS Fluent 26

27 ANSYS FLUENT Computational Fluid Dynamics (CFD) is a computational technology Enables the study of the dynamics of things that flow Enables better understanding of qualitative and quantitative physical phenomena in the flow, which is used to improve engineering design CFD brings together a number of different disciplines Fluid dynamics, mathematical theory of partial differential systems, computational geometry, numerical analysis, computer science ANSYS FLUENT is a leading CFD application from ANSYS Widely used in almost every industry sector and manufactured product 27

28 Objectives The presented research was done to provide best practices: Fluent performance benchmarking; MPI library performance comparison; Interconnect performance comparison; CPU comparison; Compiler comparison The presented results will demonstrate: The scalability of the compute environment/application; Considerations for higher productivity and efficiency 28

29 Test Cluster Configuration Dell PowerEdge R720xd 32-node (640-core) Jupiter cluster Dual-Socket 10-Core Intel Xeon E5-2680 V2 @ 2.80 GHz CPUs (Turbo mode enabled unless otherwise stated) Memory: 64GB DDR3 memory per node OS: RHEL 6.2, OFED InfiniBand SW stack Hard Drives: 24x 250GB 7.2K RPM SATA 2.5" drives on RAID 0 Intel Cluster Ready certified cluster Mellanox Connect-IB FDR InfiniBand adapters Mellanox ConnectX-3 QDR InfiniBand and Ethernet VPI adapters Mellanox SwitchX SX6036 VPI InfiniBand and Ethernet switches MPI: Mellanox HPC-X v1.2.0 (based on OMPI); provided with the application: Intel MPI, IBM Platform MPI 9.1 Application: ANSYS Fluent Benchmarks: eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_poly_14m, truck_14m Descriptions for the test cases can be found on the ANSYS Fluent 15.0 Benchmark page 29

30 Fluent Performance Interconnects FDR InfiniBand enables the highest cluster productivity Surpasses other network interconnects in scalability performance FDR InfiniBand tops performance among the different network interconnects FDR InfiniBand outperforms QDR InfiniBand by up to 200% at 32 nodes Similarly, FDR outperforms 10GbE by 16 times, and 1GbE by over 39 times 16x 39x 200% Higher is better 30

31 Fluent Performance Interconnects FDR InfiniBand also outperforms the other interconnects on the other Fluent benchmarks 31

32 Fluent Performance MPI Implementations HPC-X delivers higher scalability performance than the other MPIs compared HPC-X outperforms the default Platform MPI by 10%, and Intel MPI by 19% Support for HPC-X on Fluent is based on the Open MPI support in Fluent The new yalla PML reduces the overhead. Flags used for HPC-X: -mca coll_fca_enable 1 -mca coll_fca_np 0 -mca pml yalla -map-by node -mca mtl mxm -mca mtl_mxm_np 0 -x MXM_TLS=self,shm,ud --bind-to core 19% 10% Higher is better FDR InfiniBand 32
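For illustration, the HPC-X/Open MPI flags quoted on the slide could be handed to Fluent along the following lines; the -mpi= and -mpiopt= option names, the journal file and the host file are assumptions that vary by Fluent release and should be checked against the Fluent launcher documentation:

    fluent 3ddp -t640 -cnf=hostfile.txt -ssh -mpi=openmpi \
        -mpiopt="-mca coll_fca_enable 1 -mca coll_fca_np 0 -mca pml yalla \
                 -mca mtl mxm -mca mtl_mxm_np 0 -x MXM_TLS=self,shm,ud \
                 -map-by node --bind-to core" \
        -g -i eddy_417k.jou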

33 Fluent Performance MPI Implementations HPC-X outperforms other MPIs on other benchmark data 33

34 Fluent Performance Turbo Mode and Clock Advantages are seen when running Fluent at a higher clock rate Either by enabling Turbo mode or by raising the CPU clock frequency Boosting the CPU clock rate yields higher performance at lower cost Increasing to 2800MHz (from 2200MHz) runs 42% faster for 18% more power Running Turbo mode also yields higher performance, but at higher cost An increase of 13% in performance at the expense of 25% more power usage 18% 25% 42% 13% Higher is better FDR InfiniBand 34
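The fixed-frequency comparison on this slide was presumably configured in BIOS; a rough OS-level equivalent with the cpupower utility is sketched below (requires root, a frequency driver/governor that honors the limits, and must be applied on every compute node):

    cpupower frequency-set -d 2.8GHz -u 2.8GHz      # pin min and max frequency at 2.8 GHz
    cpupower frequency-info                         # confirm the effective policy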

35 Fluent Performance Best Published Results demonstrated by the HPCAC outperform the previous best record The ANSYS Fluent 15.0 Benchmark page publishes ANSYS Fluent performance results HPCAC achieved 26.36% higher performance than the best published results (as of 9/22/2014), despite slower CPUs being used on the Jupiter cluster The 32-node/640-core result beats the previous 96-node/1920-core record by 8.53% Performance is expected to climb further on the Jupiter cluster if more nodes are available 26.36% 8.53% Higher is better 35

36 Fluent Profiling I/O Profiling Minor disk I/O activity takes place on all MPI ranks for this workload The majority of the disk read activity appears at the beginning of the job run InfiniBand FDR 36

37 Fluent Profiling Point-to-point dataflow Communication appears to be limited mostly to MPI ranks that are close to each rank Heavy communication is seen between the first and last ranks The communication pattern does not change as the cluster scales However, the amount of data transferred per rank is reduced as the node count scales 2 nodes 32 nodes InfiniBand FDR 37

38 Fluent Profiling Time Spent by MPI Calls The majority of the MPI time is spent in MPI_Waitall Accounts for 30% of wall time MPI_Allreduce 20% MPI_Recv 11% (eddy_417k, 32 nodes) Some load imbalance in the network is observed Some ranks spend more time in MPI_Waitall and MPI_Allreduce Might be related to how the workload is distributed among the MPI ranks 38

39 Fluent Profiling MPI Message Sizes Majority of data transfer messages are small to medium sizes MPI_Allreduce: Large concentration of 4-byte msg (~18% wall time) MPI_Wait: Large concentration of 16-byte msg (~11% wall time) eddy_417k, 32 nodes 39

40 Fluent Summary Performance The Jupiter cluster outperforms other system architectures on Fluent FDR InfiniBand delivers higher performance against QDR InfiniBand by 200% FDR IB outperforms 10GbE by up to 11 times at 32 nodes / 640 cores FDR InfiniBand enables Fluent to break the previous performance record Outperforms the previously set record by 25.38% at 640 cores / 32 nodes Outperforms the previously set record by 8.52% at 1920 cores / 96 nodes HPC-X delivers higher performance against other MPI implementations HPC-X outperforms Platform MPI by 10% and Intel MPI by 19% CPU A higher CPU clock rate and Turbo mode yield higher performance for Fluent Bumping the CPU clock (from 2200MHz to 2800MHz) yields 42% faster performance at 18% more power Enabling Turbo mode translates to 13% higher performance at 25% additional power usage Profiling Heavy usage of small messages in MPI_Waitall, MPI_Allreduce and MPI_Recv communications 40

41 Thank You HPC Advisory Council All trademarks are property of their respective owners. All information is provided As-Is without any kind of warranty. The HPC Advisory Council makes no representation to the accuracy and completeness of the information contained herein. The HPC Advisory Council undertakes no duty and assumes no obligation to update or correct any information presented herein. 41
