Technical guide: Windows HPC Server 2016 for LS-DYNA, how to set up. Reference system setup - v1.0
DYNAmore Nordic AB, LS-DYNA / LS-PrePost
Contents

1 Introduction - Running LS-DYNA on Windows HPC cluster
2 Assumptions
3 Nomenclature
4 Net resources
5 Reference Windows HPC cluster
5.1 HPC Cluster software components
5.2 Hardware
5.3 File server and File Share setup
6 mpp/ls-dyna and MS MPI Message Passing Interface
7 Using the HPC-Cluster from a CAE-user perspective
8 License server for LS-DYNA
9 Verify that the system works
10 Alternative hardware and MPI software
11 Copyright
1 Introduction - Running LS-DYNA on Windows HPC cluster

The purpose of this guide is to be an aid when choosing hardware, setting up, and using a Windows HPC cluster for LS-DYNA, for explicit and implicit analysis, for small to medium sized workgroups of CAE-users and small to medium sized clusters, i.e. 40 to about 500 cores. To this end, a reference system within the above scope is described. It is assumed in this guide that LS-DYNA is used in a pure Microsoft Windows environment, and that the reader has good knowledge of Windows, Windows Server 2016, networking, and user administration using Active Directory. The described Windows HPC reference system is of course useful also for other CAE-software, but that is not covered here. Please note that there are alternative setups using Windows HPC that may be more suitable depending on the situation.

2 Assumptions

An all-Windows environment is assumed:
- Workstations for the CAE-users running Windows Professional or Enterprise (versions 7, 8, or 10). The reference system assumes Windows 10 Professional.
- Windows user management using Active Directory.
- HPC server for LS-DYNA running Microsoft Windows Server 2016 with Microsoft HPC Pack.

Analysis types:
- Implicit or explicit analysis with smp and mpp/ls-dyna.
- No analyses with extreme IO-requirements, such as out-of-core implicit analysis or large metal forming analysis with adaptive mesh generation. If this is the case, some modification of the software configuration may be needed to reach optimal performance.

3 Nomenclature

As far as possible, the nomenclature from the documentation of Microsoft HPC Pack 2016 is used.

HPC: High Performance Computing.
MPI: Message Passing Interface, standardized message passing for parallel computing, designed to function on a wide variety of networking hardware, software, and operating systems.
mpp (as in mpp/ls-dyna): message passing parallel. An mpp software uses message passing to solve a problem in parallel, spanning multiple cores, CPUs, and/or Compute nodes.
4 Net resources

- Microsoft HPC Pack 2016 documentation at technet.microsoft.com
- Microsoft HPC Pack 2016 Update 1, available at technet.microsoft.com

5 Reference Windows HPC cluster

Figure 1 below shows the reference system: a Head node with File server, Compute node(s), and an InfiniBand switch make up the Windows HPC Cluster; the CAE Workstations and the company Active Directory server are connected to it via Gigabit Ethernet.

Figure 1: Reference system

5.1 HPC Cluster software components

Components, software, and function:

- Clients (from which jobs are submitted to the HPC Cluster): Workstations with Windows 10 Pro, LSTC WinSuite, Microsoft HPC Pack 2016 Update 1 (Client utilities installation, which installs the Job Manager and the tools needed by LSTC WinSuite).
  o Function: On the Client/Workstation the simulation model (the input file) is created, stored on the File server in a suitable folder, and submitted as a simulation job to the Head node. Results are viewed on the Client/Workstation.
- HPC Head node with File server: Microsoft Windows Server 2016 Standard, LS-DYNA License Server, Microsoft HPC Pack 2016 Update 1 (Head node installation).
  o Function: The HPC Head node receives the simulation jobs from the Clients, puts them in a queue, and starts them as soon as sufficient resources are available on the Compute nodes. The results from the simulation jobs are stored on the File server.
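The Head node's queueing behaviour described above (jobs wait in a queue and start as soon as sufficient cores are free) can be sketched as follows. This is an illustrative model only, not HPC Pack code; job names and core counts are made up:

```python
# Sketch of FIFO job queueing: a job starts as soon as enough cores are free;
# otherwise it waits for running jobs to finish and release their cores.
from collections import deque

def schedule(jobs, total_cores):
    """jobs: list of (name, cores_needed) in submission order.
    Returns the order in which jobs get started."""
    queue = deque(jobs)
    free = total_cores
    running, started = [], []
    while queue:
        name, need = queue[0]
        if need <= free:
            queue.popleft()
            free -= need
            running.append((name, need))
            started.append(name)
        else:
            # Not enough free cores: wait for the oldest running job to end.
            _, done_cores = running.pop(0)
            free += done_cores
    return started

# Example: three 40-core jobs on an 80-core cluster (two 40-core nodes).
# The first two start immediately; the third waits for cores to free up.
print(schedule([("crash_a", 40), ("crash_b", 40), ("crash_c", 40)], 80))
```

The real HPC Pack scheduler supports priorities and more policies than plain FIFO; this only illustrates the "start when resources are available" behaviour the CAE-user sees.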
- Compute node(s): Microsoft Windows Server 2016 Standard, Microsoft HPC Pack 2016 Update 1 (Compute node installation).
  o Function: The Compute nodes read the simulation job data from the File server, run the simulation jobs, and store the result files on the File server.

All the above servers/workstations and their users are registered in the company Active Directory. The users that can access and start jobs on the HPC Cluster are referred to as the CAE-users. Usually a Group, e.g. "HPCgroup", is created in the Active Directory so that all users belonging to this group have appropriate access to the HPC Head node, the File server shares, and the Compute node(s). Jobs are submitted to the HPC Head node, so this is the only network server name the CAE-users need to know, e.g. HPCSrv.

Notes:
- Microsoft HPC Pack 2016 Update 1 is a monolithic installation file that contains options to install Client utilities, Head node, Compute node(s), etc.
- The reference HPC Cluster system has a file server dedicated to the HPC Cluster. This is often a good choice, as the HPC Cluster can generate a lot of IO and data.
- It is assumed that all servers and Compute node(s) in the HPC Cluster are attached to the company network and are reachable from the Workstations. Other options are possible, e.g. with a private network for the HPC Cluster, but these are not explored here.
- LSTC WinSuite is a complete installation of LSTC products for Windows 7, 8, and 10 computers. It contains LS-DYNA, LS-PrePost, LS-TaSC, LS-OPT, LS-Run, manuals, training material, etc. LS-Run is a command center used e.g. to start and queue LS-DYNA simulations on the local workstation or on remote Windows and Linux HPC Clusters. LSTC WinSuite can be remotely installed on Workstations.
5.2 Hardware

The hardware selection was made in 2018.

Workstations:
- Windows 10, 32 GB RAM, professional-level graphics card for CAD (OpenGL), 1 TB hard drive, single 4-core Xeon CPU, Gigabit Ethernet card, Full HD display.

HPC Head node with File server:
- 32 GB RAM, single 8-core Xeon CPU, 6x4 TB SAS hard drives on a SAS RAID controller card, Gigabit Ethernet card. The hard drives use RAID 10 for performance.

Compute node(s):
- 192 GB RAM, 2x1 TB hard drives (RAID 1), dual Xeon SP 6148 CPUs (20 cores/CPU), Mellanox ConnectX-3 InfiniBand cards, Gigabit Ethernet card.

InfiniBand switch (used for MPI):
- Mellanox SX InfiniBand switch, QSFP+ connector cables for connecting the switch and the Compute nodes.

Notes:
- For implicit analysis, more memory per node may be needed on the Compute node(s).
- Instead of hard drives on the Compute nodes, SSDs may provide significantly better performance for certain types of analyses, but not for the types assumed here, see Section 2.
- In the system, InfiniBand is only used for MPI communication between the LS-DYNA processes. All other network traffic (SMB, TCP, UDP, etc.) is carried by the Gigabit Ethernet network.

5.3 File server and File Share setup

The following file shares are available on the Head node/File server \\HPCSrv:

- \\HPCSrv\projects: subfolders for each CAE-user or project to store input files and simulation results. This share is also used by the Compute nodes during the simulations. It must be accessible by all CAE-users on their Workstations as well as on the Compute nodes, or they will not be able to use the HPC Cluster.
- \\HPCSrv\software, containing the subfolders:
  o lsdyna: the LS-DYNA executables, both smp and mpp versions in double and single precision. Microsoft HPC Pack 2016 uses MS-MPI, so mpp/ls-dyna binaries should be installed whose label includes the phrase "msmpi", e.g. lsdyna_mpp_s_r920_winx64_ifort131_msmpi.exe.
  o installation: the Windows HPC Pack 2016 Update 1 installation file and the LSTC WinSuite installation file. This is for convenience when adding new Clients (Workstations).

6 mpp/ls-dyna and MS MPI Message Passing Interface

The parallel software mpp/ls-dyna works by splitting up the simulation model into N equally sized pieces, and each piece is then handled by a separate mpp/ls-dyna process.
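The even split into N pieces can be sketched as below. mpp/ls-dyna's actual decomposition is geometry-based; this illustrative Python only shows the balanced division of work that it aims for:

```python
# Sketch of dividing a model's elements into N nearly equal pieces,
# one piece per mpp/ls-dyna process (MPI rank). Illustrative only:
# mpp/ls-dyna decomposes the model geometrically, not by element index.

def split_elements(num_elements, num_ranks):
    """Return the number of elements assigned to each of num_ranks pieces."""
    base, extra = divmod(num_elements, num_ranks)
    # The first 'extra' ranks get one element more so the totals match.
    return [base + 1 if rank < extra else base for rank in range(num_ranks)]

# Example: a 1,000,000-element model on 40 cores (one process per core).
pieces = split_elements(1_000_000, 40)
print(pieces[0], len(pieces), sum(pieces))  # 25000 40 1000000
```

A balanced split matters because each time step waits for the slowest piece; uneven pieces would leave some cores idle.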
For efficiency, only one mpp/ls-dyna process should be run on each physical core (thus it is generally recommended to turn off hyperthreading, or alternatively to use methods such as pinning, described in the MPI documentation). The different mpp/ls-dyna processes need to communicate to solve the total problem, and to that end MPI is used. As quick communication is crucial for performance, the MPI communication uses a fast network such as InfiniBand between Compute nodes, and methods such as user-space memory copy between processes on the same Compute node.

In the reference HPC Cluster, the MPI implementation included with Microsoft HPC Pack 2016 is used: MS-MPI (Microsoft MPI). Experience shows that it is safe to upgrade the MS-MPI version on the HPC Cluster as newer versions, with e.g. bug fixes, are released by Microsoft. For more information on MPI and MS-MPI, see technet.microsoft.com (search for MS MPI).

7 Using the HPC-Cluster from a CAE-user perspective

The CAE-user uses the system in the following manner:

1. Create the LS-DYNA input file, e.g. main.k, using LS-PrePost or another preprocessor.
2. Save the input file in the user's project folder on the Head node/File server, e.g. \\HPCSrv\projects\John\Project34\Crashsim12\main.k.
3. Open LS-Run (part of LSTC WinSuite), select the appropriate LS-DYNA binary, the solution options, and the input file \\HPCSrv\projects\John\Project34\Crashsim12\main.k. Submit the simulation to the HPCSrv HPC Cluster queue; see Figure 2 for an illustration. As soon as resources, i.e. Compute nodes, are available, the simulation will be started.
4. The progress of the job can be monitored using LS-Run. When the job is finished, the results are available for postprocessing in \\HPCSrv\projects\John\Project34\Crashsim12 (the default location).

Notes:
- Multiple jobs can be started; they are then run as soon as Compute node resources are available.
- More information on how to start an LS-DYNA analysis using LS-Run is available under the Help menu in LS-Run.
- Jobs submitted by LS-Run to a Windows HPC Cluster go to the standard Windows HPC server queue and thus cooperate/co-exist with other jobs submitted to the HPC Cluster, e.g. from other CAE-software.
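Behind the LS-Run GUI, a submission of this kind can be pictured as an HPC Pack "job submit" command wrapping an "mpiexec" launch of the mpp/ls-dyna binary. The sketch below only assembles such a command string; the flags and the memory value are illustrative assumptions, not necessarily what LS-Run generates:

```python
# Assemble an illustrative Windows HPC Pack submission command for
# mpp/ls-dyna. Flag names follow HPC Pack's "job submit" and MS-MPI's
# "mpiexec", but treat the exact command as a sketch: LS-Run builds and
# submits the real command for you.

def build_submit_command(head_node, cores, solver, input_file,
                         memory_words="200m"):
    """Return a 'job submit ... mpiexec ...' command line as a string."""
    mpi_part = f"mpiexec -n {cores} {solver} i={input_file} memory={memory_words}"
    return f"job submit /scheduler:{head_node} /numcores:{cores} {mpi_part}"

cmd = build_submit_command(
    head_node="HPCSrv",
    cores=40,
    solver=r"\\HPCSrv\software\lsdyna\lsdyna_mpp_s_r920_winx64_ifort131_msmpi.exe",
    input_file=r"\\HPCSrv\projects\John\Project34\Crashsim12\main.k",
)
print(cmd)
```

Note how everything is referenced by UNC paths on \\HPCSrv, so the Compute nodes can reach both the executable and the input file without local copies.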
Figure 2: LS-Run

8 License server for LS-DYNA

The network license is installed by running LS-Run on the license server, e.g. the Head node. All Compute nodes must be able to access the license server and vice versa. The license is installed from the License option in LS-Run by importing the license file. To be able to use LS-DYNA on the cluster, it is necessary to specify License type = network and Server hostname = LicenseServerHostname in the License menu in LS-Run on the computer submitting the job.

9 Verify that the system works

To verify that the system works, run a benchmark model on one or more cores. Suitable benchmarks, e.g. for explicit analysis, are included with LSTC WinSuite or can be found online; explicit simulation models are recommended as a first benchmark. Check that the runtimes are reduced as the number of cores is increased, including across multiple Compute nodes. For large explicit analyses with sufficiently many elements per used core, a near-linear reduction of runtimes is expected when scaling from 1 to more than 100 cores, and for larger models LS-DYNA can effectively use a very large number of cores in a single analysis. To verify node connectivity and the performance of the MPI/InfiniBand interconnect, one can also use the mpipingpong test included in HPC Pack 2016 (see the HPC Pack documentation at technet.microsoft.com).

10 Alternative hardware and MPI software

There are alternatives to the hardware and software components selected for the HPC Cluster described here. A non-exhaustive list:
- MPI: Intel MPI, IBM Platform MPI
- CPU: AMD EPYC
- Network for MPI: Intel Omni-Path
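The scaling check in Section 9 amounts to computing speedup and parallel efficiency from measured wall-clock times. A small helper for that, with made-up runtimes for illustration (use your own benchmark measurements):

```python
# Compute speedup and parallel efficiency from benchmark wall-clock times.
# speedup(P)   = T(base) / T(P)
# efficiency(P) = speedup(P) / (P / base_cores)
# The runtimes below are hypothetical, for illustration only.

def scaling_table(runtimes):
    """runtimes: dict {cores: seconds}.
    Returns {cores: (speedup, efficiency)}, relative to the smallest run."""
    base_cores = min(runtimes)
    base_time = runtimes[base_cores]
    table = {}
    for cores, seconds in sorted(runtimes.items()):
        speedup = base_time / seconds
        efficiency = speedup / (cores / base_cores)
        table[cores] = (round(speedup, 2), round(efficiency, 2))
    return table

measured = {1: 36000.0, 10: 3900.0, 40: 1080.0}  # hypothetical seconds
for cores, (s, e) in scaling_table(measured).items():
    print(f"{cores:>3} cores: speedup {s:>6}, efficiency {e:.0%}")
```

Efficiency well below what the benchmark model should deliver, especially when going from one Compute node to several, is a hint to check the InfiniBand/MPI setup, e.g. with the mpipingpong test mentioned above.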
11 Copyright

All trademarks, service marks, trade names, product names, and logos appearing in this document are the property of their respective owners, including in some instances Livermore Software Technology Corporation (LSTC).
More informationHPC Considerations for Scalable Multidiscipline CAE Applications on Conventional Linux Platforms. Author: Correspondence: ABSTRACT:
HPC Considerations for Scalable Multidiscipline CAE Applications on Conventional Linux Platforms Author: Stan Posey Panasas, Inc. Correspondence: Stan Posey Panasas, Inc. Phone +510 608 4383 Email sposey@panasas.com
More informationThe rcuda middleware and applications
The rcuda middleware and applications Will my application work with rcuda? rcuda currently provides binary compatibility with CUDA 5.0, virtualizing the entire Runtime API except for the graphics functions,
More informationMaking Supercomputing More Available and Accessible Windows HPC Server 2008 R2 Beta 2 Microsoft High Performance Computing April, 2010
Making Supercomputing More Available and Accessible Windows HPC Server 2008 R2 Beta 2 Microsoft High Performance Computing April, 2010 Windows HPC Server 2008 R2 Windows HPC Server 2008 R2 makes supercomputing
More informationDEDICATED SERVERS WITH WEB HOSTING PRICED RIGHT
DEDICATED SERVERS WITH WEB HOSTING PRICED RIGHT TABLE OF CONTENTS WHY CHOOSE A DEDICATED SERVER? 3 DEDICATED SERVER ADVANTAGES 4 DEDICATED SERVERS WITH WEB HOSTING PRICED RIGHT 5 SERVICE GUARANTEES 6 WHY
More informationThe rcuda technology: an inexpensive way to improve the performance of GPU-based clusters Federico Silla
The rcuda technology: an inexpensive way to improve the performance of -based clusters Federico Silla Technical University of Valencia Spain The scope of this talk Delft, April 2015 2/47 More flexible
More informationTEST CASE DOCUMENTATION AND TESTING RESULTS TEST CASE ID AWG-ERIF-10. MAT 224 Dynamic Punch Test Aluminium 2024 LSTC-QA-LS-DYNA-AWG-ERIF-10-9
TEST CASE DOCUMENTATION AND TESTING RESULTS LSTC-QA-LS-DYNA-AWG-ERIF-10-9 TEST CASE ID AWG-ERIF-10 MAT 224 Dynamic Punch Test Aluminium 2024 Tested with LS-DYNA R R10 Revision 116539 Thursday 11 th May,
More informationOCTOPUS Performance Benchmark and Profiling. June 2015
OCTOPUS Performance Benchmark and Profiling June 2015 2 Note The following research was performed under the HPC Advisory Council activities Special thanks for: HP, Mellanox For more information on the
More informationMolecular Devices High Content Screening Computer Specifications
Molecular Devices High Content Screening Computer Specifications Computer and Server Specifications for Offline Analysis with the AcuityXpress and MetaXpress Software, MDCStore Data Management Solution,
More informationAccelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet
WHITE PAPER Accelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet Contents Background... 2 The MapR Distribution... 2 Mellanox Ethernet Solution... 3 Test
More informationChoosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL710
COMPETITIVE BRIEF April 5 Choosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL7 Introduction: How to Choose a Network Interface Card... Comparison: Mellanox ConnectX
More informationUsing Microsoft Azure Cloud for CAE Simulations
Using Microsoft Azure Cloud for CAE Simulations Easy Step-by-Step Guide and Live CAE Demonstration Reha Senturk, The UberCloud June 5, 2018 Summary of UberCloud Summary of Microsoft Azure Benefits and
More informationNAMD GPU Performance Benchmark. March 2011
NAMD GPU Performance Benchmark March 2011 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Dell, Intel, Mellanox Compute resource - HPC Advisory
More informationLS-DYNA Performance on Intel Scalable Solutions
LS-DYNA Performance on Intel Scalable Solutions Nick Meng, Michael Strassmaier, James Erwin, Intel nick.meng@intel.com, michael.j.strassmaier@intel.com, james.erwin@intel.com Jason Wang, LSTC jason@lstc.com
More informationGROMACS (GPU) Performance Benchmark and Profiling. February 2016
GROMACS (GPU) Performance Benchmark and Profiling February 2016 2 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Dell, Mellanox, NVIDIA Compute
More informationMeltdown and Spectre Interconnect Performance Evaluation Jan Mellanox Technologies
Meltdown and Spectre Interconnect Evaluation Jan 2018 1 Meltdown and Spectre - Background Most modern processors perform speculative execution This speculation can be measured, disclosing information about
More informationHeadline in Arial Bold 30pt. SGI Altix XE Server ANSYS Microsoft Windows Compute Cluster Server 2003
Headline in Arial Bold 30pt SGI Altix XE Server ANSYS Microsoft Windows Compute Cluster Server 2003 SGI Altix XE Building Blocks XE Cluster Head Node Two dual core Xeon processors 16GB Memory SATA/SAS
More informationFuture Routing Schemes in Petascale clusters
Future Routing Schemes in Petascale clusters Gilad Shainer, Mellanox, USA Ola Torudbakken, Sun Microsystems, Norway Richard Graham, Oak Ridge National Laboratory, USA Birds of a Feather Presentation Abstract
More informationMELLANOX EDR UPDATE & GPUDIRECT MELLANOX SR. SE 정연구
MELLANOX EDR UPDATE & GPUDIRECT MELLANOX SR. SE 정연구 Leading Supplier of End-to-End Interconnect Solutions Analyze Enabling the Use of Data Store ICs Comprehensive End-to-End InfiniBand and Ethernet Portfolio
More informationNetXplorer. Installation Guide. Centralized NetEnforcer Management Software P/N D R3
NetXplorer Centralized NetEnforcer Management Software Installation Guide P/N D357006 R3 Important Notice Important Notice Allot Communications Ltd. ("Allot") is not a party to the purchase agreement
More informationLAMMPSCUDA GPU Performance. April 2011
LAMMPSCUDA GPU Performance April 2011 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Dell, Intel, Mellanox Compute resource - HPC Advisory Council
More informationLBRN - HPC systems : CCT, LSU
LBRN - HPC systems : CCT, LSU HPC systems @ CCT & LSU LSU HPC Philip SuperMike-II SuperMIC LONI HPC Eric Qeenbee2 CCT HPC Delta LSU HPC Philip 3 Compute 32 Compute Two 2.93 GHz Quad Core Nehalem Xeon 64-bit
More informationHyper-converged infrastructure with Proxmox VE virtualization platform and integrated Ceph Storage.
Hyper-converged infrastructure with Proxmox VE virtualization platform and integrated Ceph Storage. To optimize performance in hyper-converged deployments with Proxmox VE and Ceph storage the appropriate
More informationCST STUDIO SUITE TM 2010 MPI Computing Guide
CST STUDIO SUITE TM 2010 MPI Computing Guide Contents 1 Introduction 2 2 Nomenclature 2 3 Terms 3 4 Technical Requirements 3 4.1 Interconnection Network............................ 3 4.1.1 Network Technology..........................
More informationSolving Large Complex Problems. Efficient and Smart Solutions for Large Models
Solving Large Complex Problems Efficient and Smart Solutions for Large Models 1 ANSYS Structural Mechanics Solutions offers several techniques 2 Current trends in simulation show an increased need for
More informationMellanox Technologies Maximize Cluster Performance and Productivity. Gilad Shainer, October, 2007
Mellanox Technologies Maximize Cluster Performance and Productivity Gilad Shainer, shainer@mellanox.com October, 27 Mellanox Technologies Hardware OEMs Servers And Blades Applications End-Users Enterprise
More informationHPC Current Development in Indonesia. Dr. Bens Pardamean Bina Nusantara University Indonesia
HPC Current Development in Indonesia Dr. Bens Pardamean Bina Nusantara University Indonesia HPC Facilities Educational & Research Institutions in Indonesia CIBINONG SITE Basic Nodes: 80 node 2 processors
More informationLAMMPS-KOKKOS Performance Benchmark and Profiling. September 2015
LAMMPS-KOKKOS Performance Benchmark and Profiling September 2015 2 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Intel, Dell, Mellanox, NVIDIA
More information