The Cray CX1 puts massive power and flexibility right where you need it in your workgroup

Who says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance available right in your workgroup.


The Cray CX1 puts massive power and flexibility right where you need it in your workgroup:
- Up to 96 cores of Intel Xeon 5600 compute power
- 3D visualization
- Up to 32 TB of storage
- GPU acceleration
- Small footprint
- Normal office power
- Active noise suppression

Chassis layout: 8 blade slots in total per chassis
- Standard: 16-port Gigabit Ethernet switch (unmanaged, Layer 2); external ports: 2 x RJ45, 3 x USB
- Optional: InfiniBand switch, 12 or 24 ports, DDR/QDR 4x
- Zone 1: power supply module, N+1 (optional redundancy); 20 A 110/240 V power outlet; 4 node slots, each accepting any compute, visualization, GPU-compute, or storage node
- Zone 2: power supply module, N+1 (optional redundancy); 20 A 110/240 V power outlet; 4 node slots, each accepting any compute, visualization, GPU-compute, or storage node

Compute Blade
- Dual-socket Intel Xeon 5600 blade (up to 12 cores)
- On-board InfiniBand and Gigabit Ethernet
- Up to 96 GB memory
- Maximum of 8 compute blades per CX1 chassis

Visualization Blade
- Dual-socket Intel Xeon 5600 blade (up to 12 cores), plus an NVIDIA Quadro graphics card
- On-board InfiniBand and Gigabit Ethernet
- Up to 96 GB memory
- Maximum of 4 visualization blades per CX1 chassis

GPU Blade
- Dual-socket Intel Xeon 5600 blade (up to 12 cores), plus an NVIDIA Tesla GPU (C2050)
- On-board InfiniBand and Gigabit Ethernet
- Up to 96 GB memory
- Maximum of 4 GPU blades per CX1 chassis

Storage Blade
- Dual-socket Intel Xeon 5600 blade (up to 12 cores), plus 8 TB of storage
- On-board InfiniBand and Gigabit Ethernet
- Up to 96 GB memory
- Maximum of 4 storage blades per CX1 chassis

Example configuration: 8 x compute blades (all compute)
- Best for scale-out parallel applications running MPI
- 16 processors giving 96 Intel Xeon 5600 cores, with up to 768 GB of memory
- Over 1 TFLOP of compute power (see the peak-FLOPS sketch below)
- Example applications: ANSYS CFD environments that require scalability to 96 cores; ANSYS Mechanical where multiple stress analyses require a lot of compute power
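As a rough check of the "over 1 TFLOP" claim, here is a back-of-the-envelope peak calculation; the 2.93 GHz clock and 4 double-precision FLOPs per core per cycle (SSE add plus multiply on Xeon 5600) are assumptions, not figures from the slide:

```python
# Back-of-the-envelope peak double-precision throughput for the
# all-compute CX1 configuration (96 Xeon 5600 cores).
cores = 96
clock_ghz = 2.93          # assumed top-bin Xeon 5600 clock
flops_per_cycle = 4       # assumed: 2-wide SSE add + 2-wide SSE multiply
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"peak ~{peak_tflops:.2f} TFLOPS")  # ~1.13 TFLOPS, i.e. "over 1 TFLOP"
```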

Example configuration: 1 x storage blade + 6 x compute blades (mixed functionality for the workgroup)
- Best for a team needing storage
- 14 processors giving 84 Intel Xeon 5600 cores, with up to 672 GB of memory
- Plus 8 TB of RAID storage and a built-in 16-port Ethernet switch for team members to connect through
- Optionally, 4 SSD drives for the high-performance I/O often required in ANSYS Mechanical eigenvalue analysis

Example configuration: 4 x compute blades + 1 x storage blade + 1 x visualization blade (some of everything for the workgroup)
- Build the system your team needs
- 12 processors giving 72 Intel Xeon 5600 cores, with up to 576 GB of memory
- Plus 8 TB of RAID storage
- Plus NVIDIA Quadro FX visualization
- All for your team to share

The blade arithmetic behind all three example configurations is checked in the sketch below.
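A minimal sketch that reproduces the processor, core, and memory totals quoted for the three example configurations, assuming every blade type is a dual-socket, 12-core, 96 GB Xeon 5600 blade as described above (the configuration names and dictionary layout are illustrative only):

```python
# Sanity-check the blade math quoted for the three example CX1 configurations.
# Assumes each blade has 2 sockets x 6 cores and up to 96 GB memory,
# per the blade descriptions above.
BLADE = {"sockets": 2, "cores": 12, "mem_gb": 96}
CHASSIS_SLOTS = 8

def totals(blades):
    """Return (processors, cores, memory in GB) for a dict of blade counts."""
    n = sum(blades.values())
    assert n <= CHASSIS_SLOTS, "a CX1 chassis has only 8 blade slots"
    return n * BLADE["sockets"], n * BLADE["cores"], n * BLADE["mem_gb"]

configs = {
    "all compute":         {"compute": 8},
    "team with storage":   {"compute": 6, "storage": 1},
    "some of everything":  {"compute": 4, "storage": 1, "visualization": 1},
}

for name, blades in configs.items():
    procs, cores, mem = totals(blades)
    print(f"{name}: {procs} processors, {cores} cores, up to {mem} GB")
# all compute:        16 processors, 96 cores, up to 768 GB
# team with storage:  14 processors, 84 cores, up to 672 GB
# some of everything: 12 processors, 72 cores, up to 576 GB
```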

The Cray CX1 supports Windows HPC Server 2008
- Makes it easy to integrate HPC functionality into the corporate network
- Single login through Active Directory makes accessing HPC services as easy as getting to email (Exchange), SharePoint, or other services
- Windows HPC Server provides an entire HPC environment (scheduling, cluster management, etc.) built on the Windows Server platform; a job-submission sketch follows below
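For flavor, here is one way a user might hand an MPI run to the Windows HPC Server 2008 scheduler from Python. The `job submit` command-line tool ships with the HPC Pack, but the solver path, input file, and core count below are entirely hypothetical; check `job submit /?` on your head node for the exact options it accepts:

```python
# Illustrative sketch only: submit an MPI solver run to the Windows HPC
# Server 2008 job scheduler via its "job" CLI. Paths and counts are
# hypothetical examples, not part of the original slide.
import subprocess

subprocess.run(
    ["job", "submit", "/numprocessors:24",
     "mpiexec", r"\\headnode\apps\solver.exe", "truck_14m.cas"],
    check=True,  # raise if the scheduler rejects the submission
)
```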

Ease of use is about being able to use the system without special power or data center infrastructure
- The Cray CX1 plugs into normal 20 A office power
- The Cray CX1 has active noise suppression, which makes it appreciably quieter than competing systems

Table of sound levels and corresponding sound pressure examples:

  Lp (dB SPL)  Example
  140          Jet aircraft, 50 m away
  130          Threshold of pain
  120          Threshold of discomfort
  110          Chainsaw, 1 m distance
  100          Disco, 1 m from speaker
  90           Diesel truck, 10 m away
  80           Curbside of busy road, 5 m
  70           Vacuum cleaner, 1 m distance
  60           Conversational speech, 1 m
  ~55          Cray CX1 (range of noise with automatic fan-speed calibration; the closest competitive system is louder)
  50           Average home
  40           Quiet library
  30           Quiet bedroom at night
  20           Background in TV studio
  10           Rustling leaf
  0            Threshold of hearing
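As a reminder (standard acoustics, not from the slide), the dB SPL values in the table are logarithmic in sound pressure $p$, relative to the reference pressure $p_0 = 20\,\mu\mathrm{Pa}$:

$$L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \mathrm{dB\,SPL}$$

So the roughly 15 dB gap between the CX1 (~55 dB) and a vacuum cleaner (70 dB) corresponds to about a 5.6x lower sound pressure.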

The Cray CX1-iWS, available exclusively from Dell, combines in one chassis:
- A Windows HPC Server 2008 cluster: 3 compute blades with 6 Intel Xeon 5600 processors (up to 2.93 GHz), up to 24 GB of memory per blade, and an integrated 16-port Gigabit Ethernet switch
- A Windows 7 (64-bit) workstation: a single visualization blade with two 2.40 GHz Intel Xeon 5600 processors, 24 GB of memory, NVIDIA graphics with GPU acceleration, and up to two high-end monitor outputs
- Shared CIFS storage: 4 TB of unformatted RAID 10 storage attached to one cluster compute blade, shared by the workstation and the cluster

ANSYS FLUENT 12.0 Truck_Poly_14m scaling results on the Cray CX1-iWS
- Benchmark: external flow over a truck with a polyhedral mesh; 14M cells, DES turbulence, segregated implicit solver
- System: Cray CX1-iWS running Windows HPC Server 2008; 3 compute blades, Intel Xeon X5570 at 2.93 GHz, 24 GB of memory per blade, GigE interconnect
- CX1-iWS scaling is excellent: a 5.8x speedup going from 4 to 24 cores (efficiency computed below)
- Chart: Fluent rating (bigger is better) at 4, 8, 16, and 24 cores, comparing the iWS with GigE against the best published Intel X5570 (Nehalem) result with InfiniBand (comparison from December 2009)
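To put the quoted speedup in context, a one-line parallel-efficiency calculation (the 97% figure is derived here, not stated on the slide):

```python
# Parallel efficiency implied by the quoted Fluent result:
# 5.8x speedup going from 4 to 24 cores, where ideal is 24/4 = 6x.
base_cores, scaled_cores, speedup = 4, 24, 5.8
ideal = scaled_cores / base_cores
print(f"ideal {ideal:.0f}x, achieved {speedup}x "
      f"-> {speedup / ideal:.0%} parallel efficiency")  # 97%
```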

Comparing the 2.93 GHz Intel Xeon X5570 (Nehalem) with the 2.93 GHz Intel Xeon X5670 (Westmere). Please view the following webcasts: http://www.reinvented-the-workstation.com/ondemand/

Relative performance per node (8 cores per X5570 node, 12 cores per X5670 node): the chart compares a single X5570 node against X5670 configurations of 1, 2, and 3 nodes. X5670 nodes average 20-30 percent higher performance for the same cost and power consumption.

ANSYS Mechanical rating, solver only (bigger is better), at 1, 2, and 3 nodes, with 12 cores per X5670 node and 8 cores per X5570 blade: a 16-36 percent improvement with the Westmere Xeon X5670 over the Nehalem Xeon X5570.

ANSYS Mechanical rating, solver only (bigger is better), at 1, 2, 8, and 16 cores, comparing a storage blade against normal compute blades. The storage blade with 4-way RAID 0 shows an 80% improved rating (8-way) for the out-of-core solver; using two blades, with twice the available RAM, allows the solver to run in-core.

The power user needs a powerhouse tool to get the job done. The Cray CX1 is that powerhouse, with compute, storage, GPU, switches, and visualization all integrated in one affordable, attractive, office-installable unit. Cray represents the pinnacle of power and performance. Small footprint, huge performance. It's a Cray!

Cray CX1 wins the HPCwire Best HPC Cluster Solution award for 2009.