APENet: LQCD clusters à la APE
1 APENet: LQCD clusters à la APE
Concept, Development and Use
Roberto Ammendola
Istituto Nazionale di Fisica Nucleare, Sezione Roma Tor Vergata; Centro Ricerche E. Fermi
SM&FT 2006, Bari, 21st September 2006
Outline: Overview, Hardware/Software, Benchmarks, Conclusions
2 Motivation
The APE group has traditionally focused on the design and development of custom silicon, electronics and software optimized for Lattice Quantum ChromoDynamics. The APENet project was started to study the mixing of existing off-the-shelf computing technology (CPUs, motherboards and memories for PC clusters) with a custom interconnect architecture derived from the group's previous experience. The focus is on building optimized, supercomputer-level platforms suited for numerical applications that are both CPU and memory intensive.
3 Requested Features
The idea was to build a switch-less network characterized by:
- High bandwidth
- Low latency
- A natural fit with LQCD and other grid-based numerical algorithms, not necessarily restricted to first-neighbour communication
- Good performance scaling as a function of the number of processors
- Very good cost scaling even for large numbers of processors
4 Development stages
APENet history:
Sept 2001 - June 2002: release of the first HW prototype
Sept 2002 - Feb 2003: 2nd HW version
March - Sept 2003: 3rd HW version development and production
Dec 2003: electrical validation
Sept 2004: 16-node APENet prototype cluster
March - Nov 2005: 128-node APENet cluster
June 2006: production starts
5 APENet Main Features
APENet is a 3D network of point-to-point links with toroidal topology:
- Each computing node has 6 bi-directional full-duplex communication channels
- Computing nodes are arranged in a 3D cubic mesh
- Data is transmitted in packets which are routed to the destination node
- Lightweight low-level protocol
- Wormhole routing
- Dimension-ordered routing algorithm
- 2 virtual channels per receiving channel to prevent deadlocks
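The slide names the routing scheme without detailing it; as an illustration, here is a minimal sketch of dimension-ordered routing on a 3D torus (align X first, then Y, then Z, taking the shorter way around each ring). The function names and the shortest-direction choice are assumptions for illustration, not APENet specifics.

```python
def route(src, dst, dims):
    """Dimension-ordered routing on a 3D torus.

    Correct the X coordinate first, then Y, then Z; each hop moves one
    step in whichever direction around the ring is shorter.
    Returns the full list of visited nodes, src included.
    """
    path = [tuple(src)]
    cur = list(src)
    for d in range(3):                      # fixed dimension order: X, Y, Z
        size = dims[d]
        while cur[d] != dst[d]:
            fwd = (dst[d] - cur[d]) % size  # hops needed going in '+' direction
            step = 1 if fwd <= size - fwd else -1
            cur[d] = (cur[d] + step) % size # wrap around the torus
            path.append(tuple(cur))
    return path
```

For example, on a 4x2x2 torus the route from (0,0,0) to (3,1,1) uses the X wraparound link, so it takes only 3 hops instead of 5.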
6 What Does It Look Like
A 16-node example is shown both as a logical diagram and as a real installation: nodes APE 1 to APE 16 are arranged in a 4x2x2 mesh with toroidal X, Y and Z links, each node labelled with its (x,y,z) coordinates, from APE 1 at (0,0,0) to APE 16 at (3,1,1).
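The node labels in the figure are consistent with a linear numbering in which x varies fastest, then y, then z. A small sketch of that mapping follows; the actual APENet numbering convention is an assumption inferred from the figure labels.

```python
DIMS = (4, 2, 2)  # the 16-node example mesh: 4 along X, 2 along Y, 2 along Z

def coords(n):
    """APE n -> (x, y, z), with x varying fastest (matches the figure labels)."""
    i = n - 1                               # labels start at APE 1
    return (i % DIMS[0],
            (i // DIMS[0]) % DIMS[1],
            i // (DIMS[0] * DIMS[1]))

def label(x, y, z):
    """(x, y, z) -> APE node number, inverse of coords()."""
    return 1 + x + DIMS[0] * (y + DIMS[1] * z)
```

Under this mapping APE 9 sits at (0,0,1) and APE 16 at (3,1,1), exactly as in the figure.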
7 The interconnection card
- Altera Stratix EP1S30, 1020-pin package, fastest speed grade
- National Serializers/Deserializers DS90CR485/486, 48 bit at 133 MHz
Using a programmable device allows logic redesign and quick in-field firmware upgrades.
8 Functional blocks
Block diagram of the EP1S30 firmware: six bi-directional channels (X+, X-, Y+, Y-, Z+, Z-) at 6.4(x2) Gb/s each, carried on 8 differential pairs at 800 MHz and presented internally as 48 bit at 133 MHz. Each channel feeds its Channel FIFOs (e) into a Crossbar Switch and Router running at 64 bit x 133 MHz. On the host side, the local PCI block comprises (a) PCI & FIFOs, (b) a Command Queue FIFO, (c) a Gather/Scatter FIFO and (d) an RDMA Table Dual Port RAM, attached to a PCI-X core (64 bit x 133 MHz).
15 Software
A new piece of hardware needs new software:
- Device driver
- Application-level library
- Application-level test suites
- MPI library
- MPI-level test suites
16 Submitting jobs
A job submission environment has been developed, aware of the network topologies allowed on a given machine. A configuration file describes the allowed partitions (an APE_NET_TOPOLOGY with APE_NET_PARTITION entries such as full, single and zline). Jobs are submitted with an mpirun-derived script:
aperun -topo zline0 cpi
17 MPI Profiling
It can be very hard to produce optimized parallel code. A tool for profiling MPI communication is under development, which can help in understanding the communication pattern of a given code for better tuning of its internal parameters. Information is gathered on:
- communication vs computation time
- time spent per communication function
- transferred data size per communication function
- transferred data size per rank...
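The slide does not describe how the tool collects its statistics; a minimal sketch of the underlying idea is to interpose on each communication function and accumulate per-function call counts, time and bytes (all names here are illustrative, and `send` is a stand-in for a real MPI call, not the tool's actual implementation):

```python
import time
from collections import defaultdict

# Per-function accumulators: calls, wall time, bytes moved.
stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "bytes": 0})

def profiled(name):
    """Wrap a communication function so each call updates stats[name]."""
    def wrap(fn):
        def inner(buf, *args, **kwargs):
            t0 = time.perf_counter()
            result = fn(buf, *args, **kwargs)
            s = stats[name]
            s["calls"] += 1
            s["seconds"] += time.perf_counter() - t0
            s["bytes"] += len(buf)
            return result
        return inner
    return wrap

@profiled("send")
def send(buf, dest):
    """Stand-in for an MPI send; a real tool would wrap the MPI library."""
    return len(buf)
```

In MPI itself the same interposition is commonly done through the standard PMPI profiling interface rather than Python decorators.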
18 Testbed
APE cluster: dual Xeon "Nocona" 3.4 GHz nodes with 1 GB DDR RAM each; TB-scale NFS-exported storage; a 100 Mbit service network; a 1 Gbit user-access network; the APENet network.
19 Some notes about the assembly
To host the machine we needed to rework the infrastructure: floor, air conditioning, power supply with UPS.
APENet assembly issues: 5% faulty devices, 20% badly mounted devices, 10% badly connected links.
General issues: 10% early dead disks.
20 MPI bandwidth benchmark
Unidirectional and bidirectional bandwidth at 133 MHz, measured over message sizes from 1 KB to 1 MB:
- Peak unidirectional bandwidth: 570 MB/s
- Peak bidirectional bandwidth: 720 MB/s
- Link physical limit: 590 MB/s
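These figures can be cross-checked against the link parameters given earlier (48 bit at 133 MHz). A short back-of-the-envelope sketch, assuming MB/s means 10^6 bytes/s and that the gap between the raw rate and the 590 MB/s limit is protocol overhead:

```python
# Raw per-direction link rate from the card parameters: 48 bits per 133 MHz cycle.
bits_per_cycle = 48
clock_hz = 133e6
raw_bytes_per_s = bits_per_cycle * clock_hz / 8   # 798 MB/s raw, per direction

link_limit = 590e6   # physical limit quoted on the slide (after protocol overhead)
peak_uni = 570e6     # measured peak unidirectional MPI bandwidth

# MPI delivers about 97% of the link limit, i.e. the host side is not
# the bottleneck at large message sizes.
mpi_efficiency = peak_uni / link_limit
```

Note the 720 MB/s bidirectional peak exceeds the 590 MB/s limit only because it sums traffic in both directions of a full-duplex link.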
21 MPI latency benchmark
One-way and round-trip latency vs message size (up to 64 KB):
- Minimum streaming latency: 1.9 µs
- Minimum round-trip latency: 6.9 µs
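A common way to combine the latency and bandwidth numbers is the first-order cost model t(n) = latency + n/bandwidth. This is an illustrative model, not something measured on APENet; it assumes a strictly linear cost with no pipelining or protocol switch-over effects:

```python
latency_s = 1.9e-6    # minimum streaming latency from the benchmark
bandwidth = 570e6     # peak unidirectional MPI bandwidth, bytes/s

def transfer_time(nbytes):
    """First-order model: fixed startup cost plus serialization time."""
    return latency_s + nbytes / bandwidth

# Half-performance message size n_1/2 = latency * bandwidth: below it the
# startup latency dominates, above it the link bandwidth dominates.
n_half = latency_s * bandwidth   # about 1 KB for these numbers
```

So for this network, messages much smaller than roughly 1 KB are latency-bound, which is why low latency matters for LQCD's frequent boundary exchanges.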
22 Real-life benchmark
The performance (in floating-point operations per second) of 4 LQCD core routines is shown. The tests are executed with a fixed local lattice size (8^4), on the network topologies allowed for each number of CPUs. Best results are achieved when the global lattice geometry reflects the physical network topology.
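Keeping the local lattice fixed at 8^4 means the global lattice grows with the machine: this is a weak-scaling setup. A sketch of how the global lattice follows the network topology; the exact lattice-to-torus mapping (three network dimensions onto x, y, z, with t kept local) is an assumption for illustration:

```python
LOCAL = (8, 8, 8, 8)   # fixed local lattice per CPU: 8^4 sites (x, y, z, t)

def global_lattice(topo):
    """topo = (nx, ny, nz) machine grid; assumed mapping: network X, Y, Z
    onto lattice x, y, z, with the t extent staying on-node."""
    nx, ny, nz = topo
    return (LOCAL[0] * nx, LOCAL[1] * ny, LOCAL[2] * nz, LOCAL[3])

def n_cpus(topo):
    nx, ny, nz = topo
    return nx * ny * nz
```

For instance, a hypothetical 4x4x8 partition of the 128-node cluster would carry a 32x32x64x8 global lattice.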
23 Application scaling
Aggregate GFlops vs number of CPUs (up to 128) for: Block Dirac inversion, Full Dirac inversion, Dirac application 32 bit, Dirac application 64 bit.
24 Application scaling
MFlops per CPU vs number of CPUs for the same four routines: Block Dirac inversion, Full Dirac inversion, Dirac application 32 bit, Dirac application 64 bit.
25 Application scaling
Percentage of communication time vs number of CPUs for the same four routines.
26 Future work
Development is still in progress, both for enhancing performance and for bug fixing. Activity will focus mainly on:
- Multiport introduction
- Reducing CPU usage for communication
- Reducing latency for medium-to-large data transfers
27 Conclusions
Developing APENet has been a very interesting challenge, and its performance can still be compared with commercial interconnect systems in HPC. Technology is running fast out there, and since we are not market-driven (and under-sized), it is very easy to become obsolete. The acquired know-how is going to be transferred to the next generation of APE machines (going for PetaFlops?).
28 APENet people
R. Ammendola, R. Petronzio, D. Rossetti, A. Salamon, N. Tantalo, P. Vicini
More informationHP Z Turbo Drive G2 PCIe SSD
Performance Evaluation of HP Z Turbo Drive G2 PCIe SSD Powered by Samsung NVMe technology Evaluation Conducted Independently by: Hamid Taghavi Senior Technical Consultant August 2015 Sponsored by: P a
More informationIntel Enterprise Processors Technology
Enterprise Processors Technology Kosuke Hirano Enterprise Platforms Group March 20, 2002 1 Agenda Architecture in Enterprise Xeon Processor MP Next Generation Itanium Processor Interconnect Technology
More informationarxiv: v1 [physics.comp-ph] 4 Nov 2013
arxiv:1311.0590v1 [physics.comp-ph] 4 Nov 2013 Performance of Kepler GTX Titan GPUs and Xeon Phi System, Weonjong Lee, and Jeonghwan Pak Lattice Gauge Theory Research Center, CTP, and FPRD, Department
More informationAltair OptiStruct 13.0 Performance Benchmark and Profiling. May 2015
Altair OptiStruct 13.0 Performance Benchmark and Profiling May 2015 Note The following research was performed under the HPC Advisory Council activities Participating vendors: Intel, Dell, Mellanox Compute
More informationCatapult: A Reconfigurable Fabric for Petaflop Computing in the Cloud
Catapult: A Reconfigurable Fabric for Petaflop Computing in the Cloud Doug Burger Director, Hardware, Devices, & Experiences MSR NExT November 15, 2015 The Cloud is a Growing Disruptor for HPC Moore s
More informationMPICH-G2 performance evaluation on PC clusters
MPICH-G2 performance evaluation on PC clusters Roberto Alfieri Fabio Spataro February 1, 2001 1 Introduction The Message Passing Interface (MPI) [1] is a standard specification for message passing libraries.
More informationCluster Computing. Chip Watson Jefferson Lab High Performance Computing. Acknowledgements to Don Holmgren, Fermilab,, USQCD Facilities Project
Cluster Computing Chip Watson Jefferson Lab High Performance Computing Acknowledgements to Don Holmgren, Fermilab,, USQCD Facilities Project Jie Chen, Ying Chen, Balint Joo, JLab HPC Group Distributed
More informationAltix Usage and Application Programming
Center for Information Services and High Performance Computing (ZIH) Altix Usage and Application Programming Discussion And Important Information For Users Zellescher Weg 12 Willers-Bau A113 Tel. +49 351-463
More informationFrom Beowulf to the HIVE
Commodity Cluster Computing at Goddard Space Flight Center Dr. John E. Dorband NASA Goddard Space Flight Center Earth and Space Data Computing Division Applied Information Sciences Branch 1 The Legacy
More informationStatus and physics plan of the PACS-CS Project
Status and physics plan of the PACS-CS Project Lattice 2006 July 25 Tucson Collaboration members PACS-CS status physics plan Summary Akira Ukawa Center for Computational Sciences University of Tsukuba
More informationThe Red Storm System: Architecture, System Update and Performance Analysis
The Red Storm System: Architecture, System Update and Performance Analysis Douglas Doerfler, Jim Tomkins Sandia National Laboratories Center for Computation, Computers, Information and Mathematics LACSI
More information10 Gbit/s Challenge inside the Openlab framework
10 Gbit/s Challenge inside the Openlab framework Sverre Jarp IT Division CERN SJ Feb 2003 1 Agenda Introductions All Overview Sverre Feedback Enterasys HP Intel Further discussions Elaboration of plan
More information46PaQ. Dimitris Miras, Saleem Bhatti, Peter Kirstein Networks Research Group Computer Science UCL. 46PaQ AHM 2005 UKLIGHT Workshop, 19 Sep
46PaQ Dimitris Miras, Saleem Bhatti, Peter Kirstein Networks Research Group Computer Science UCL 46PaQ AHM 2005 UKLIGHT Workshop, 19 Sep 2005 1 Today s talk Overview Current Status and Results Future Work
More informationHardware and Software solutions for scaling highly threaded processors. Denis Sheahan Distinguished Engineer Sun Microsystems Inc.
Hardware and Software solutions for scaling highly threaded processors Denis Sheahan Distinguished Engineer Sun Microsystems Inc. Agenda Chip Multi-threaded concepts Lessons learned from 6 years of CMT
More informationSTAR-CCM+ Performance Benchmark and Profiling. July 2014
STAR-CCM+ Performance Benchmark and Profiling July 2014 Note The following research was performed under the HPC Advisory Council activities Participating vendors: CD-adapco, Intel, Dell, Mellanox Compute
More informationLarge scale Imaging on Current Many- Core Platforms
Large scale Imaging on Current Many- Core Platforms SIAM Conf. on Imaging Science 2012 May 20, 2012 Dr. Harald Köstler Chair for System Simulation Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen,
More information