Sharing High-Performance Devices Across Multiple Virtual Machines
1 Sharing High-Performance Devices Across Multiple Virtual Machines
2 Preamble: What does "sharing devices across multiple virtual machines" in our title mean? How is it different from virtual networking / NSX, which allows multiple virtual networks to share the underlying networking hardware? Virtual networking works well for many standard workloads, but in the realm of extreme performance we need to deliver much closer to bare-metal performance to meet application requirements. Application areas: Science & Research (HPC), Finance, Machine Learning & Big Data, etc. This talk is about achieving both extremely high performance and device sharing.
3 Sharing High-Performance PCI Devices: 1) Technical Background, 2) Big Data Analytics with SPARK, 3) High Performance (Technical) Computing.
4 Direct Device Access Technologies Accessing PCI devices with maximum performance
5 DirectPath I/O: Allows PCI devices to be accessed directly by the guest OS. Examples: GPUs for computation (GPGPU), ultra-low-latency interconnects like InfiniBand and RDMA over Converged Ethernet (RoCE). Downsides: no vMotion, no snapshots, etc., and the full device is made available to a single Virtual Machine, so there is no sharing. No ESXi driver is required, just the standard vendor device driver in the guest. (Diagram: Application and Guest OS Kernel inside the VM, accessing the device through VMware ESXi DirectPath I/O.)
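DirectPath I/O assignment can also be scripted rather than done in the UI. The following is a minimal sketch using pyVmomi (the Python vSphere SDK), not taken from the slides: it assumes an already-connected session, a VM object `vm`, the owning `host`, and a host PCI address `pci_id`; those names, and the use of the host UUID as `systemId`, are assumptions to verify against your environment.

```python
# Minimal sketch: add a host PCI device to a VM via DirectPath I/O with pyVmomi.
# Assumes `vm` is a vim.VirtualMachine, `host` is its vim.HostSystem, and
# `pci_id` is the PCI address string of a passthrough-enabled device.
from pyVmomi import vim

def add_passthrough_device(vm, host, pci_id):
    # Find the matching PCI device on the host.
    pci_dev = next(d for d in host.hardware.pciDevice if d.id == pci_id)

    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id=pci_dev.id,
        deviceId=hex(pci_dev.deviceId % 2**16).replace('0x', ''),
        systemId=host.hardware.systemInfo.uuid,   # assumption: normally taken from the config target
        vendorId=pci_dev.vendorId,
        deviceName=pci_dev.deviceName)

    dev_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

    # Passthrough VMs must reserve all of their guest memory.
    spec = vim.vm.ConfigSpec(deviceChange=[dev_spec],
                             memoryReservationLockedToMax=True)
    return vm.ReconfigVM_Task(spec)
```

In practice the `systemId` and device identifiers are usually read from the VM's environment-browser config target rather than hard-coded, but the shape of the reconfigure call is the same.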
6 Device Partitioning (SR-IOV): The PCI standard includes a specification for SR-IOV, Single Root I/O Virtualization. A single PCI device can present as multiple logical devices (Virtual Functions, or VFs) to ESXi and to VMs. Downsides: no vMotion, no snapshots (but note the PVRDMA feature in ESXi 6.5). An ESXi driver and a guest driver are required for SR-IOV. Mellanox Technologies supports SR-IOV on ESXi for both InfiniBand and RDMA over Converged Ethernet (RoCE) interconnects. (Diagram: a VM with Application and Guest OS Kernel, a VMXNET3 adapter attached to a vSwitch on the PF, and an nmlx5 driver attached directly to a VF.)
7 Remote Direct Memory Access (RDMA): A hardware transport protocol optimized for moving data to/from memory. Extreme performance: 600ns application-to-application latencies, 100Gbps throughput, negligible CPU overhead. RDMA applications: storage (iSER, NFS-RDMA, NVMe-oF, Lustre), HPC (MPI, SHMEM), big data and analytics (Hadoop, Spark).
8 How does RDMA achieve high performance? Traditional network stack challenges: per-message / per-packet / per-byte overheads, user-kernel crossings, memory copies. RDMA provides in hardware: isolation between applications, the transport itself (packetizing messages, reliable delivery), address translation, and user-level networking with direct hardware access for the data path. (Diagram: application buffers in user space accessed directly by RDMA-capable hardware, bypassing the kernel; kernel consumers such as NVMe-oF and iSER keep their own buffers.)
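To make the user-level networking point concrete, here is a small sketch using pyverbs (the Python bindings shipped with rdma-core); it is not from the slides, and the device name `mlx5_0` is an assumption. It opens a device, allocates a protection domain, and registers a memory region, which is the step where the adapter learns the address translations it later uses for kernel-bypass, zero-copy transfers.

```python
# Minimal sketch, assuming rdma-core's pyverbs is installed and an
# RDMA-capable device (e.g. a VF exposed to the guest) is present.
from pyverbs.device import get_device_list, Context
from pyverbs.pd import PD
from pyverbs.mr import MR
import pyverbs.enums as e

# Enumerate RDMA devices visible to this guest (a VF appears like a full HCA).
for dev in get_device_list():
    print('RDMA device:', dev.name)

ctx = Context(name='mlx5_0')   # assumption: adjust to the device name found above
pd = PD(ctx)                   # protection domain: hardware-enforced isolation

# Register a 4 KiB buffer: the adapter records the address translation so that
# later RDMA reads/writes bypass the kernel and avoid memory copies.
mr = MR(pd, 4096,
        e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_READ | e.IBV_ACCESS_REMOTE_WRITE)
print('lkey=%#x rkey=%#x' % (mr.lkey, mr.rkey))
```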
9 Host Configuration: Driver Installation. DirectPath I/O does not require an ESXi driver; InfiniBand and RoCE work with the standard guest driver in this case. To use SR-IOV, a host driver is required. RoCE bundle: MELLANOX-NMLX5_CORE-41688&productId=614. InfiniBand bundle: will be GA in Q. Management tools: install and configure the host driver using suitable driver parameters.
10 Verify Virtual Functions are available: 1) Select the host, 2) select the Configure tab, 3) select PCI Devices, 4) check that the Virtual Function is available.
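The same check can be done programmatically. Below is a minimal sketch with pyVmomi, assuming a connected session and a `host` object; the property and class names follow the vSphere API's PCI passthrough information and should be verified against your SDK version.

```python
# Minimal sketch (assumption: pyVmomi session established, `host` is a
# vim.HostSystem): list SR-IOV capable devices and their virtual functions.
from pyVmomi import vim

def list_virtual_functions(host):
    for info in host.config.pciPassthruInfo:
        # SR-IOV capable devices are reported with an SriovInfo entry.
        if isinstance(info, vim.host.SriovInfo) and info.sriovEnabled:
            print('PF %s: %d of %d VFs enabled' % (
                info.id, info.numVirtualFunction, info.maxVirtualFunctionSupported))
```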
11 Host Configuration: Assign a VF to a VM: 1) Select the VM, 2) select the Manage tab, 3) select Hardware, 4) select Edit.
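For automation, the VF can be attached as an SR-IOV passthrough network adapter. The sketch below is an illustration with pyVmomi, not from the slides; `network` and `pf_pci_id` are assumed inputs, and the exact backing fields should be checked against the vSphere API version in use.

```python
# Minimal sketch: attach an SR-IOV passthrough NIC backed by a given physical
# function to a VM. Assumes `vm`, a vim.Network `network`, and the PF's PCI id.
from pyVmomi import vim

def add_sriov_nic(vm, network, pf_pci_id):
    nic = vim.vm.device.VirtualSriovEthernetCard(
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            network=network, deviceName=network.name),
        sriovBacking=vim.vm.device.VirtualSriovEthernetCard.SriovBackingInfo(
            physicalFunctionBacking=vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
                id=pf_pci_id)),
        allowGuestOSMtuChange=False)

    spec = vim.vm.ConfigSpec(
        deviceChange=[vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)],
        memoryReservationLockedToMax=True)  # SR-IOV also requires full memory reservation
    return vm.ReconfigVM_Task(spec)
```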
12 SPARK Big Data Analytics Accelerating time to solution with shared, high-performance interconnect
13 SPARK Test Results on vSphere, TCP vs. RDMA (lower is better). Chart: runtime in seconds (average, min, max) on ESXi 6.5 hosts with one Spark VM per host; one server used as the Name Node.
Runtime samples | TCP (sec)   | RDMA (sec)  | Improvement
Average         | 222 (1.05x) | 171 (1.01x) | 23%
Min             | 213 (1.07x) | 165 (1.05x) | 23%
Max             | 233 (1.05x) | 174 (1.0x)  | 25%
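A common way to get RDMA into Spark without changing application code is an RDMA shuffle plugin such as Mellanox's SparkRDMA; the sketch below shows roughly how such a plugin is wired into a PySpark session. The jar path is hypothetical, and the shuffle-manager class name follows the SparkRDMA project documentation, so treat both as assumptions to check against the version you deploy.

```python
# Minimal sketch, assuming the SparkRDMA shuffle plugin jar is present on every
# node; the path and plugin class are assumptions to verify for your release.
from pyspark.sql import SparkSession

RDMA_JAR = "/opt/sparkrdma/spark-rdma-shuffle.jar"   # hypothetical path

spark = (SparkSession.builder
         .appName("rdma-shuffle-example")
         # Swap the default shuffle manager for the RDMA implementation.
         .config("spark.shuffle.manager",
                 "org.apache.spark.shuffle.rdma.RdmaShuffleManager")
         # The plugin jar must be on both driver and executor classpaths.
         .config("spark.driver.extraClassPath", RDMA_JAR)
         .config("spark.executor.extraClassPath", RDMA_JAR)
         .getOrCreate())

# groupBy forces a shuffle, which the RDMA shuffle manager now handles.
df = spark.range(0, 10_000_000)
print(df.groupBy((df.id % 10).alias("bucket")).count().collect())
```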
14 High Performance Computing: Research, Science, and Engineering applications on vSphere
15 Two Classes of HPC Workloads: Throughput and Tightly-Coupled (the latter often using the Message Passing Interface, MPI). Throughput, embarrassingly parallel examples: digital movie rendering, financial risk analysis, microprocessor design, genomics analysis. Tightly-coupled examples: weather forecasting, molecular modelling, jet engine design, spaceship/airplane/automobile design. (Diagram: HPC cluster.)
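A tiny mpi4py sketch (an assumption: mpi4py is installed in the guest; it is not part of the slides) makes the contrast visible: the "throughput" portion is each rank working independently, while the "tightly-coupled" portion is a collective that forces every rank to synchronize over the interconnect, which is where InfiniBand/RoCE latency matters.

```python
# Minimal sketch contrasting throughput-style and tightly-coupled MPI work.
# Assumes mpi4py is installed; run e.g. with: mpirun -np 60 python mpi_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Throughput / embarrassingly parallel: each rank computes on its own data,
# with no communication until the very end.
local_sum = sum(i * i for i in range(rank * 100_000, (rank + 1) * 100_000))

# Tightly coupled: a collective operation in which every rank exchanges data.
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print("global sum:", total)
```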
16 InfiniBand MPI Example: two virtual clusters (Cluster 1 and Cluster 2) sharing an InfiniBand fabric across three ESXi hosts. All VMs: #vCPUs = #cores, 100% CPU overcommit, no memory overcommit.
17 InfiniBand MPI Performance. Test application: NAMD; benchmark: STMV. 20-vCPU VMs for all tests, 60 MPI processes per job, on Cluster 1 and Cluster 2 across three Linux/ESXi hosts. (Chart: run time in seconds for bare metal, one vcluster, and two vclusters; chart labels include 10% and 98.5.)
18 Compute Accelerators: Enabling Machine Learning, Financial and other HPC applications on vSphere
19 Shared NVIDIA GPGPU Computing. Test configuration: a TensorFlow RNN workload in two Linux VMs, each with TensorFlow, CUDA and the NVIDIA guest driver; the GRID driver runs in ESXi. Hardware: SuperMicro dual 12-core system with a 16GB NVIDIA P100 GPU. Two VMs, each with an 8Q vGPU profile; NVIDIA GRID 5.0, ESXi 6.5. Scheduling policies: Fixed Share, Equal Share, Best Effort.
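Inside the guest, the vGPU appears as an ordinary CUDA device, so the usual TensorFlow checks apply. A minimal sketch, assuming TensorFlow 2.x and the CUDA/NVIDIA guest driver stack from the configuration above are installed in the Linux VM (not taken from the slides):

```python
# Minimal sketch: confirm the GRID vGPU is visible to TensorFlow inside the VM
# and run a small computation on it. Assumes TensorFlow 2.x with GPU support.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)   # the 8Q vGPU profile should appear as one GPU

if gpus:
    with tf.device('/GPU:0'):
        a = tf.random.normal([2048, 2048])
        b = tf.random.normal([2048, 2048])
        c = tf.matmul(a, b)        # executed on the vGPU
    print("result checksum:", float(tf.reduce_sum(c)))
```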
20 Shared NVIDIA GPGPU Computing: single P100, two 8Q VMs, legacy scheduler. (Results chart.)
21 Summary: Virtualization can support high-performance device sharing for cases in which extreme performance is a critical requirement. Virtualization supports device sharing and delivers near bare-metal performance for High Performance Computing, Big Data SPARK Analytics, and Machine and Deep Learning with GPGPU. The VMware platform and partner ecosystem address the extreme performance needs of the most demanding emerging workloads.
More information