VPI / InfiniBand. Performance Accelerated Mellanox InfiniBand Adapters Provide Advanced Data Center Performance, Efficiency and Scalability

Mellanox InfiniBand Host Channel Adapters (HCAs) deliver the highest data center performance, providing state-of-the-art interconnect solutions for High-Performance Computing, Machine Learning, data analytics, database, cloud and storage platforms, as well as Enterprise Data Centers, Web 2.0 and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications achieve significant performance improvements, reducing completion time and lowering cost per operation. Mellanox delivers the most technologically advanced HCAs, providing best-in-class performance and efficiency; they are the ideal solution for HPC clusters that demand high bandwidth, high message rate and low latency to achieve the highest server efficiency and application productivity. With traffic consolidation and hardware acceleration for virtualization, Mellanox HCAs provide optimal I/O services, such as high bandwidth and high server utilization, to maximize return on investment (ROI) for data centers, high-scale storage systems and cloud computing. With Virtual Protocol Interconnect (VPI), Mellanox HCAs offer the flexibility to connect to either InfiniBand or Ethernet fabrics from the same adapter.

World-Class Performance and Scale
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, including RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention; a verbs-level sketch of this offload model follows the lists below. Application acceleration and GPU communication acceleration bring further levels of performance improvement. The advanced acceleration technology in Mellanox InfiniBand adapters enables higher cluster efficiency and scalability to hundreds of thousands of nodes.

Complete End-to-End EDR InfiniBand Networking
ConnectX adapters are part of Mellanox's full EDR 100Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches, application acceleration packages, and cables. Mellanox's Switch-IB family of EDR InfiniBand switches and Unified Fabric Management software incorporate advanced tools that simplify network management and installation, and provide the capabilities needed for the highest scalability and future growth. Mellanox's HPC-X collectives, messaging, and storage acceleration packages deliver additional capabilities for ultimate server performance, and the line of EDR copper and fiber cables ensures the highest interconnect performance. With Mellanox end to end, IT managers can be assured of the highest performance and most efficient network fabric.

BENEFITS
- World-class cluster performance
- High-performance networking and storage access
- Efficient use of compute resources
- Guaranteed bandwidth and low-latency services
- Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
- Increased VM-per-server ratio
- I/O unification
- Virtualization acceleration
- Scalability to hundreds of thousands of nodes

TARGET APPLICATIONS
- High-performance parallelized computing
- Data center virtualization
- Public and private clouds
- Machine Learning and data analysis platforms
- Clustered database applications and high-throughput data warehousing
- Latency-sensitive applications such as financial analysis and high-frequency trading
- Performance storage applications such as backup, restore, and mirroring
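At the API level, the offload model described above is visible through RDMA verbs: the application registers a buffer and posts a work request, and the adapter moves the data while the host only checks for completion. The following is a minimal sketch using the standard libibverbs interface, not Mellanox-specific code; the already-connected reliable-connection queue pair and the out-of-band exchange of the remote address and rkey are assumptions, and error handling is kept minimal for brevity.

```c
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Minimal sketch: post a one-sided RDMA Write on an already-connected RC QP.
 * Assumptions (not from the brochure): 'qp' is connected, 'pd' is its
 * protection domain, 'cq' is its send CQ, and remote_addr/rkey were
 * exchanged out of band. */
int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp, struct ibv_cq *cq,
                       void *local_buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the HCA can DMA from it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, local_buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no remote CPU involvement */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        ibv_dereg_mr(mr);
        return -1;
    }

    /* The adapter performs the transfer; the host only polls for completion. */
    struct ibv_wc wc;
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;   /* busy-poll for simplicity */

    ibv_dereg_mr(mr);
    return wc.status == IBV_WC_SUCCESS ? 0 : -1;
}
```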

Virtual Protocol Interconnect
VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand or Ethernet fabrics, and supports IP over InfiniBand (IPoIB) and RDMA over Converged Ethernet (RoCE); the link layer a given port presents can be read through the standard verbs interface, as sketched at the end of this section. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

I/O Virtualization
Mellanox adapters provide comprehensive support for virtualized data centers with Single Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

Multi-Host Solution
Mellanox's Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Multi-Host connects multiple compute or storage hosts to a single interconnect adapter, separating the adapter's PCIe interface into multiple independent PCIe interfaces with no performance degradation. The technology enables designing and building new scale-out heterogeneous compute and storage racks with direct connectivity between compute elements, storage elements and the network, along with better power and performance management, while achieving maximum data processing and data transfer at minimum capital and operational expense.

Various Form Factors
Mellanox adapter cards are available in a variety of form factors to meet every data center's specific needs. Open Compute Project (OCP) cards integrate into the most cost-efficient, energy-efficient and scalable enterprise and Web 2.0 data centers, delivering leading connectivity for performance-driven server and storage applications. The OCP Mezzanine adapter form factor is designed to mate with OCP servers. Mellanox also offers a Socket Direct card to enable EDR 100Gb/s transmission for servers without x16 PCIe slots. The adapter's 16-lane PCIe bus is split into two 8-lane buses, with one bus accessible through a PCIe x8 edge connector and the other through an x8 parallel connector to an Auxiliary PCIe Connection Card. The result is lower latency and lower CPU utilization for dual-socket servers. The direct connection from each CPU to the network means the interconnect can bypass the QPI (UPI) link and the other CPU, optimizing performance and improving latency. CPU utilization improves because each CPU handles only its own traffic and not traffic from the other CPU. GPUDirect RDMA is also enabled for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card.

Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support SRP, iSER, NFS over RDMA and SMB Direct, as well as SCSI and iSCSI storage protocols. ConnectX adapters also offer NVMe over Fabrics (NVMe-oF) offloads, a flexible Signature Handover mechanism based on the advanced T10-DIF implementation, and an Erasure Coding offload capability enabling distributed RAID (Redundant Array of Inexpensive Disks).
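For reference, the per-port link layer mentioned under Virtual Protocol Interconnect can be read through the standard libibverbs interface. The sketch below simply reports whether port 1 of each RDMA device is operating as InfiniBand or Ethernet (RoCE); querying port 1 only is a simplifying assumption, not a Mellanox-specific convention.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

/* Minimal sketch: report the link layer of port 1 on each RDMA device.
 * On a VPI adapter, a port configured for Ethernet reports Ethernet (RoCE)
 * here, while an InfiniBand-configured port reports InfiniBand. */
int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr attr;
        if (ibv_query_port(ctx, 1, &attr) == 0) {
            const char *ll = "unspecified";
            if (attr.link_layer == IBV_LINK_LAYER_INFINIBAND)
                ll = "InfiniBand";
            else if (attr.link_layer == IBV_LINK_LAYER_ETHERNET)
                ll = "Ethernet (RoCE)";
            printf("%s port 1: %s\n", ibv_get_device_name(list[i]), ll);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```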

Enabling High-Performance Computing Applications
Mellanox InfiniBand/VPI adapters, with their advanced acceleration and RDMA capabilities, deliver best-in-class latency, bandwidth and message rate coupled with low CPU utilization, making them the most widely deployed adapters for large-scale High-Performance Computing (HPC) applications. Mellanox adapters deliver the highest scalability, efficiency, and performance for HPC systems in a variety of markets and applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, weather research and forecasting, and oil and gas industry modeling.

Software Support
All Mellanox adapters are supported by a full suite of drivers for the major Microsoft Windows, Linux and FreeBSD distributions. The adapters support OpenFabrics-based RDMA protocols and software, and their stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are compatible with configuration and management tools from OEMs and operating system vendors.

ConnectX-5
Intelligent ConnectX-5 adapter cards support Co-Design and In-Network Computing, and introduce new acceleration engines for maximizing High Performance Computing, Data Analytics and Storage platforms. ConnectX-5 with VPI supports two ports of EDR 100Gb/s InfiniBand and 100Gb Ethernet connectivity, sub-600-nanosecond latency and a very high message rate, plus an embedded PCIe switch and NVMe over Fabrics offloads, providing the highest performance and most flexible solution for the most demanding applications and markets. ConnectX-5 enables higher performance with new Message Passing Interface (MPI) offloads, such as MPI Tag Matching and MPI AlltoAll operations (illustrated in the MPI snippet after the adapter overviews below), advanced dynamic routing, and new capabilities to perform various data algorithms.

ConnectX-4
ConnectX-4 adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest performance and most flexible solution for high-performance computing, data analytics, database, and storage platforms. ConnectX-4 provides an unmatched combination of 100Gb/s bandwidth in a single port, very low latency, 150 million messages per second, and specific hardware offloads, addressing both today's and the next generation's compute and storage data center demands.

ConnectX-3 Pro
Mellanox's ConnectX-3 Pro Virtual Protocol Interconnect (VPI) adapter provides a high-performance, flexible FDR 56Gb/s InfiniBand and 40Gb Ethernet (up to 56GbE when connected to a Mellanox switch) interconnect solution, delivering high throughput across the PCI Express 3.0 host bus. ConnectX-3 Pro enables extremely low transaction latency (less than 1 microsecond) and can deliver more than 90 million MPI messages per second, making it a highly scalable solution suitable for transaction-demanding applications. ConnectX-3 Pro maximizes network efficiency, making it ideal for HPC or converged data centers operating a wide range of applications.
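The MPI Tag Matching offload mentioned for ConnectX-5 accelerates the standard tagged point-to-point pattern and is transparent to applications, which keep using plain MPI calls. The sketch below is generic MPI code, not Mellanox-specific: it shows the tagged send/receive whose matching (by source and tag) a capable adapter can perform in hardware; the tag value and message size are arbitrary placeholders.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: tagged point-to-point exchange between ranks 0 and 1.
 * With an adapter that offloads MPI tag matching, matching the incoming
 * message to the posted receive is done in the NIC rather than on the CPU;
 * the application code is unchanged. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int TAG = 42;            /* arbitrary illustrative tag */
    double payload[1024] = {0};

    if (rank == 0) {
        MPI_Send(payload, 1024, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(payload, 1024, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD, &status);
        printf("rank 1 received tag %d from rank %d\n",
               status.MPI_TAG, status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}
```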

General Specs (ConnectX-3 Pro / ConnectX-4 / ConnectX-5)
Ports: Single, Dual / Single, Dual / Single, Dual
Port Speed (Gb/s): IB: SDR, DDR, QDR, FDR10, FDR; Eth: 10, 40, 56 / IB: SDR, DDR, QDR, FDR, EDR; Eth: 10, 25, 40, 50, 56, 100 / IB: SDR, DDR, QDR, FDR, EDR; Eth: 10, 25, 40, 50, 100
PCIe: Gen3 x8 / Gen3 x8, Gen3 x16 / Gen3 x16, Gen4 x16
Connectors: QSFP+ / QSFP28 / QSFP28, SFP28
RDMA Message Rate (million msgs/sec): 36 / 150 / 200 (ConnectX-5 Ex, Gen4 server), 175 (ConnectX-5 En, Gen3 server)
Latency (µs): 0.64 / 0.6 / 0.6
Typical Power (2 ports, max speed): 6.2W / 13.8W / 16.6W (ConnectX-5 Ex), 13.8W

RDMA: RDMA, Out-of-Order RDMA (Adaptive Routing), Dynamically Connected Transport
Storage: NVMe-oF Target Offload, Erasure Coding (RAID Offload), T10-DIF/Signature Handover
Virtualization: SR-IOV (127 VFs / 32 PFs, 254 VFs / 64 PFs, 1000 VFs; enablement sketched below), Multi-Host (4 hosts, ConnectX-4 and ConnectX-5), Congestion Control (QCN, ECN), MPI Tag Matching Offload, OVS Offload
Management: Hairpin (Host Chaining), Host Management, QoS (Packet Pacing)
Form Factors: OCP, Socket Direct
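As a practical note, the SR-IOV virtual functions listed above are typically provisioned on Linux through the standard PCI sriov_numvfs sysfs attribute exposed under the adapter's device directory. The sketch below is illustrative only: the device name mlx5_0 and the VF count of 4 are placeholder assumptions, and SR-IOV must already be enabled in the adapter firmware.

```c
#include <stdio.h>

/* Minimal sketch: request 4 SR-IOV virtual functions through the standard
 * Linux PCI sysfs interface. The RDMA device name (mlx5_0) and VF count
 * are illustrative placeholders; run with sufficient privileges. */
int main(void)
{
    const char *path = "/sys/class/infiniband/mlx5_0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "4\n");   /* number of VFs to create */
    fclose(f);
    return 0;
}
```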

World-leading HPC centers are making the smart decision and choosing InfiniBand. This is why:

Jülich Supercomputing Centre
The Jülich Supercomputing Centre chose InfiniBand for a balanced, Co-Design approach to its interconnect, providing low latency, high throughput, and future scalability to its cluster, which contributes to projects in the areas of energy, environment, and brain research. "We chose a co-design approach, the appropriate hardware, and designed the system. This system was of course targeted at supporting in the best possible manner our key applications. The only interconnect that really could deliver that was InfiniBand."

CHPC South Africa
The Centre for High Performance Computing in South Africa, the largest HPC facility in Africa, chose InfiniBand to enhance and unlock the vast potential of its system, which provides high-end computational resources to a broad range of users in fields such as bioinformatics, climate research, material sciences, and astronomy. "The heartbeat of the cluster is the interconnect. Everything is about how all these processes shake hands and do their work. InfiniBand and the interconnect is, in my opinion, what defines HPC."

For detailed information on features, compliance, and compatibility, please see each product's specific product brief.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 Tel: 408-970-3400 Fax: 408-970-3403 www.mellanox.com

*This brochure describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability. *Product images may not include heat sink assembly; actual product may differ.

Copyright 2017. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, ConnectX, GPUDirect, Mellanox Multi-Host, UFM, and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. HPC-X, Socket Direct, and Switch-IB are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

3525BR Rev 11.0