PERFORMANCE ACCELERATED Mellanox InfiniBand Adapters Provide Advanced Levels of Data Center IT Performance, Productivity and Efficiency


Mellanox continues its leadership in providing InfiniBand Host Channel Adapters (HCAs), the highest-performing interconnect solution for Enterprise Data Centers, Cloud Computing, High-Performance Computing, and embedded environments.

VALUE PROPOSITIONS
- High-performance computing needs high bandwidth, low latency, and CPU offloads to achieve the highest server efficiency and application productivity. Mellanox HCAs deliver the I/O performance that meets these requirements.
- Data centers and cloud computing require I/O services such as bandwidth, consolidation and unification, and flexibility. Mellanox HCAs support LAN and SAN traffic consolidation and provide hardware acceleration for server virtualization.
- Virtual Protocol Interconnect (VPI) flexibility offers InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB, and FCoE connectivity.

Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. Virtualized servers achieve higher performance with reduced power consumption and fewer cables.

World-Class Performance
Mellanox InfiniBand adapters deliver industry-leading 40Gb/s bandwidth with ultra-low latency for performance-driven server and storage clustering applications. CPU overhead is kept to a minimum with hardware-based transport offload and data-movement acceleration such as RDMA and Send/Receive semantics, maximizing server productivity. Mellanox InfiniBand adapters support from one to 64 processor cores with no impact on latency and can scale to tens of thousands of nodes.

I/O Virtualization
Mellanox adapters' support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support the SRP (SCSI RDMA Protocol), iSER (iSCSI over RDMA), and NFS over RDMA protocols.

Software Support
All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software, and their stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are compatible with configuration and management tools from OEMs and operating system vendors.
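The RDMA, Send/Receive, and offload capabilities described above are exposed to applications through the OpenFabrics verbs interface mentioned under Software Support. The sketch below is purely illustrative: it assumes a Linux host with the libibverbs library and at least one InfiniBand HCA present, uses an arbitrary 4KB buffer, and omits the queue-pair creation and connection establishment that a real RDMA application would add.

```c
/* Minimal libibverbs sketch: open the first HCA, allocate a protection
 * domain, and register a buffer so the adapter can DMA it directly.
 * Illustrative only; queue-pair and connection setup are omitted.
 * Assumed build command: gcc rdma_sketch.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter (e.g. a ConnectX HCA). */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }
    printf("opened device %s\n", ibv_get_device_name(dev_list[0]));

    /* A protection domain groups resources allowed to work together. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    /* Register (pin) a buffer; the returned keys let the HCA move data
     * to/from it without CPU copies (RDMA read/write, send/receive). */
    size_t len = 4096;                 /* illustrative buffer size */
    void *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Completion queues, queue pairs, and out-of-band connection
     * exchange would follow here before posting RDMA work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

Registering memory up front is what allows the adapter to move data with RDMA semantics while the host CPU stays out of the data path, which is the basis of the CPU-offload claims above.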

ConnectX-2 with Virtual Protocol Interconnect
Mellanox's industry-leading ConnectX-2 InfiniBand adapters provide the highest-performing and most flexible interconnect solution. ConnectX-2 delivers up to 40Gb/s throughput across the PCI Express 2.0 host bus, enables the fastest transaction latency, as low as 1 microsecond, and can deliver more than 50 million MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. In addition to the InfiniBand features mentioned above, ConnectX-2 offers CORE-Direct application acceleration, GPU communication acceleration, Virtual-IQ technology for vNIC and vHBA support, and congestion management to maximize network efficiency, making it ideal for HPC or converged data centers operating a wide range of applications.

BENEFITS
- World-class cluster performance
- High-performance networking and storage access
- Efficient use of compute resources
- Guaranteed bandwidth and low-latency services
- Reliable transport
- I/O unification
- Virtualization acceleration
- Scales to tens of thousands of nodes

VPI flexibility makes it possible for any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each ConnectX-2 port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) and Fibre Channel over InfiniBand (FCoIB) as well as Fibre Channel over Ethernet (FCoE) and RDMA over Ethernet (RoE). ConnectX-2 with VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Complete End-to-End 40Gb/s InfiniBand Networking
ConnectX-2 adapters are part of Mellanox's full 40Gb/s end-to-end portfolio for data centers and high-performance computing systems, which includes switches and cables. Mellanox's IS5000 family of 40Gb/s InfiniBand switches incorporates advanced tools that simplify network management and installation, and provides the capabilities needed for the highest scalability and future growth. Mellanox's line of 40Gb/s copper and fiber cables ensures the highest interconnect performance. With a Mellanox end-to-end fabric, IT managers can be assured of the highest-performance, most efficient network.

TARGET APPLICATIONS
- High-performance parallelized computing
- Data center virtualization using VMware ESX Server
- Clustered database applications, parallel RDBMS queries, high-throughput data warehousing
- Latency-sensitive applications such as financial analysis and trading
- Cloud and grid computing data centers
- Performance storage applications such as backup, restore, and mirroring
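Latency and MPI message-rate figures of the kind quoted above for ConnectX-2 are commonly checked with simple MPI microbenchmarks run over the OFED stack and one of the MPI libraries listed under Protocol Support. The ping-pong sketch below is illustrative only; the iteration count and 8-byte message size are arbitrary choices, and it reports an approximate one-way latency by halving the measured round-trip time.

```c
/* Illustrative MPI ping-pong latency microbenchmark.
 * Run with two ranks on two fabric-connected nodes, e.g.:
 *   mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 10000;     /* illustrative iteration count */
    char msg[8] = {0};           /* small message, latency-bound  */
    MPI_Status status;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration is a full round trip; half of it approximates
         * the one-way latency quoted in adapter datasheets. */
        double one_way_us = (t1 - t0) / iters / 2.0 * 1e6;
        printf("approx. one-way latency: %.2f usec\n", one_way_us);
    }

    MPI_Finalize();
    return 0;
}
```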

ADAPTER CARD COMPARISON

- MHES14-XTC: 1 x 10Gb/s port; InfiniHost III Lx ASIC; microGiGaCN connector; PCIe 1.1 x4 @ 2.5GT/s; OS support: RHEL, SLES, Windows, HP-UX, ESX 3.5; RoHS: Yes
- MHES18-XTC (1 x 10Gb/s) or MHGS18-XTC (1 x 20Gb/s): InfiniHost III Lx ASIC; microGiGaCN connector; PCIe 1.1 x8 @ 2.5GT/s; OS support: RHEL, SLES, Windows, HP-UX, ESX 3.5; RoHS: Yes
- MHEA28-XTC (2 x 10Gb/s) or MHGA28-XTC (2 x 20Gb/s): InfiniHost III Ex ASIC; microGiGaCN connector; PCIe 1.1 x8 @ 2.5GT/s; OS support: RHEL, SLES, Windows, HP-UX, ESX 3.5; RoHS: Yes
- MHGH19B-XTR: 1 x 20Gb/s port; ConnectX-2 ASIC; microGiGaCN connector; PCIe 2.0 x8 @ 5.0GT/s; OS support: RHEL, SLES, Windows, ESX; RoHS: Yes
- MHGH29B-XTR: 2 x 20Gb/s ports; ConnectX-2 ASIC; microGiGaCN connector; PCIe 2.0 x8 @ 5.0GT/s; OS support: RHEL, SLES, Windows, ESX; RoHS: Yes
- MHRH19B-XTR (1 x 20Gb/s) or MHQH19B-XTR (1 x 40Gb/s): ConnectX-2 ASIC; QSFP connector; PCIe 2.0 x8 @ 5.0GT/s; OS support: RHEL, SLES, Windows, ESX; RoHS: Yes
- MHRH29B-XTR (2 x 20Gb/s) or MHQH29B-XTR (2 x 40Gb/s): ConnectX-2 ASIC; QSFP connector; PCIe 2.0 x8 @ 5.0GT/s; OS support: RHEL, SLES, Windows, ESX; RoHS: Yes
- MHZH29-XTR: 1 x 40Gb/s InfiniBand port + 1 x 10GigE port; ConnectX-2 ASIC; QSFP and SFP+ connectors; PCIe 2.0 x8 @ 5.0GT/s; OS support: RHEL, SLES, Windows, ESX; RoHS: Yes

Features: the InfiniHost III adapters provide hardware-based transport, RDMA, and I/O virtualization. The ConnectX-2 adapters add VPI, hardware-based transport and application offloads, RDMA, GPU communication acceleration, I/O virtualization, QoS and congestion control, and IP stateless offload.

FEATURE SUMMARY

INFINIBAND
- IBTA Specification 1.2.1 compliant
- 10, 20, or 40Gb/s per port
- RDMA, Send/Receive semantics
- Hardware-based congestion control
- Atomic operations
- 16 million I/O channels
- 9 virtual lanes: 8 data + 1 management

ENHANCED INFINIBAND (ConnectX-2)
- Hardware-based reliable transport
- Collective operations offloads
- GPU communication acceleration
- Hardware-based reliable multicast
- Extended Reliable Connected transport
- Enhanced Atomic operations
- Fine-grained end-to-end QoS

HARDWARE-BASED I/O VIRTUALIZATION (ConnectX-2)
- Single Root IOV
- Address translation and protection
- Multiple queues per virtual machine
- VMware NetQueue support

ADDITIONAL CPU OFFLOADS (ConnectX-2)
- TCP/UDP/IP stateless offload
- Intelligent interrupt coalescence
- Compliant with Microsoft RSS and NetDMA

STORAGE SUPPORT (ConnectX-2)
- Fibre Channel over InfiniBand or Ethernet

COMPLIANCE

SAFETY
- USA/Canada: cTUVus, UL
- EU: IEC60950
- Germany: TUV/GS
- International: CB Scheme

EMC (EMISSIONS)
- USA: FCC, Class A
- Canada: ICES, Class A
- EU: EN55022, Class A
- EU: EN55024, Class A
- EU: EN61000-3-2, Class A
- EU: EN61000-3-3, Class A
- Japan: VCCI, Class A
- Taiwan: BSMI, Class A

ENVIRONMENTAL
- EU: IEC 60068-2-64: Random Vibration
- EU: IEC 60068-2-29: Shocks, Type I / II
- EU: IEC 60068-2-32: Fall Test

OPERATING CONDITIONS
- Operating temperature: 0°C to 55°C
- Air flow: 200LFM @ 55°C
- Requires 3.3V and 12V supplies

COMPATIBILITY

CONNECTIVITY
- Interoperable with InfiniBand or 10GigE switches
- microGiGaCN or QSFP connectors
- 20m+ (10Gb/s), 10m+ (20Gb/s), or 7m+ (40Gb/s) of passive copper cable
- External optical media adapter and active cable support
- Quad to Serial Adapter (QSA) module, a connectivity adapter from QSFP to SFP+

OPERATING SYSTEMS/DISTRIBUTIONS
- Novell SLES, Red Hat Enterprise Linux (RHEL), Fedora, and other Linux distributions
- Microsoft Windows Server 2003/2008/CCS 2003
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF)
- VMware ESX Server 3.5 / vSphere 4.0

PROTOCOL SUPPORT
- Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
- TCP/UDP, IPoIB, SDP, RDS
- SRP, iSER, NFS RDMA, EoIB, FCoIB, FCoE
- uDAPL

Copyright 2010. Mellanox Technologies. All rights reserved. Mellanox, BridgeX, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX and Virtual Protocol Interconnect are registered trademarks of Mellanox Technologies, Ltd. CORE-Direct and FabricIT are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085 | Tel: 408-970-3400 | Fax: 408-970-3403 | www.mellanox.com