ConnectX Ethernet Adapter Cards for OCP Spec 3.0

High-Performance 10/25/40/50/100/200 GbE Ethernet Adapter Cards in the Open Compute Project Spec 3.0 Form Factor

Mellanox Ethernet adapter cards in the OCP 3.0 form factor support speeds from 10 to 200 GbE. Combining leading features with best-in-class efficiency, Mellanox OCP cards enable the highest data center performance.

World-Class Performance and Scale
Mellanox 10, 25, 40, 50, 100 and 200 GbE adapter cards deliver industry-leading connectivity for performance-driven server and storage applications. Offering high bandwidth coupled with ultra-low latency, ConnectX adapter cards enable faster access and real-time responses. Complementing its OCP 2.0 offering, Mellanox offers a variety of OCP 3.0-compliant adapter cards, providing best-in-class performance and efficient computing through advanced acceleration and offload capabilities. These capabilities, which free up valuable CPU cycles for other tasks while increasing data center performance, scalability and efficiency, include:
- RDMA over Converged Ethernet (RoCE)
- NVMe-over-Fabrics (NVMe-oF)
- Virtual switch offloads (e.g., OVS offload) leveraging Accelerated Switching and Packet Processing (ASAP²)
- GPUDirect communication acceleration
- Mellanox Multi-Host for connecting multiple compute or storage hosts to a single interconnect adapter
- Mellanox Socket Direct technology for improving the performance of multi-socket servers

Complete End-to-End Networking
ConnectX OCP 3.0 adapter cards are part of Mellanox's 10, 25, 40, 50, 100 and 200 GbE end-to-end portfolio for data centers, which also includes switches, application acceleration packages, and cabling, delivering a unique price-performance value proposition for network and storage solutions. With Mellanox, IT managers can be assured of the highest performance, reliability and most efficient network fabric at the lowest cost for the best return on investment. In addition, Mellanox NEO-Host management software greatly simplifies host network provisioning, monitoring and diagnostics with ConnectX OCP 3.0 cards, providing the agility and efficiency for scalability and future growth. Featuring an intuitive graphical user interface (GUI), NEO-Host provides in-depth visibility and host networking control. NEO-Host also integrates with Mellanox NEO, Mellanox's end-to-end data center orchestration and management platform.

Open Compute Project Spec 3.0
The OCP NIC 3.0 specification extends the capabilities of the OCP NIC 2.0 design specification. OCP 3.0 defines a different form factor and connector style than OCP 2.0, and specifies two basic card sizes: Small Form Factor (SFF) and Large Form Factor (LFF). Mellanox OCP NICs are currently offered in the SFF.*
* Future designs may utilize the LFF to allow for additional PCIe lanes and/or Ethernet ports.
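As a quick way to confirm that a ConnectX adapter's RDMA/RoCE path is visible to applications, the following minimal C sketch (an illustrative example, not taken from this brochure) enumerates RDMA-capable devices through libibverbs and prints a few of their reported limits. It assumes the rdma-core userspace stack is installed; build with `gcc probe.c -libverbs` (the file name is arbitrary).

```c
/* Hedged sketch: list RDMA-capable devices (e.g., ConnectX ports with RoCE
 * enabled) and print a few capability limits reported by the adapter. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found (is the driver loaded?)\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: max_qp=%d max_mr=%d max_sge=%d\n",
                   ibv_get_device_name(list[i]),
                   attr.max_qp, attr.max_mr, attr.max_sge);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```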

OCP 3.0 also provides additional board real estate, thermal capacity, electrical interfaces, network interfaces, host configuration and management. OCP 3.0 also introduces a new mating technique that simplifies FRU installation and removal and reduces overall downtime. The comparison below summarizes the key differences between the OCP 2.0 and OCP 3.0 specs.

Feature | OCP Spec 2.0 | OCP Spec 3.0
Card Dimensions | Non-rectangular (8,000 mm²) | SFF: 76x115 mm (8,740 mm²); LFF: 139x115 mm (15,985 mm²)
Baseboard Connector Type | Mezzanine (B2B) | Edge (0.6 mm pitch)
Network Interfaces | Up to 2 SFP side-by-side or 2 QSFP belly-to-belly | Up to 2 QSFP in SFF, side-by-side
Expansion Direction | N/A | Side
Installation in Chassis | Parallel to front/rear panel | Perpendicular to front/rear panel
Hot Swap | No | Yes (pending server support)
PCIe Lanes | Up to x16 | SFF: up to x16; LFF: up to x32
Maximum Power Capability | Up to 67.2W for a PCIe x8 card; up to 86.4W for a PCIe x16 card | SFF: up to 80W; LFF: up to 150W
Multi-Host | Up to 4 hosts | Up to 4 hosts in SFF or 8 hosts in LFF
Host Management Interfaces | RBT, SMBus | RBT, SMBus, PCIe
Host Management Protocols | Not standard | DSP0267, DSP0248

ConnectX OCP 3.0 Ethernet Adapters Benefits
- Open Data Center Committee (ODCC) compatible
- Supports the latest OCP 3.0 NIC specifications
- All platforms: x86, Power, Arm, compute and storage
- Industry-leading performance
- TCP/IP and RDMA for I/O consolidation
- SR-IOV virtualization technology: VM protection and QoS
- Cutting-edge performance in virtualized overlay networks
- Increased Virtual Machine (VM) count per server ratio

Target Applications
- Data center virtualization
- Compute and storage platforms for public & private clouds
- HPC, Machine Learning, AI, Big Data, and more
- Clustered databases and high-throughput data warehousing
- Latency-sensitive financial analysis and high-frequency trading
- Media & entertainment
- Telco platforms

For more details, please refer to the Open Compute Project Specifications.

Specs, Form Factors & Part Numbers

MCX4621A-ACAB: dual ports; 10/25 GbE port speeds; PCIe Gen3 x8; SFP28 connectors; 9.6W typical power (2 ports @ max. speed); host management: yes; Multi-Host support: no; OCP 3.0 SFF form factor; thumbscrew bracket.

MCX562A-ACAI: dual ports; 10/25 GbE port speeds; PCIe Gen3 x16; SFP28 connectors; 12.7W typical power (2 ports @ max. speed); host management: yes; Multi-Host support: no; OCP 3.0 SFF form factor; internal lock bracket.

MCX566M-GDAI: dual ports; 10/25/40/50 GbE port speeds; PCIe Gen3 x16; QSFP28 connectors; 14.7W typical power (2 ports @ max. speed); host management: yes; Multi-Host support: yes; OCP 3.0 SFF form factor; internal lock bracket.

MCX566A-CDAI: dual ports; 10/25/40/50/100 GbE port speeds; PCIe Gen4 x16; QSFP28 connectors; 16.2W typical power (2 ports @ max. speed); host management: yes; Multi-Host support: no; OCP 3.0 SFF form factor; internal lock bracket.

200 GbE adapter (contact Mellanox for availability): single/dual ports; 10/25/40/50/100/200 GbE port speeds; PCIe Gen4 x16; QSFP56 connectors; typical power: contact Mellanox; host management: yes; Multi-Host support: yes; OCP 3.0 SFF form factor; internal lock bracket.

For detailed information on features, compliance, and compatibility, please refer to product-specific documentation and software/firmware release notes on www.mellanox.com

I/O Virtualization and Virtual Switching
Mellanox ConnectX Ethernet adapters provide comprehensive support for virtualized data centers with Single Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization gives data center managers better server utilization and LAN and SAN unification while reducing cost, power and cable complexity. Moreover, virtual machines running in a server traditionally use multilayer virtual switch capabilities, such as Open vSwitch (OVS). Mellanox Accelerated Switch and Packet Processing (ASAP²) Direct technology allows the offloading of any implementation of a virtual switch or virtual router by handling the data plane in the NIC hardware while leaving the control plane unmodified. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.

RDMA over Converged Ethernet (RoCE)
Mellanox RoCE does not require any network configuration, allowing for seamless deployment and efficient data transfers with very low latencies over Ethernet networks, a key factor in maximizing a cluster's ability to process data instantaneously (a minimal client sketch follows this section). With the increasing use of fast and distributed storage, data centers have reached the point of yet another disruptive change, making RoCE a must in today's data centers.

Flexible Multi-Host Technology
Mellanox's innovative Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Multi-Host connects multiple compute or storage hosts to a single interconnect adapter, separating the adapter's PCIe interface into multiple independent PCIe interfaces, without any performance degradation. Mellanox's OCP 3.0 Small Form Factor (SFF) cards may support up to 4 different hosts (4x4 on SFF), and up to 8 hosts on a Large Form Factor (LFF) card. The technology enables designing and building new scale-out heterogeneous compute and storage racks with direct connectivity among compute elements, storage elements and the network. This enables better power and performance management, while achieving maximum data processing and data transfer at minimum capital and operational expenses.

Socket Direct™
Mellanox's Socket Direct technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this configuration, each CPU connects directly to the network; this enables the interconnect to bypass the QPI (UPI) link and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic, and not the traffic from the other CPU. Mellanox's OCP 3.0 cards include native support for Socket Direct technology for multi-socket servers and can support up to 8 CPUs.
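To make the RoCE description above concrete, here is a minimal, hedged C sketch of a client that sends one message over an RDMA connection using librdmacm's simplified verbs API. It is an illustrative example only, not part of this brochure: it assumes the rdma-core stack is installed, a RoCE-enabled port, and a peer that has posted a matching receive (for instance, the rdma_server sample shipped with librdmacm); build with `-lrdmacm -libverbs`.

```c
/* Hedged sketch: send one message over an RDMA (RoCE) connection. */
#include <stdio.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(int argc, char **argv)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    struct ibv_qp_init_attr attr = { .cap = { .max_send_wr = 1, .max_recv_wr = 1,
                                              .max_send_sge = 1, .max_recv_sge = 1 },
                                     .sq_sig_all = 1 };
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char msg[64] = "hello over RoCE";
    int ret;

    if (argc != 3) { fprintf(stderr, "usage: %s <server> <port>\n", argv[0]); return 1; }

    if (rdma_getaddrinfo(argv[1], argv[2], &hints, &res)) { perror("rdma_getaddrinfo"); return 1; }
    if (rdma_create_ep(&id, res, NULL, &attr)) { perror("rdma_create_ep"); return 1; }

    mr = rdma_reg_msgs(id, msg, sizeof(msg));          /* register the send buffer */
    if (!mr) { perror("rdma_reg_msgs"); return 1; }
    if (rdma_connect(id, NULL)) { perror("rdma_connect"); return 1; }

    if (rdma_post_send(id, NULL, msg, sizeof(msg), mr, 0)) { perror("rdma_post_send"); return 1; }
    while ((ret = rdma_get_send_comp(id, &wc)) == 0)   /* wait for the send completion */
        ;
    if (ret < 0) { perror("rdma_get_send_comp"); return 1; }
    printf("send completed, status=%d\n", wc.status);

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}
```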

Accelerated Storage
Mellanox adapters support a rich variety of storage protocols and enable partners to build hyperconverged platforms where the compute and storage nodes are co-located and share the same infrastructure. Leveraging RDMA, Mellanox adapters enhance numerous storage protocols, such as iSCSI over RDMA (iSER), NFS over RDMA, and SMB Direct, to name a few. Moreover, ConnectX adapters also offer NVMe-oF protocols and offloads, enhancing the utilization of NVMe-based storage appliances. Other storage-related hardware offloads are the Signature Handover mechanism, based on the advanced T10-DIF implementation, and the Erasure Coding offload, which enables building a distributed RAID (Redundant Array of Inexpensive Disks).

Host Management
Mellanox host management sideband implementations enable remote monitoring and control through RBT, MCTP over SMBus, and MCTP over PCIe to the Baseboard Management Controller (BMC), supporting both NC-SI and PLDM management protocols over these interfaces. Mellanox OCP 3.0 adapters support these protocols to offer numerous host management features, such as PLDM for firmware update, network boot in the UEFI driver, UEFI secure boot, and more.

Enhancing Machine Learning Application Performance
Mellanox adapters with built-in advanced acceleration and RDMA capabilities deliver best-in-class latency, bandwidth and message rates, with lower CPU utilization. Mellanox PeerDirect technology with NVIDIA GPUDirect RDMA enables adapters to communicate peer-to-peer with GPU memory directly, without any interruption to CPU operations (a short registration sketch follows this section). Mellanox adapters also deliver the highest scalability, efficiency, and performance for a wide variety of applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, and weather research and forecasting, as well as oil and gas industry modeling. Thus, Mellanox adapters are the best NICs for machine learning applications.

Secure Network Adapters
Mellanox ConnectX OCP 3.0 adapters implement a secure firmware update check: the device verifies firmware binaries using digital signatures prior to their installation on the adapter. This ensures that only officially authentic images produced by Mellanox can be installed, regardless of whether the installation happens from the host, the network, or a BMC. Starting with ConnectX-6, Mellanox also offers an optional Hardware Root of Trust, which introduces secure boot as well.
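The GPUDirect RDMA path described above can be illustrated with a short, hedged C sketch (not taken from this brochure): it allocates a buffer in GPU memory with the CUDA runtime and registers that pointer directly with the adapter via ibv_reg_mr, the step that lets the NIC DMA to and from GPU memory. It assumes an NVIDIA GPU, the CUDA toolkit, the GPUDirect RDMA kernel module (nvidia-peermem / nv_peer_mem), and rdma-core are installed; build with `-lcudart -libverbs`.

```c
/* Hedged sketch: register GPU memory with the adapter for GPUDirect RDMA. */
#include <stdio.h>
#include <cuda_runtime_api.h>
#include <infiniband/verbs.h>

/* Allocate 'len' bytes on the GPU and register them with the HCA so the NIC
 * can DMA to/from GPU memory directly (requires the peer-memory module). */
static struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len, void **gpu_buf)
{
    if (cudaMalloc(gpu_buf, len) != cudaSuccess)
        return NULL;

    struct ibv_mr *mr = ibv_reg_mr(pd, *gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        cudaFree(*gpu_buf);
    return mr;
}

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    void *gpu_buf = NULL;
    struct ibv_mr *mr = pd ? register_gpu_buffer(pd, 1 << 20, &gpu_buf) : NULL;

    printf("GPU buffer registration %s\n", mr ? "succeeded" : "failed");

    if (mr) { ibv_dereg_mr(mr); cudaFree(gpu_buf); }
    if (pd)  ibv_dealloc_pd(pd);
    if (ctx) ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return mr ? 0 : 1;
}
```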

Broad Software Support
All Mellanox adapter cards are supported by a full suite of drivers for major Linux distributions, Microsoft Windows, VMware vSphere and FreeBSD. Drivers are also available inbox in major Linux distributions, Windows and VMware.

Multiple Form Factors
In addition to the OCP Spec 3.0 cards, Mellanox adapter cards are available in other form factors to meet data centers' specific needs, including:
- OCP Specification 2.0 Type 1 & Type 2 mezzanine adapter form factors, designed to mate into OCP servers.
- Standard PCI Express (PCIe) Gen3 and Gen4 adapter cards.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com

NOTES: (1) This brochure describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability. (2) Product images may not include heat sink assembly; actual product may differ. For illustration only. Actual products may vary.

Copyright 2018. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, ConnectX, GPUDirect, and Mellanox Multi-Host are registered trademarks of Mellanox Technologies, Ltd. ASAP² - Accelerated Switch and Packet Processing, Socket Direct, and PeerDirect are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 060275BR Rev 1.0