InfiniBand Switch System Family. Highest Levels of Scalability, Simplified Network Manageability, Maximum System Productivity

Mellanox continues its leadership by providing InfiniBand SDN switch systems, the highest-performing interconnect solution for Web 2.0, Big Data, cloud computing, Enterprise Data Centers (EDC) and High-Performance Computing (HPC).

VALUE PROPOSITIONS
- Mellanox switches come in port configurations from 8 to 648 ports at speeds of up to 56Gb/s per port, with the ability to build clusters that can scale out to tens of thousands of nodes.
- Mellanox Unified Fabric Manager (UFM) ensures optimal cluster and data center performance with high availability and reliability.
- Support for Software Defined Networking (SDN).
- Mellanox switches deliver high bandwidth with low latency for the highest server efficiency and application productivity.
- Best price/performance solution with error-free 40-56Gb/s performance.
- Mellanox switches support Virtual Protocol Interconnect (VPI), allowing them to run seamlessly over both InfiniBand and Ethernet.

Mellanox's family of InfiniBand switches delivers the highest performance and port density with a complete chassis and fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. The family includes a broad portfolio of Edge and Director switches that range from 8 to 648 ports and support 40-56Gb/s per port with the lowest latency. These switches allow IT managers to build the most cost-effective and scalable switch fabrics, from small clusters up to tens of thousands of nodes.

Virtual Protocol Interconnect (VPI)
VPI flexibility enables any standard networking, clustering, storage, and management protocol to operate seamlessly over any converged network, leveraging a consolidated software stack. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Why Software Defined Networking (SDN)?
Data center networks have become exceedingly complex. IT managers cannot optimize the networks for their applications, which leads to high CAPEX/OPEX, low ROI and IT headaches. Mellanox InfiniBand SDN switches ensure separation between the control and data planes, enable centralized management and a centralized view of the network, allow the network to be programmed by external applications, and enable a cost-effective, simple and flat interconnect infrastructure.

BENEFITS
- Built with Mellanox's InfiniScale and SwitchX switch silicon
- Industry-leading energy efficiency, density, and cost savings
- Ultra-low latency
- Granular QoS for cluster, LAN and SAN traffic
- Quick and easy setup and management
- Maximizes performance by removing fabric congestion
- Fabric management for cluster and converged I/O applications
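As a rough, back-of-the-envelope illustration of the scalability claims above (this sketch is not taken from Mellanox documentation; the 36-port radix and fat-tree formulas are standard folded-Clos assumptions), the snippet below estimates how many end nodes a non-blocking fabric built from 36-port switch elements can reach:

```python
# Hypothetical sizing of non-blocking fat trees built from 36-port switch
# elements (the building block behind the 648-port directors described here).
# The formulas are the standard folded-Clos results, not Mellanox-specific.

def fat_tree_nodes(radix: int, tiers: int) -> int:
    """Maximum end nodes of a non-blocking fat tree with `tiers` switch levels."""
    if tiers == 1:
        return radix                 # a single edge switch
    if tiers == 2:
        return radix ** 2 // 2       # e.g. 36**2 / 2 = 648
    if tiers == 3:
        return radix ** 3 // 4       # e.g. 36**3 / 4 = 11,664
    raise ValueError("only 1-3 tiers modeled here")

if __name__ == "__main__":
    for tiers in (1, 2, 3):
        print(tiers, "tier(s):", fat_tree_nodes(36, tiers), "nodes")
    # 1 tier(s): 36 nodes      -- one edge switch
    # 2 tier(s): 648 nodes     -- the radix of a 648-port director
    # 3 tier(s): 11664 nodes   -- on the order of tens of thousands of nodes
```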

Edge Switches
8 to 36-port non-blocking 40Gb/s and 56Gb/s InfiniBand switch systems. The Mellanox family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 4Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. The edge switches, offered as externally managed or as managed switches, are designed to build the most efficient switch fabrics through the use of advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

Director Switches
108 to 648-port non-blocking 40Gb/s and 56Gb/s InfiniBand switch systems. Mellanox director switches provide the highest-density switching solution, scaling from 8.64Tb/s up to 72.52Tb/s of bandwidth in a single enclosure, with low latency and the highest per-port speeds of up to 56Gb/s. Their smart design provides unprecedented levels of performance and makes it easy to build clusters that can scale out to thousands of nodes. The InfiniBand director switches deliver the director-class availability required for mission-critical application environments. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

Sustained Network Performance
The Mellanox switch family enables efficient computing for clusters of all sizes, from the very small to the extremely large, while offering near-linear scaling in performance. Advanced features such as static routing, adaptive routing, and congestion management allow the switch fabric to dynamically detect and avoid congestion and to re-route around points of congestion. These features ensure the maximum effective fabric performance under all types of traffic conditions.

Reduce Complexity
Mellanox switches reduce complexity by providing seamless connectivity between InfiniBand, Ethernet and Fibre Channel based networks. You no longer need separate network technologies with multiple network adapters to operate your data center fabric. Granular QoS and guaranteed bandwidth allocation can be applied per traffic type, ensuring that each type of traffic has the resources needed to sustain the highest application performance.

Reduce Environmental Costs
Improved application efficiency, along with the need for fewer network adapters, allows you to accomplish the same amount of work with fewer, more cost-effective servers. Improved cooling and reduced power and heat consumption allow data centers to reduce the costs associated with physical space.
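The switching-capacity figures quoted above, and in the tables that follow, correspond closely to ports x link speed x 2 directions (full duplex, non-blocking). The sketch below checks that arithmetic under that assumption; note that some of the 56Gb/s director figures in this datasheet are listed slightly differently from the raw product of ports and link speed:

```python
# Illustrative consistency check: switching capacity ~= ports x link speed x 2.
# This is an assumption about how the datasheet figures are derived, not a
# formula taken from Mellanox documentation.

SYSTEMS = {
    "SX6025 (edge)":     (36,  56),   # ports, Gb/s per port
    "SX6036 (edge)":     (36,  56),
    "IS5100 (director)": (108, 40),
    "SX6536 (director)": (648, 56),
}

for name, (ports, gbps) in SYSTEMS.items():
    capacity_tbps = ports * gbps * 2 / 1000.0
    print(f"{name}: {capacity_tbps:.3f} Tb/s")
# SX6025 (edge): 4.032 Tb/s        -> matches the table
# SX6036 (edge): 4.032 Tb/s        -> matches the table
# IS5100 (director): 8.640 Tb/s    -> matches the table
# SX6536 (director): 72.576 Tb/s   -> the datasheet lists 72.52 Tb/s
```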

Enhanced Management Capabilities
Mellanox managed switches come with an onboard subnet manager, enabling simple, out-of-the-box fabric bring-up for up to 648 nodes. Mellanox FabricIT (IS5000 family) or MLNX-OS (SX6000 family) chassis management provides administrative tools to manage the firmware, power supplies, fans, ports, and other interfaces. All Mellanox switches can also be coupled with Mellanox's Unified Fabric Manager (UFM) software for managing scale-out InfiniBand computing environments. UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric, boosts application performance and ensures that the fabric is up and running at all times. MLNX-OS provides a license-activated embedded diagnostic tool, Fabric Inspector, to check node-to-node and node-to-switch connectivity and ensure fabric health.

(Figures: Chassis Management, UFM Software)
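Beyond the switch-embedded tools described above, basic fabric health can also be verified from any attached host using the standard OFED command-line diagnostics (ibstat, sminfo, ibdiagnet). The snippet below is a minimal, hypothetical wrapper around those tools; it is not part of MLNX-OS, FabricIT or UFM:

```python
# Minimal, hypothetical fabric health check using standard OFED diagnostics.
# Assumes ibstat, sminfo and ibdiagnet are installed on a host attached to
# the fabric; it is not an MLNX-OS/FabricIT/UFM API.
import subprocess

def run(cmd: list[str]) -> str:
    """Run a diagnostic command and return its combined output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    print(run(["ibstat"]))      # local HCA/port state and link rate
    print(run(["sminfo"]))      # which subnet manager is active and its state
    print(run(["ibdiagnet"]))   # fabric-wide discovery and error scan
```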

Edge Switches

|                    | IS5022  | IS5023   | IS5024   | IS5025   | SX6025    |
|--------------------|---------|----------|----------|----------|-----------|
| Ports              | 8       | 18       | 36       | 36       | 36        |
| Height             | 1U      | 1U       | 1U       | 1U       | 1U        |
| Switching Capacity | 640Gb/s | 1.44Tb/s | 2.88Tb/s | 2.88Tb/s | 4.032Tb/s |
| Link Speed         | 40Gb/s  | 40Gb/s   | 40Gb/s   | 40Gb/s   | 56Gb/s    |
| Interface Type     | QSFP    | QSFP     | QSFP     | QSFP     | QSFP+     |
| Management         | No      | No       | No       | No       | No        |
| PSU Redundancy     | No      | No       | No       | Yes      | Yes       |
| Fan Redundancy     | No      | No       | No       | Yes      | Yes       |
| Integrated Gateway | No      | No       | No       | No       | No        |

|                    | SX6005  | SX6012          | SX6015    | SX6018          | IS5030          | IS5035          | 4036            | 4036E           | SX6036          |
|--------------------|---------|-----------------|-----------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Ports              | 12      | 12              | 18        | 18              | 36              | 36              | 36              | 34 + 2 Eth      | 36              |
| Height             | 1U      | 1U              | 1U        | 1U              | 1U              | 1U              | 1U              | 1U              | 1U              |
| Switching Capacity | 1.3Tb/s | 1.3Tb/s         | 2.016Tb/s | 2.016Tb/s       | 2.88Tb/s        | 2.88Tb/s        | 2.88Tb/s        | 2.72Tb/s        | 4.032Tb/s       |
| Link Speed         | 56Gb/s  | 56Gb/s          | 56Gb/s    | 56Gb/s          | 40Gb/s          | 40Gb/s          | 40Gb/s          | 40Gb/s          | 56Gb/s          |
| Interface Type     | QSFP+   | QSFP+           | QSFP+     | QSFP+           | QSFP            | QSFP            | QSFP            | QSFP            | QSFP+           |
| Management         | No      | Yes (648 nodes) | No        | Yes (648 nodes) | Yes (108 nodes) | Yes (648 nodes) | Yes (648 nodes) | Yes (648 nodes) | Yes (648 nodes) |
| Management Ports   | -       | 1               | -         | 2               | 1               | 2               | 1               | 1               | 2               |
| PSU Redundancy     | No      | Optional        | Yes       | Yes             | Yes             | Yes             | Yes             | Yes             | Yes             |
| Fan Redundancy     | No      | No              | Yes       | Yes             | Yes             | Yes             | Yes             | Yes             | Yes             |
| Integrated Gateway | -       | Optional        | -         | Optional        | -               | -               | -               | Yes             | Optional        |

Director Switches

|                    | IS5100    | SX6506    | IS5200    | SX6512    | IS5300    | SX6518    | IS5600    | SX6536    | 4200      | 4700                 |
|--------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|----------------------|
| Ports              | 108       | 108       | 216       | 216       | 324       | 324       | 648       | 648       | 144/162   | 324 (648 HS)         |
| Height             | 6U        | 6U        | 9U        | 9U        | 16U       | 16U       | 29U       | 29U       | 11U       | 19U                  |
| Switching Capacity | 8.64Tb/s  | 12.12Tb/s | 17.28Tb/s | 24.24Tb/s | 25.9Tb/s  | 36.36Tb/s | 51.8Tb/s  | 72.52Tb/s | 11.52Tb/s | 25.92Tb/s (51.8Tb/s) |
| Link Speed         | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 56Gb/s    | 40Gb/s    | 40Gb/s               |
| Interface Type     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP+     | QSFP      | QSFP                 |
| Management         | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes | 648 nodes            |
| Management HA      | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes                  |
| Console Cables     | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes                  |
| Spine Modules      | 3         | 3         | 6         | 6         | 9         | 9         | 18        | 18        | 4         | 9                    |
| Leaf Modules (Max) | 6         | 6         | 12        | 12        | 18        | 18        | 36        | 36        | 9         | 18                   |
| PSU Redundancy     | Yes (N+1) | Yes (N+N) | Yes (N+1) | Yes (N+N) | Yes (N+2) | Yes (N+N) | Yes (N+2) | Yes (N+N) | Yes (N+N) | Yes (N+N)            |
| Fan Redundancy     | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes       | Yes                  |

FEATURE SUMMARY

HARDWARE
- 40-56Gb/s per port
- Full bisectional bandwidth to all ports
- IBTA 1.21 and 1.3 compliant
- QSFP connectors supporting passive and active cables
- Redundant auto-sensing 110/220VAC power supplies
- Per-port status LEDs: Link, Activity
- System, fan and PSU status LEDs
- Hot-swappable replaceable fan trays

MANAGEMENT
- Mellanox Operating System (MLNX-OS / FabricIT)
- Switch chassis management
- Embedded Subnet Manager (648 nodes)
- Error, event and status notifications
- Quality of Service based on traffic type and service levels
- Coupled with Mellanox Unified Fabric Manager (UFM): comprehensive fabric management; secure, remote configuration and management; performance/provisioning manager
- Fabric Inspector: cluster diagnostic tools for single-node, peer-to-peer and network verification

COMPLIANCE

SAFETY
- USA/Canada: cTUVus
- EU: IEC 60950
- International: CB Scheme
- Russia: GOST-R
- Argentina: S-mark

EMC (EMISSIONS)
- USA: FCC, Class A
- Canada: ICES, Class A
- EU: EN55022, Class A
- EU: EN55024, Class A
- EU: EN61000-3-2, Class A
- EU: EN61000-3-3, Class A
- Japan: VCCI, Class A
- Australia: C-TICK

ENVIRONMENTAL
- EU: IEC 60068-2-64: Random Vibration
- EU: IEC 60068-2-29: Shocks, Type I / II
- EU: IEC 60068-2-32: Fall Test

OPERATING CONDITIONS
- Temperature: Operating 0°C to 45°C, Non-operating -40°C to 70°C
- Humidity: Operating 5% to 95%
- Altitude: Operating -60m to 2000m

ACOUSTIC
- ISO 7779
- ETS 300 753

OTHERS
- RoHS-6 compliant
- 1-year warranty

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403  www.mellanox.com

Copyright 2012. Mellanox Technologies. All rights reserved. Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, MLNX-OS, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 3531BR Rev 4.1