ARISTA: Improving Application Performance While Reducing Complexity

October 2008

Contents
1.0 Problem Statement #1
1.1 Problem Statement #2
1.2 Previous Options: More Servers and I/O Adapters
1.3 Mellanox and Arista Solution
1.4 Summary
1.5 About Mellanox
1.6 About Arista Networks

1.0 Problem Statement #1

Advances in server virtualization, network storage, and compute clusters have driven the need for faster network throughput to address application latency and availability problems in the enterprise. Yet today's economic climate demands that enterprises hold capital and operational costs steady or reduce them. This white paper describes how Mellanox and Arista Networks substantially improve application performance while reducing network complexity.

1.1 Problem Statement #2

As applications and databases grow, fast access to data becomes critical. Enterprises have traditionally thrown more servers at performance challenges. More servers bring more I/O adapters, and more I/O adapters mean more cables, greater complexity, higher power consumption, and higher cost. The number of I/O adapters per server keeps growing because each server needs connectivity to both networks and storage, so servers ship with multiple dual- or quad-port Gigabit Ethernet NICs plus dual-port Fibre Channel HBAs. On average, each enterprise server is connected with four I/O cables, adding complexity without addressing the real bottleneck in the data center. Enterprises are looking for improved application performance and reduced infrastructure complexity.

1.2 Previous Options: More Servers and I/O Adapters

Previously, enterprises approached performance problems by deploying additional servers and adding I/O adapters to each one. Multiple Gigabit Ethernet NICs were installed per server to provide more I/O, consuming ever more Gigabit Ethernet switch ports. This solved the problem in the short term, but over time it added complexity in managing server sprawl and grew the switching infrastructure and overall power consumption.

1.3 Mellanox and Arista Solution

For over eight years, Mellanox Technologies has been an active leader in high-performance interconnect technology. The company recently introduced its feature-rich 10 Gigabit Ethernet silicon and adapter with industry-leading support for networking, virtualization, and storage:

Mellanox's ConnectX EN 10 Gigabit Ethernet adapter is the first adapter to support the PCI Express Gen 2.0 specification, which delivers 32 Gb/s of PCI bandwidth per direction on a x8 link, compared to 16 Gb/s for a Gen 1.0 device. Dual-port Mellanox ConnectX EN adapters therefore deliver not only high availability but also link aggregation at full performance, because PCIe Gen 2.0 provides the host bandwidth needed to feed both 10 GbE ports.

[Figure: Mellanox ConnectX EN 10 GbE Adapter]

Different applications have different performance characteristics, and Mellanox, together with its partner Arista, delivers joint solutions that address those differing needs with the best performance for both high-bandwidth and low-latency applications. Mellanox provides a superior out-of-the-box experience: the product needs no additional tuning, configuration, or kernel patches, and Mellanox drivers leverage standard industry stacks to deliver the best performance.

Mellanox and Arista conducted several performance benchmarks that give enterprises proof points for deploying 10 Gigabit Ethernet. Enterprises deploying Mellanox and Arista solutions can attain a significant performance increase over Gigabit Ethernet deployments, reduce cabling complexity, consolidate their networking, storage, and compute clusters onto a single fabric, and save energy costs.

Implementation

Scenario 1: Throughput

Mellanox ConnectX EN with the Arista 7124S delivers line-rate throughput with both single and multiple streams. Delivering line rate at low message sizes means the adapter can saturate the wire even with small packets. The benchmark tests used ConnectX EN and the Arista 7124S in the topology and configuration shown in Figure 1 and Table 1.

Figure 1. 10GbE Test Topology

Table 1: Test Configuration
Switch: Arista 7124S with 10GBASE-SR optics
Server: Supermicro 3U X7DWE with 4 GB memory
CPU: Dual-socket quad-core Intel Xeon X5482 @ 3.20 GHz
Chipset: Intel Seaburg (PCIe Gen 2)
OS: SLES 10 SP1, kernel 2.6.16.46-0.12
NIC: Mellanox ConnectX EN 10GBASE-SR, firmware 2.5.9, driver MLNX_EN 1.3.0

ConnectX EN delivers superior performance while scaling from message sizes of 64 or 128 bytes, typical of most enterprise applications, up to large messages such as 4 MB, which provide higher bandwidth for storage applications like backup. Using Iperf at a 1500-byte MTU, 9.47 Gb/s (line rate) was achieved (Figure 2). Together, Mellanox ConnectX EN and Arista provide the best out-of-the-box performance for low-latency and high-bandwidth applications.

Figure 2. Test Tool: Iperf 2.0.4 Throughput Results
1500 MTU: 9.47 Gb/s (line rate) with message sizes from 128 B to 4 MB
9000 MTU: 9.92 Gb/s (line rate) with message sizes from 64 B to 4 MB
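For readers who want to reproduce a comparable sweep, the following is a minimal sketch, not the scripts used in the original benchmark, of driving an Iperf 2.x client across a range of message sizes against a host already running iperf in server mode (iperf -s). The server address, size list, and run length are assumptions.

```python
# Illustrative sketch only: sweep Iperf 2.x message sizes against a host
# already running "iperf -s". The server address and size list are
# assumptions, not the configuration from the original benchmark.
import subprocess

SERVER = "192.168.1.10"                       # assumed iperf server address
SIZES = [128, 1024, 65536, 1048576, 4194304]  # message sizes in bytes

def run_iperf(server: str, size: int, seconds: int = 10) -> str:
    """Run a single iperf TCP test with the given read/write buffer size."""
    cmd = [
        "iperf",
        "-c", server,        # client mode, connect to the server
        "-l", str(size),     # length of the read/write buffer (message size)
        "-t", str(seconds),  # duration of the run
        "-f", "g",           # report bandwidth in Gbits/sec
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    for size in SIZES:
        report = run_iperf(SERVER, size)
        # The last line of the iperf report carries the bandwidth summary.
        print(f"{size} bytes: {report.strip().splitlines()[-1]}")
```

Reproducing the 9000-MTU results additionally requires jumbo frames to be enabled on both server interfaces and the switch ports.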

Scenario 2: TCP Latency Using a Standard Test Suite

The performance tests listed below use standard operating systems without any additional tuning or kernel patches. These numbers are achievable in the field with a similarly configured server, Mellanox 10 Gigabit Ethernet cards, and an Arista switch (Figure 3).

Figure 3. 10GbE vs. 1GbE Test Topology

Latency on the same server using the standard built-in Gigabit Ethernet adapter was measured at 34.38 microseconds (Figure 4). Comparing 1GbE with 10GbE shows a 5x reduction in TCP latency when using Mellanox ConnectX EN and the Arista 7124S switch.

Figure 4. 10GbE Latency vs. 1GbE Latency: Mellanox 10GbE technology, 7.36 usec; built-in 1GbE, 34.38 usec
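The paper does not name the latency tool beyond calling it a standard test suite, so the sketch below is only an illustrative stand-in: a TCP ping-pong loop that estimates one-way latency as half the average round-trip time for small messages. The port, message size, and iteration count are assumptions; tools such as netperf's TCP_RR test report a comparable metric.

```python
# Illustrative TCP ping-pong latency sketch (not the original test suite).
# Start "python3 tcp_latency.py server" on one host, then
# "python3 tcp_latency.py client <server-ip>" on the other (Python 3.8+).
# Port, message size, and iteration count are assumptions.
import socket
import sys
import time

PORT = 5001        # assumed TCP port
MSG_SIZE = 64      # small message, comparable to the 64-byte test point
ITERATIONS = 10000

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for _ in range(ITERATIONS):
                conn.sendall(recv_exact(conn, MSG_SIZE))   # echo back

def client(host: str) -> None:
    with socket.create_connection((host, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        payload = b"x" * MSG_SIZE
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            sock.sendall(payload)
            recv_exact(sock, MSG_SIZE)
        elapsed = time.perf_counter() - start
        # One-way latency is commonly estimated as half the mean round trip.
        print(f"average one-way latency: {elapsed / ITERATIONS / 2 * 1e6:.2f} usec")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```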

1.4 Summary

[Summary graphic: ConnectX EN 10GigE NIC — delivers I/O consolidation (FCoE, Data Center Ethernet); future-proofs the network (improved VM performance, SR-IOV); software ease of use and management; delivers the 10x promise (line-rate performance); PHY integration (low power, low cost, and high reliability)]

Mellanox's ConnectX EN 10 Gigabit Ethernet adapters with PCIe 2.0 deliver significant investment protection: integrated PHYs such as CX4, XAUI, XFI, and KR provide low power, low cost, and high reliability for today's networks, while support for Data Center Ethernet, Fibre Channel over Ethernet (FCoE), and Single Root I/O Virtualization (SR-IOV) prepares for tomorrow's networks. Enterprises can deploy the low-power, highly integrated Mellanox ConnectX EN today: it provides low CAPEX, while low energy costs and ease of use deliver low OPEX. ConnectX EN leverages Mellanox's experience and expertise in designing and delivering high performance in both virtualized and non-virtualized operating environments. ConnectX EN supports a wide variety of operating systems, including Windows Server 2003 and 2008, Red Hat Enterprise Linux, Novell SUSE Linux Enterprise Server and other distributions, VMware ESX 3.5, Citrix XenServer 4.1, and FreeBSD 7.0.

By deploying Mellanox ConnectX EN and the Arista 7124S, enterprises can gain almost 5x improvement in application performance over a 1GbE network. Low latency benefits cluster applications such as Monte Carlo simulations and portfolio analysis, gives compute-intensive applications fast access to data in ever-growing databases, and speeds storage access.

Low latency also delivers faster failover in virtualized VMware ESX environments, where virtual machines can be moved from one physical server to another instantaneously. Together, these benefits make the case for low-latency Ethernet.

1.5 About Mellanox

Mellanox Technologies is a leading supplier of semiconductor-based interconnect products to world-class server, storage, and infrastructure OEMs serving Fortune 500 data centers, the world's most powerful supercomputers, and mission-critical embedded applications. The company's 10 Gigabit Ethernet and Data Center Ethernet products with full FCoE hardware offload provide proven networking, storage, and virtualization acceleration. Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

1.6 About Arista Networks

Arista, based in Menlo Park, California, was founded to deliver scalable networking interconnects for large-scale data centers and cloud computing. Arista offers best-of-breed 10 Gigabit Ethernet solutions for cloud networking that redefine scalability, robustness, and price-performance. At the core of Arista's platform is the Extensible Operating System (EOS), a pioneering software architecture with self-healing and live in-service software upgrade capabilities. Arista's team comprises seasoned management and engineering talent bringing over a thousand man-years of expertise from leading network and server companies. For more information, please visit http://www.aristanetworks.com.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400  Fax: 408-970-3403
www.mellanox.com

Copyright 2009. Mellanox Technologies. All rights reserved. Preliminary information. Subject to change without notice. Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. BridgeX and Virtual Protocol Interconnect are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.