
SAN VIRTUOSITY SERIES WHITE PAPER

SAN Virtuosity: Fibre Channel over Ethernet

Subscribe to the SAN Virtuosity Series at www.sanvirtuosity.com

Table of Contents

Introduction
VMware and the Next Generation Server
Network Convergence and Data Center Bridging
Fibre Channel over Ethernet
Converged Network Hardware Components
Emulex OneConnect Universal CNA
Cisco Nexus 5000 Series Switches
Converged Network Deployment with VMware ESX
Conclusion

Introduction

Maintaining separate Local Area Networks (LANs) and Storage Area Networks (SANs) is an expensive proposition. Help is on the way with the convergence of storage and network traffic onto a single high-performance 10Gb/s Ethernet (10GbE) infrastructure using Fibre Channel over Ethernet (FCoE). Studies have shown that network convergence provides the potential for substantial cost savings, including up to:

35% savings for server adapters
25% savings for network switches
28% savings for cables
91% savings for cable installation
50% savings for power and cooling

Although these savings are highly beneficial, enterprise-class data centers still have concerns about adopting new technologies. Typical questions include:

Are there standards that will ensure interoperability?
Is there a steep learning curve?
What about existing investments in hardware and software?
Will this be disruptive to my current operations?
Are there reliable suppliers with proven track records?

As part of the continuing SAN Virtuosity series, this white paper introduces network convergence and FCoE, and provides an overview of how to plan for a successful deployment. It highlights how key products and technologies from Emulex, Cisco and VMware answer the critical questions that must be resolved so that deployments can go forward with a high level of confidence.

VMware and the Next Generation Server

For most data centers, the story of FCoE and network convergence begins with new servers. Next-generation multi-core, multi-processor servers from Intel and AMD provide a substantial increase in processing capability and memory capacity compared to older x86 servers that are reaching the end of their useful life. These new systems are ideal for server virtualization, and VMware vSphere is the market-leading platform of choice.

VMware vSphere enables data centers to leverage server virtualization to save capital and operating expenses and to deliver computing resources in an agile, cost-effective way. Built on the ESX/ESXi hypervisor, VMware vSphere encompasses a comprehensive suite of products to optimize and manage the VMware virtualization environment. The combination of VMware vSphere and Fibre Channel SANs provides a scalable and reliable data infrastructure to support key server virtualization features that include:

VMware vMotion - Moves virtual machines (VMs) between physical servers without disrupting applications, to better allocate resources and facilitate server deployments and upgrades
VMware High Availability (HA) - Automatically restarts VMs on a different server to ensure continuous application service if a physical server fails
VMware Distributed Resource Scheduler (DRS) - Dynamically allocates and balances computing resources aggregated into resource pools to optimize use of data center resources

As increased server and memory capacity enables much higher virtualization ratios, I/O bandwidth becomes a critical concern. Most data centers currently use separate 1Gb/s Ethernet (1GbE) and Fibre Channel networks, which results in a large number of adapter ports, switch ports and cables. For a typical virtualized server, this includes:

Two or more host bus adapters (HBAs) with connectivity to a Fibre Channel SAN to support VM data traffic
Four or more 1GbE adapters to support LAN requirements for applications running in VMs
One 1GbE adapter for VMware vMotion
One 1GbE adapter for VMware vCenter management

The end result is a high capital expense (CapEx) for network infrastructure and a large operating expense (OpEx) for power, cooling and management time. Fortunately, help is on the way. The first part of the solution is 10Gb/s Ethernet (10GbE). Two 10GbE ports provide 40Gb/s of total bidirectional bandwidth, more than enough to replace the six to ten 1GbE ports that would typically be installed on a virtualized server (the sketch at the end of this section tallies the arithmetic). Many of the new multi-core servers ship with two 10GbE ports included as a LAN-on-motherboard (LOM) component, so there is no need to buy additional Ethernet adapters. The next part of the solution is new networking technologies that allow Fibre Channel storage I/O to be transmitted over a 10GbE infrastructure.
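Before moving on, the following Python sketch tallies the consolidation arithmetic described above. The port counts follow the example configuration in this section, and the link speeds (including an assumed 8Gb/s Fibre Channel HBA) are illustrative assumptions rather than measurements.

```python
# Illustrative tally of per-server ports and aggregate bandwidth before and
# after convergence. Values are example assumptions, not measured data.

GBE_PORT = 1        # Gb/s per 1GbE port
TEN_GBE_PORT = 10   # Gb/s per 10GbE port
FC_HBA_PORT = 8     # Gb/s per Fibre Channel HBA port (assumed 8Gb/s FC)

legacy = {
    "FC HBA (VM storage)": (2, FC_HBA_PORT),
    "1GbE (VM LAN traffic)": (4, GBE_PORT),
    "1GbE (vMotion)": (1, GBE_PORT),
    "1GbE (vCenter management)": (1, GBE_PORT),
}

converged = {
    "10GbE CNA (LAN + FCoE storage)": (2, TEN_GBE_PORT),
}

def summarize(name, config):
    """Count ports and sum the link rate in each direction (full duplex doubles it)."""
    ports = sum(count for count, _ in config.values())
    bandwidth = sum(count * speed for count, speed in config.values())
    print(f"{name}: {ports} ports, {bandwidth} Gb/s per direction")
    return ports

legacy_ports = summarize("Legacy (separate LAN + SAN)", legacy)
converged_ports = summarize("Converged (dual-port CNA)", converged)
print(f"Ports eliminated per server: {legacy_ports - converged_ports}")
```

Multiplied across every virtualized server in a rack, this per-server reduction is what drives the adapter, switch-port and cabling savings cited in the Introduction.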

Network Convergence and Data Center Bridging

The potential benefits of transmitting storage data over a TCP/IP network have been recognized for several years. The Fibre Channel over IP (FCIP) protocol was developed by the Internet Engineering Task Force, but has not gained critical mass for data center deployments. The Internet Small Computer System Interface (iSCSI) protocol carries SCSI commands over IP networks and has seen a growing rate of adoption. Particularly well suited for small and medium-sized data centers, iSCSI has been limited by 1Gb/s bandwidth and potentially high latencies.

For Fibre Channel deployments, the problem with using traditional Ethernet has been the potential requirement to re-send lost data packets, which results in unacceptably high latencies. Although normal TCP/IP traffic is not adversely affected by lost and re-sent packets, Fibre Channel storage requires a low-latency data stream transmitted over a lossless Ethernet. The Institute of Electrical and Electronics Engineers (IEEE) has been working with Cisco, Emulex and other network companies to define a set of standards to resolve these issues. Originally introduced as Converged Enhanced Ethernet (CEE), the official IEEE standard will be called Data Center Bridging (DCB) when fully approved. The DCB standard enables multiple traffic types over a single link with lossless behavior and low latency. The key standards that support network convergence are summarized in Table 1.

Table 1: Data Center Bridging Standards

Priority-based Flow Control (PFC), IEEE 802.1Qbb: Allows traffic assigned to specified classes of service to be lossless, with priorities based on the class of service.
Quality of Service (QoS), IEEE 802.1p: Supports eight priorities for network traffic on a single physical Ethernet link.
Enhanced Transmission Selection (ETS), IEEE 802.1Qaz: Guarantees bandwidth to priorities while permitting additional bandwidth when available; ETS is particularly useful for bursty environments that can have occasional high I/O levels.
Data Center Bridging Exchange (DCBX), IEEE 802.1Qaz: Validates communication between DCB-capable devices and assists configuration management.

Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) is an extension of the Fibre Channel standard defined by the American National Standards Institute (ANSI) that enables Fibre Channel I/O to be transmitted over a lossless Ethernet network. Fibre Channel is defined by ANSI standards FC-BB-1 through FC-BB-4, and the FCoE extension is defined by ANSI standard FC-BB-5. The FCoE protocol specification encapsulates a complete Fibre Channel frame within an Ethernet frame, avoiding the overhead of any intermediate protocols (see Figure 1).

Figure 1: FCoE encapsulation in Ethernet
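To make the encapsulation concrete, the sketch below builds a simplified FCoE frame in Python: an Ethernet header carrying the FCoE Ethertype (0x8906), followed by a small FCoE header and the unmodified Fibre Channel frame as the payload. Field sizes are simplified and the MAC addresses and frame contents are placeholders, so treat this as an illustration of the layering rather than a wire-accurate implementation.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE traffic

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet/FCoE frame.

    Simplified sketch: the real FCoE header carries version bits and
    start-of-frame/end-of-frame delimiters, and the Ethernet FCS is
    appended by the adapter. Those details are omitted here.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(14)  # placeholder for version/reserved/SOF fields
    return eth_header + fcoe_header + fc_frame  # FC frame carried unmodified

# Placeholder values purely for illustration.
fc_frame = bytes(36)  # stand-in for a real Fibre Channel frame
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",
                             b"\x0e\xfc\x00\x00\x00\x02",
                             fc_frame)
print(f"FCoE frame length (without FCS): {len(frame)} bytes")
```

Running the sketch prints the frame length, showing the small fixed overhead that encapsulation adds to each Fibre Channel frame.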

This lightweight encapsulation ensures that FCoE-capable Ethernet switches are less compute-intensive and provide the low-latency performance required for a Fibre Channel network. By retaining Fibre Channel as the upper-layer protocol, the technology fully supports Fibre Channel functionality, including fabric login, zoning and Logical Unit Number (LUN) masking, and ensures compatible, secure access to networked storage.

Converged Network Hardware Components

The key hardware components that support FCoE and network convergence are host-based Converged Network Adapters (CNAs) and DCB-enabled network switches that support FCoE.

Emulex OneConnect Universal CNA

The Emulex OneConnect Universal CNA product family supports multiple protocol offloads for networking and storage. Using a common hardware platform, protocol support is controlled by the personality that is loaded in flash memory. The OneConnect product family includes baseline 10GbE NICs and adapters that support FCoE or iSCSI storage protocols with hardware offloads. The OneConnect pay-as-you-go approach allows customers to begin with a 10GbE NIC and then add storage protocols through a software-based enablement process.

Within the OneConnect family, the Emulex OCe10102-F is a single-chip, second-generation 10GbE CNA that supports FCoE and DCB. The OCe10102-F offloads Fibre Channel protocol processing from the CPU, providing high-performance storage connectivity and optimized CPU utilization. It also leverages ten generations of advanced, field-proven Emulex LightPulse Fibre Channel HBA technology, making it ideal for server I/O consolidation.

Figure 2: Emulex OCe10102-F CNA

Cisco Nexus 5000 Series Switches

The Cisco Nexus 5000 Series Switches transform the data center with an innovative, standards-based, multilayer, multiprotocol, and multipurpose Ethernet-based fabric. They help enable any transport over Ethernet, including Layer 2, Layer 3 and storage traffic, all on one common data center-class platform. The Nexus family of rack switches delivers high-performance, low-latency 10 Gigabit Ethernet with DCB and FCoE support.

The Cisco Nexus 5000 Series Switches are ideal for enterprise-class data center server access-layer and smaller-scale, midmarket data center aggregation deployments. They can be deployed across a diverse set of traditional, virtualized, unified, and high-performance computing (HPC) environments. As part of the network foundation for Cisco Data Center Business Advantage, the Cisco Nexus 5000 Series Switches are designed to address the business, application, and operational requirements of current and future data centers. Cisco Nexus 5000 Series Switches provide:

Architectural flexibility to support diverse business and application needs
Infrastructure simplicity to decrease total cost of ownership
Increased agility and flexibility for traditional deployments, with easy migration to virtualized, unified, or HPC environments
Enhanced business resilience with greater operational continuity
The ability to use existing operational models and administrative domains for easy deployment

Converged Network Deployment with VMware ESX

Server virtualization is a top-of-mind issue for most data centers and one of the key drivers of FCoE deployments. FCoE-enabled network convergence significantly reduces the number of cables, switch ports and adapters in an ESX environment, as shown in Figure 3.

Figure 3: Port reductions using a OneConnect CNA in a rack-mount server running ESX

DCB-enabled Cisco Nexus switches and Emulex CNAs provide the building blocks an enterprise-class data center needs to add FCoE.

FCoE switches, provisioned at the edge of the network, detect Ethernet frames with encapsulated Fibre Channel frames and pass them along to the Fibre Channel Forwarder (FCF) for communication to the SAN. When the switch is configured for FCoE, Fibre Channel frames can be routed over an FC uplink port connected to a Fibre Channel switch (E-port or NPIV mode) or a storage array (F-port). Ethernet ports on the switch provide connectivity to the LAN (a conceptual sketch of this traffic split appears at the end of this section).

Figure 4: Converged network deployment and access to the SAN

Although it is a single physical adapter, the Emulex OCe10102-F CNA is recognized by ESX as both an HBA and a NIC, as shown in Figure 5. The ESX server, in turn, provides networking and storage access for the VMs it hosts.

Figure 5: The ESX hypervisor views the CNA as both an HBA and a NIC

Configuration and management of the ESX server is performed with VMware vCenter Server. OCe10102-F ports are visible on the vCenter Configuration tab in both the storage adapter view and the network adapter view. When deploying multiple CNAs for failover and load balancing, Fibre Channel multipathing and NIC teaming would be configured.
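The traffic split described in this section can be summarized in a few lines: the edge switch inspects each frame's Ethertype, hands FCoE traffic to the Fibre Channel Forwarder for the SAN, and switches everything else as ordinary Ethernet toward the LAN. This is a conceptual model only; the simple two-way decision and the placeholder frames are assumptions for illustration, not a representation of Cisco NX-OS internals.

```python
FCOE_ETHERTYPE = 0x8906  # FCoE traffic is identified by this Ethertype

def ethertype(frame: bytes) -> int:
    """Read the 16-bit Ethertype that follows the two 6-byte MAC addresses."""
    return int.from_bytes(frame[12:14], "big")

def forward(frame: bytes) -> str:
    """Conceptual edge-switch decision: FCoE goes to the FCF, the rest to the LAN.

    A real FCoE switch also handles 802.1Q tags, FIP (FCoE Initialization
    Protocol) control frames and priority-based flow control, all omitted here.
    """
    if ethertype(frame) == FCOE_ETHERTYPE:
        return "to Fibre Channel Forwarder (FCF) -> FC uplink -> SAN"
    return "switched as regular Ethernet -> LAN"

# Placeholder frames purely for illustration.
lan_frame = bytes(12) + (0x0800).to_bytes(2, "big") + bytes(46)   # e.g. IPv4 traffic
fcoe_frame = bytes(12) + (0x8906).to_bytes(2, "big") + bytes(46)  # encapsulated FC frame
print(forward(lan_frame))
print(forward(fcoe_frame))
```

Because both traffic types share the same converged link from the CNA, this classification at the switch is what lets existing LAN and SAN administrative domains stay unchanged.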

Conclusion

FCoE and network convergence are ready for deployment and particularly well suited for virtualized servers. The final step is to answer the questions that we began with:

Q: Are there standards that will ensure interoperability?
A: Yes. DCB and FCoE technologies are based on IEEE and ANSI standards.

Q: Is there a steep learning curve?
A: No. FCoE and DCB use the same procedures and management tools that are in place today for Ethernet-based LANs and Fibre Channel-based SANs.

Q: What about existing investments in hardware and software?
A: Existing investments in network hardware and software can still be used and are fully leveraged. In particular, servers with CNAs can connect to the same SANs and LANs that today support traditionally configured servers with separate HBAs and NICs.

Q: Will this be disruptive to my current operations?
A: No. FCoE and DCB can be added at the edge of the network, typically with new servers that are ideal for server virtualization. Existing applications can be virtualized when appropriate. VMware ESX server clusters can also be upgraded non-disruptively: using vMotion, a server's workload can be moved to other servers in the cluster during the downtime needed to replace NICs and HBAs with CNAs.

Q: Are there reliable suppliers that have proven track records?
A: Yes. FCoE and DCB are enabled with adapters and switches from companies like Emulex and Cisco, which have proven track records delivering networking products to enterprise-class data centers.

Cisco Systems, Inc. 170 West Tasman Drive, San Jose, CA 95134-1706 USA Tel 408-526-4000 www.cisco.com
2010 Cisco Systems, Inc. All rights reserved. CCVP, the Cisco logo, and the Cisco Square Bridge logo are trademarks of Cisco Systems, Inc.

Emulex Corporation. 3333 Susan Street, Costa Mesa, CA 92626 USA Tel 714-662-5600 www.emulex.com
Copyright 2010 Emulex. All rights reserved worldwide. No part of this document may be reproduced by any means or translated to any electronic medium without the prior written consent of Emulex. Information furnished by Emulex is believed to be accurate and reliable. However, no responsibility is assumed by Emulex for its use, or for any infringement of patents or other rights of third parties which may result from its use. No license is granted by implication or otherwise under any patent, copyright or related rights of Emulex. Emulex, the Emulex logo, LightPulse and SLI are trademarks of Emulex.

VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 USA Tel 650-427-5000 www.vmware.com
2010 VMware, Inc. All rights reserved. VMware, the VMware boxes logo and design, vSphere, Virtual SMP and vMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

11-0405 10/10