Comparing Server I/O Consolidation Solutions: iSCSI, InfiniBand and FCoE. Gilles Chekroun, Errol Roberts

Comparing Server I/O Consolidation Solutions: iSCSI, InfiniBand and FCoE Gilles Chekroun, Errol Roberts

SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced without modification; the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. 2

Abstract Comparing Server I/O Consolidation: iSCSI, InfiniBand and FCoE This tutorial gives an introduction to server I/O consolidation: using one network interface technology (standard Ethernet, Data Center Ethernet, InfiniBand) to support both IP applications and block-level storage (iSCSI, FCoE and SRP/iSER). The benefits for the end user are discussed: less cabling, power and cooling. For these three solutions, iSCSI, InfiniBand and FCoE, we compare features such as infrastructure and cabling, protocol stack, performance, operating system drivers and support, management tools, security and best design practices. 3

Agenda Definition of Server I/O Consolidation Why Server I/O Consolidation Introducing the 3 solutions: iSCSI, InfiniBand, FCoE Differentiators Conclusion 4

Definition of Server I/O Consolidation 5

What is Server I/O Consolidation? IT organizations operate multiple parallel networks: IP applications (including NFS, NAS, etc.) over an Ethernet network *); SAN over a Fibre Channel network; HPC/IPC over an InfiniBand network **). Server I/O consolidation combines the various traffic types onto a single interface and single cable. Server I/O consolidation is the first phase towards a Unified Fabric (single network). *) In this presentation we cover only block-level storage solutions, not file level (NAS, NFS, ..). **) For the remainder we don't cover HPC; for the lowest latency requirements, InfiniBand is the best and most appropriate technology. 6

I/O Consolidation Benefits [Diagram: a server with separate FC HBAs (FC traffic) and NICs (Ethernet traffic) is replaced by a server with a single adapter per link on a consolidated network.] Adapter: NIC for Ethernet/IP, HCA for InfiniBand, or Converged Network Adapter (CNA) for FCoE. Customer benefit: fewer NICs, HBAs and cables; lower CapEx and OpEx (power, cooling). 7

Why Server I/O Consolidation? 8

The drive for I/O Consolidation Multicore Multisocket CPUs Server Virtualization software (Hypervisor) High demand for I/O bandwidth Reductions in cables, power and cooling, therefore reducing OpEx/CapEx Limited number of interfaces for Blade Servers Consolidated Input into Unified Fabric 9

New Network Requirements with Server Virtualization Virtual networks are growing faster and larger than physical ones. Network admins are getting involved in virtual interface deployments. The network access layer needs to evolve to support consolidation and mobility. Multi-core computing is driving virtualization and new networking needs: driving SAN attach rates higher (from roughly 10% toward 40% and growing) and driving users to plan now for 10GE server interfaces. Virtualization enables the promise of blades; 10GE and FC are the highest-growth technologies within blades, and virtualization with consolidated I/O removes blade limitations. 10

10GbE Drivers in the Datacenter Multi-Core CPU architectures allowing bigger and multiple workloads on the same machine Server virtualization driving the need for more bandwidth per server due to server consolidation Growing need for network storage driving the demand for higher network bandwidth to the server Multi-Core CPUs and Server Virtualization driving the demand for higher bandwidth network connections 11

Evolution of Ethernet Physical Media 12

Therefore... 10 Gigabit Ethernet is the enabler for I/O Consolidation in next generation Data Centers 13

Introducing the three solutions 14

Server I/O Consolidation Solutions iSCSI LAN: based on Ethernet and TCP/IP; SAN: encapsulates SCSI in TCP/IP. InfiniBand LAN: transports IP over InfiniBand (IPoIB), with Socket Direct Protocol (SDP) between IB-attached servers; SAN: transports SCSI over the Remote DMA protocol (SRP) or iSCSI Extensions for RDMA (iSER); HPC/IPC: Message Passing Interface (MPI) over the InfiniBand network. FCoE LAN: based on Ethernet (CEE *) and TCP/IP; SAN: maps and transports Fibre Channel over CEE *. * CEE (Converged Enhanced Ethernet) is an architectural collection of Ethernet extensions designed to improve Ethernet networking and management in the Data Center; also called DCB (Data Center Bridging) or DCE (Data Center Ethernet). 15

Encapsulation technologies [Diagram: protocol stacks from the Operating System / Applications and SCSI layer down to the wire.]
iSCSI: SCSI > iSCSI > TCP > IP > Ethernet (1, 10 Gbps)
FCIP / iFCP: SCSI > FCP* > FCIP or iFCP > TCP > IP > Ethernet
Fibre Channel: SCSI > FCP* > Fibre Channel (1, 2, 4, 8, 10 Gbps)
FCoE: SCSI > FCP* > FCoE > CEE (10 Gbps)
InfiniBand: SCSI > SRP or iSCSI/iSER > InfiniBand (10, 20, 40 Gbps)
* Includes FC Layer 16

iSCSI [Stack: OS / Applications > SCSI layer > iSCSI > TCP > IP > Ethernet; 1, 10... Gbps] A SCSI transport protocol that operates over TCP. Encapsulates SCSI CDBs (operational commands, e.g. read or write) and data into TCP/IP byte streams. Allows iSCSI initiators to access IP-based iSCSI targets (either natively or via an iSCSI-to-FC gateway). Standards status: IETF standard RFC 3720 on iSCSI, plus a collection of RFCs describing iSCSI: RFC 3347 iSCSI Requirements, RFC 3721 iSCSI Naming and Discovery, RFC 3723 iSCSI Security. Broad industry support: operating system vendors support their iSCSI drivers; gateways (routers, bridges) and native iSCSI storage arrays are available. 17
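As an illustration of what "encapsulating SCSI CDBs" means at the byte level, here is a minimal Python sketch that packs a SCSI READ(10) command into the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU, following the RFC 3720 field layout; the LUN encoding, tags and sequence numbers are purely illustrative, and digests, AHS and immediate data are omitted.

```python
import struct

def build_scsi_command_bhs(cdb: bytes, lun: int, task_tag: int,
                           expected_len: int, cmd_sn: int, exp_stat_sn: int,
                           read: bool = True) -> bytes:
    """Pack a 48-byte Basic Header Segment for a SCSI Command PDU (opcode 0x01),
    roughly following the RFC 3720 layout. Sketch only: no AHS, no digests,
    no immediate data, simplified LUN handling."""
    opcode = 0x01                                 # SCSI Command
    flags = 0x80 | (0x40 if read else 0x20)       # Final bit + Read or Write bit
    data_segment_len = (0).to_bytes(3, "big")     # no immediate data segment
    return struct.pack(">BB2xB3sQIIII16s",
                       opcode, flags,
                       0,                         # TotalAHSLength
                       data_segment_len,
                       lun << 48,                 # simplistic LUN encoding (illustrative)
                       task_tag, expected_len, cmd_sn, exp_stat_sn,
                       cdb.ljust(16, b"\x00")[:16])

# READ(10): SCSI opcode 0x28, LBA 0, 8 blocks of 512 bytes
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
bhs = build_scsi_command_bhs(read10, lun=0, task_tag=1,
                             expected_len=8 * 512, cmd_sn=1, exp_stat_sn=1)
assert len(bhs) == 48        # the BHS is always 48 bytes; it rides in a TCP byte stream
```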

iSCSI Messages The IP header contains routing information so that the message can find its way through the network; the TCP header provides the information necessary to guarantee delivery; the iSCSI header explains how to extract SCSI commands and data. Frame layout: Ethernet Header (22 bytes) | IP Header (20 bytes) | TCP Header (20 bytes) | iSCSI Header (48 bytes) | iSCSI Data (nn bytes) | Ethernet Trailer (4 bytes). The iSCSI PDU travels inside a TCP segment, inside an IP packet, inside an Ethernet frame. 18
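Using the header sizes on this slide, a quick back-of-the-envelope calculation shows how much of each Ethernet frame is actual iSCSI data. The sketch below assumes one PDU per frame and ignores optional header/data digests, preamble and inter-frame gap.

```python
# Per-frame efficiency of iSCSI using the header sizes from the slide.
ETH_HDR, IP_HDR, TCP_HDR, ISCSI_BHS, ETH_TRAILER = 22, 20, 20, 48, 4
overhead = ETH_HDR + IP_HDR + TCP_HDR + ISCSI_BHS + ETH_TRAILER   # 114 bytes

for payload in (512, 1412, 8192):    # 1412 bytes fills a standard 1500-byte IP MTU
    frame = overhead + payload
    print(f"{payload:5d}-byte data segment -> {payload / frame:.1%} of the frame is data")
# Small reads are dominated by header overhead; jumbo frames (or large data
# segments spread over several TCP segments) amortize it.
```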

iSCSI Topology Allows I/O consolidation; iSCSI is proposed today as an I/O consolidation option, with native (iSCSI storage array) and gateway solutions. [Diagram: an iSCSI initiator runs an iSCSI session over IP/Ethernet to a stateful iSCSI gateway, which runs an FCP session across the FC fabric to an FC target; another iSCSI initiator connects directly to an iSCSI storage array.] 19

View from Operating System The operating system sees: a 1 Gigabit Ethernet adapter and an iSCSI initiator. 20

iSCSI based I/O Consolidation [Stack: OS / Applications > SCSI layer > iSCSI > TCP > IP > Ethernet; 1, 10... Gbps]
- Overhead of the TCP/IP protocol; TCP for recovery of lost packets
- It's SCSI, not FC
- LAN/Metro/WAN (routable)
- Security of IP protocols (IPsec)
- Stateful gateway (iSCSI <-> FCP)
- Mainly 1G initiators (servers); 10G recommended for the iSCSI target
- Can use existing Ethernet switching infrastructure
- TCP Offload Engine (TOE) suggested (virtualized environment support?)
- QoS or a separate VLAN for storage traffic suggested
- New management tools
- Might require different multipath software
- iSCSI boot support 21

InfiniBand [Stack: OS / Applications > SCSI layer > SRP or iSCSI/iSER > IB; 10, 20, 40 Gbps (4X SDR/DDR/QDR)] Standards-based interconnect (http://www.infinibandta.org). Channelized, connection-based interconnect optimized for high performance computing. Supports server and storage attachments. Bandwidth capabilities (SDR/DDR/QDR): 4x links at 10/20/40 Gbps signaling give 8/16/32 Gbps actual data rate; 12x links at 30/60/120 Gbps give 24/48/96 Gbps actual data rate. Built-in RDMA as a core capability for inter-CPU communication. 22
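The gap between the signaling rates and the "actual data rate" figures above comes from InfiniBand's 8b/10b line coding (for SDR/DDR/QDR). The short sketch below reproduces the slide's numbers from per-lane signaling rates, assumed here to be 2.5/5/10 Gbps per lane.

```python
# "Actual data rate" from lane count, lane signaling rate and 8b/10b coding
# (8 data bits carried per 10 line bits).
def ib_data_rate_gbps(lanes: int, lane_signaling_gbps: float) -> float:
    return lanes * lane_signaling_gbps * 8 / 10

for name, lane_rate in (("SDR", 2.5), ("DDR", 5.0), ("QDR", 10.0)):
    for lanes in (4, 12):
        signaling = lanes * lane_rate
        data = ib_data_rate_gbps(lanes, lane_rate)
        print(f"{lanes:2d}x {name}: {signaling:5.0f} Gbps signaling -> {data:3.0f} Gbps data")
# 4x SDR/DDR/QDR -> 8/16/32 Gbps and 12x -> 24/48/96 Gbps, matching the slide.
```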

InfiniBand: SCSI RDMA Protocol (SRP) SCSI Semantics over RDMA fabric Provides High Performance block-level storage access Not IB specific - Standard specified by T10 http://www.t10.org Host drivers tie into standard SCSI I/F in kernel Storage appears as SCSI disks to local host Can be used for end-to-end IB storage (No FC) Can be used for SAN Boot over InfiniBand Usually in combination with a Gateway to native Fibre Channel targets 23

InfiniBand: iSCSI Extensions for RDMA (iSER) IETF standard that enables iSCSI to take advantage of RDMA; mainly offloads the data path. Leverages the iSCSI management and discovery architecture. Simplifies iSCSI protocol details such as data integrity management and error recovery. Not IB specific. Needs an iSER target to work end to end. [Diagram: Linux driver stack — a user-space application passes through the kernel VFS layer, file systems and block drivers to the iSCSI/iSER modules, then via the InfiniBand HCA over a reliable connection to an InfiniBand storage target.] 24

InfiniBand Gateway Topology: Gateways for Network and Storage [Diagram: IB-connected servers use a single InfiniBand link for both storage and network traffic; an InfiniBand switch with stateful gateways connects on one side to the Fibre Channel fabric (block-level storage) and on the other to the Ethernet LAN (IP application traffic), alongside Ethernet-connected servers.] A Fibre Channel to InfiniBand gateway provides storage access; an Ethernet to InfiniBand gateway provides LAN access. 25

Physical vs. Logical View Physical view: servers connected via IB; SAN attached via FC; LAN attached via Gigabit Ethernet. Logical view: hosts present a PWWN on the SAN and an IP address on the VLAN/LAN. 26

View from Operating System 27

InfiniBand based I/O Consolidation [Stack: OS / Applications > SCSI layer > SRP or iSCSI/iSER > IB; 10, 20, 40 Gbps (4X SDR/DDR/QDR)]
- Requires a new ecosystem (HCAs, cabling, switches)
- Mostly copper cabling with limited distance, but fiber is available
- Datacenter protocol
- New driver (SRP)
- Stateful gateway from SRP to FCP (unless the disk array is natively IB attached)
- RDMA capability of the HCA is used; low CPU overhead
- Payload is SCSI, not FC
- Concept of virtual lanes and QoS in InfiniBand
- Boot support 28

FCoE [Stack: OS / Applications > SCSI layer > FCP* > FCoE > DCE, CEE, DCB; (1), 10... Gbps] From a Fibre Channel standpoint it's Fibre Channel encapsulated in Ethernet; from an Ethernet standpoint it's just another ULP (Upper Layer Protocol). FCoE is an extension of Fibre Channel onto a lossless (Data Center) Ethernet fabric. FCoE is managed like FC at the initiator, target, and switch level, completely based on the FC model: same host-to-switch and switch-to-switch behavior as FC, in-order frame delivery, FSPF load balancing, WWNs, FC-IDs, hard/soft zoning, dNS, RSCN. Standards work in T11, IEEE and IETF is not yet final. * Includes FC Layer 29

FCoE Enablers 10Gbps Ethernet. Lossless Ethernet: matches the B2B credits used in Fibre Channel to provide a lossless service. Ethernet mini jumbo frames (2180 bytes); max FC frame payload = 2112 bytes. Ethernet V2 frame with Ethertype = FCoE = 0x8906. [Frame layout: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS.] The encapsulated portion is the same as a physical FC frame; the FCoE header carries control information: version and ordered sets (SOF, EOF). 30
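The 2180-byte mini jumbo figure can be reproduced by adding up the frame fields listed above. The sketch below is one plausible accounting, assuming a VLAN-tagged frame; the individual FCoE encapsulation byte counts are approximate rather than a normative breakdown.

```python
# One way to arrive at the ~2180-byte "baby jumbo" frame quoted for FCoE.
fields = {
    "Ethernet header (MACs + 802.1Q tag + Ethertype 0x8906)": 18,
    "FCoE header (version, reserved, SOF)":                   14,
    "FC header":                                              24,
    "FC payload (max)":                                     2112,
    "FC CRC":                                                  4,
    "EOF + padding":                                           4,
    "Ethernet FCS":                                            4,
}
total = sum(fields.values())
for name, size in fields.items():
    print(f"{size:5d}  {name}")
print(f"{total:5d}  total -> needs a mini jumbo MTU, not the standard 1518-byte frame")
```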

Ethernet Enhancements Enhanced Ethernet for Data Center Applications: transport of FCoE, enabling technology for I/O Consolidation and Unified Fabric.
Feature / Standard — Benefit:
- Priority Flow Control (PFC), IEEE 802.1Qbb — enables multiple traffic types to share a common Ethernet link without interfering with each other
- Bandwidth Management, IEEE 802.1Qaz — enables consistent management of QoS at the network level by providing consistent scheduling
- Congestion Management, IEEE 802.1Qau — end-to-end congestion management for the L2 network
- Data Center Bridging Exchange Protocol (DCBX) — management protocol for enhanced Ethernet capabilities
- L2 multipath for unicast and multicast — increased bandwidth, multiple active paths, no spanning tree
*) The T11 BB-5 group has only required that Ethernet switches have standard Pause (Link Pause) and baby jumbo frame capability, which means no I/O consolidation support. 31

FCoE Enabler: Priority Flow Control [Diagram: eight traffic classes (FC/FCoE, Mgmt., Default LAN, User 1-4) share one link; when the User 2 buffer fills up, a PAUSE is sent for that class only (STOP), while the other classes keep flowing.] Enables lossless fabrics for each class of service. PAUSE is sent per virtual lane when the buffer limit is exceeded. Network resources are partitioned between VLs (e.g. input buffer and output queue). The switch behavior is negotiable per VL. InfiniBand uses a similar mechanism for multiplexing multiple data streams over a single physical link. 32
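To make the per-priority behavior concrete, here is a toy Python model (not the 802.1Qbb wire protocol, which uses timed PAUSE quanta negotiated per priority): each class has its own buffer, and only the class that crosses its threshold is paused, while the others keep forwarding.

```python
# Toy model of per-priority flow control: pausing one class does not stop the others.
THRESHOLD = 8

class PriorityClass:
    def __init__(self, name):
        self.name, self.depth, self.paused = name, 0, False

    def enqueue(self, frames):
        if self.paused:
            return                       # the sender of this class has been paused
        self.depth += frames
        if self.depth >= THRESHOLD:
            self.paused = True
            print(f"PFC PAUSE sent for class '{self.name}' (buffer nearly full)")

    def drain(self, frames):
        self.depth = max(0, self.depth - frames)
        if self.paused and self.depth <= THRESHOLD // 2:
            self.paused = False
            print(f"PFC resume for class '{self.name}'")

fcoe, lan = PriorityClass("FC/FCoE"), PriorityClass("Default LAN")
for step in range(8):
    fcoe.enqueue(3)      # bursty storage traffic fills its buffer and gets paused
    lan.enqueue(1)       # ordinary LAN traffic keeps flowing, never paused
    fcoe.drain(1)
    lan.drain(1)
```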

FCoE Enabler: Converged Network Adapter (CNA) [Diagram: today a server carries a 10GbE NIC (Ethernet drivers) and a 4G FC HBA (Fibre Channel drivers), each with its own link and PCIe connection; a CNA presents both Ethernet and Fibre Channel functions to the operating system over a single 10GbE link and a single PCIe connection, using the same Ethernet and Fibre Channel drivers.] 33

View from Operating System Standard drivers Same management Operating System sees: Dual port 10 Gigabit Ethernet adapter Dual Port 4 Gbps Fibre Channel HBAs 34

FCoE I/O Consolidation Topology [Diagram: a server attached today to the LAN over classical Ethernet and to SAN A / SAN B over classical FC, versus a server attached over Data Center Ethernet with FCoE.] Target: dramatic reduction in adapters, switch ports and cabling (4 cables down to 2 cables per server); seamless connection to the installed base of existing SANs and LANs; high-performance frame mapping instead of gateway bottlenecks; effective sharing of high-bandwidth links; consolidated network infrastructure; faster infrastructure provisioning; lower TCO. 35
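A quick back-of-the-envelope example of the "4 cables to 2 cables per server" claim, assuming each server today has redundant LAN (two NICs) and SAN (two HBAs) connections and moves to two CNAs.

```python
# Adapter and cable count before and after consolidation, for one rack of servers.
servers = 40                              # illustrative rack size

before_adapters = servers * (2 + 2)       # 2 NICs + 2 HBAs per server
before_cables   = before_adapters         # one cable (and switch port) per port
after_adapters  = servers * 2             # 2 CNAs per server
after_cables    = after_adapters

print(f"adapters:            {before_adapters} -> {after_adapters}")
print(f"cables/switch ports: {before_cables} -> {after_cables}")
# 160 adapters and cables drop to 80: half the edge ports to power, cool and manage.
```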

FCoE based I/O Consolidation [Stack: OS / Applications > SCSI layer > FCP* > FCoE > DCE, CEE, DCB; (1), 10... Gbps]
- FCP layer untouched
- Requires baby jumbo frames (2180 bytes)
- Not IP routable; a datacenter protocol
- Datacenter-wide VLANs
- Same management tools as for Fibre Channel
- Same drivers as for Fibre Channel HBAs
- Same multipathing software
- Simplified certifications with storage subsystem vendors
- Requires a lossless (10G) Ethernet switching fabric
- May require new host adapters (unless an FCoE software stack is used)
- Boot support
* Includes FC Layer 36

Differentiators 37

Storage Part of I/O Consolidation
                          iSCSI            FCoE                    IB-SRP
Payload                   SCSI             Fibre Channel           SCSI
Transport                 TCP/IP           Data Center Ethernet    InfiniBand
Scope                     LAN/MAN/WAN      Datacenter              Datacenter
Bandwidth/Performance     Low/Medium       High                    High
CPU Overhead              High             Low                     Low
Gateway Overhead          High             Low                     High
FC Security Model         No               Yes                     No
FC Software on Host       No               Yes                     No
FC Management Model       No               Yes                     No
Initiator Implementation  Yes              Yes                     Yes
Target Implementation     Yes              Yes                     Yes
IP Routable               Yes              No                      N/A
38

Storage Part of I/O Consolidation (continued)
                          iSCSI            FCoE                    IB-SRP
Virtual Lanes             No               Yes                     Yes
Congestion Control        TCP              Priority Flow Control   Credit based
Gateway Functionality     Stateful         Stateless               Stateful
Connection Oriented       Yes              No                      Yes
Access Control            IP/VLAN          VLAN / VF               Partitions
RDMA Primitives           Defined          Defined                 Defined
Latency                   Medium           Low                     Very low
Adapter                   NIC              CNA                     HCA
39

Conclusion 40

Encapsulation Technologies [Diagram: protocol stacks from the Operating System / Applications and SCSI layer down to the wire.]
iSCSI: SCSI > iSCSI > TCP > IP > Ethernet (1, 10 Gbps)
FCIP / iFCP: SCSI > FCP* > FCIP or iFCP > TCP > IP > Ethernet
Fibre Channel: SCSI > FCP* > Fibre Channel (1, 2, 4, 8, 10, ... Gbps)
FCoE: SCSI > FCP* > FCoE > Data Center Ethernet (10, ... Gbps)
InfiniBand: SCSI > SRP or iSCSI/iSER > InfiniBand (10, 20, 40 Gbps)
* Includes FC Layer 41

Conclusion Server I/O consolidation is driven by high I/O bandwidth demand. I/O bandwidth demand is driven by multicore/multisocket servers and virtualization. TCP/IP (iSCSI), Data Center Ethernet (FCoE) and InfiniBand (SRP, iSER) are generic transport protocols allowing server I/O consolidation. Server I/O consolidation is the first phase, consolidating input into a Unified Fabric. 42

Thank You! 43

Other SNIA Tutorials Check out these SNIA Tutorials: Fibre Channel Technologies: Current and Future; iSCSI tutorial / IP storage protocols; Fabric Consolidation with InfiniBand; FCoE: Fibre Channel over Ethernet. 44

Q&A / Feedback Please send any questions or comments on this presentation to SNIA: tracknetworking@snia.org Many thanks to the following individuals for their contributions to this tutorial. - SNIA Education Committee Gilles Chekroun Howard Goldstein Marco Di Benedetto Errol Roberts Bill Lulofs Carlos Pereira Walter Dey Dror Goldenberg James Long 45

References http://www.fcoe.com/ http://www.t11.org/fcoe http://www.fibrechannel.org/overview/FCIA_SNW_FCoE_WP_Final.pdf http://www.fibrechannel.org/overview/FCIA_SNW_FCoE_flyer_Final.pdf http://www.fibrechannel.org/fcoe.html 46