Dragon Slayer Consulting


Introduction to the Value Proposition of InfiniBand
Marc Staimer, Dragon Slayer Consulting
marcstaimer@earthlink.net | (503) 579-3763
5/27/2002

Introduction to InfiniBand (IB): Agenda
- IB defined
- IB vs. FC & GbE
- IB architecture
- Real market problems IB solves
- Market projections
- Conclusions

Definition of Input/Output
- The transfer of data into and out of a computer
- Maintains data integrity
- Protects all other data in the computer from corruption
- Usually through operating-system-defined mechanisms

Three Distinct Classes of I/O
- Block protocol: typically disk oriented
- Network protocol: typically IP oriented
- Inter-process communication (IPC)

Characteristics of the I/O Classes
- Block protocol: latency tolerance of dozens of milliseconds; very large average message size; data center/campus context; predominant protocol FC (Fibre Channel Protocol, FCP)
- Network protocol: latency tolerance of 100s of milliseconds; small to large average message size; global context; predominant protocol Ethernet / TCP/IP
- IPC: latency tolerance of dozens of microseconds; small to large average message size; server cluster/data center context; predominant protocol emerging (VI)

IB Defined
- The first unified, simplified, and consolidated I/O fabric, designed from the ground up for all aspects of I/O
- Shared memory vs. shared bus
- Leverages virtual lanes, or pipes (multiple fabrics in one)
- Spec'd for today & tomorrow: 1x = 2.5 Gbps, 4x = 10 Gbps, 12x = 30 Gbps
- Native VI protocol, OS bypass
- Credit-based flow control
- Key: extends server I/O outside the box
[Diagram: InfiniBand (VI protocol) virtual lanes carrying Ultra SCSI, FC, GbE, SATA, and SAS traffic]
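A small sketch (not from the deck) converting the quoted 1x/4x/12x signaling rates into usable data rates, assuming InfiniBand SDR's 8b/10b line encoding (80% efficiency):

```c
#include <stdio.h>

/* Illustrative only: IB SDR signaling is 2.5 Gbps per lane, and 8b/10b
 * encoding (8 data bits per 10 line bits) leaves 80% as usable data. */
int main(void) {
    const double lane_signal_gbps = 2.5;    /* per-lane signaling rate (SDR) */
    const double encoding_efficiency = 0.8; /* 8b/10b */
    const int widths[] = {1, 4, 12};        /* 1x, 4x, 12x link widths */

    for (int i = 0; i < 3; i++) {
        double signal = widths[i] * lane_signal_gbps;
        double data = signal * encoding_efficiency;
        printf("%2dx link: %5.1f Gbps signaling, %5.1f Gbps data\n",
               widths[i], signal, data);
    }
    return 0;
}
```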

Why Do We Need Yet Another Fabric?
- The issue is not the fabric; the issue is server I/O
- Current GbE & FC fabrics do not solve server I/O bottlenecks (bus contention)
- GbE & FC fabrics weren't specifically designed for clustering
  - They can do it, but message queue depths and performance are not optimal
  - Performance is often inadequate

IB vs. FC vs. GbE: Conclusion
- Initially complementary: IB will not replace FC or GbE (investment protection)
- Eventually competitive and complementary: they will compete for some of the same budget dollars

IB Architecture
- TCA (Target Channel Adapter): interface to an I/O controller (FC, GbE, SCSI, etc.)
- HCA (Host Channel Adapter): protocol engine; moves data via messages queued in memory
- Switch: internal or external; simple, low cost; can be multi-stage
- Router: connects subnets together
[Diagram: CPUs, system bus, memory controller, and system memory attach to the HCA; IB links interconnect switches, routers, and TCAs fronting I/O controllers, with management services alongside]
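The deck predates the OpenFabrics software stack, but as a modern, hedged illustration of the "messages queued in memory" model, the C sketch below uses the libibverbs API to open an HCA, register a buffer, and create a queue pair (the send/receive work queues that applications post messages to). Resource sizes and the choice of a reliable-connected QP are arbitrary choices for the example, not anything specified in the deck.

```c
/* Minimal libibverbs sketch (modern OFED API, not from the 2002 deck).
 * Build: cc qp_sketch.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void) {
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);      /* open the HCA */
    if (!ctx) { fprintf(stderr, "cannot open device\n"); return 1; }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);                    /* protection domain */
    char *buf = calloc(1, 4096);                              /* message buffer */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,             /* register it with the HCA */
                                   IBV_ACCESS_LOCAL_WRITE);
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0); /* completion queue */

    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq = cq;
    attr.recv_cq = cq;
    attr.qp_type = IBV_QPT_RC;          /* reliable connected service */
    attr.cap.max_send_wr = 16;          /* depth of the send queue */
    attr.cap.max_recv_wr = 16;          /* depth of the receive queue */
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;

    struct ibv_qp *qp = ibv_create_qp(pd, &attr);             /* the queue pair */
    if (!qp) { fprintf(stderr, "cannot create QP\n"); return 1; }
    printf("created QP %u on %s\n", qp->qp_num, ibv_get_device_name(devs[0]));

    /* Clean up in reverse order. */
    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```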

IB Fabric BW Increases as Switches Are Added
[Diagram: end nodes attached to switches within Subnet A and Subnet B, with routers linking the two subnets; adding switches adds aggregate fabric bandwidth]

I/O Architecture Today
- Traditional server & infrastructure with dedicated I/O
- Complex and expensive
[Diagram: traditional server architecture with CPUs, system bus, memory controller, and I/O bridge feeding a local PCI bus; dedicated NICs to the Ethernet LAN and router, HBAs to the FC SAN and storage, and an IPC NIC to the IPC network (iSCSI? DAFS?)]

InfiniBand-Based I/O
- InfiniBand server hardware architecture: CPUs, system bus, and memory controller attach to a Host Channel Adapter
- Multiple IBA links at 2.5-10 Gbps
- Solve the redundancy problem once

InfiniBand-Based I/O
[Diagram: the InfiniBand server (CPUs, system bus, memory controller, HCA) connects through an IB switch to an InfiniBand I/O unit, where Target Channel Adapters (TCAs) front Ethernet, Fibre Channel, iSCSI, and UltraSCSI I/O controllers]
- RDMA-based protocols

Market Problems IB Solves
- Higher performance, lower cost I/O (shared I/O)
  - Converges clustering, networking, & storage into one fabric: the IAN (I/O area fabric)
  - Reduces IT management tasks, server workloads, and TCO
- PCI bus I/O constraints
- Low cost HP/HA server clustering
- Lowers the cost of server blade systems
  - Enables higher density server blade clusters

Higher Performance, Lower Cost I/O (Shared I/O)
[Diagram: new IB server clusters connect over an IB fabric to a shared I/O unit; TCAs bridge to the FC SAN, Ethernet LAN/WAN, iSCSI/Ethernet SAN, and IB storage, with a management LAN and modem for remote monitoring]

Current High Availability I/O Configuration
- 16 rack-mount servers with dedicated I/O per server (Ethernet LAN/WAN, FC SAN, maintenance LAN) = 210 connections
  - (2) HBA FC paths per server to the FC fabric
  - (4) FC paths from storage to the FC fabric
  - (2) Ethernet paths per server to the network
  - (1) Ethernet maintenance path per server to the network

Non-Productive, Costly Connectivity
[Diagram: the same Ethernet LAN/WAN, FC SAN, and maintenance LAN topology, distinguishing productive connectivity from non-productive connectivity]

InfiniBand Shared I/O Chassis Example
- 19-inch rack-mount environment, 3U high
- IBA single-high, single-wide standard
- Integrated IBA fabric
- Up to 45 watts per line-card slot
- Hot-swappable components
- Chassis Management Entity (CME)
- Front-to-back cooling
[Diagram: 3U chassis with fabric cards providing InfiniBand ports and line cards offering up to 8 GE/FC/IB ports each]

IB-Enabled High Availability Shared I/O
- Add dual redundant IB I/O chassis (Ethernet LAN/WAN, FC SAN, maintenance LAN)
- 10 slots each; IB form-factor I/O cards
- Multi-protocol: FC, GigE, FastE, iSCSI, etc.
- Eliminate FC edge switches

IB-Enabled High Availability Shared I/O
- Reduces LAN switch requirements (Ethernet LAN/WAN, FC SAN, maintenance LAN)
- Total connections = 116, a ~45% reduction
  - (2) IB paths per server to the IB fabric
  - (6) FC paths from storage to the FC fabric
  - (2) Ethernet paths per I/O subsystem to the network
  - (2) Ethernet maintenance paths per I/O subsystem to the network

Potential Savings
- Current dedicated I/O subsystem per server: costs = ~$225,000
- IB shared I/O system with improved BW, connectivity, manageability, and availability: costs = ~$112,500
- Savings = ~50%
- Additional non-hardware TCO gains
  - Operational expense savings estimated at 3x-8x the capital expense reduction
  - Simpler system design to manage
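A small sketch (not from the deck) showing the arithmetic behind these claims: the ~50% capital saving from the two quoted price points, plus the deck's estimate that operational gains run 3x-8x the capital reduction.

```c
#include <stdio.h>

int main(void) {
    const double dedicated_cost = 225000.0;  /* dedicated I/O per the deck */
    const double shared_cost    = 112500.0;  /* IB shared I/O per the deck */

    double capex_saving = dedicated_cost - shared_cost;
    double pct = 100.0 * capex_saving / dedicated_cost;
    printf("Capital saving: $%.0f (%.0f%%)\n", capex_saving, pct);

    /* Deck's estimate: operational gains of 3x-8x the capex reduction. */
    printf("Estimated operational gains: $%.0f to $%.0f\n",
           3.0 * capex_saving, 8.0 * capex_saving);
    return 0;
}
```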

System Benefits
- Increased BW & connectivity per server
- Reduced infrastructure complexity
- Reduced power & space
- BW migration to bursting servers
- Natural low-latency IPC network

Managing Scalability with Traditional I/O
- What happens when just 2 more servers are added? (Ethernet LAN/WAN, FC SAN, maintenance LAN)
- In the FC SAN, (1) new switch has to be added and the fabric will need to be reconfigured
- The maintenance LAN will also need to change, from a 16-port switch/router to a 24-port

Managing Scalability with Traditional I/O
- Adding servers takes a lot of hard work & time
- 20 net new connections
- Disruptive FC fabric reconfigurations

Managing Scalability with IB-Based I/O
- Adding servers is significantly simpler & easier
- 8 net new connections = a 60% reduction
  - (2) IB paths per new server to the IB fabric
- No new switches or reconfigurations
- Faster & non-disruptive implementation
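For reference, a hedged sketch (not in the deck) reproducing the two reduction figures quoted above: 210 to 116 total connections for the initial consolidation, and 20 to 8 net new connections when two servers are added.

```c
#include <stdio.h>

/* Percentage reduction between a "before" and "after" connection count. */
static double reduction_pct(double before, double after) {
    return 100.0 * (before - after) / before;
}

int main(void) {
    /* Figures quoted in the deck. */
    printf("Consolidation: 210 -> 116 connections = %.0f%% reduction\n",
           reduction_pct(210, 116));
    printf("Adding 2 servers: 20 -> 8 new connections = %.0f%% reduction\n",
           reduction_pct(20, 8));
    return 0;
}
```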

Scalability Net Results with IB Shared I/O
- Making adds, moves, or changes means:
  - Less time, cost, effort, complexity, personnel, and disruption
  - More control, simplicity, and stability
  - Better RAS

PCI Bus Constraints
- PCI bus limitations have been strangling CPU I/O
- Like trying to drink from a fire hose

PCI Bus Constraints
- PCI (66 MHz): max BW 4 Gbps; I/O constraint: (4) GbE w/TOE or (2) 2Gb FC; shared parallel bus; issue: bus contention
- PCI-X (133 MHz): max BW 8 Gbps; I/O constraint: (4) SCSI 320; shared parallel bus; issue: bus contention
- PCI-X DDR: max BW 16 Gbps; I/O constraint: (1) 4x IB; shared parallel bus; issue: bus contention
- PCI-X QDR: max BW 32 Gbps; I/O constraint: (1) 10GigE, FC, or 4x IB; shared parallel bus; issue: bus contention
- 3GIO: max BW 64 Gbps; I/O constraint: (2) 10GigE, FC, or 4x IB; switched serial; issue: not until '04
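As a rough cross-check (not from the deck), the sketch below derives the first two "max BW" figures from bus width times clock rate, assuming the usual 64-bit PCI/PCI-X implementations.

```c
#include <stdio.h>

/* Theoretical peak bandwidth of a parallel bus: width (bits) x clock (MHz). */
static double bus_gbps(int width_bits, double clock_mhz) {
    return width_bits * clock_mhz / 1000.0;   /* Mbps -> Gbps */
}

int main(void) {
    printf("64-bit PCI   @  66 MHz: %.1f Gbps peak\n", bus_gbps(64, 66.0));
    printf("64-bit PCI-X @ 133 MHz: %.1f Gbps peak\n", bus_gbps(64, 133.0));
    /* These round to the deck's ~4 Gbps and ~8 Gbps figures; the DDR and QDR
     * rows simply double and quadruple the PCI-X transfer rate. */
    return 0;
}
```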

PCI Bus vs. IB Comparison
- PCI / PCI-X / DDR / QDR / 3GIO
  - Advantages: lower cost; simpler for chip-to-chip; protects the software base
  - Disadvantages: until there is 3GIO, bus contention
- InfiniBand
  - Advantages: clustering; scalability in ports & BW; out-of-box connectivity; QoS; security; fault tolerance; multicast; fabric convergence; PCB, copper, & fiber
  - Disadvantages: software

It's Not Either/Or; the Solution Is PCI Bus AND IB
- They are complementary, not mutually exclusive; the best solutions take advantage of both
- This is why you rarely hear anymore that IB is the PCI replacement
- There are new HCAs with PCI-X interfaces; expect DDR, QDR, & 3GIO as well
- The IB benefits are almost as great
  - Eliminates bus contention
  - Preserves the PCI software base
  - Provides IB benefits NOW; you don't have to wait for native server IB
[Diagram: PCI-X HCA example]

Low Cost HP/HA Server Clustering
- IB clustering costs less for scaling out than SMP or NUMA scaling up
- IB eliminates fabric messaging performance issues with clustering: long queues, PCI bus contention
- IB enables low-cost server blades (the shared I/O arguments are even stronger here)
  - Diskless blades; personality on the storage
  - Higher fault tolerance and availability
  - One connection for clustering and shared I/O
  - Fewer I/O interfaces than any other interconnect
  - Higher performance, lower TCO

Industry Analysts' IB-Enabled Server Forecast
- Analysts are split in their forecasts of IB's TAM, but not on its potential
[Chart: IB-enabled server unit forecasts, 2002-2005 (0 to 6,000,000 units), Gartner vs. IDC]

IB-Enabled Servers as a % of Total
[Chart: IB-enabled servers as a percentage of total servers, 2002-2005 (0% to 90%), Gartner vs. IDC]

Conclusions
- Even if the analysts' views are optimistic, a huge % of servers will be IB enabled
  - The value proposition is far too strong to ignore
- Initial deployment will utilize PCI-X HCAs; native deployments will enable lower cost server blade clusters
- As more and more servers become IB enabled, clever IT people will realize that they can run IB natively for clustering, networking, and storage
- When IB becomes native on the server motherboard, the perception becomes that it's free
  - There is always high market demand for "free"

Questions?

Why Not Just Use GbE or FC?
- GbE and FC are the current fabric infrastructures; IT personnel already know & understand the technologies
- FC & GbE are already battling it out for SAN infrastructures: FCP vs. iSCSI

IB vs. FC vs. GbE
- Gigabit Ethernet: standards body IEEE & IETF; signaling speed 1.25 Gbps; first standard 1999; maximum frame size 1.5K; primary application LAN (Local Area Network)
- Fibre Channel: standards body ANSI; signaling speed 2.125 Gbps; first standard 1988; maximum frame size 2K; primary application SAN (Storage Area Network)
- InfiniBand: standards body InfiniBand Trade Association; signaling speeds 2.5 Gbps (1x), 10 Gbps (4x), 30 Gbps (12x); first standard 2001; maximum frame size 4K; primary application IAN (I/O Area Network)

How IB Compares with GbE & FC in the OSI Model
(each row lists Ethernet 802.3 / Fibre Channel / IB Architecture; parenthesized entries are layers not included in the protocol standards)
- Upper level protocols: application / application / application
- Transport layer: TCP, UDP / FC-4: protocol mappings / IBA operations
- Network layer: IP / (FC-3), FC-2 / network
- Link layer: logical link control, media access control / FC-2: framing & service class, FC-1: encoding / link: framing, service class, line encoding, media access control
- Physical layer: physical layer entities / FC-0: physical media / physical

Data Center Fabric & I/O Consolidation
- IB enables convergence through shared server I/O
  - One I/O interface for clustering, networking, and storage
  - Eliminates the need for multiple server I/O blades/ports
- IB virtual lanes provide (illustrated in the sketch below):
  - Multiple independent logical fabrics multiplexed on one physical fabric
  - QoS to prioritize traffic
  - The benefits of independent fabrics with the management and maintenance of one fabric
- Switches, directors, and routers provide scalability, redundancy, availability, and flexibility
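To make the virtual-lane idea concrete, here is a hedged, simplified model (not the actual IBA virtual-lane arbitration tables): several logical lanes share one physical link, and a weighted round-robin arbiter gives prioritized lanes a larger share of the link. The lane names and weights are purely illustrative.

```c
#include <stdio.h>

/* Simplified virtual-lane model: one physical link, several logical lanes,
 * weighted round-robin arbitration. Not the real IBA arbitration mechanism. */
struct vlane {
    const char *traffic;   /* what the lane carries (illustrative) */
    int weight;            /* relative share of the physical link */
    int packets_sent;
};

int main(void) {
    struct vlane lanes[] = {
        {"IPC/clustering", 4, 0},   /* latency-sensitive, highest share */
        {"block storage",  3, 0},
        {"LAN/networking", 2, 0},
        {"management",     1, 0},
    };
    const int nlanes = sizeof(lanes) / sizeof(lanes[0]);

    /* Arbitrate 100 packet slots on the shared physical link. */
    for (int slot = 0; slot < 100; ) {
        for (int i = 0; i < nlanes && slot < 100; i++) {
            for (int w = 0; w < lanes[i].weight && slot < 100; w++, slot++) {
                lanes[i].packets_sent++;
            }
        }
    }

    for (int i = 0; i < nlanes; i++)
        printf("%-15s weight %d -> %d%% of link\n",
               lanes[i].traffic, lanes[i].weight, lanes[i].packets_sent);
    return 0;
}
```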

Requirements of a Shared I/O System
- Cooperative software architecture
  - Ability to productively distribute work between the host & the external shared I/O system
- Virtualization of I/O
  - Host manipulates logical resources and has no awareness of the underlying physical resources
  - All I/O is managed external to the host; the host originates requests and receives results
- Heterogeneous operating systems
- 3 classes of I/O
  - Efficiently handle small to very large messages
  - Microsecond-sensitive latency without sacrificing bandwidth
- Channel architecture
  - Highly differentiated priority and service levels
  - Connection-oriented guaranteed delivery mechanism
  - Inherent memory semantics and protection
  - High speed / low latency

Market Projections: IDC & Gartner-Dataquest