16GFC Sets The Pace For Storage Networks

16GFC Sets The Pace For Storage Networks
Scott Kipp, Brocade
Mark Jones, Emulex
August 30th, 2011
To be presented to the Block Storage Track at 1:30 on Monday, September 19th

Overview
What is 16GFC?
What is driving 16GFC: prolific applications, server virtualization, multi-core processors, more memory, solid state drives, PCIe 2.0, traffic aggregation, VDI
Benefits of 16GFC: higher performance leads to fewer links, easier cable management, less power consumption per bit
Application of 16GFC: 16GFC is needed for ISLs, high bandwidth applications, data migration and VDI

Generations of Fibre Channel
The newest speed in Fibre Channel - Keep It Serial, Stupid (KISS)

| Generation | 1st Gen | 2nd Gen | 3rd Gen | 4th Gen | 5th Gen | 6th Gen |
|---|---|---|---|---|---|---|
| Electrical / Optical Module | 1GFC / GBIC, SFP | 2GFC / SFP | 4GFC / SFP | 8GFC / SFP+ | 16GFC / SFP+ | 32GFC / SFP+ |
| Data Rate (Gbps) | 1 lane at 1.0625 | 1 lane at 2.125 | 1 lane at 4.25 | 1 lane at 8.5 | 1 lane at 14.025 | 1 lane at 28.05 |
| Encoding | 8b/10b | 8b/10b | 8b/10b | 8b/10b | 64b/66b | 64b/66b |
| Availability | 1997 | 2001 | 2006 | 2008 | 2011 | 2014 |

FC External Storage Dominates
[Chart: External Storage Sales ($B), 2005-2010, for Fibre Channel, iSCSI and FCoE. Source: IDC]
FCoE storage barely visible in 2010

8GFC Switch Ports Outsell 10GbE
10GbE standard released in 2002, 5 years before 8GFC in 2007
[Chart: Switch Ports Sold Annually (000s), 2002-2010, 10GbE switch ports vs. 8GFC switch ports. Source: Dell'Oro]

16Gb FC Adapters to Dominate
[Chart: FC Adapter Port Shipments (Millions), 2010-2015, by speed: 2 Gbps, 4 Gbps, 8 Gbps and 16 Gbps FC. Source: Dell'Oro Group SAN Forecast Report, July 2011]

16Gb FC Adapter Price Decline
[Chart: Worldwide FC Adapter Market, manufacturer port ASP ($) for all adapters, 2003-2015, by speed: 1, 2, 4, 8 and 16 Gbps Fibre Channel. Source: Dell'Oro Group SAN Forecast Report, July 2011]

Characteristics of 16GFC
Double the throughput over backplanes, 100 meters and 10 kilometers
Fibre Channel Physical Interfaces 5 (FC-PI-5) standardized 16GFC

| Speed Name | Throughput (MB/sec) | Retimers in the Module | Transmitter Training | OM1/2/3/4 Link Distance (meters) |
|---|---|---|---|---|
| 1GFC | 100 | No | No | 300/500/860/* |
| 2GFC | 200 | No | No | 150/300/500/* |
| 4GFC | 400 | No | No | 50/150/380/400 |
| 8GFC | 800 | No | No | 21/50/150/190 |
| 10GFC | 1200 | Yes | No | 33/82/300/* |
| 16GFC | 1600 | Yes | Yes | 15/35/100/125 |

* FC-PI-5 didn't standardize distances for OM4 fiber for 1/2/10GFC

Dual Codecs for Backward Compatibility
To improve protocol efficiency, 16GFC uses only 64b/66b coding, which is 98% efficient; the 8b/10b codes used for 2/4/8GFC are 80% efficient
16GFC signals do not use the 8b/10b encoders
To be backward compatible with 2/4/8GFC, 16GFC ASICs need to support both 8b/10b and 64b/66b coder/decoders (codecs) on each link
During speed negotiation, the transmitter and receiver switch back and forth between the speeds (and the corresponding codecs) until the fastest speed supported on a given link is reached
[Diagram: the 64b/66b encoder (for 10/16GFC) and the 8b/10b encoder (for 2/4/8GFC) feed a coupler that connects to the SFP+]
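To see why the switch to 64b/66b matters, here is a small back-of-the-envelope sketch (Python, not part of the presentation) that derives usable data rates from the line rates in the generations table and the 8b/10b and 64b/66b code rates; the function and variable names are just illustrative.

```python
# Back-of-the-envelope: usable data rate = line rate x encoding efficiency.
# Line rates (Gbaud) come from the generations table; code rates are
# 8/10 for 8b/10b and 64/66 for 64b/66b.

def usable_gbps(line_rate_gbaud: float, encoding: str) -> float:
    efficiency = {"8b/10b": 8 / 10, "64b/66b": 64 / 66}[encoding]
    return line_rate_gbaud * efficiency

gfc8 = usable_gbps(8.5, "8b/10b")        # ~6.8 Gbps of data
gfc16 = usable_gbps(14.025, "64b/66b")   # ~13.6 Gbps of data

print(f"8GFC : {gfc8:.2f} Gbps usable")
print(f"16GFC: {gfc16:.2f} Gbps usable")
print(f"Throughput ratio: {gfc16 / gfc8:.2f}x "
      f"from only a {14.025 / 8.5:.2f}x higher line rate")
```

Under those assumptions, 16GFC doubles the usable data rate even though the line rate rises only about 1.65x, because the 64b/66b code wastes far fewer bits than 8b/10b.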

Speed Negotiation
During speed negotiation, the speed-dependent switch routes the initialization sequence to the appropriate encoder: 64b/66b for 16GFC, 8b/10b for 2/4/8GFC
The coupler sends the signals from one of the encoders to the SFP+
[Diagram: inside the Fibre Channel ASIC, upper level processing and buffers feed a speed-dependent switch that selects between the 64b/66b encoder (for 16GFC) and the 8b/10b encoder (for 2/4/8GFC); a coupler connects the selected encoder to the SFP+]

Speed Negotiation Examples
After speed negotiation, only one encoder is used, at the fastest speed supported by both ends of the link
[Diagram 1: a 16GFC SFP+ connected to an 8GFC SFP+ runs at the fastest common speed of 8GFC, so both ends use their 8b/10b encoders]
[Diagram 2: a 16GFC SFP+ connected to another 16GFC SFP+ runs at 16GFC, so both ends use their 64b/66b encoders]
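As a rough illustration of the negotiation logic on the last two slides, the hypothetical Python sketch below walks the candidate speeds from fastest to slowest and selects the codec that goes with the first speed both ends support. It is not the Fibre Channel state machine, just the selection idea; all names are made up.

```python
# Illustrative only: pick the fastest mutually supported speed and the
# codec that belongs to it. Real speed negotiation is defined by the
# Fibre Channel standards and runs in the ASIC, not in software.

SPEED_TO_CODEC = {
    16: "64b/66b",   # 16GFC uses the 64b/66b encoder
    8: "8b/10b",     # 2/4/8GFC use the 8b/10b encoder
    4: "8b/10b",
    2: "8b/10b",
}

def negotiate(local_speeds, remote_speeds):
    """Return (speed, codec) for the fastest speed both ports support."""
    common = sorted(set(local_speeds) & set(remote_speeds), reverse=True)
    if not common:
        raise ValueError("no common speed; link cannot come up")
    speed = common[0]
    return speed, SPEED_TO_CODEC[speed]

# A 16GFC port (supports 4/8/16GFC) talking to an 8GFC port (supports 2/4/8GFC)
print(negotiate([16, 8, 4], [8, 4, 2]))   # -> (8, '8b/10b')
# Two 16GFC ports
print(negotiate([16, 8, 4], [16, 8, 4]))  # -> (16, '64b/66b')
```

The two calls reproduce the two diagrams above: a 16GFC-to-8GFC link settles on 8GFC with 8b/10b, while a 16GFC-to-16GFC link settles on 16GFC with 64b/66b.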

The Benefits of 16GFC
16GFC is 100% faster than 8GFC and 40% faster than 10GE, which leads to these benefits:
Higher performance lets servers process more data
Fewer links to do the same job
Easier cable and device management
Less power consumption per bit

Higher Performance
While processors are considered multi-tasking, each core is still serial and switches between tasks rapidly while it waits
If individual tasks like reading and writing data complete faster, the processor doesn't have to wait as long
Similar to how SSDs speed up performance, faster links transfer data more quickly and decrease latency

The Benefits of 16GFC - Fewer Links
[Diagram: ToR switches need 100 Gbps to the core, blade switches need 70 Gbps to the core, and core-to-core traffic is 150 Gbps]

| | 8GFC Links | 16GFC Links |
|---|---|---|
| ISLs from ToR Switch to Core | 16 | 8 |
| ISLs from Blade Switch to Core | 10 | 5 |
| Core to Core | 24 | 12 |
| Total ISLs | 50 | 25 |
| Total Ports | 100 | 50 |

Easier Cable Management
Fewer links mean fewer cables to manage
With structured cabling costs of up to $300/port, each cable eliminated saves money, reduces opex, and shrinks the amount of infrastructure to manage
Multiple vendors provide dense cable management solutions

Less Power Consumption
A 16GFC port consumes 40% more power than an 8GFC port while delivering 100% more bandwidth
Instead of two 8GFC ports, where the power would increase by 100%, a single 16GFC port can deliver twice the performance with less than a 40% increase in power
16GFC reduces the power per bit that must be expended for each data transfer and reduces operational expenses (opex)

Summary of Benefits
2 x 8GFC links = 4 SFP+ x 0.5 W each = 2.0 Watts
1 x 16GFC link = 2 SFP+ x 0.75 W each = 1.5 Watts
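Using only the wattages quoted on this slide and the nominal throughputs from the characteristics table, a quick illustrative Python sketch of the power-per-bandwidth comparison (the 25% figure is computed from those assumed values, not stated in the deck):

```python
# Power comparison using the slide's figures: an 8GFC SFP+ draws ~0.5 W,
# a 16GFC SFP+ draws ~0.75 W; nominal throughputs are 800 and 1600 MBps.

def watts_per_gbps(watts_per_sfp: float, mbytes_per_s: float) -> float:
    gbps = mbytes_per_s * 8 / 1000           # MBps -> Gbps
    return watts_per_sfp / gbps

w8 = watts_per_gbps(0.5, 800)     # ~0.078 W per Gbps
w16 = watts_per_gbps(0.75, 1600)  # ~0.059 W per Gbps
print(f"8GFC : {w8:.3f} W/Gbps,  16GFC: {w16:.3f} W/Gbps")
print(f"16GFC uses about {(1 - w16 / w8) * 100:.0f}% less power per bit")

# The two-link example from this slide (each link has an SFP+ at both ends):
print("2 x 8GFC links :", 4 * 0.5, "W")    # 2.0 W
print("1 x 16GFC link :", 2 * 0.75, "W")   # 1.5 W
```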

What is Driving 16GFC?
Multiple factors require Fibre Channel to advance:
Prolific applications that increase in scale and number
Server virtualization
Multi-core processors
Large memory increases
Solid State Disks (SSDs)
Faster PCIe rates
Aggregation of traffic on ISLs
Intra-chassis bandwidth
Competition with iSCSI and FCoE

Prolific Applications and Server Virtualization
Applications keep growing in number and breadth
Multiple servers need to access data from shared storage
Database applications drive bandwidth
Server virtualization creates multiple virtual machines (VMs) for each application, so each physical server produces more Input/Output (I/O)
[Diagram: four VMs, each with its own application and OS, running on a hypervisor over shared hardware]

Faster Processors
Intel's Nehalem-EX has 8 cores and 16 threads and uses Intel QuickPath Interconnect at 6.4 gigatransfers/s, delivering up to 25 GigaBytes/second (GBps)
IBM's POWER7 has 8 cores, supports 50 GBps of peak I/O, and directly interconnects up to 32 of these processors in a server
AMD has 8-core and 16-core processors that support 32 threads and HyperTransport 3.0 at 4.8 gigatransfers/second
Sun's UltraSPARC T3 chip has 16 cores and supports up to 128 threads
[Images: Nehalem-EX in a two-chip configuration (QuickPath, 25 GBps) and IBM's POWER7]

SSDs - Solid State Drives
Application performance is limited by multiple factors, with disk drive latency being one of them
SSDs provide order-of-magnitude improvements in performance
SSDs are often referred to as Tier-0 storage while disk drives are Tier-1
Capacities in the hundreds of GBs per drive
Very energy efficient compared to spinning disks
Most SSDs provide over 50,000 IOPS per drive
Texas Memory Systems' RamSan-630 storage system supports 500,000 IOPS and 8 GBps (64 Gbps) of throughput

| | Latency | Drive IOPS | Array IOPS |
|---|---|---|---|
| HDD | 2-10 ms | 100-300 | 400-40,000 |
| SSD | 50-250 us* | 40k-150k | 50k-500k |

* Based on Flash memory and multiple parallel processing

Increased Memory in Servers
[Image: 32GB RDIMM]
Server performance and the number of VMs depend on the memory capacity of the server
Registered Dual-Inline Memory Modules (RDIMMs) already come in 32GB packaging
Midrange servers averaged 32 GB of memory in 2009 and are expected to triple to 96 GB in 2012, according to Gartner
Dell's 2U PowerEdge R710 supports 144 GB of memory
Sun's SPARC M9000-64 offers 4 TB of memory capacity
VMware supports 1 TB per server and 255 GB per VM
Memory has limited virtual servers in the past
More memory drives more applications, which drive more storage I/O traffic

PCIe Continues Ramping
PCIe 2.0 increases in speed to support dual-ported 16GFC HBAs
PCIe 3.0 will support quad-ported 16GFC HBAs
PCIe 4.0 is expected to follow

| | Number of Lanes | Speed per Lane (MBps) | Directional Bandwidth (Gbps) | Ports Supported |
|---|---|---|---|---|
| PCIe 1.0 | 8 | 250 | 16 | 2 x 8GFC |
| PCIe 2.0 | 8 | 500 | 32 | 2 x 16GFC |
| PCIe 3.0 | 8 | 1000 | 64 | 4 x 16GFC |
| PCIe 4.0 | 8 | 2000 | 128 | 4 x 32GFC |
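The slide's claim that PCIe 2.0 feeds a dual-ported 16GFC HBA and PCIe 3.0 a quad-ported one follows from a lanes-times-lane-speed calculation. The sketch below (illustrative Python) assumes an x8 slot, the per-lane speeds from the table, and the 1600 MBps nominal 16GFC throughput from slide 8.

```python
# Does an n-port 16GFC HBA fit in an x8 slot of each PCIe generation?
# Assumes 1600 MBps per 16GFC port and the per-lane speeds from the table.

PCIE_LANE_MBPS = {"PCIe 2.0": 500, "PCIe 3.0": 1000}
HBA_PORTS = {"PCIe 2.0": 2, "PCIe 3.0": 4}   # dual- and quad-ported HBAs
PORT_MBPS = 1600                              # nominal 16GFC throughput

for gen, lane_mbps in PCIE_LANE_MBPS.items():
    slot_mbps = 8 * lane_mbps                 # x8 slot, one direction
    need_mbps = HBA_PORTS[gen] * PORT_MBPS
    fits = "fits" if need_mbps <= slot_mbps else "does NOT fit"
    print(f"{gen} x8: {slot_mbps} MBps available, "
          f"{HBA_PORTS[gen]}-port 16GFC HBA needs {need_mbps} MBps -> {fits}")
```

With these assumed numbers, a dual-ported 16GFC HBA needs 3200 MBps against the 4000 MBps of a PCIe 2.0 x8 slot, and a quad-ported HBA needs 6400 MBps against 8000 MBps on PCIe 3.0.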

More Applications Drive More Bandwidth
16GFC was designed for the servers of the next few years, which will use these technologies:
More applications, more bandwidth, more data, more virtual machines
Multi-core processors, more memory, faster PCIe, faster SSDs
[Diagram: applications and OSes running as VMs on a hypervisor over shared hardware]

How is 16GFC Better Than 40GE?
40 Gigabit Ethernet (40GE) is basically 4 lanes of 10GE
With Ethernet, the only steps are 10GE, 40GE and 100GE, with each lane running at 10.3125 Gbps today and 4 x 25.7 Gbps lanes coming in 2014
Each lane of 16GFC outperforms each lane of 10GE by 40% and is more efficient in terms of power consumption per bit
3 lanes of 16GFC yield more bandwidth than 40GE
Lanes of 16GFC can be trunked more flexibly than 40GE because FC lanes can be trunked in increments of 16GFC to effectively produce 32GFC, 48GFC, 64GFC, 80GFC, 96GFC, 112GFC and 128GFC

The Application of 16GFC
ISLs - only one 16GFC ISL is needed instead of two 8GFC ISLs or four 4GFC ISLs to exchange the same amount of data
High Bandwidth Applications - applications that use large amounts of bandwidth can perform better with 16GFC
Data Migration - data migrations can be done in less time when the limiting factor is the link speed
Virtual Desktop Infrastructure

Sample Fabric

| | 8GFC Links | 16GFC Links |
|---|---|---|
| ISLs from ToR Switch to Core | 80 | 40 |
| ISLs from Blade Switch to Core | 240 | 120 |
| Core to Core | 24 | 12 |
| High End Server Ports | 320 | 160 |
| Mainframe Server Ports | 96 | 48 |
| Total Links | 760 | 380 |
| Total Ports | 1,520 | 760 |

[Diagram: 12 core-to-core ISLs; 40 ISLs from 10 ToR switches with 4 uplinks per switch; 120 ISLs from 20 blade switches with 6 uplinks per switch; 160 16GFC ports on 40 high-end servers with 4 uplinks per server; 48 16GFC ports on 4 mainframes with 12 uplinks per server]

High Bandwidth Applications
Different applications have varying bandwidth requirements
16GFC is designed to support a wide range of applications and is mainly applicable at aggregation points, high-end servers and backup servers

| Max I/O per Application | Applications per Server | I/O per Server | Servers per Rack | I/O per Rack |
|---|---|---|---|---|
| 10 Mbps | 30 | 0.3 Gbps | 40 | 12 Gbps |
| 100 Mbps | 15 | 1.5 Gbps | 30 | 45 Gbps |
| 400 Mbps | 8 | 3.2 Gbps | 24 | 76.8 Gbps |
| 1.5 Gbps | 12 | 18 Gbps | 4 | 72 Gbps |
| 8 Gbps | 12 | 96 Gbps | 1 | 96 Gbps |
| 4 Gbps | 4 | 16 Gbps | 20 | 320 Gbps |

Bursty Data Traffic
Data traffic is bursty in nature, and system performance depends on how fast each burst can be processed or transmitted
Faster links lead to better performance

Transfer time in seconds:

| Size of Data Exchange | 1GE at 125 MBps | 4GFC at 400 MBps | 8GFC at 800 MBps | 10GE at 1250 MBps | 16GFC at 1600 MBps |
|---|---|---|---|---|---|
| 100 MB | 0.8 | 0.25 | 0.125 | 0.08 | 0.0625 |
| 1 GB | 8 | 2.5 | 1.25 | 0.8 | 0.625 |
| 10 GB | 80 | 25 | 12.5 | 8 | 6.25 |
| 100 GB | 800 | 250 | 125 | 80 | 62.5 |
| 1 TB | 8,000 | 2,500 | 1,250 | 800 | 625 |
| 10 TB | 80,000 | 25,000 | 12,500 | 8,000 | 6,250 |
| 100 TB | 800,000 | 250,000 | 125,000 | 80,000 | 62,500 |
| 1 PB | 8,000,000 | 2,500,000 | 1,250,000 | 800,000 | 625,000 |

16GFC delivers a TB in ~10 minutes
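The entries in this table (and in the data migration table on slide 31) are simply size divided by link throughput; a minimal Python sketch, using the MBps figures from the column headings, reproduces a few of them.

```python
# Transfer time (seconds) = data size / link throughput.
# Throughputs in MBps, as used in the table's column headings.

LINKS_MBPS = {"1GE": 125, "4GFC": 400, "8GFC": 800, "10GE": 1250, "16GFC": 1600}

def transfer_seconds(size_mb: float, link: str) -> float:
    return size_mb / LINKS_MBPS[link]

one_tb_mb = 1_000_000   # 1 TB expressed in MB (decimal units, as the slide uses)
for link in ("8GFC", "10GE", "16GFC"):
    print(f"1 TB over {link}: {transfer_seconds(one_tb_mb, link):,.0f} s")
# 16GFC moves 1 TB in 625 s, roughly 10 minutes, matching the table.
```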

I/O Aggregation
Small streams of data can add up to large rivers
16GFC minimizes the number of links by maximizing the throughput of each link

| | Max I/O Requirements (MBps) | Number of Applications | Aggregate Bandwidth (MBps) | 16GFC Links |
|---|---|---|---|---|
| Very Low-End Application | 10 | 500 | 5,000 | 3.125 |
| Low-End Application | 100 | 80 | 8,000 | 5 |
| Mid-range Application | 1,000 | 17 | 17,000 | 10.625 |
| High-end Application | 10,000 | 2 | 20,000 | 12.5 |
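The 16GFC link counts in this table are just the aggregate bandwidth divided by the 1600 MBps a 16GFC link carries; a short illustrative Python sketch shows the arithmetic and also rounds up to whole links as a deployment would.

```python
import math

# Aggregate bandwidth = per-application I/O x number of applications;
# 16GFC links = aggregate / 1600 MBps (rounded up for whole links).

LINK_MBPS_16GFC = 1600
workloads = [
    ("Very low-end", 10, 500),
    ("Low-end", 100, 80),
    ("Mid-range", 1000, 17),
    ("High-end", 10000, 2),
]

for name, io_mbps, count in workloads:
    aggregate = io_mbps * count
    links = aggregate / LINK_MBPS_16GFC
    print(f"{name:12s}: {aggregate:6d} MBps aggregate -> "
          f"{links:.3f} links ({math.ceil(links)} in practice)")
```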

Data Migration
When large amounts of data need to be replicated, little processing is involved, but high bandwidth is needed
Faster links cut the transfer time down

Transfer time in seconds:

| Size of Data Exchange | 1GE at 125 MBps | 4GFC at 400 MBps | 8GFC at 800 MBps | 10GE at 1250 MBps | 16GFC at 1600 MBps |
|---|---|---|---|---|---|
| 1 GB | 8 | 2.5 | 1.25 | 0.8 | 0.625 |
| 10 GB | 80 | 25 | 12.5 | 8 | 6.25 |
| 100 GB | 800 | 250 | 125 | 80 | 62.5 |
| 1 TB | 8,000 | 2,500 | 1,250 | 800 | 625 |
| 10 TB | 80,000 | 25,000 | 12,500 | 8,000 | 6,250 |
| 100 TB | 800,000 | 250,000 | 125,000 | 80,000 | 62,500 |
| 1 PB | 8,000,000 | 2,500,000 | 1,250,000 | 800,000 | 625,000 |
| 10 PB | 80,000,000 | 25,000,000 | 12,500,000 | 8,000,000 | 6,250,000 |

16GFC delivers a PB in ~1 week

Virtual Desktop Infrastructure (VDI)
VDI lets IT departments run virtual desktops in the datacenter and deliver desktops to employees as a managed service
End users gain a familiar, personalized environment that they can access from any number of devices anywhere throughout the enterprise or from home
Administrators gain centralized control, efficiency, and security by keeping desktop data in the datacenter
VDI concentrates bandwidth in the data center when hundreds or thousands of employees' desktops run there

VDI Architecture
[Diagram: a campus LAN with load balancers and connection servers provides unified access to centralized virtual desktops (linked clones) on the datacenter LAN, managed by a virtualization control center; 16GFC allows the highest-performance SAN behind the virtual desktops]

Summary
16GFC is needed to keep pace with other advances: abundant applications, server virtualization, 12-core processors, 32 GB DIMMs, SSDs, PCIe 2.0
Users benefit from higher performance, a reduced number of links, easier cable management, and better power efficiency
High bandwidth applications, data migrations, VDI and ISLs are perfect applications for 16GFC
Fibre Channel will remain the dominant storage interconnect by providing the best performance

Thank You