Jake Howering, Director, Product Management
Solution and Technical Leadership Keys
- The Market, Position and Message
- Extreme Integration and Technology
Market Opportunity for Converged Infrastructure
The Converged Infrastructure market, spanning networking, storage, and compute, is predicted to grow from $6B in 2013 to US $74 billion in 2017*, a 52% CAGR.
*Source: Wikibon
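As a quick reference for figures like the one above, CAGR relates a starting value, an ending value, and a time horizon. The helper below is an illustrative sketch (the function name and example numbers are mine, not from the slide):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth rate
    that takes start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative example: a market doubling from $10B to $20B over 5 years.
growth = cagr(10, 20, 5)
print(f"{growth:.1%}")  # roughly 14.9%
```

The published CAGR depends on which base year the analyst uses, so a reported figure may not match a naive recomputation from two endpoint values alone.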
Data Center Connectivity Is Changing
Increasing emphasis on Ethernet-based connectivity options. [Chart: revenue ($M), 2008-2014, with 2010-2014 CAGR by interconnect:]
- NAS + iSCSI + FCoE: 13.9%
- Fibre Channel SAN: 1.3%
- Network-attached NAS: 5.4%
- iSCSI SAN: 18.2%
- External DAS: -8.8%
- Fibre Channel over Ethernet: 104.6%
- Switched SAS: 31.9%
Source: IDC (7/10) and EMC
Storage Networking Interconnections: Fibre Channel vs. Ethernet
[Chart: FC vs. Ethernet storage port count growth, 2012-2016, by link speed: 2/4G, 8G, and 16G Fibre Channel; 10GE and 40GE Ethernet.]
VSPEX Certification and Best-of-Breed Solutions
Designed for flexibility and validated to ensure interoperability and fast deployment, VSPEX enables you to choose the technology in your complete cloud infrastructure solution.
Test configuration: mixed workloads on VMware ESXi 5.1; Lenovo RD630 + QLogic 8300 CNA; Extreme Networks Summit X670; EMC VNX 5300.
Results: Ethernet SAN, up to 125 VMs, failure scenarios, 9.88 Gbps iSCSI throughput.
http://www.emc.com/platform/vspex-proven-infrastructure
Extreme Competitive: Beating the Competition with High Performance and High Value

                  Extreme Networks X670   Cisco Nexus 5548UP   Brocade VDX 6730
Switch height     1 RU                    1 RU                 1 RU
OS                Single OS               Multiple OSs         Single OS
Max 10GE ports    64                      48                   60
Max 40GE ports    4                       0                    0
Throughput        1.2T                    960G                 1.2T
Stacking          Yes                     No                   Yes
OpenFlow          Yes                     No                   Yes
OpenStack         Yes                     Yes                  No
List price        ~$25,000                ~$55,000             ~$62,000
Technology        iSCSI                   FCoE                 FCoE
Extreme Innovation with Open Standards
Extreme Validated Solution (EVS) enables storage partners (NetApp, EMC, others) and SDN.
Validated features: VLANs, LAG, iSCSI (over TCP), jumbo frames (9000 bytes), DCB.
Extreme SDN for Converged Infrastructure: Available Now!
The OpenStack Extreme Quantum plug-in is topology aware.
- A VM is provisioned in Pod 1 based on the Topology Scheduler's proximity algorithm.
- VM mobility (i.e., vMotion) can be restricted to Pods or Zones.
[Diagram: Internet, Data Center Core, and Zone 1 networks, with compute and storage in Pod 1 and Pod 2.]
What Does It Mean to Be Extreme
Value
- Leverage the low cost curve of Ethernet
- Converge LAN and SAN onto a single network for lower CAPEX and OPEX
- Efficient scalability with a pay-as-you-grow model for incremental growth
Performance
- High availability to maximize uptime and user experience
- Efficient bandwidth utilization while assuring a loop-free topology
- Data Center Bridging features for a lossless SAN experience
Simplicity
- Pre-tested and pre-validated solution assures seamless deployment and operations
- Extreme Networks' single OS provides a consistent and predictable UI and troubleshooting
- Automation and management tools with VMware vCenter, EMC Unisphere, and Extreme Networks Ridgeline
Open Standards
- Industry-standard protocols, including Ethernet, to assure interoperability
- SDN-ready with OpenStack and OpenFlow support
- Open APIs, including XML and SOAP, for system abstraction and custom integration as needed
Solution and Technical Leadership Keys
- The Market, Position and Message
- Extreme Integration and Technology
Storage Networking: Multiple Protocols
[Diagram: NAS (file sharing) and iSCSI SANs over Ethernet; Fibre Channel and FCoE SANs on the storage system.]
Choice of connectivity: Fibre Channel (4 Gb/s, 8 Gb/s); low-cost IP (1 Gb/s, 10 Gb/s); FCoE.
Choice of delivery: file-based or block-based. Simple.
Growth paths: iSCSI to Fibre Channel for throughput; Fibre Channel to FCoE for simplification; scale front end and storage independently.
Typical Storage Systems Deployment
Shared storage for virtual servers and applications: vCenter and Unisphere managing Oracle, Microsoft Exchange, SQL Server, and SharePoint on a virtual server pool backed by a VNX series storage pool.
- Simple: tune SQL Server in 80% less time with FAST VP; provision SharePoint four times faster with a single tool
- Efficient: realize 50:1 server consolidation without creating storage bottlenecks with FAST Cache
- Powerful: run virtualized Microsoft SQL Server and Oracle three times faster
Storage Networking Key Requirements: Availability, Resiliency, Isolation, Performance
Storage Networking Key Technologies: Fibre Channel, Fibre Channel over Ethernet, iSCSI
Network Stack Comparison
All five options carry SCSI over the physical wire, with different layers in between:
- SCSI: direct attach
- iSCSI: SCSI over iSCSI / TCP / IP / Ethernet
- FCIP: SCSI over FCP / FC / FCIP / TCP / IP / Ethernet
- FCoE: SCSI over FCP / FC / FCoE / Ethernet (less overhead than FCIP or iSCSI)
- FC: SCSI over FCP / FC
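To make the "less overhead" point concrete, here is a rough back-of-the-envelope comparison of fixed per-frame header bytes. The byte counts are typical fixed-header sizes and deliberately ignore optional fields (VLAN tags, IP/TCP options, iSCSI digests, FCIP framing), so treat the numbers as illustrative:

```python
# Approximate per-frame encapsulation overhead in bytes, fixed headers only.
ETH_HDR, ETH_FCS = 14, 4        # Ethernet header and frame check sequence
IP_HDR, TCP_HDR = 20, 20        # IPv4 and TCP headers without options
ISCSI_BHS = 48                  # iSCSI basic header segment
FC_HDR, FC_CRC = 24, 4          # Fibre Channel frame header and CRC
FCOE_HDR, FCOE_EOF = 14, 4      # FCoE encapsulation header and EOF word

stacks = {
    "iSCSI": ETH_HDR + IP_HDR + TCP_HDR + ISCSI_BHS + ETH_FCS,
    "FCIP":  ETH_HDR + IP_HDR + TCP_HDR + FC_HDR + FC_CRC + ETH_FCS,
    "FCoE":  ETH_HDR + FCOE_HDR + FC_HDR + FC_CRC + FCOE_EOF + ETH_FCS,
}
for name, overhead in sorted(stacks.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{overhead} bytes of headers per frame")
```

Even this simplified tally shows FCoE carrying fewer header bytes per frame than the TCP/IP-based transports, which is the annotation the stack diagram makes.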
Ethernet-Based Storage Systems
- Block-based storage: iSCSI, FCoE
- File-based storage: NFS, CIFS
ExtremeXOS infrastructure layer: Data Center Bridging (DCB) protocols, including Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and DCB Capabilities Exchange (DCBX); plus Dynamic Scripting and CLEAR-Flow.
Storage Networking Key Features: DCB, FIP Snooping, STP, MLAG, VLANs, Jumbo Frames
Data Center Bridging
Data Center Bridging is the key technology for a lossless Ethernet SAN.
DCBX, the Data Center Bridging Capabilities Exchange (802.1Qaz), discovers and exchanges capabilities and configuration between DCB switches via LLDP (802.1AB), including:
- Priority Flow Control (802.1Qbb): pause specific classes of traffic between DCB switches
- Enhanced Transmission Selection (802.1Qaz): guarantee a specific percentage of bandwidth to a specific class of traffic
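The ETS guarantee can be illustrated with a small sketch. The policy percentages and the function name here are hypothetical examples, not part of the standard or this deck:

```python
def ets_allocation(link_gbps: float, groups: dict) -> dict:
    """Split link bandwidth across ETS priority groups by configured
    percentage. ETS (802.1Qaz) guarantees each group its share under
    congestion; bandwidth unused by an idle group remains available
    to the others."""
    assert sum(groups.values()) == 100, "ETS shares must total 100%"
    return {name: link_gbps * pct / 100 for name, pct in groups.items()}

# Illustrative policy on a 10GE converged link: 50% storage, 30% LAN, 20% other.
shares = ets_allocation(10, {"storage": 50, "lan": 30, "other": 20})
print(shares)  # {'storage': 5.0, 'lan': 3.0, 'other': 2.0}
```

This is why a converged LAN/SAN link can be safe for storage: the storage class keeps its guaranteed floor even when LAN traffic bursts.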
FCoE Initialization Protocol (FIP) Snooping
FIP snooping enables efficient FC transport (FCoE) over 10GE Ethernet in the data center and is used in multi-hop FCoE environments. It is a frame inspection method: FIP-snooping-capable DCB devices monitor FIP frames and apply policies based on the information in those frames.
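A minimal sketch of the classification step a FIP-snooping bridge performs. The EtherType values come from the FC-BB-5 standard; the function and the policy strings are illustrative, not any vendor's implementation:

```python
# EtherTypes defined by FC-BB-5: FIP control frames vs. FCoE data frames.
ETHERTYPE_FIP = 0x8914
ETHERTYPE_FCOE = 0x8906

def classify(ethertype: int) -> str:
    """A FIP-snooping bridge watches FIP frames (discovery, login,
    keep-alive) to learn which ports host legitimate FCoE endpoints,
    then installs ACLs so that only those ports may carry FCoE data."""
    if ethertype == ETHERTYPE_FIP:
        return "inspect: FIP control frame (update snooping state, adjust ACLs)"
    if ethertype == ETHERTYPE_FCOE:
        return "forward only if an ACL learned from FIP permits this port"
    return "ordinary Ethernet frame: normal forwarding"

print(classify(0x8914))
```

The key idea is that the bridge never parses FCoE payloads; it only inspects the FIP control plane and enforces the result on the data plane.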
LAN and SAN: Physically Separate Topologies
[Diagram: rack-mounted servers with 1 Gigabit Ethernet NICs reach the Ethernet LAN and iSCSI SAN; Fibre Channel HBAs reach the FC SAN and storage.]
- Servers connect to the LAN, NAS, and iSCSI SAN with NICs
- Servers connect to the FC SAN with HBAs
- Many environments today are still 1 Gigabit Ethernet
- Multiple server adapters and multiple cables drive power and cooling costs
- Storage is a separate network (including iSCSI)
Adapter Evolution: Consolidation
[Diagram: network adapter]
Storage Drivers and Virtualization
[Diagram: per-VM vNIC and vSCSI devices feed the vSwitch and VMkernel storage stack in the hypervisor, which maps traffic onto a NIC, CNA, or FC HBA.]
- LAN and iSCSI traffic follow the network path (the iSCSI initiator can also run inside the VM)
- FCoE follows the FC traffic path
FCoE Extends FC on a Single Network
- The server sees storage traffic as FC; the SAN sees the host as FC
- Two adapter options: a standard 10G NIC with an FCoE software stack, or a Converged Network Adapter
- Lossless Ethernet links carry converged traffic to a Converged Network Switch, which connects to both the Ethernet network and the FC network and FC storage
FCoE with an External FCoE Gateway
Converged Network Switches move out of the rack, from a tightly controlled environment into a unified network, while maintaining existing LAN and SAN management.
[Diagram: rack-mounted servers with 10 GbE CNAs connect over Ethernet to an Ethernet network (IP, FCoE) and Converged Network Switch, which FC-attaches to the Fibre Channel SAN and storage.]
FCoE with a Top-of-Rack Gateway
Network switches stay in the rack for an IP-based unified network, maintaining existing LAN and SAN management. This requires a specialized switch with both FC and Ethernet ports, which is expensive.
[Diagram: rack-mounted servers with 10 GbE CNAs connect to the Ethernet LAN and, via an Ethernet switch with FC attach, to the FC SAN and storage.]
Ethernet LAN and iSCSI SAN
Network switches stay in the rack for an IP-based unified network, maintaining existing LAN and SAN management.
[Diagram: rack-mounted servers with 10 GbE CNAs connect to the Ethernet LAN and, via an Ethernet switch with iSCSI attach, to the iSCSI SAN and storage.]
Convergence at 10 Gigabit Ethernet
Two paths to a converged network:
- iSCSI: purely Ethernet
- FCoE: allows a mix of FC and Ethernet (or all Ethernet); FC that you have today or buy tomorrow will plug into this in the future
Choose based on scalability, management, and skill set.
[Diagram: rack-mounted servers with 10 GbE CNAs connect to a Converged Network Switch serving the Ethernet LAN, iSCSI/FCoE storage, and the FC SAN.]
Software-Defined Storage Networking: FCoE Overlay
Basic Topology: Customer to Compute Layer
[Diagram: a customer or machine connects through Ethernet Fabric A and Ethernet Fabric B to the compute layer; arrows indicate traffic allowed to cross planes in normal working condition.]
1. Dual paths from customer to compute layer (basic design)
2. Active path with backup path (basic design)
3. Load-shared paths (advanced design)
Basic Topology: Compute to Storage
[Diagram: from the compute layer, the path through Fabric A is active and the failure plan is the passive path through Fabric B, down to the storage layer.]
1. Dual paths from compute layer to storage layer (basic design)
2. Active path with backup path (basic design), based on hypervisor multipathing
3. Load-shared paths (advanced design), requiring a hypervisor plug-in to enable IO-level load sharing
4. Consider TCP monitoring of LACP LAG groups with iSCSI
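The active/backup behavior in step 2 can be sketched as a toy path selector. All names here are illustrative; in practice this logic lives in the hypervisor's multipathing layer:

```python
class MultipathSelector:
    """Toy active/passive path selection, as in the basic design:
    traffic uses the path via Fabric A until it fails, then fails
    over to the path via Fabric B."""

    def __init__(self, active: str, standby: str):
        self.paths = {active: True, standby: True}  # path -> healthy?
        self.active, self.standby = active, standby

    def mark_failed(self, path: str):
        """Record a path failure, e.g. from a link-down or TCP health probe."""
        self.paths[path] = False

    def select(self) -> str:
        """Return the path IO should take right now."""
        if self.paths[self.active]:
            return self.active
        if self.paths[self.standby]:
            return self.standby
        raise RuntimeError("all paths down")

mp = MultipathSelector("fabric-a", "fabric-b")
print(mp.select())          # fabric-a
mp.mark_failed("fabric-a")
print(mp.select())          # fabric-b
```

The advanced design in step 3 would instead spread IOs across both healthy paths, which is why it needs deeper hypervisor integration.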
FC + FCoE Design: Single Hop, Active/Standby
FC + FCoE Design: Single Hop, Active/Active
FC + FCoE Design: Scalable Single Hop, Active/Active
Storage Networking Comparison

                     FC               FCoE                        iSCSI
Lossless             Yes              DCB (required)              DCB (optional)
Layer 2              N/A              Yes                         No
Layer 3 (IP)         No               No                          Yes
TCP                  No               No                          Yes
Resiliency           Yes              Yes                         Yes
Isolation            Yes              Yes                         Yes
Performance          Best             Second                      Third
Bandwidth            16Gb FC          10GE                        40GE+
Hardware capability  FC SAN director  FCF (gateway) + FIP switch  Ethernet switch
Thank You!
Extreme Converged Infrastructure
http://www.extremenetworks.com/solutions/datacenter_converged_infrastructure.aspx
Network Design Guide coming out soon!
EMC VSPEX Minimum Requirements: 125 VMs

Profile characteristic                              Value
Number of virtual machines                          125
Virtual machine OS                                  Windows Server 2012 Datacenter edition
Processors per virtual machine                      1
Virtual processors per physical CPU core            4
RAM per virtual machine                             2 GB
Average storage available per virtual machine       100 GB
Average IOPS per virtual machine                    25 IOPS
LUNs or NFS shares to store virtual machine disks   1 or 2
Virtual machines per LUN or NFS share               50
Disk and RAID type for LUNs or NFS shares           RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
EMC VSPEX Hardware Requirements: 125 VMs

Component: VMware vSphere servers (Lenovo RD630)
- Intel E5-2680, dual socket, 8 cores per socket with hyper-threading, for 32 logical cores per node
- 256 GB RAM per node
- 2 x QLogic 8362 CNAs per host, using Ethernet drivers
- RAID 1 boot disk for the hypervisor: 2 x 300 GB SAS
- CPU: 1 vCPU per virtual machine, 4 vCPUs per physical core; for 125 virtual machines, 125 vCPUs and a minimum of 32 physical CPU cores
- Memory: 2 GB RAM per virtual machine plus a 2 GB RAM reservation per VMware vSphere host; for 125 virtual machines, a minimum of 250 GB RAM plus 2 GB for each physical server
- Network (block storage systems): 2 x 10 GE NICs per server (2 x QLogic 8362 CNAs); IPMI enabled in the BIOS, with the dedicated copper management port sharing access with the IPMI IP address
NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.

Component: Extreme Networks infrastructure
- Minimum switching capacity for block storage: 2 x Extreme Networks X670
- 2 x 10 GE ports per VMware vSphere server
- 1 x 1 GE port per Control Station for management
- 2 ports per VMware vSphere server, for the storage network
- 2 ports per SP, for storage data

Component: Shared infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory (AD), DNS, and other services configured; their setup is beyond the scope of this document. If implemented without existing infrastructure, the minimum requirements are:
- 2 physical servers
- 16 GB RAM per server
- 4 processor cores per server
- 2 x 1 GE ports per server
NOTE: These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.
EMC VNX Series Storage Array (Block)
Common:
- 1 x 1 GE interface per Control Station for management
- 1 x 1 GE interface per SP for management
- 2 front-end ports per SP
- System disks for VNX OE
For 125 virtual machines: EMC VNX 5300 with
- 60 x 600 GB 15k rpm 3.5-inch SAS drives
- 4 x 200 GB Flash drives
- 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB Flash drive as a hot spare
EMC VSPEX Software Versions

Software                       Configuration
VNX OE for file                Release 7.0.100-2
VNX OE for block               Release 31 (05.31.000.5.704)
EMC VSI for VMware vSphere     Version 5.1
Virtual machine base OS        Microsoft Windows Server 2008 R2
VDBench                        5.0.2
Note: VDBench was used to validate this solution; it is not a required component for production.
Extreme + EMC VSPEX Software Versions

Software                              Configuration
VMware vSphere Server                 5.1 Enterprise Edition
vCenter Server                        5.1 Standard Edition
OS for vCenter Server                 Windows Server 2008 R2 SP1 Standard Edition
                                      (NOTE: any operating system supported for vCenter can be used)
Microsoft SQL Server                  Version 2008 R2 Standard Edition
                                      (NOTE: any supported database for vCenter can be used)
EMC VNX OE for Block                  05.32.000.3.770
EMC VSI for VMware vSphere:
  Unified Storage Management          5.4
  Storage Viewer                      5.4
EMC PowerPath/VE                      5.8
Virtual machine base OS               Microsoft Windows Server 2012 Datacenter edition
                                      (used for validation, not required for deployment)
Extreme Networks Summit switches      15.3 (network switching)
EMC VSPEX Virtualization Requirements: 125 VMs

Component: VMware vSphere servers
- CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPU cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per VMware vSphere host. For 125 virtual machines: minimum of 250 GB RAM, plus 2 GB RAM for each physical server
- Network (block): 2 x 10 GE NICs per server; 2 HBAs or CNAs per server
NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.
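The CPU and memory minimums above follow directly from the per-VM figures. A small sketch of the arithmetic (the function name and its defaults are illustrative, taken from the 125-VM profile):

```python
import math

def vspex_minimums(vms: int, vcpus_per_vm: int = 1, vcpus_per_core: int = 4,
                   ram_gb_per_vm: int = 2, hosts: int = 1,
                   host_reservation_gb: int = 2):
    """Reproduce the sizing arithmetic: total vCPUs divided by the 4:1
    oversubscription ratio gives minimum physical cores; per-VM RAM plus
    a per-host reservation gives minimum RAM in GB."""
    cores = math.ceil(vms * vcpus_per_vm / vcpus_per_core)
    ram = vms * ram_gb_per_vm + hosts * host_reservation_gb
    return cores, ram

cores, ram = vspex_minimums(125, hosts=4)
print(cores)  # 32 physical cores minimum (125 / 4 rounded up)
```

Note that 125 / 4 = 31.25, so the 32-core minimum comes from rounding up to whole cores; the RAM total likewise grows by 2 GB for every host added.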
EMC VSPEX Network Requirements: 125 VMs
Network infrastructure, minimum switching capacity (block):
- 2 x Extreme Networks X670 physical switches
- 2 x 10 GE ports per VMware vSphere server
- 1 x 1 GE port per Control Station for management
- 2 ports per SP, for storage data
EMC VSPEX Block Storage Requirements: 125 VMs
EMC VNX series storage array (block), common:
- 1 x 1 GE interface per Control Station for management
- 1 x 1 GE interface per SP for management
- 2 front-end ports per SP
- System disks for OE
For 125 virtual machines: EMC VNX 5300 with
- 60 x 600 GB 15k rpm 3.5-inch SAS drives
- 4 x 200 GB Flash drives
- 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
- 1 x 200 GB Flash drive as a hot spare