Jake Howering, Director, Product Management

Agenda: Solution and Technical Leadership Keys; The Market, Position and Message; Extreme Integration and Technology

Market Opportunity for Converged Infrastructure. The Converged Infrastructure market, spanning Networking, Storage, and Compute, is predicted to grow from $6B in 2013 to (US) $74 billion in 2017, a 52% CAGR. (Source: Wikibon)

Data Center connectivity is changing, with increasing emphasis on Ethernet-based connectivity options. Revenue ($M) CAGR by interconnect, 2010-2014 (Source: IDC, 7/10, and EMC): NAS + iSCSI + FCoE 13.9%; Fibre Channel SAN 1.3%; Network-attached NAS 5.4%; iSCSI SAN 18.2%; External DAS -8.8%; Fibre Channel over Ethernet 104.6%; Switched SAS 31.9%. [Chart: revenue by year, 2008-2014.]
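To make the CAGR figures concrete, here is a minimal sketch in Python of how a compound annual growth rate projects revenue forward; the $1,000M base is an illustrative placeholder, not IDC's actual 2010 figure.

```python
# Project revenue forward under a compound annual growth rate (CAGR).
# The $1,000M base is a placeholder for illustration, not IDC data.
cagr_2010_2014 = {
    "NAS + iSCSI + FCoE": 0.139,
    "Fibre Channel SAN": 0.013,
    "iSCSI SAN": 0.182,
    "Fibre Channel over Ethernet": 1.046,
    "External DAS": -0.088,
}

def project(base_revenue_m, cagr, years):
    """Revenue after `years` of compounding at `cagr`."""
    return base_revenue_m * (1 + cagr) ** years

for tech, cagr in cagr_2010_2014.items():
    print(f"{tech}: ${project(1000, cagr, 4):,.0f}M by year 4")
```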

Storage Networking Interconnections: Fibre Channel vs. Ethernet. [Chart: FC vs. Ethernet storage port count growth, 2012-2016, by link speed: 2/4G, 8G, and 16G Fibre Channel; 10GE and 40GE Ethernet.]

VSPEX Certification & Best-of-Breed Solutions. Test configuration: mixed workloads on VMware ESXi 5.1, Lenovo RD630 servers with QLogic 8300 CNAs, Extreme Networks Summit X670 switches, and EMC VNX 5300 storage. Designed for flexibility and validated to ensure interoperability and fast deployment, VSPEX enables you to choose the technology in your complete cloud infrastructure solution. Validated results: Ethernet SAN, up to 125 VMs, failure scenarios, 9.88 Gb/s iSCSI throughput. http://www.emc.com/platform/vspex-proven-infrastructure

Extreme Competitive: Beating the Competition with High Performance and High Value

                 Extreme Networks X670   Cisco Nexus 5548UP   Brocade VDX 6730
Height           1 RU                    1 RU                 1 RU
OS               Single OS               Multiple OSs         Single OS
Max 10GE ports   64                      48                   60
Max 40GE ports   4                       0                    0
Throughput       1.2T                    960G                 1.2T
Stacking         Yes                     No                   Yes
OpenFlow         Yes                     No                   Yes
OpenStack        Yes                     Yes                  No
List price       ~$25,000                ~$55,000             ~$62,000
Technology       iSCSI                   FCoE                 FCoE

Extreme Innovation with Open Standards. The Extreme Validated Solution (EVS) program enables storage partners (NetApp, EMC, others) and SDN. Validated features: VLANs, LAG, iSCSI (over TCP), jumbo frames (9000 bytes), DCB.

Extreme SDN for Converged Infrastructure, available now. The OpenStack Extreme Quantum plug-in is topology aware: a VM is provisioned in Pod 1 based on the Topology Scheduler's proximity algorithm, and VM mobility (vMotion) can be restricted to pods or zones. [Diagram: Internet and data center core above Zone 1, with compute and storage in Pod 1 and Pod 2.]
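The proximity placement can be pictured as a small scheduling rule. Below is a minimal sketch in Python; the topology structure and field names are illustrative assumptions, not the actual Quantum plug-in API:

```python
# Sketch of topology-aware VM placement: choose the pod with free
# capacity that minimizes network hops to the VM's storage.
# The topology structure and field names are illustrative.
topology = {
    "pod1": {"hops_to": {"pod1": 0, "pod2": 2}, "free_slots": 10},
    "pod2": {"hops_to": {"pod1": 2, "pod2": 0}, "free_slots": 4},
}

def place_vm(storage_pod):
    """Pick a pod with capacity closest to the storage pod."""
    candidates = [(pod, info["hops_to"][storage_pod])
                  for pod, info in topology.items()
                  if info["free_slots"] > 0]
    return min(candidates, key=lambda c: c[1])[0]

print(place_vm("pod1"))   # -> pod1: compute lands beside its storage
```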

What Does it Mean to be Extreme?
Value: leverage the low cost curve of Ethernet; converge LAN and SAN onto a single network for lower CAPEX and OPEX; efficient scalability with a pay-as-you-grow model for incremental growth.
Performance: high availability to maximize uptime and user experience; efficient bandwidth utilization while assuring a loop-free topology; Data Center Bridging features for a lossless SAN experience.
Simplicity: a pre-tested and pre-validated solution assures seamless deployment and operations; Extreme Networks' single OS provides a consistent and predictable UI and troubleshooting; automation and management tools integrate with VMware vCenter, EMC Unisphere, and Extreme Networks Ridgeline.
Open Standards: industry-standard protocols, including Ethernet, to assure interoperability; SDN-ready with OpenStack and OpenFlow support; open APIs, including XML and SOAP, for system abstraction and custom integration as needed.

Agenda: Solution and Technical Leadership Keys; The Market, Position and Message; Extreme Integration and Technology

Storage Networking: Multiple Protocols. [Diagram: one storage system serving an iSCSI SAN and NAS file sharing over Ethernet, plus Fibre Channel and FCoE SANs.] Choice of connectivity: Fibre Channel (4 Gb/s, 8 Gb/s), low-cost IP (1 Gb/s, 10 Gb/s), or FCoE. Choice of delivery: file-based or block-based. Growth paths: iSCSI to Fibre Channel for throughput; Fibre Channel to FCoE for simplification; scale front end and storage independently.

Typical Storage Systems Deployment: shared storage for virtual servers and applications (Oracle, Microsoft Exchange, SQL Server, SharePoint), with a virtual server pool and a VNX series storage pool managed through vCenter and Unisphere. Simple: tune SQL Server in 80% less time with FAST VP; provision SharePoint 4 times faster with a single tool. Efficient: realize 50:1 server consolidation without creating storage bottlenecks, with FAST Cache. Powerful: run virtualized Microsoft SQL Server and Oracle three times faster.

Storage Networking Key Requirements: Availability, Resiliency, Isolation, Performance

Storage Networking Key Technologies: Fibre Channel, Fibre Channel over Ethernet, iSCSI

Network Stack Comparison. Each protocol carries SCSI over a different stack (layers top to bottom, all over the physical wire):
FC:    SCSI / FCP / FC
FCIP:  SCSI / FCP / FCIP / TCP / IP / Ethernet
iSCSI: SCSI / iSCSI / TCP / IP / Ethernet
FCoE:  SCSI / FCP / FCoE / Ethernet (less overhead than FCIP and iSCSI)
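One rough way to see the "less overhead" claim is to total the fixed per-frame header bytes each stack adds. The sketch below (Python) uses the standard fixed header sizes and deliberately ignores optional headers, the FCIP encapsulation header, and amortization across larger PDUs, so treat the totals as indicative only:

```python
# Approximate fixed per-frame header overhead for each storage stack.
# Standard header sizes; options, FCIP encapsulation, and amortization
# across larger PDUs are ignored, so totals are indicative only.
ETH   = 14 + 4   # Ethernet header + FCS
IPV4  = 20       # IPv4 header
TCP   = 20       # TCP header
ISCSI = 48       # iSCSI Basic Header Segment
FC    = 24       # FC frame header
FCOE  = 14 + 4   # FCoE encapsulation header + EOF/padding

stacks = {
    "iSCSI": ETH + IPV4 + TCP + ISCSI,
    "FCIP":  ETH + IPV4 + TCP + FC,   # FCIP encap header omitted
    "FCoE":  ETH + FCOE + FC,
}
for name, overhead in sorted(stacks.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{overhead} header bytes per frame")
```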

Ethernet-Based Storage Systems. Block-based storage: iSCSI, FCoE. File-based storage: NFS, CIFS. The ExtremeXOS infrastructure layer provides the Data Center Bridging (DCB) protocols: Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and DCB Capabilities Exchange (DCBX); plus Dynamic Scripting and CLEAR-Flow.

Storage Networking Key Features: DCB, FIP Snooping, STP, MLAG, VLANs, Jumbo Frames

Data Center Bridging: the key technology for a lossless Ethernet SAN. DCBX, the Data Center Bridging Capabilities Exchange protocol (802.1Qaz), discovers and exchanges capabilities and configuration between DCB switches via LLDP (802.1AB). The exchanged capabilities include Priority-based Flow Control (802.1Qbb), which pauses specific classes of traffic between DCB switches, and Enhanced Transmission Selection (802.1Qaz), which guarantees a specific percentage of bandwidth to a specific class of traffic.
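To illustrate the guarantee ETS makes, here is a minimal sketch in Python of percentage-based bandwidth allocation on a 10GE link, where an idle class's unused share can be borrowed by busy classes; the class names and the redistribution rule are illustrative assumptions:

```python
# Minimal sketch of ETS-style bandwidth allocation on a 10GE link.
# Class names and the redistribution rule are illustrative.
LINK_GBPS = 10.0
ets_shares = {"storage (FCoE/iSCSI)": 0.50, "LAN": 0.30, "other": 0.20}

def allocate(offered_gbps):
    """Guarantee each class its percentage; unused guarantee is
    redistributed to classes that still have demand."""
    alloc = {c: min(offered_gbps[c], pct * LINK_GBPS)
             for c, pct in ets_shares.items()}
    spare = LINK_GBPS - sum(alloc.values())
    for c in alloc:
        extra = min(spare, offered_gbps[c] - alloc[c])
        alloc[c] += extra
        spare -= extra
    return alloc

# LAN is nearly idle, so storage can exceed its 50% guarantee.
print(allocate({"storage (FCoE/iSCSI)": 8.0, "LAN": 0.5, "other": 2.0}))
```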

FCoE Initialization Protocol (FIP) Snooping. FIP snooping supports efficient FC transport (FCoE) over 10GE Ethernet in the data center and is used in multi-hop FCoE environments. It is a frame-inspection method that FIP-snooping-capable DCB devices use to monitor FIP frames and apply policies based on the information in those frames.
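Concretely, a FIP snooping bridge watches the FIP control exchange (EtherType 0x8914) and then forwards FCoE data frames (EtherType 0x8906) only between end nodes and the FCFs they successfully logged in to. A minimal sketch of that inspection logic in Python follows; the frame fields and policy structure are simplified assumptions:

```python
# Simplified FIP snooping: learn ENode/FCF pairs from FIP logins
# (EtherType 0x8914) and gate FCoE frames (EtherType 0x8906) on them.
# Frame fields and the policy structure are simplified assumptions.
FIP, FCOE = 0x8914, 0x8906
allowed = set()   # (enode_mac, fcf_mac) pairs learned from FIP

def inspect(frame):
    etype = frame["ethertype"]
    if etype == FIP:
        if frame["op"] == "FLOGI_ACC":        # successful fabric login
            allowed.add((frame["dst"], frame["src"]))
        return "forward"                      # FIP control always passes
    if etype == FCOE:
        pair_ok = (frame["src"], frame["dst"]) in allowed or \
                  (frame["dst"], frame["src"]) in allowed
        return "forward" if pair_ok else "drop"
    return "forward"                          # non-FCoE traffic untouched

inspect({"ethertype": FIP, "op": "FLOGI_ACC",
         "src": "fcf-mac", "dst": "enode-mac"})
print(inspect({"ethertype": FCOE, "src": "enode-mac", "dst": "fcf-mac"}))
```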

LAN & SAN: Physically Separate Topologies. Rack-mounted servers connect to the Ethernet LAN, NAS, and iSCSI SAN with NICs, and to the Fibre Channel SAN with FC HBAs. Many environments today are still 1 Gigabit Ethernet. The result: multiple server adapters, multiple cables, added power and cooling costs, and storage on a separate network (including iSCSI).

Adapter Evolution: Consolidation of the Network Adapter

Storage Drivers and Virtualization. [Diagram: each VM's vNIC and vSCSI devices attach through the vSwitch and the VMkernel storage stack in the hypervisor. LAN and iSCSI traffic exits via a NIC or CNA; FC traffic exits via an FC HBA or CNA, and FCoE follows the FC path. The iSCSI initiator can also run inside the VM.]

FCoE Extends FC on a Single Network. The server sees storage traffic as FC, and the SAN sees the host as FC. On the server there are two options: an FCoE software stack over a standard 10G NIC, or a Converged Network Adapter. Lossless Ethernet links carry the traffic to a Converged Network Switch, which attaches to the FC storage and FC network.

FCoE With External FCoE Gateway. Converged Network Switches move out of the rack, from a tightly controlled environment into a unified network, while maintaining existing LAN and SAN management. Rack-mounted servers with 10 GbE CNAs connect over an Ethernet network (IP, FCoE) to the Converged Network Switch, which FC-attaches to the Fibre Channel SAN and storage.

FCoE with Top-of-Rack Gateway. Network switches stay in the rack for an IP-based unified network, maintaining existing LAN and SAN management. This requires a specialized network switch with both FC and Ethernet ports, which is expensive. Rack-mounted servers with 10 GbE CNAs connect to the Ethernet LAN, and the switch FC-attaches to the FC SAN and storage.

Ethernet LAN & iSCSI SAN. Network switches stay in the rack for an IP-based unified network, maintaining existing LAN and SAN management. Rack-mounted servers with 10 GbE CNAs connect to the Ethernet switch, which serves the Ethernet LAN and attaches to the iSCSI SAN and storage.

Convergence at 10 Gigabit Ethernet. There are two paths to a converged network: iSCSI is purely Ethernet, while FCoE allows a mix of FC and Ethernet (or all Ethernet), so the FC you have today or buy tomorrow will plug into it in the future. Choose based on scalability, management, and skill set. Rack-mounted servers with 10 GbE CNAs connect to a Converged Network Switch serving the Ethernet LAN, iSCSI/FCoE storage, and the FC SAN.

Software Defined Storage Networking FCoE Overlay

Basic Topology: Customer to Compute Layer. [Diagram: a customer or machine connects through Ethernet Fabric A and Ethernet Fabric B to the compute layer; markers indicate traffic allowed to cross planes in normal working condition.] 1. Dual paths from customer to compute layer (basic design). 2. Active path with backup path (basic design). 3. Load-shared paths (advanced design).

Basic Topology: Compute to Storage. [Diagram: the path to Ethernet Fabric A is active; in the failure plan, the path to Ethernet Fabric B is passive.] 1. Dual paths from compute layer to storage layer (basic design). 2. Active path with backup path (basic design), based on hypervisor multipathing. 3. Load-shared paths (advanced design), which require a hypervisor plugin to enable IO-level load sharing. 4. Consider TCP monitoring of LACP LAG groups with iSCSI, as sketched below.
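Point 4 matters because a LAG member can be link-up while the TCP path to the target is not. Here is a minimal sketch in Python that probes the iSCSI target's TCP port (3260) over each fabric; the addresses are placeholders, and a production monitor would bind each probe to a source interface so both fabrics are actually exercised:

```python
# Probe an iSCSI target's TCP port (3260) over each fabric path.
# Addresses are placeholders (TEST-NET); a production monitor would
# bind each probe to a source interface on the matching fabric.
import socket

PATHS = {"fabric-A": "192.0.2.10", "fabric-B": "192.0.2.11"}

def tcp_alive(ip, port=3260, timeout=2.0):
    """Return True if a TCP connection to the target succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, ip in PATHS.items():
    print(f"iSCSI path via {name} ({ip}): "
          f"{'up' if tcp_alive(ip) else 'DOWN'}")
```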

FC + FCoE Design: Single Hop, Active/Standby

FC + FCoE Design: Single Hop, Active/Active

FC + FCoE Design: Scalable Single Hop, Active/Active

Storage Networking Comparison

              FC               FCoE                         iSCSI
Lossless      Yes              DCB (required)               DCB (optional)
Layer 2       N/A              Yes                          No
Layer 3 (IP)  No               No                           Yes
TCP           No               No                           Yes
Resiliency    Yes              Yes                          Yes
Isolation     Yes              Yes                          Yes
Performance   Best             Second                       Third
Bandwidth     16G FC           10GE                         40GE+
Hardware      FC SAN Director  FCF (gateway) + FIP switch   Ethernet switch

Thank You! Extreme Converged Infrastructure: http://www.extremenetworks.com/solutions/datacenter_converged_infrastructure.aspx. Network Design Guide coming out soon!

Thank You

EMC VSPEX Minimum Requirements, 125 VMs. Profile characteristics:
Number of virtual machines: 125
Virtual machine OS: Windows Server 2012 Datacenter edition
Processors per virtual machine: 1
Virtual processors per physical CPU core: 4
RAM per virtual machine: 2 GB
Average storage available per virtual machine: 100 GB
Average IOPS per virtual machine: 25 IOPS
LUNs or NFS shares to store virtual machine disks: 1 or 2
Virtual machines per LUN or NFS share: 50
Disk and RAID type for LUNs or NFS shares: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
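These profile numbers drive the hardware sizing on the following slides. A small sketch in Python of the arithmetic; the aggregate IOPS line is derived here and not stated on the slides:

```python
import math

# Derive the compute sizing from the 125-VM profile above.
vms           = 125
vcpu_per_vm   = 1
vcpu_per_core = 4      # virtual processors per physical core
ram_per_vm_gb = 2
iops_per_vm   = 25

cores = math.ceil(vms * vcpu_per_vm / vcpu_per_core)
ram   = vms * ram_per_vm_gb     # plus a 2 GB reservation per host
iops  = vms * iops_per_vm       # aggregate demand (derived)

print(f"minimum physical cores: {cores}")   # 32, matching the spec
print(f"minimum RAM: {ram} GB")             # 250 GB, matching the spec
print(f"aggregate IOPS: {iops}")            # 3,125
```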

EMC VSPEX Hardware Requirements, 125 VMs

VMware vSphere servers: Lenovo RD630, dual-socket Intel E5-2680 (8 cores per socket with hyper-threading, 32 logical cores per node); 256 GB RAM per node; 2 x QLogic 8362 CNAs per host, using Ethernet drivers; RAID 1 boot disks for the hypervisor (2 x 300 GB SAS); IPMI enabled in the BIOS, with the dedicated copper management port sharing access with the IPMI IP address.
CPU: 1 vCPU per virtual machine, 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs and a minimum of 32 physical CPU cores.
Memory: 2 GB RAM per virtual machine plus a 2 GB reservation per VMware vSphere host. For 125 virtual machines: a minimum of 250 GB RAM, plus 2 GB for each physical server.
Network (block storage): 2 x 10 GE NICs per server (the 2 QLogic 8362 CNAs). NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.
Extreme Networks infrastructure: minimum switching capacity for block storage of 2 x Extreme Networks X670; 2 x 10 GE ports per VMware vSphere server; 1 x 1 GE port per Control Station for management; 2 ports per vSphere server for the storage network; 2 ports per SP for storage data.
Shared infrastructure: in most cases, a customer environment already has infrastructure services such as Active Directory (AD), DNS, and other services configured; their setup is beyond the scope of this document. If implemented without existing infrastructure, the new minimum requirements are: 2 physical servers, 16 GB RAM per server, 4 processor cores per server, 2 x 1 GE ports per server. NOTE: These services can be migrated into VSPEX post-deployment, but they must exist before VSPEX can be deployed.
EMC VNX series storage array (block), common: 1 x 1 GE interface per Control Station for management; 1 x 1 GE interface per SP for management; 2 front-end ports per SP; system disks for VNX OE. For 125 virtual machines, EMC VNX 5300: 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare.

EMC VSPEX Software Versions

VNX OE for file: Release 7.0.100-2
VNX OE for block: Release 31 (05.31.000.5.704)
EMC VSI for VMware vSphere: Version 5.1
Virtual machine base operating system: Microsoft Windows Server 2008 R2
VDBench: 5.0.2 (Note: VDBench was used to validate this solution; it is not a required component for production.)

Extreme + EMC VSPEX Software Versions

vSphere Server: VMware vSphere 5.1 Enterprise Edition
vCenter Server: 5.1 Standard Edition
Operating system for vCenter Server: Windows Server 2008 R2 SP1 Standard Edition (NOTE: any operating system supported for vCenter can be used)
Microsoft SQL Server: Version 2008 R2 Standard Edition (NOTE: any supported database for vCenter can be used)
EMC VNX: VNX OE for block 05.32.000.3.770; EMC VSI for VMware vSphere: Unified Storage Management 5.4; EMC VSI for VMware vSphere: Storage Viewer 5.4; EMC PowerPath/VE 5.8
Virtual machine base operating system (used for validation, not required for deployment): Microsoft Windows Server 2012 Datacenter edition
Network switching: Extreme Networks Summit switches, ExtremeXOS 15.3

EMC VSPEX Virtualization Requirements, 125 VMs (VMware vSphere servers)

CPU: 1 vCPU per virtual machine, 4 vCPUs per physical core. For 125 virtual machines: 125 vCPUs and a minimum of 32 physical CPU cores.
Memory: 2 GB RAM per virtual machine plus a 2 GB reservation per VMware vSphere host. For 125 virtual machines: a minimum of 250 GB RAM, plus 2 GB RAM for each physical server.
Network (block): 2 x 10 GE NICs per server; 2 HBAs or CNAs per server.
NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.

EMC VSPEX Network Requirements, 125 VMs

Network infrastructure, minimum switching capacity (block): 2 x Extreme Networks X670 physical switches; 2 x 10 GE ports per VMware vSphere server; 1 x 1 GE port per Control Station for management; 2 ports per SP for storage data.

EMC VSPEX Block Storage Requirements, 125 VMs

EMC VNX series storage array (block), common: 1 x 1 GE interface per Control Station for management; 1 x 1 GE interface per SP for management; 2 front-end ports per SP; system disks for OE. For 125 virtual machines, EMC VNX 5300: 60 x 600 GB 15k rpm 3.5-inch SAS drives; 4 x 200 GB flash drives; 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares; 1 x 200 GB flash drive as a hot spare.