VIRTUALIZING SERVER CONNECTIVITY IN THE CLOUD
Truls Myklebust, Director, Product Management, Brocade Communications
2011 Brocade Communications - All Rights Reserved, 13 October 2011
THE ENTERPRISE IS GOING VIRTUAL
Transition to Private Cloud Infrastructure
From traditional and remote data centers to cloud-enabled data centers.
Private cloud attributes:
- Highly virtualized pools of compute, storage, and network resources
- On-demand, fast provisioning of application resources
- Lower capital and operational costs, higher asset utilization
- Automated management and orchestration
Continuous Growth
Planning for the Future
- Server hardware: more cores, more memory, faster buses
- Virtual servers and desktops: 10x growth in Virtual Machines (VMs) by 2012; hundreds of virtual desktops per server
- Storage and data: adoption of Solid State Drives (SSDs); more than 900 exabytes of data created by 2010, and more than 35 zettabytes forecast by 2020
Increasing Complexity and Inefficiency
- Dynamic business requirements: peak and unpredictable workloads; 24x7 operations
- Infrastructure complexity: too many elements; disparate management frameworks
- Geographic locality: operational complexity across data centers
- Data center design: legacy designs are inefficient
Data Center Evolution (figure)
Past: traditional, manually constructed compute, network, and storage, at roughly 15% utilization. Present: early VMs, with about 20% of apps virtualized (2010)(1); rigid, inflexible replication; escalating complexity and operating cost. Future: simplify, automate, and scale, toward 70% virtualized.
(1) Source: IDC Virtual Server Forecast 2010-2014, reported April 2010
The Effects of Server Virtualization
Unifying the Server and the Fabric
- Physical servers are consolidated, but a higher percentage of servers now connect to shared storage (iSCSI/FCoE and Fibre Channel)
- Higher virtualization densities drive new performance requirements, leading to adapter sprawl
- Applications move across the infrastructure, no longer attached to physical ports; network policies must follow them
- VM-aware networking services and management become essential
- Integration with partner orchestration frameworks provides greater choice and unified management from a single pane of glass
Private Cloud Data Center Components
- Ethernet (VCS) and Fibre Channel fabrics
- Virtual I/O layer: Fabric Adapters
- Application-aware fabric services
- Element management and orchestration
- Unified management with self-service and automation
VMOP: Virtual Machine Optimized Ports
Removing the Hypervisor Bottleneck
- Offloads the packet-sorting burden from the hypervisor
- A Layer 2 classifier sorts packets on MAC address and VLAN tag
- Maps VM affinity to specific CPU cores
- VMware NetQueue and Microsoft VMQ support
- Enables 10 GbE line-rate performance in virtualized environments
- Zero configuration required
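The sorting behavior described above can be sketched in software: each VM registers a (MAC, VLAN) pair that maps to a dedicated receive queue pinned to a CPU core, so incoming frames are demultiplexed without hypervisor involvement. This is a minimal illustrative model, not Brocade's implementation; the class, queue, and core assignments below are hypothetical.

```python
# Illustrative sketch of VMOP-style Layer 2 packet sorting: frames are
# classified on destination MAC and VLAN tag into per-VM receive queues,
# each with an affinity to a specific CPU core. Hypothetical model only.

class VMOptimizedPort:
    def __init__(self):
        self.queues = {}          # (mac, vlan) -> list of queued frames
        self.core_affinity = {}   # (mac, vlan) -> CPU core id

    def register_vm(self, mac, vlan, core):
        """Create a dedicated receive queue for a VM, pinned to one core."""
        key = (mac.lower(), vlan)
        self.queues[key] = []
        self.core_affinity[key] = core

    def classify(self, frame):
        """Sort an incoming frame on destination MAC and VLAN tag."""
        key = (frame["dst_mac"].lower(), frame["vlan"])
        if key in self.queues:
            self.queues[key].append(frame)
            return self.core_affinity[key]   # core that will service it
        return None                          # unknown: default queue path

port = VMOptimizedPort()
port.register_vm("00:05:1e:aa:00:01", vlan=10, core=2)
port.register_vm("00:05:1e:aa:00:02", vlan=20, core=3)

core = port.classify({"dst_mac": "00:05:1E:AA:00:01", "vlan": 10})
print(core)  # -> 2: the frame landed in VM1's queue, serviced on core 2
```

Because the classification happens per queue rather than in one shared software path, each VM's traffic can be serviced on its own core, which is what removes the hypervisor bottleneck.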
IOV: vflink I/O Virtualization
Adapter Consolidation with Granular Control
- Up to 8 virtual fabric links (vflinks): 4 PCIe Physical Functions (PFs) per port
- Fibre Channel: vHBA; Ethernet: vNIC or vHBA (FCoE)
- Appear as independent physical adapters to the operating system
- No OS dependency, works today; OS- and hypervisor-agnostic; access-layer-switch agnostic
- Configurable bandwidth assignments in 100 Mbps increments
- Benefit: consolidate multiple NICs and HBAs on a single dual-port Brocade 1860 Fabric Adapter while maintaining isolation, QoS, and bandwidth allocations for different networks (console, backup/iSCSI, VM migration, production, tape and disk storage)
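The bandwidth partitioning above can be sketched as a small validation routine: a 10 GbE port is carved into named vflinks, each assignment a multiple of 100 Mbps, with the total capped at port capacity. A minimal sketch under those assumptions; the vflink names and the helper function are illustrative, not a Brocade API.

```python
# Illustrative sketch of vflink-style port partitioning: up to 8 virtual
# links per port, bandwidth assigned in 100 Mbps increments, total bounded
# by the physical port's capacity. Hypothetical helper, not a real API.

PORT_CAPACITY_MBPS = 10_000   # one 10 GbE port
MAX_VFLINKS = 8
INCREMENT_MBPS = 100

def carve_port(assignments):
    """Validate a list of (name, bandwidth_mbps) vflink assignments."""
    if len(assignments) > MAX_VFLINKS:
        raise ValueError(f"at most {MAX_VFLINKS} vflinks per port")
    total = 0
    for name, mbps in assignments:
        if mbps % INCREMENT_MBPS != 0:
            raise ValueError(f"{name}: must be a multiple of {INCREMENT_MBPS} Mbps")
        total += mbps
    if total > PORT_CAPACITY_MBPS:
        raise ValueError(f"total {total} Mbps exceeds port capacity")
    return dict(assignments)

vflinks = carve_port([
    ("console",      1_000),   # management traffic
    ("vm_migration", 3_000),   # live-migration network
    ("production",   5_000),   # production VM traffic
    ("backup_iscsi", 1_000),   # backup / iSCSI traffic
])
print(sum(vflinks.values()))   # -> 10000: the port is fully allocated
```

The point of the 100 Mbps granularity is that each consolidated network keeps an explicit, isolated share of the port rather than contending freely.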
SR-IOV: Single-Root I/O Virtualization
Efficient Sharing of I/O Resources
- Extends Brocade vflink IOV using PCIe Virtual Functions (VFs); up to 255 VFs per adapter
- VFs are mapped directly to VMs, bypassing the hypervisor (direct I/O)
- The hypervisor retains control of the underlying Physical Function (PF)
- Implements a split-driver model; OS/hypervisor support required
- Benefit: enables native performance while maintaining non-disruptive VM mobility (hypervisor-dependent)
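The split-driver relationship can be sketched as a small model: the hypervisor-owned PF driver allocates Virtual Functions and maps each one straight into a guest for direct I/O, while the PF itself stays under hypervisor control. The class and method names below are hypothetical illustrations, not an actual driver interface.

```python
# Minimal sketch of the SR-IOV split-driver model: the PF driver in the
# hypervisor hands out Virtual Functions (up to 255 per adapter) that are
# mapped directly to VMs. Hypothetical model, not a real driver API.

MAX_VFS_PER_ADAPTER = 255

class PhysicalFunction:
    def __init__(self):
        self.vf_to_vm = {}    # VF index -> VM name

    def allocate_vf(self, vm_name):
        """Map the next free VF directly to a VM (direct I/O path);
        the PF itself remains under hypervisor control."""
        if len(self.vf_to_vm) >= MAX_VFS_PER_ADAPTER:
            raise RuntimeError("no free Virtual Functions on this adapter")
        vf_index = len(self.vf_to_vm) + 1
        self.vf_to_vm[vf_index] = vm_name
        return vf_index

    def release_vf(self, vf_index):
        """Detach a VF, e.g. before migrating its VM to another host."""
        return self.vf_to_vm.pop(vf_index)

pf = PhysicalFunction()
vf1 = pf.allocate_vf("VM1")
vf2 = pf.allocate_vf("VM2")
print(vf1, pf.vf_to_vm[vf2])   # -> 1 VM2
```

Releasing and re-allocating a VF around a migration event is what the slide's mobility caveat refers to: the data path is native, but moving the VM depends on hypervisor support for detaching the VF first.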
Software-Based Virtual Switching
How It Works Today
- A software vSwitch inside the hypervisor provides inter-VM and inbound/outbound connectivity
- Creates additional CPU load and impacts performance
- Fragments management between server and network administrators
VEB: Virtual Ethernet Bridging
Offload Virtual Switching to the Adapter
- An integrated hardware VEB in the Brocade 1860 Fabric Adapter handles inter-VM switching
- Requires SR-IOV and direct I/O to bypass the hypervisor
+ Improves I/O performance, with the lowest switching latency
+ Alleviates excessive CPU utilization, allowing greater scalability
+ No support needed from the access layer switch
+ Unifies management of physical and virtual switching
- No traffic visibility in the external network
VEPA: Virtual Ethernet Port Aggregator
802.1Qbg Edge Virtual Bridging (EVB)
- All VM-generated traffic is forwarded to the access layer switch
- VMs appear as if directly connected to the access layer switch
+ Allows per-VM policy enforcement and traffic visibility
- Requires VEPA support from the access layer switch
- Adds latency and potential link congestion, since all traffic exits the adapter
Virtual Switching Options
Different Approaches
- Multiple options address network connectivity challenges in virtualized environments
- The Brocade 1860 supports all of these options, allowing the freedom to choose the best approach for each use case

                  Software vSwitch   Adapter-based VEB   VEPA
  Latency         Very high          Very low            High
  CPU overhead    Very high          Very low            Very low
  Access control  Limited            Limited             Good
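The difference between the two hardware options in the table comes down to one forwarding decision, which can be sketched as follows. In VEB mode the adapter switches frames between local VMs itself (lowest latency, no external visibility); in VEPA mode every frame is hairpinned through the access layer switch (per-VM visibility, extra hop). The function and path labels are illustrative only.

```python
# Illustrative sketch of the VEB vs. VEPA forwarding decision for a frame
# between two VMs on the same adapter. VEB switches locally; VEPA sends
# all traffic out to the access layer switch, which reflects it back.

def forward(frame, local_macs, mode):
    """Return the path a frame takes; local_macs are VMs on this adapter."""
    if mode == "veb" and frame["dst_mac"] in local_macs:
        return ["adapter"]                       # switched in the VEB
    if mode == "vepa":
        # All traffic exits the adapter, even VM-to-VM on the same host,
        # so the external switch can see and police every flow.
        return ["adapter", "access_switch", "adapter"]
    return ["adapter", "access_switch"]          # external destination

local = {"mac_vm1", "mac_vm2"}
frame = {"src_mac": "mac_vm1", "dst_mac": "mac_vm2"}

print(forward(frame, local, "veb"))   # -> ['adapter']
print(forward(frame, local, "vepa"))  # -> ['adapter', 'access_switch', 'adapter']
```

The extra two link traversals in the VEPA path are exactly the "additional latency and potential link congestion" the VEPA slide warns about, traded for the access control the table rates as "Good".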
SAO: Server Application Optimization
High-Performance Trunking
- Aggregates two ports into a single link: 2 x 16 Gbps links = one 32 Gbps link
- Managed through Brocade Network Advisor
Benefits:
- Maximum bandwidth utilization
- Frame-level load balancing
- Transparent failover across physical links
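The trunking behavior can be sketched as a small model: frames are sprayed across member links at frame granularity, and when a link fails its share transparently shifts to the surviving link. A minimal sketch, assuming simple round-robin spraying; the class below is illustrative, not Brocade's actual load-balancing algorithm.

```python
# Illustrative sketch of frame-level trunking: frames are distributed
# round-robin over the trunk's live member links, and a failed link's
# traffic transparently moves to the remaining link(s).

from itertools import cycle

class Trunk:
    def __init__(self, link_speeds_gbps):
        self.links = dict(enumerate(link_speeds_gbps))
        self.up = set(self.links)          # indices of live links

    @property
    def bandwidth_gbps(self):
        return sum(self.links[i] for i in self.up)

    def fail_link(self, i):
        self.up.discard(i)                 # failover: drop from rotation

    def spray(self, n_frames):
        """Distribute frames round-robin over the links that are up."""
        order = cycle(sorted(self.up))
        counts = {i: 0 for i in self.up}
        for _ in range(n_frames):
            counts[next(order)] += 1
        return counts

trunk = Trunk([16, 16])          # 2 x 16 Gbps links = one 32 Gbps trunk
print(trunk.bandwidth_gbps)      # -> 32
print(trunk.spray(10))           # -> {0: 5, 1: 5}: balanced per frame

trunk.fail_link(1)               # transparent failover
print(trunk.bandwidth_gbps)      # -> 16
print(trunk.spray(10))           # -> {0: 10}
```

Balancing at frame rather than flow granularity is what lets a single large flow use the full 32 Gbps instead of being pinned to one 16 Gbps member.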
SAO: Server Application Optimization
Application-Level Quality of Service (QoS) with Virtual Channels
- Extends Adaptive Networking (AN) services such as QoS from the fabric to the host, helping users rapidly scale server virtualization without compromising SLAs
- Traffic from each application/VM on the physical server is assigned a High, Medium, or Low priority and carried from the Brocade 1860 Fabric Adapter through SAO- and AN-enabled Fibre Channel edge and core switches to storage
Brocade Server Application Optimization: The Impact of QoS (figure)
Without QoS, high-, medium-, and low-priority traffic contend equally for the link; with QoS, each priority class receives its designated share.
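The contrast above can be sketched numerically: without QoS every flow gets an equal slice of the link, while with QoS each virtual channel class gets a weighted share. A minimal sketch; the 60/30/10 split and the flow names are hypothetical examples, since the slides do not specify actual percentages.

```python
# Illustrative sketch of the QoS effect: equal sharing without QoS versus
# weighted per-priority-class sharing with QoS. Weights are hypothetical.

def share_link(flows, link_gbps, weights=None):
    """Return per-flow bandwidth in Gbps; flows maps name -> priority."""
    if weights is None:                      # no QoS: equal split
        return {f: link_gbps / len(flows) for f in flows}
    per_class = {}
    for f, prio in flows.items():
        per_class.setdefault(prio, []).append(f)
    result = {}
    for prio, members in per_class.items():
        class_bw = link_gbps * weights[prio]  # class gets its weighted share
        for f in members:
            result[f] = class_bw / len(members)
    return result

flows = {"oltp": "high", "vdi": "medium", "backup": "low"}

# Without QoS, backup contends equally with the OLTP workload.
print(share_link(flows, 16))
# With QoS, the high-priority channel is protected.
print(share_link(flows, 16, weights={"high": 0.6, "medium": 0.3, "low": 0.1}))
```

This is the SLA argument in miniature: the mission-critical flow keeps a guaranteed share of the 16 Gbps link regardless of how much low-priority traffic the other VMs generate.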
AnyIO Technology
Unmatched Flexibility
- Run all protocols concurrently: 16 Gbps Fibre Channel and 10 GbE DCB/FCoE/iSCSI
- Field-configurable per port; no licensing required
- Full line-rate 16 Gbps Fibre Channel: N_Port trunking, VM-aware QoS, over 500K IOPS per port
- Full line-rate 10 Gbps Ethernet: full FCoE and stateless networking offloads, over 500K IOPS per port (FCoE/iSCSI)
Delivering Business Value in the Cloud
- AGILITY: private cloud readiness; flexible, just-in-time decision making; easy, rapid migration and re-provisioning
- CONSOLIDATION: CapEx/OpEx savings; improved high availability; reduced complexity
- PERFORMANCE: virtualize performance- and mission-critical applications; line-rate 16 Gbps FC and 10 GbE; over 1,000,000 IOPS for FC/FCoE/iSCSI
- VIRTUALIZATION: VM-aware fabric services and Quality of Service; increased server VM density and scalability; reduced hypervisor CPU overhead
- MANAGEMENT: operational simplicity; unified management; third-party integration; end-to-end visibility
THANK YOU!