Data Center 3.0 Technology Evolution. Session ID 20PT


Session Goal
The focus of this seminar is on the latest technologies, some of which can already be used in today's deployments and some of which will become available in the next one to two years. These include virtual Port-Channeling, Fabric Extender, Layer 2 multi-pathing (read: IETF TRILL), VN-Tag, FCoE, and I/O virtualization. Attaching physical and virtual compute resources to the network is described, including blade switching, pass-through, virtualized adapters, and hypervisor-based switching. The Cisco compute platform, the UCS, is analyzed as part of the blade server discussion in terms of network connectivity.

Understand Evolving Design Requirements

Data Center Inflection Points
[Timeline figure, 1960-2010: Mainframe, Minicomputer, Client Server, Distributed Computing, Web, with two marked inflection points; Virtualization is associated with the second inflection point.]

The Evolving Data Centre Architecture
Dec 2011 Gartner Data Center Conference: CIO Feedback
1) Budget in 2011
   a. 33% Flat
   b. 42% Higher (ranging from up 5% to up 10% or more)
   c. 25% Lower
2) Biggest challenge you face in 2011
   a. 29% Power, cooling and energy efficiency
   b. 16% Cloud computing
   c. 14% Meeting business needs with IT
   d. 5% Dealing with a limited budget
Technologies CIOs selected as one of their top five priorities: Cloud, Virtualization, Mobility, Infrastructure.

The Evolving Data Centre Architecture
Data Center Evolution Path: Consolidation -> Virtualization -> Automation -> Utility -> Cloud
- Efficient use of compute, storage and network
- Flexibility and provisioning speed
- Tighter integration between servers and the network
- Virtualized workloads: the network/server demarcation is moving inside the server

Impact on the Data Center
Operations and maintenance is now roughly 80% of IT budgets and growing; administration costs dominate budgets, and virtualization makes things worse.
[Chart (source: IDC), 1996-2010: spending in US$B on server management/administration costs, power and cooling costs, and new server spending, plotted against the logical and physical server installed base in millions.]

The Datacenter Today: Trusted, Control, Reliable, Secure

Cloud Computing? Trusted, Control, Reliable, Secure - plus Flexible, Dynamic, On-demand, Efficient

The Goal: Unified Platform
Virtualization, Security, Applications, Information: cloud computing that is trusted, controlled, reliable and secure, as well as flexible, dynamic, on-demand and efficient.

Data Center 3.0 Business Driven, Fabric Architecture

The Evolving Data Centre Architecture
Data Center 2.0 (Pre-Virtualized)
[Diagram: small, medium and large branch offices and a campus connect over the WAN and IP core into two data centers (Datacenter A and Datacenter B). Each data center has core, aggregation and access layers, with firewall services, server balancing, SSL offloading and WAN application services at aggregation and server farms at access. A separate storage fabric (virtual fabrics/VSANs, storage virtualization, data replication services, fabric routing and gateway services) fronts the storage farms.]

The Evolving Data Centre Architecture
Data Center 2.0 (Physical Design == Logical Design)
The IP portion of the Data Center architecture has been based on the hierarchical switching design (core, aggregation, services, access):
- Workload is localized to the aggregation block
- Services are localized to the applications running on the servers connected to the physical pod
- Mobility is often supported via a centralized cable plant
- The architecture is often based on an optimized design for control-plane stability within the network fabric
Goal #1: Understand the constraints of the current approach (de-couple the elements of the design).
Goal #2: Understand the options we have to build a more efficient architecture (re-assemble the elements into a more flexible design).

Cisco's Holistic Fabric-Based Approach
High-performance infrastructure delivering architectural flexibility across physical, virtual and cloud: network, compute, storage and application services. Open, integrated, flexible, scalable, resilient, secure.

What is Cisco Unified Fabric?
Attributes: convergence, scale and intelligence, spanning the Ethernet network, the storage network and the Data Center OS and management, both intra and inter data center.
Benefits: simplified infrastructure, simplified operations, reduced cost; multi-tenancy, integrated application delivery, fluid VM networking and secure VM mobility.
A network-based approach for systems excellence.

Cisco Unified Fabric Technologies
NX-OS: continued innovation for differentiation.
Convergence: Unified Ports (deployment flexibility), DCB/FCoE (consolidated I/O)
Scale: VDC (virtualizes the switch), FabricPath (architectural flexibility), vPC (active-active uplinks), FEX architecture (simplified management with scale)
Intelligence: LISP (scalability and mobility), OTV (workload mobility), IO Accelerator (replication and backup), SME/DMM (compliance and workload mobility)

Building Next Generation Data Center Fabric

Evolving Data Centre Architecture
Focus of this Session: The Tightly Coupled Workload Domain
- Layer 2 scaling: FEX (scale-up), FabricPath (scale-out)
- Common storage fabrics: SAN, NAS, iSCSI, CIFS
- Virtual machine connectivity: vSwitch, Nexus 1000V, VM-FEX
- Policy and services in a vMotion environment

Evolution of the Data Center Architecture
Tightly Coupled Workload: Active/Active L2/L3
The domain of active workload migration (e.g. vMotion) is currently constrained by the latency requirements associated with storage synchronization. The tightly coupled workload domain has specific network, storage, virtualization and services requirements.

Evolving Data Centre Architecture
Tightly Coupled Workload (e.g., vMotion and Clustering)
Hypervisor-based server virtualization and its associated capabilities (vMotion, ...) are changing multiple aspects of Data Center design:
- How large do we need to scale Layer 2?
- Where does the storage fabric exist (NAS, SAN, ...)?
- How much capacity does a server need?
- Where is the policy boundary (security, QoS, WAN acceleration, ...)?
- Where and how do you connect the servers?

Evolving Data Centre Architecture
Design Factor #1 to Re-Visit
As we move to new designs we need to re-evaluate the L2/L3 scaling and design assumptions. We need to consider:
- VLAN usage
- Policy assignment (QoS, security, closed user groups)
- IP address management
Some factors are fixed (e.g. ARP load); some can be modified by altering the VLAN/subnet ratio. We still need to consider L2/L3 boundary control-plane scaling:
- ARP scaling (how many L2-adjacent devices)
- FHRP, PIM, IGMP
- STP logical port count (BPDUs generated per second)
Goal: evaluate which elements can change in your architecture. VLAN/subnet ratio? VLAN span? Where is the L2/L3 boundary? A first-hop gateway configuration at the aggregation layer marks that boundary, as sketched below.
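
As an illustration only (not from the session), a minimal NX-OS sketch of the L2/L3 boundary at an aggregation switch: an SVI with HSRP as the FHRP for one VLAN/subnet pair. The VLAN, addresses and priority are assumptions.

    ! Hypothetical aggregation switch; VLAN 110 / 10.1.110.0/24 are example values
    feature interface-vlan
    feature hsrp

    vlan 110
      name web-servers

    interface Vlan110
      no shutdown
      ip address 10.1.110.2/24
      hsrp 110
        ip 10.1.110.1          ! virtual gateway address shared with the peer switch
        priority 110

Every host in VLAN 110 that uses 10.1.110.1 as its default gateway is L2-adjacent to this SVI, which is exactly the ARP/FHRP scaling dimension the slide asks you to evaluate.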

Evolving Data Centre Architecture
Not an L2 vs. L3 Debate
The traditional L2 vs. L3 debate has been based on a number of issues: scalability and availability. The requirement for the high-availability design moving forward is a scalable, highly available switching fabric with the advantages of both L2 and L3.

Architecture Flexibility Through NX-OS
                 Spanning-Tree    vPC              FabricPath
Active Paths     Single           Dual             16-Way
POD Bandwidth    Up to 10 Tbps    Up to 20 Tbps    Up to 160 Tbps
Layer 2 scalability: infrastructure virtualization and capacity.

Virtual Port Channel (vPC)
vPC is a port-channeling concept extending link aggregation to two separate physical switches:
- Allows the creation of resilient L2 topologies based on link aggregation
- Eliminates the need for STP between access and distribution
- Provides increased bandwidth: all links are actively forwarding
- vPC maintains independent control planes
- vPC switches are joined together to form a domain
A minimal configuration sketch follows.
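
Not shown in the session, but as a hedged illustration, a minimal vPC domain on a pair of Nexus switches might look like the following; the domain ID, port-channel numbers and keepalive addresses are assumptions.

    ! On each vPC peer (mirrored configuration)
    feature lacp
    feature vpc

    vpc domain 10
      peer-keepalive destination 192.168.1.2 source 192.168.1.1   ! mgmt addresses, example only

    interface port-channel 1
      switchport mode trunk
      vpc peer-link            ! inter-switch link carrying vPC control traffic

    interface port-channel 20
      switchport mode trunk
      vpc 20                   ! the downstream device sees one logical port channel

    interface ethernet 1/20
      channel-group 20 mode active

The downstream switch or server simply runs a standard LACP port channel across both peers, which is how all links end up actively forwarding without STP blocking.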

Evolving Data Centre Architecture
vPC: Scaling Up the Network Pod
Nexus designs currently leverage vPC to increase the capacity and scale of Layer 2 fabrics:
- Removing physical loops from the Layer 2 topology
- Reducing the STP state on the access and aggregation layers
- Providing physical and control-plane redundancy
Scaling the aggregate bandwidth:
- Nexus 5000/5500, when combined with F1 line cards on the Nexus 7000, can support port channels of up to 32 x 10G interfaces = 320 Gbps between access and aggregation
- A Nexus 5500/2000 virtualized access switch can support MCEC-based port channels of up to 16 x 10G links = 160 Gbps between server and virtualized access switch
Fabric Extension: STP-free scaling.

Cisco FabricPath
NX-OS Innovation: Enhancing Layer 2 with Layer 3
More than 500 FabricPath customers on the Cisco Nexus 7000.
Layer 2 strengths: simple configuration, flexible provisioning, low cost.
Layer 3 strengths: all links active, fast convergence, highly scalable.
FabricPath combines resilience, cost, simplicity, flexibility, bandwidth and availability. Now shipping on the Nexus 5500 from Q4CY11, with enhanced fabric scale via FEX.

Evolving Data Centre Architecture
FabricPath: Scaling Out the Workload/Fabric
Scaling the fabric to a much larger geographic domain extends the tightly coupled workload domain. FabricPath lets you connect a group of switches using an arbitrary topology and, with a simple CLI, aggregate them into a fabric:
N7K(config)# interface ethernet 1/1
N7K(config-if)# switchport mode fabricpath
An open protocol based on L3 technology provides fabric-wide intelligence and ties the elements together.
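
Slightly expanding the session's two-line example into a hedged, minimal end-to-end sketch; the switch-ID and VLAN values are assumptions, not from the session.

    ! Enable the FabricPath feature set (licensed)
    install feature-set fabricpath
    feature-set fabricpath

    fabricpath switch-id 12        ! unique per switch in the fabric

    vlan 110
      mode fabricpath              ! VLANs carried over the fabric must be FabricPath-mode

    interface ethernet 1/1
      switchport mode fabricpath   ! core-facing link joins the routed L2 fabric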

Evolving Data Centre Architecture
FabricPath: The BUS for the Compute Pods
FabricPath provides an extensible transport fabric that aids in decoupling the building blocks of the DC. It provides a scalable communication bus between design elements (Layer 3 routing, policy services, compute pods).

Evolving Data Centre Architecture
Design Factor #2 to Re-Visit
- How is striping of workload across the physical Data Center accomplished (rack, grouping of racks, blade chassis, ...)?
- How is the increase in the percentage of devices attached to SAN/NAS impacting the aggregated I/O and cabling density per compute unit?
Goal: define the unit of compute I/O and how it is managed (how does the cabling connect the compute to the network and fabric): rack mount, blade chassis, or integrated (UCS).

Nexus Virtualized Access Switch
The Rack Mount Scale Unit
Rack-mount servers are still predominantly 1 Gbps (2 to 6 NICs is common); migration to 2 x 10G NICs/CNAs is occurring (with 10G cabling requirements). Server layout will primarily be driven by power density: 20 to 30 servers per rack at the low end, down to 10 to 15 or 8 to 12 at higher densities. Look to replicate units of workload [compute + I/O] to aid in flexibly mapping the physical layout to the logical design.

Evolving Data Centre Architecture
FEX: Scaling Up and Distributing the Workload Domain
De-coupling the Layer 1 and Layer 2/3 topologies: if the workload is defined to be local to the switch, FEX can provide an alternative method of geographic distribution, giving a per-rack granularity view of the logical workload domain through a virtualized switch.

Fabric Extender Technology
Simplifies Data Center Infrastructure: decouple scale and complexity, lower costs.
The cross-bar and supervisor role is played by the Nexus 5K/7K or the UCS Fabric Interconnect; the fabric extenders (remote line cards) are the Nexus 2K or the UCS I/O Module. The result is a distributed modular system: instead of a fixed backplane, a distributed data center fabric uses 10GbE Ethernet (with 40GbE coming soon) as the backplane. Fabric Extender technology is available in both UCS and Nexus. A hedged configuration sketch follows.
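
As an illustration (not from the slides), a minimal sketch of associating a Nexus 2000 fabric extender with a Nexus 5000 parent; the FEX number and interfaces are assumptions.

    feature fex

    fex 100
      pinning max-links 1
      description rack-7-top

    interface ethernet 1/17
      switchport mode fex-fabric   ! parent uplink toward the FEX
      fex associate 100

    ! Host ports then appear on the parent as ethernet 100/1/x
    interface ethernet 100/1/1
      switchport access vlan 110

This is the "remote line card" model: all configuration and policy live on the parent switch, which is what decouples scale from complexity.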

Evolving Data Centre Architecture
Starting Point: Workload Virtualization
Partitioning: physical devices partitioned into virtual devices. Clustering: applications distributed across multiple servers. Virtual machines allow multiple operating systems and applications per physical server.

The Evolving Data Centre Architecture
Starting Point: The Compute Workload Domain
- Distribution of the workload (striping servers and VMs within the rack, along the row, between the rows, ...)
- Scaling of the workload capabilities (number of servers and VMs)
- The architectural goal balances the need to scale workload against availability and manageability
- Flexibility in the architecture of the Data Center fabric: where are the tightly coupled vs. loosely coupled domains of workload?

Server Virtualization Issues
1. vMotion moves VMs across physical ports; the network policy must follow
2. It is impossible to view or apply network policy to locally switched traffic
3. Security policies need a shared nomenclature and collaboration between the network and server admins

Unified VM Awareness
Two options:
- Nexus 1000V: software hypervisor switching, tagless (802.1Q); strengths are feature set and flexibility
- Cisco VM-FEX: external hardware switching via the VIC and Nexus 5500, tag-based (pre-standard 802.1BR); strengths are performance and consolidation
Both provide policy-based VM connectivity, mobility of network and security properties, and a non-disruptive operational model.

Data Centre Architecture Evolution
Embedded 802.1Q Bridging: Nexus 1000V
- An 802.1Q standards-based embedded virtual bridge that performs packet forwarding and applies advanced networking features
- Policy-based: a port profile applies port security, VLAN and ACLs, plus QoS policy maps, to all system traffic, including VM traffic, console and vMotion/VMkernel
- Runs on a generic adapter in a generic x86 server
- Works with a standard 802.1Q-based upstream switch, leveraging standard switch-to-switch links (QoS, trunking, channeling, ...); policy on the upstream switch looks like a standard aggregation configuration
A hedged port-profile sketch follows this list.
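
As an illustrative sketch only (the profile name, VLAN and feature selection are assumptions), a Nexus 1000V port profile that the VMware administrator then consumes as a port group:

    port-profile type vethernet WebServers
      vmware port-group               ! exported to vCenter as a port group
      switchport mode access
      switchport access vlan 110
      switchport port-security        ! example of an advanced feature applied via the profile
      no shutdown
      state enabled

Because the policy lives in the profile rather than on a physical port, it follows the VM wherever vMotion places it.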

Cisco Nexus 1000V Components
Virtual Ethernet Module (VEM):
- Replaces VMware's virtual switch
- Data plane for the N1KV
- Enables advanced switching capability on the hypervisor
- Provides each VM with dedicated switch ports
Virtual Supervisor Module (VSM):
- CLI interface into the Nexus 1000V
- Control plane for the N1KV, leveraging NX-OS
- Controls multiple VEMs as a single network device
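
For illustration (the datacenter name, address and port are assumptions), the VSM is tied to vCenter with an SVS connection, roughly like:

    svs connection vcenter
      protocol vmware-vim
      remote ip address 10.0.0.50 port 80
      vmware dvs datacenter-name DC-A
      connect

Once connected, port profiles defined on the VSM appear in vCenter, and each ESX host running a VEM shows up on the VSM as a module of one logical switch.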

Evolving Data Centre Architecture
Design Factor #4 to Re-Visit: Where Is the Edge?
Hypervisor-based compute virtualization moves the edge of the fabric: PCI-E bus, storage and network connectivity resources are virtualized (vSwitch, VMFS, pNIC, HBA; NPV provides FC SAN virtualization). With a shift in the edge of the fabric comes a change in operational practices and fabric design requirements.

Evolving Data Centre Architecture
Design Factor #4 to Re-Visit: Where Is the Edge? (cont.)
Unification of the LAN and storage fabrics: FCoE, NFS, iSCSI and CIFS all provide the capability to share the fabric, and 10G adapters provide the rationale to share IP storage and IP transactional traffic. Converged Network Adapters (CNAs) virtualize the PCI-E bus resources: the OS/hypervisor sees two PCIe devices, an Ethernet NIC and a Fibre Channel HBA (still two PCI addresses on the bus), over a single 10GbE link. The edges of the SAN/storage and LAN fabrics are merging. A hedged FCoE sketch follows.
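
As a hedged illustration of the switch side of this convergence (the interface, VLAN and VSAN numbers are assumptions), a virtual Fibre Channel interface bound to a CNA-facing Ethernet port on a Nexus 5000:

    feature fcoe

    vlan 200
      fcoe vsan 100            ! dedicated FCoE VLAN mapped to the VSAN

    interface vfc3
      bind interface ethernet 1/3
      no shutdown

    vsan database
      vsan 100 interface vfc3

    interface ethernet 1/3
      switchport mode trunk    ! must carry the FCoE VLAN to the CNA
      spanning-tree port type edge trunk

The one physical 10GbE port now presents both the LAN edge (ethernet 1/3) and the SAN edge (vfc3), which is exactly the merging the slide describes.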

Evolving Data Centre Architecture
Design Factor #4 to Re-Visit: Where Is the Edge? (cont.)
The introduction of SR-IOV, Cisco Adapter-FEX and developing IEEE standards further extends the virtualization of the adapter and of the first physical device port. A single component is seen as many PCIe addresses on the bus, so the OS talks to multiple devices, yet there is only one physical NIC, one physical wire and one physical switch. The virtual port extends down from the physical fabric (over VNTag) directly to the VM's vNIC.

Adapter FEX and VM-FEX
Expanding the Cisco Fabric Extender Offerings (IEEE 802.1BR)
- Adapter-FEX splits a physical NIC into multiple logical NICs
- VM-FEX extends Adapter-FEX technology to the virtual machine
Benefits: a single point of management; increased 10G bandwidth utilization with less power and fewer adaptors and cables (Adapter-FEX); dynamic network and security policy mobility during VM migration (VM-FEX). A hedged sketch follows.
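
Drawn from memory of the Nexus 5500 Adapter-FEX model, so treat the details as assumptions rather than a definitive recipe; the idea is that the switch defines vEthernet policy and the server-facing port runs VNTag:

    install feature-set virtualization
    feature-set virtualization
    vethernet auto-create               ! dynamically create vEth ports as adapter vNICs appear

    interface ethernet 1/5
      switchport mode vntag             ! server-facing port carrying VNTag to the adapter

    port-profile type vethernet user_data
      switchport mode access
      switchport access vlan 100
      state enabled                     ! logical NICs on the adapter inherit this policy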

Scaling the Fabric: Physical to Virtual
A single point of policy and management at scale: up to 1152 host ports with FEX (Nexus 2248TP/2232PP/2232TM behind a Cisco Nexus 5596UP switch), up to 1024 logical NIC instances with Adapter-FEX, and up to 4096 VMs with VM-FEX. Fabric extensibility with simplified management.

Evolving Data Centre Architecture
Design Factor #5 to Re-Visit: Where Are the Services?
In the non-virtualized model, services are inserted into the data path at choke points between the client and the server: the logical topology matches the physical. Virtualized workload may require a re-evaluation of where services are applied and how they are scaled: virtualized services become associated with the virtual machine itself (Nexus 1000V & vPath, VSG, vWAAS).

What Is vpath? Services Interception in Nexus 1000v Intelligence build into Virtual Ethernet Module (VEM) of Cisco Nexus 1000V virtual switch (version 1.4 and above); vpath has the following main functions: VSN Server VM VPATH Interception 1. Intelligent Traffic interception for Virtual Service Nodes (VSN): vwaas& VSG; VEM In/Out 2. Offload the processing of Pass-through traffic (from vwaas, for instance); 3. ARP based health check; 4. Maintain Flow entry table. Upstream Switch vpath is Multitenant Aware Leveraging vpath can enhance the service performance by moving the processing to hypervisor; 45

Logical Network Spanning Across Layer 3
[Diagram: VMs in multiple pods connected across a Layer 3 fabric.] Utilize all links in the port channel via UDP-based load distribution; add more pods to scale. See the VXLAN sketch below.
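
This is the VXLAN (MAC-over-UDP) model summarized on the next slide. As an illustration only (the segment ID, multicast group and names are assumptions), a Nexus 1000V VXLAN bridge domain might be defined as:

    feature segmentation                 ! enables VXLAN on the Nexus 1000V

    bridge-domain tenant-red
      segment id 5000                    ! 24-bit VXLAN segment identifier
      group 239.1.1.1                    ! multicast group for broadcast/unknown traffic

    port-profile type vethernet tenant-red-web
      vmware port-group
      switchport mode access
      switchport access bridge-domain tenant-red
      no shutdown
      state enabled

Because the segment rides in UDP over IP, the VMs in this bridge domain can sit in different pods across the Layer 3 fabric, and the varying UDP source ports spread the flows across all links of a port channel.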

Cisco Nexus 1000 Portfolio
The Nexus 1010 virtual appliance hosts virtual blades: the Virtual Supervisor Module (VSM, primary and secondary), the Network Analysis Module (NAM), the Virtual Security Gateway (VSG), and Data Center Network Manager (DCNM, est. 4Q CY11).
Glossary: VSM = Virtual Supervisor Module; VEM = Virtual Ethernet Module; vPath = virtual service data path (service binding/traffic steering and fast-path offload); VXLAN = scalable segmentation (MAC-over-UDP network virtualization, 16M address space for LAN segments); VSG = Virtual Security Gateway; vWAAS = virtual WAAS; virtual ASA = tenant-edge security.
VEMs with vPath and VXLAN run on VMware ESX, and on Microsoft Hyper-V with Microsoft Windows 8 in 2012.

Evolution of the Data Center Architecture
Loosely Coupled Workload: Burst and Disaster Recovery
Burst workload (adding temporary processing capacity) and disaster recovery leverage out-of-region facilities over WAN DCI and routing, with asynchronous storage replication. The loosely coupled workload domain has a different set of network, storage, virtualization and services requirements.

Data Center Interconnect: OTV
Layer 2 Mobility
Ethernet traffic between sites is encapsulated in IP: MAC-in-IP. Encapsulation is dynamic, based on a MAC routing table; no pseudo-wire or tunnel state is maintained. For example, for communication between MAC1 (site 1, reachable via IP A) and MAC2 (site 2, reachable via IP B), the site 1 edge device looks up MAC2, finds next hop IP B, and encapsulates; the site 2 edge device decapsulates and delivers the frame. A hedged configuration sketch follows.
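
Not from the session itself; as a hedged, minimal OTV edge-device sketch on NX-OS (the groups, interface numbers and VLAN ranges are assumptions):

    feature otv

    otv site-vlan 99                     ! internal site VLAN for edge-device adjacency

    interface overlay1
      otv join-interface ethernet 1/1    ! uplink carrying the encapsulated traffic
      otv control-group 239.1.1.1        ! multicast group for the OTV control plane
      otv data-group 232.1.1.0/28        ! SSM range for extended multicast data
      otv extend-vlan 100-150            ! VLANs stretched between the sites
      no shutdown

Note that there is no per-peer tunnel configuration here: MAC reachability is advertised by the control protocol, matching the "no pseudo-wire or tunnel state" point above.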

LISP: Next-Generation Routing Architecture
Locator/ID Separation Protocol
- Decouples the identity of a resource (its IP address) from its location
- Keeps the ID-to-location mapping in a directory; IP addresses are resolved to a location by consulting it (similar to DNS name-to-IP resolution, but here it is IP-to-location resolution)
- The directory is a distributed, hierarchical database
- Identifiers can move in the cloud: mobility for virtual machines, with scalability
- IETF draft standard
Please see BRKVIR-2931 End-to-End Data Center Virtualization for more information on DCI. A hedged sketch follows.
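
For illustration only (the prefixes, locators and key are assumptions), an NX-OS xTR (combined ingress/egress tunnel router) might register a site prefix roughly like this:

    feature lisp

    ip lisp itr-etr                                   ! act as both ITR and ETR
    ip lisp database-mapping 10.1.110.0/24 172.16.0.1 priority 1 weight 100
    ip lisp itr map-resolver 172.16.255.1             ! directory lookup target
    ip lisp etr map-server 172.16.255.1 key example-key

The database-mapping line is the ID-to-location binding: the endpoint prefix (identity) is registered against the routing locator 172.16.0.1, and if the workload moves, only that binding in the mapping system changes.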

Bringing It All Together: NX-OS
Any Application. Any Location. Any Scale.
1. Compute fabric: converged, scalable and stateless (UCS, FabricPath, FCoE, Unified Ports)
2. Extensible fabric (OTV, LISP) enabling workload mobility, e.g. between San Jose and New York
3. VM-aware fabric (VM-FEX, VSG)
4. Policy-based management (service/port/security profiles and open APIs)

Raising the Bar with New Cisco Unified Fabric Innovations
Cisco NX-OS: one OS from the hypervisor to the Data Center core, across LAN and SAN. LAN: Nexus 1000V, Nexus 1010, Nexus 2000, Nexus 3000 (new), Nexus 4000, Nexus 5000, Nexus 7000. SAN: MDS 9100, MDS 9200, MDS 9500. Themes: scalability, convergence, VM-aware networking, 10 GbE switching, fabric extensibility, cloud mobility.

Conclusion
You should now have a thorough understanding of the Nexus Data Center features that enable a more flexible switching fabric, supporting faster provisioning models and virtual machine mobility while maintaining the security isolation and availability you have come to rely on in your current DC 2.0 designs. Any questions?

Complete Your Session Evaluation
Please give us your feedback! Complete the evaluation form you were given when you entered the room. This is session 3.1 (Data Center & Virtualization). Don't forget to complete the overall event evaluation form included in your registration kit. Your feedback is very important to us! Thanks.
