Architecting Scalable Clouds using VXLAN and Nexus 1000V


Architecting Scalable Clouds using VXLAN and Nexus 1000V Lawrence Kreeger Principal Engineer

Agenda
The session is broken into 3 main parts:
Part 1: VXLAN Overview. What is a VXLAN? Why VXLANs? What is VMware vCloud Director? What is a vApp?
Part 2: Deeper Dive. Data plane model, packet format, day in the life of a VXLAN; what's new for VXLAN on Nexus 1000V; comparison with other network virtualization technologies.
Part 3: Deployment Considerations. ESX host infrastructure configuration, underlying network infrastructure configuration, VXLAN configuration (with and without vCloud Director).

Part 1: VXLAN Overview

What Is A VXLAN? A VLAN with an X in the middle. A VXLAN provides the same service to end systems as a VLAN. The X stands for eXtensible: scale! More Layer 2 segments than VLANs, and a wider stretch than VLANs. VXLANs are an overlay network technology: MAC over IP/UDP. A draft specifying VXLAN was submitted to the IETF by Cisco, VMware and several other hypervisor and network equipment vendors (draft-mahalingam-dutt-dcops-vxlan).

Overlay Networks. (Diagram: by analogy with an air traffic control system overlaid between SAN and EWR, Ethernet frames from VMs are handed to the virtual switch in each hypervisor, carried as IP/UDP packets across an IP network between endpoints 1.1.1.1 and 2.2.2.2, and delivered to the VMs on the remote hypervisor.)

VXLAN Data Plane: High-Level Overview. VM-to-VM traffic between different access switches is encapsulated in a VXLAN header + UDP + IP. The VXLAN header contains a 24-bit VXLAN Network Identifier. VXLAN uses IP multicast to deliver broadcast/multicast/unknown-destination VM MAC addresses to all access switches participating in a given VXLAN. VM MAC to access switch IP address mappings are learned by receiving encapsulated packets, similar to Ethernet bridge flood-and-learn behavior. Frames to known destination VM MAC addresses are carried over point-to-point tunnels between access switches.

Why VXLANs? Pain Points in Scaling Cloud Networking. Use of server virtualization and cloud computing is stressing the network infrastructure in several ways:
- Server virtualization increases demands on switch MAC address tables
- Multi-tenancy and vApps drive the need for more than 4K VLANs
- Static VLAN trunk provisioning doesn't work well for cloud computing and VM mobility
- Limited reach of VLANs using STP constrains use of compute resources

Server Virtualization and MAC Addresses: Comparison of Physical vs. Virtualized Servers. Assume each server has 2 NICs (e.g. front end and management). A physical server uses only 1 MAC address per NIC (2 MACs). Virtualized servers have a MAC address for each virtual NIC (VNIC). Multiple kernel VNICs are used by the hypervisor itself (e.g. management, iSCSI, vMotion, Fault Tolerance, etc.), e.g. 6 MACs. Each VM may have multiple VNICs (e.g. 2). New 1 RU servers have 16 cores (32 threads) and hundreds of GB of memory, e.g. 32 VMs with 2 VNICs each = 64 MACs (and this number will only rise). Physical with 2 MACs -> virtualized with 70 MACs = 35-fold increase in MACs per server!

High-Density Switch Architectures Can Pack Thousands of Servers in One STP Domain. 4K physical servers = 4K * 2 = 8K MACs. 4K virtualized servers: without VXLAN, 4K * 70 = 280K MACs; with VXLAN, 4K * 6 = 24K MACs. (Example topology: 2 Nexus 7000s with 768 ports each toward the IP core, 32 Nexus 5596s with 96 ports each, 256 Nexus 2232 FEXes with 40 ports each, and 4096 servers at 2 x 10GE each, interconnected via vPC peer links, uplink/downlink ports and fabric/host ports.)

Multi-Tenancy and vApps Drive the Need for Many L2 Segments. Both MAC and IP addresses could overlap between two tenants, or even within the same tenant in different vApps. Each overlapping address space needs a separate segment. VLANs use 12-bit IDs = 4K; VXLANs use 24-bit IDs = 16M.

Challenges Configuring VLAN Trunks to Servers. Trunk ports to virtualized servers are typically manually configured, which is slow to react to the dynamic needs of the cloud and usually leads to over-provisioning the trunks. Over-provisioned trunk ports lead to broadcast and unknown unicast traffic arriving at servers that don't need it, and to excessive use of Spanning Tree Logical Port resources on access switches. VXLANs use the IGMP multicast protocol to automatically prune traffic on demand. Logical Port resources are traded for IGMP Snooping state in switches and IP route state in routers.

Spanning Tree Logical Port Limitations. A Logical Port is the intersection of a VLAN with a physical switch port, e.g. a single trunk port with 1000 VLANs uses 1000 Logical Ports. Switches support a limited number of Logical Ports; this is an STP software limitation. Nexus 7000 NX-OS 6.x supports 16,000 for PVST+ and 90,000 for MST; Nexus 5000 NX-OS 5.2 supports 32,000 for either PVST+ or MST. E.g. a 96-port Nexus 5000 switch can support on average 333 VLANs per port. Numbers get worse for a larger switch and/or with FEX: the previous example topology had 288 ports per Nexus 5000 -> 111 VLANs per port. When using VXLANs, all traffic travels over just one transport VLAN.

Extending Layer 2 Across the Data Center Exacerbates the 4K VLAN Limit. Using FabricPath or OTV to extend Layer 2 across the entire data center increases VM mobility and deployment flexibility. However, it makes the 4K VLAN limit a data-center-wide limitation. With VLANs, a tradeoff must be made between the number of segments within a data center and the span of those segments: small Layer 2 domains give many islands of 4K VLANs, but limit VM placement and mobility. VXLANs can be extended data-center-wide and still support up to 16M Layer 2 segments.

VMware vCloud Director and vApps

What is VMware vCloud Director? It pools virtual infrastructure resources into tiers called Virtual Datacenters, defines standard collections of VMs called vApps, creates Organizations and manages users, provides a UI for users to self-provision vApps into Virtual Datacenters, and automatically deploys VMware vShield Edge VMs to provide secure multi-tenancy. (Diagram: user portals for Organizations 1 through m feed vCloud Director and its catalogs, which, together with VMware vShield and multiple vCenter Servers, provision Virtual Datacenter 1 (Gold) through n (Silver) on top of VMware vSphere.)

What Is A vapp? A Cloud Provider using vcloud Director offers catalogs of vapps to their Users When cloned, new vapps retain the same MAC and IP addresses Duplicate MACs within different vapps requires L2 isolation Duplicate IP addresses requires L2/L3 isolation (NAT of externally facing IP addresses) Usage of vapps causes an explosion in the need for isolated L2 segments Org Network vapp vapp DB Net vapp App Net vapp Web Net DB VM s App VM s Web VM s Edge Gateway 17

Possible vApp Instantiation. Edge Gateway options: vShield Edge (now), ASA 1000V (future). The Edge Gateway performs NAT or VPN to a remote location. VXLANs are perfect candidates for vApp networks. (Diagram: vApp X with DB, App and Web VM tiers on VXLANs 5000-5002, and a vShield Edge connecting to VLAN 100.)

VXLAN Benefits
- On-demand network segments without physical network reconfiguration
- Massive scale of Layer 2 segments for multi-tenant environments
- Allows virtual Layer 2 segments to stretch across physical Layer 2 network boundaries
- Provides operational flexibility for deploying VMs anywhere in the data center
- VXLANs work over existing deployed data center switches and routers
- Alleviates network scaling issues associated with server virtualization

Part 1: Q & A

Part 2: Deeper Dive

VXLAN Network Model. (Diagram: each access switch contains a bridge domain switch and a VTEP; end systems attach to the access switches, and the VTEPs connect to each other across an IP-multicast-enabled underlying network. VTEP = VXLAN Tunnel End Point.)

VXLAN Data Plane Model. Direct unicast tunnels between VTEPs carry known unicast frames. The VXLAN's IP Any-Source Multicast group (*,G) acts as a bus for delivery to all relevant VTEPs for a given VNI, carrying unknown/broadcast/multicast frames.

VTEPs on the Nexus 1000V. The Nexus 1000V VEMs act as the VXLAN Tunnel Endpoints (VTEPs). The Nexus 1000V uses a VMKNIC to terminate VTEP traffic. The VMKNIC is connected to a VLAN to transport the encapsulated traffic, and is assigned an IP address used to terminate the encapsulated traffic.

VXLAN Packet Structure. The original L2 frame is given a VXLAN header with the VNI. The UDP header has a well-known UDP destination port reserved for VXLAN; the UDP source port is generated using a hash of the inner Ethernet/IP headers. The IP header has the destination and source addresses of the VTEPs. The outer MAC header has the source VTEP MAC and the next-hop MAC as destination. The outer MAC frame may optionally have a VLAN tag (if needed, i.e. sent over a trunk).
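
As a rough accounting of the 50-byte encapsulation overhead cited later in this session (a standard breakdown, not taken from the slide itself): outer Ethernet header 14 bytes + outer IP header 20 bytes + outer UDP header 8 bytes + VXLAN header 8 bytes = 50 bytes, plus 4 more bytes if the outer frame carries an 802.1Q VLAN tag.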

VTEP Use of IGMP: IGMP is used to join each VXLAN's assigned multicast group on demand. (Diagram: each VEM joins multicast group 239.1.1.1 or 239.2.2.2 depending on which VXLANs its Web and DB VMs are attached to.)

VXLAN Example Data Flow: VM1 Communicating with VM2 in a VXLAN. (Diagram, step 1: VM1 (MAC abc) behind VEM 1 sends an ARP request; VEM 1 encapsulates it and sends it to the VXLAN's multicast group, so it reaches VEM 2 (VXLAN VMKNIC 2.2.2.2) and VEM 3 (3.3.3.3). VEM 2's MAC table now maps VM1:abc to remote VTEP IP 1.1.1.1.)

(Diagram, step 2: VM2 (MAC xyz) sends the ARP response. VEM 2 already has VM1:abc mapped to 1.1.1.1, so it encapsulates the response and sends it as Layer 3 unicast directly to VEM 1.)

(Diagram, step 3: VEM 1 receives the encapsulated ARP response and learns the mapping VM2:xyz -> 2.2.2.2; both VEM 1 and VEM 2 now hold the remote MAC-to-VTEP-IP entries they need.)

(Diagram, step 4: subsequent traffic between VM1 and VM2 is carried as unicast between VTEPs 1.1.1.1 and 2.2.2.2 using the learned MAC table entries on VEM 1 and VEM 2.)

Multiple VXLANs Can Share One Multicast Group: Blue and Red VXLANs share the 239.1.1.1 multicast group. A VM broadcast is encapsulated with the Blue VXLAN ID and multicast to all servers registered for 239.1.1.1; a VEM that has no VM in the Blue VXLAN discards it. VM broadcast frames are sent to more servers, but the broadcast domain is still respected within the VXLAN segment.
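
In configuration terms, sharing a group simply means pointing two bridge-domains at the same multicast address, using the Nexus 1000V bridge-domain CLI shown in Part 3 (the names and segment IDs here are illustrative assumptions):

switch(config)# bridge-domain blue-vxlan
switch(config-bd)# segment id 5001
switch(config-bd)# group 239.1.1.1
switch(config-bd)# exit
switch(config)# bridge-domain red-vxlan
switch(config-bd)# segment id 5002
switch(config-bd)# group 239.1.1.1

Both bridge-domains flood over 239.1.1.1, but the differing segment IDs keep the two broadcast domains isolated.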

What's New for VXLAN on Nexus 1000V

Nexus 1000V VXLAN Enhancements, Available Starting in Release 4.2(1)SV2(2.1):
- Multicast not required within a single Nexus 1000V
- MAC address distribution within a single Nexus 1000V
- Trunking of VXLANs to virtual machines
- VXLAN to VLAN Gateway Virtual Service Blade

Multicast-less Mode. Several customers have asked for a way to support VXLAN without using IP multicast. A single Nexus 1000V is actually one virtual switch, controlled by the same Virtual Supervisor Module. The VSM is already used to distribute MAC addresses between VEMs for features such as Private VLAN and Port Security. For this feature, the VSM is also used to distribute the VTEP IP addresses for each VXLAN between the VEMs. VEMs perform head-end replication of multi-destination frames only to the other VEMs which are participating in the VXLAN. This mode should only be used if the amount of multi-destination traffic is low (e.g. ARP, DHCP, discovery).

MAC Address Distribution. The VSM distributes assigned VNIC MAC addresses and their VTEP IP address mappings. This pre-populates the VXLAN forwarding tables and eliminates the need for unknown flooding for these addresses. It is especially useful in conjunction with Multicast-less mode to minimize head-end replication.

VXLAN Trunking to VNICs. VMs have a limited number of VNICs (e.g. 10 or 8), which typically limits the number of VLANs or VXLANs a VM can connect to. Sometimes it is desirable for a VM to connect to many networks, e.g. if the VM is a network service appliance or router. For VLANs, the Nexus 1000V supports VLAN trunks. It is possible for VMs to have their own VTEPs to terminate many VXLANs, but most existing VMs do not support this. Solution: map each VXLAN to a locally significant VLAN tag on the virtual Ethernet interface. These locally significant tag values can be reused with different mappings on different interfaces. The VM thinks it is connected to a VLAN trunk.

VXLAN to VLAN Virtual Service Blade

Bridging the Virtual/Physical Divide? (Diagram: VXLANs on the virtual side must be bridged to VLANs on the physical side of the data center network.)

VXLAN to VLAN Gateway (Logical View). (Diagram: VXLANs span the data center; a VXLAN gateway VEM in each of L2 domains 1, 2 and 3 bridges them into local VLANs, which connect up to the Layer 3 network.)

VXLAN Gateway: A Two-Port Bridge. (Diagram: on the VXLAN Gateway Virtual Service Blade, bridge-domain red bridges VXLAN 10000 to VLAN 100 and bridge-domain blue bridges VXLAN 20000 to VLAN 200 over the uplink.) Each VXLAN Gateway VSB can support multiple bridge domains.

VXLAN Gateway Virtual Service Module. It is a Virtual Service Blade running on the Nexus 1010/1110. Each VXLAN Gateway VSB can use one or two dedicated 1G NICs from the appliance. It is managed as a module of the Nexus 1000V virtual chassis, supports Active/Standby high availability, requires the Nexus 1000V Advanced Edition license, and is available now.

VXLAN Overlay Comparisons

VXLAN Versus STT (Stateless Transport Tunneling Protocol).
Similarities: both carry Ethernet frames; both use IP transport; both can use IP multicast for broadcast and multicast frames; both can take advantage of existing port-channel load distribution algorithms (5-tuple hashing, UDP vs. TCP).
Differences: Encapsulation format and overhead: VXLAN is UDP with 50 bytes; STT is TCP-like with 54 to 72 bytes (not uniform)*. Segment ID size: VXLAN 24-bit, STT 64-bit. A firewall ACL can act on the VXLAN UDP port, while firewalls will likely block STT since it has no TCP state-machine handshake. Forwarding logic: VXLAN uses flooding/learning; STT does not specify.
Note: STT uses the TCP header, but not the protocol state machine; TCP header fields are repurposed. * The STT header does not exist in every packet, only in the first packet of a large segment, therefore reassembly is required.

VXLAN Versus NVGRE (Network Virtualization using Generic Routing Encapsulation).
Similarities: both carry Ethernet frames; both use IP transport; both can use IP multicast for broadcast and multicast frames; both use a 24-bit segment ID.
Differences: Encapsulation format and overhead: VXLAN is UDP with 50 bytes; NVGRE is GRE with 42 bytes. Port-channel load distribution: VXLAN benefits from UDP 5-tuple hashing, whereas most (if not all) current switches do not hash on the GRE header. A firewall ACL can act on the VXLAN UDP port, while it is difficult for a firewall to act on the GRE Protocol Type field. Forwarding logic: VXLAN uses flooding/learning; NVGRE does not specify.

VXLAN Versus OTV (Overlay Transport Virtualization).
Similarities: both carry Ethernet frames; the same UDP-based encapsulation header (VXLAN does not use the OTV Overlay ID field); both can use IP multicast for broadcast and multicast frames (optional for OTV).
Differences: Forwarding logic: VXLAN uses flooding/learning, while OTV uses the IS-IS protocol to advertise MAC address to IP bindings. OTV can locally terminate ARP and doesn't flood unknown MACs. OTV can use an adjacency server to eliminate the need for IP multicast. OTV is optimized for Data Center Interconnect to extend VLANs between or across data centers, while VXLAN is optimized for intra-DC use and multi-tenancy.

VXLAN Versus LISP (Locator/ID Separation Protocol).
Similarities: the same UDP-based encapsulation header (VXLAN does not use the control flag bits or the Nonce/Map-Version field); a 24-bit segment ID.
Differences: LISP carries IP packets, while VXLAN carries Ethernet frames. Forwarding logic: VXLAN uses flooding/learning, while LISP uses a mapping system to register/resolve inner-IP to outer-IP mappings. For LISP, IP multicast is only required to carry host IP multicast traffic. LISP is designed to give IP address (Identifier) mobility / multi-homing and IP core route scalability, and can provide optimal traffic routing when Identifier IP addresses move to a different location.

Part 2: Q & A

Part 3: Deployment

Nexus 1000V VEM VMKNICs. Management VMKNIC: for VSM-to-VEM communication. VXLAN VMKNIC(s): for terminating VXLAN-encapsulated traffic. (Diagram: an ESX host with VMs, a management VMKNIC, and a VXLAN VMKNIC attached to the VEM.)

Configure a VMKNIC on Each ESX Host (a sketch of the resulting port profile follows this list):
- Allocate a separate VLAN to carry VXLAN traffic to/from the ESX hosts
- Add this VLAN to the allowed VLANs on trunk ports leading to the ESX servers
- Add this VLAN to the allowed VLANs in the Nexus 1000V uplink port profiles
- Create an access-port port profile connected to the VXLAN transport VLAN created above
- Add the command capability vxlan to the port profile to indicate the associated VMKNIC will be used to send/receive VXLAN encapsulated packets
- Using vCenter, create a new VMKNIC on each host that requires access to VXLANs
- Assign the above port profile to this VMKNIC
- Assign an available IP address within the subnet of the VXLAN transport VLAN
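
A minimal sketch of such a vEthernet port profile on the Nexus 1000V VSM, assuming VLAN 100 is the VXLAN transport VLAN (the profile name and VLAN ID are illustrative):

N1KV(config)# port-profile type vethernet vxlan-vmknic
N1KV(config-port-prof)# vmware port-group
N1KV(config-port-prof)# switchport mode access
N1KV(config-port-prof)# switchport access vlan 100
N1KV(config-port-prof)# capability vxlan
N1KV(config-port-prof)# no shutdown
N1KV(config-port-prof)# state enabled

The VMKNIC created in vCenter is then attached to this port group and given an IP address in the transport VLAN's subnet.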

VXLAN Infrastructure MTU Requirements: increase the MTU to accommodate the added encapsulation overhead. VXLAN encapsulation overhead is 50 bytes. Recommendation: increase the MTU by 160 bytes to be ready for the future, e.g. 1500 + 160 = 1660. If VMs will be sending jumbo frames (> 1500), add accordingly. Configure all Layer 2 switches carrying the VXLAN transport VLAN (specifics vary by switch): increase the global MTU config if applicable, and increase the interface MTU if applicable on trunk ports (to servers and inter-switch). Increase the MTU in the Nexus 1000V uplink port profiles. Configure router interfaces carrying VXLAN traffic: SVIs for the VXLAN transport VLAN(s) and routed ports (if used).
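
A hedged example of the interface-level pieces, assuming a 1660-byte target; the interface name and profile name are placeholders, and note that some platforms (e.g. Nexus 5000) set jumbo MTU through a network-qos policy rather than per interface:

switch(config)# interface Ethernet1/10
switch(config-if)# mtu 1660
N1KV(config)# port-profile type ethernet vxlan-uplink
N1KV(config-port-prof)# mtu 1660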

What If I Can't Increase the Network MTU? Alternatively, decrease the MTU of the VMs' VNICs by 50 bytes. If you do neither, the Nexus 1000V will try to do the following to help: if the VM performs Path MTU Discovery, the Nexus 1000V will return an ICMP Too Big message to cause the VM to segment traffic into smaller packets; if the VM sends IP packets which are too large, the Nexus 1000V will fragment the packets from the VM, and the destination VM is responsible for reassembling the fragments; if the frame contains a non-IP packet which is too large to be sent after encapsulation, the Nexus 1000V will drop the packet. If the Nexus 1000V uplink MTU is increased but the other switch ports along the path between hosts are not, the other switches will silently drop the frames!

Enable IP Multicast Forwarding: Layer 2 Multicast Configuration. IGMP Snooping should be enabled on the VXLAN transport VLAN to avoid delivery of unwanted multicast packets to the hosts (note: IGMP Snooping is enabled by default on Cisco switches). If all hosts are connected to the same subnet, IP multicast routing is not required; however, an IGMP querier is still required to make IGMP Snooping work on the switches. Use the command ip igmp snooping querier <ip-addr> for the VXLAN transport VLAN on the aggregation switches, with IP addresses that are unused within the VXLAN transport VLAN's subnet.
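
A minimal sketch on an aggregation switch, assuming VLAN 100 is the transport VLAN and 10.1.1.254 is an unused address in its subnet (exact placement of the querier command varies slightly by NX-OS platform and release):

switch(config)# vlan configuration 100
switch(config-vlan-config)# ip igmp snooping querier 10.1.1.254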

Enable IP Multicast Forwarding: Layer 3 Multicast Configuration. If host VXLAN VMKNICs are on different subnets, IP multicast routing must be enabled on the router(s) interconnecting the subnets. VXLAN multicast traffic is bi-directional: all hosts with VXLANs both send and receive IP multicast traffic. VXLAN VTEPs join for Any Source (*,G) to the relevant VXLAN multicast group using IGMPv2. Using Bidir PIM on the routers will provide the most optimal forwarding trees, use the least amount of multicast route state in the routers, and put less stress on the router control plane. PIM-SM will also work, but is less optimal.
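
A hedged NX-OS sketch of Bidir PIM on the routers interconnecting the transport subnets; the RP address, group range and SVI numbers are illustrative assumptions:

switch(config)# feature pim
switch(config)# interface Vlan100
switch(config-if)# ip pim sparse-mode
switch(config)# interface Vlan200
switch(config-if)# ip pim sparse-mode
switch(config)# ip pim rp-address 10.254.254.1 group-list 239.1.1.0/24 bidir

The last line designates a Bidir rendezvous point covering the multicast group range assigned to the VXLANs.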

Alternatives to Enabling IP Multicast Routing: Use FabricPath to extend the Layer 2 domain (FabricPath supports efficient multicast L2 pruning within the fabric; place all VTEPs on the same VLAN). Or use OTV to extend just the VXLAN transport VLAN (place all VTEPs on the same VLAN; other VLANs do not need to be extended).

Active/Active Uplinks with LACP. (Diagram: with VLANs, the access switch and VEM can use any hash over the LACP bundle. With VXLANs, use a 5-tuple hash: from the VEM toward the access switch, all frames have the same VTEP source IP and MAC and a small number of destination IPs/MACs; from the access switch toward the VEM, all frames have the same VTEP destination IP and MAC and a small number of source IPs/MACs. All flow entropy is in the source UDP port.)

Enable UDP-Port-Based Load Distribution for Both Layer 2 and Layer 3. VTEPs transfer inter-VM flow entropy into the outer IP encapsulation's source UDP port: the VTEP generates a hash value based on the VM's IP or L2 headers and puts this into the outer UDP source port. Take advantage of this in the underlying network by using UDP-port-based flow distribution. Enable 5-tuple (L3 src/dst, L4 protocol, L4 port src/dst) based load distribution for port channels and virtual port channels to VXLAN-enabled hosts, for port channels and virtual port channels between switches, and for router Equal Cost Multi-Pathing (ECMP).
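
A hedged sketch of the knobs involved; exact keywords vary by platform and release (the port-channel command shown uses Nexus 5000-style syntax):

switch(config)# port-channel load-balance ethernet source-dest-port
switch(config)# ip load-sharing address source-destination port source-destination

The first line includes L4 ports in port-channel/vPC hashing; the second includes them in ECMP hashing on the routers.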

Enable Proxy ARP on Transport VLAN SVIs. The VEM VTEP function will always ARP for destination IP addresses; this simplifies the ESX host routing table configuration. If host VXLAN VMKNICs are on different subnets, SVIs must be created on the VXLAN transport VLANs and Proxy ARP must be enabled on these SVIs. The IOS and NX-OS defaults for Proxy ARP differ: IOS defaults to enabled, NX-OS defaults to disabled.
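
A minimal NX-OS sketch for one transport-VLAN SVI; the VLAN number and address are illustrative assumptions:

switch(config)# interface Vlan100
switch(config-if)# ip address 10.1.1.1/24
switch(config-if)# ip proxy-arp
switch(config-if)# no shutdown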

VXLAN CLI: Manual Provisioning of VXLANs.
Enable the feature:
switch(config)# feature segmentation
Create a VXLAN instance:
switch(config)# bridge-domain my-vxlan-1
switch(config-bd)# segment id 20480
switch(config-bd)# group 239.1.1.1
Assign a port profile to connect to a VXLAN:
switch(config-port-prof)# switchport mode access
switch(config-port-prof)# switchport access bridge-domain my-vxlan-1
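
For context, a hedged sketch of the full vEthernet port profile those last two commands would typically live in (the profile name is illustrative):

N1KV(config)# port-profile type vethernet my-vxlan-1-vms
N1KV(config-port-prof)# vmware port-group
N1KV(config-port-prof)# switchport mode access
N1KV(config-port-prof)# switchport access bridge-domain my-vxlan-1
N1KV(config-port-prof)# no shutdown
N1KV(config-port-prof)# state enabled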

Nexus 1000V vCloud Director Integration: Four Main Components. (Diagram: 1. VMware vCloud Director (vCD), which performs network management through vShield Manager and manages tenant VMs and vShield Edge VMs through vCenter; 2. VMware vShield Manager, which manages the Nexus 1000V through a REST API; 3. VMware vCenter, with standard vCenter and VSM integration; 4. Cisco Nexus 1000V VSM.)

Integrating the Nexus 1000V and vShield Manager.
1. Turn on the Network Segmentation Manager feature on the Nexus 1000V:
N1KV(config)# feature network-segmentation-manager
2. Add the Nexus 1000V in vShield Manager as a managed switch with VXLAN and multicast address pool ranges.

VXLAN Creation Using vCloud Director. A vCloud Director user creates a network (Organization or vApp). vCloud Director invokes vShield Manager to create a VXLAN network. vShield Manager allocates a VXLAN ID and multicast group and invokes the CreateNetwork API on the Nexus 1000V, providing the VXLAN ID and multicast IP plus the tenant ID. The Nexus 1000V creates a VXLAN bridge-domain and a port profile referring to that bridge-domain, and pushes the port group into vCenter. vCloud Director then connects VMs to the port group. (Components involved: VMware vCD, VMware vShield Manager, VMware vCenter, Cisco Nexus 1000V VSM.)

Part 3: Q & A

Related Sessions
BRKVIR-2023 Cisco Nexus 1000V InterCloud based Hybrid Cloud Architectures and Approaches
BRKVIR-2017 The Nexus 1000V on Microsoft Hyper-V: Expanding the Virtual Edge
LTRVIR-2005 Deploying the Nexus 1000V on ESXi and Hyper-V
BRKVIR-2016 Cisco's Cloud Services Router (CSR): Extending the Enterprise Network to the Cloud
BRKVIR-3013 Deploying and Troubleshooting the Nexus 1000V virtual switch
BRKDCT-2328 Evolution of Network Overlays in Data Center Clouds
VXLAN Walk-in Lab

Resources: Whitepapers and Deployment Guides (www.cisco.com/go/1000v)
Deploying the VXLAN Feature in Cisco Nexus 1000V Series Switches
Deploying Cisco Nexus 1000V Series Switches with VMware vCloud Director and VXLAN 1.0
Scalable Cloud Networking with Cisco Nexus 1000V Series Switches and VXLAN
Enable Cisco Virtual Security Gateway Service on a Virtual Extensible LAN Network in VMware vCloud Director
Cisco Cloud Lab (cloudlab.cisco.com)
Demo: Virtual Extensible LAN (VXLAN)

Summary / Next Steps. VXLANs can help you scale your cloud networking. VXLANs work over your existing switches and routers. The Nexus 1000V's VXLAN support is fully integrated with VMware vCloud Director. Explore the available resources, and try VXLANs for yourself!

Complete Your Online Session Evaluation. Give us your feedback and you could win fabulous prizes; winners are announced daily. Receive 20 Cisco Daily Challenge points for each session evaluation you complete. Complete your session evaluation online now through either the mobile app or the internet kiosk stations. Maximize your Cisco Live experience with your free Cisco Live 365 account: download session PDFs, view sessions on demand and participate in live activities throughout the year. Click the Enter Cisco Live 365 button in your Cisco Live portal to log in.