NET1927BU: vSphere Distributed Switch Best Practices for NSX
Gabriel Maciel, VMware, Inc. (@gmaciel_ca)
#VMworld2017 #NET1927BU
Disclaimer: This presentation may contain product features that are currently under development. This overview of new technology represents no commitment from VMware to deliver these features in any generally available product. Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. Technical feasibility and market demand will affect final delivery. Pricing and packaging for any new technologies or features discussed or presented have not been determined.
Agenda
1. vSphere Virtual Switches: VSS and VDS
2. NSX Logical Switches (VXLAN) on VDS
3. NSX VDS Uplink Configuration and Connectivity
4. Key Takeaways and Q&A
@gmaciel_ca
vSphere Networking Journey
- vSphere 3.0 (June 2006): flagship VMware ESX release; VSS; network services (iSCSI, NFS, etc.)
- vSphere 4.0 (May 2009): VDS introduction; traffic shaping; private VLANs; Network vMotion; VMXNET3; IPv6
- vSphere 4.1: Network I/O Control; IPv6 enhancements; LBT policy
- vSphere 5.0 (August 2011): jumbo frames support; NetFlow; port mirroring; LLDP
- vSphere 5.1: NIOC v2; user-defined resource pools; 802.1p tagging; LACP; health check; network rollback; RSPAN/ERSPAN; IPFIX; BPDU filter
- vSphere 5.5 (September 2013): scale improvements; LACP enhancements; traffic filtering; DSCP tagging
- vSphere 6.0 (March 2015): 40G network adapter support; NIOC v3; multiple TCP/IP stacks; MLD/IGMP snooping
- vSphere 6.5 (November 2016): gateway per vmknic; additional performance improvements
vSphere Virtual Standard Switch: Single Host
vSphere Virtual Standard Switch: Multiple Hosts
Virtual Distributed Switch Architecture
[Diagram: vCenter pushes the VDS configuration to Hosts A, B, and C; each host runs a host proxy switch with dvpg-a, dvpg-b, and a dvuplink port group connecting VM1-VM6 to the physical data center]
- A local component of the VDS is instantiated on each host (the host proxy switch)
- The VDS is the vCenter representation of the data center network
- The dvportgroup, dvuplink port group, etc. configuration on each host is pushed from vCenter
Virtual Distributed Switch Architecture: VDS Global View
Differences Between VSS & VDS
Cross-vSwitch vMotion
- Allows you to migrate VMs across vSwitch boundaries (e.g., from a VSS to another VSS, from a VSS to a VDS, or between two VDSes)
- Use case example: live migrations to a new cluster with a separate VDS
Visibility & Troubleshooting: VDS Tools
- CDP & LLDP
- NetFlow / IPFIX
- Port mirroring: SPAN, RSPAN, ERSPAN
- Network Health Check
- pktcap-uw (ESXi)
What's Next? Evolution of the Virtual Switch Access Layer
- VSS: abstracts at a single ESXi host level
- VDS: abstracts across many ESXi hosts
- VXLAN on VDS: abstracts across many ESXi hosts and Virtual Distributed Switches
Agenda
1. vSphere Virtual Switches: VSS and VDS
2. NSX Logical Switches (VXLAN) on VDS
3. NSX VDS Uplink Configuration and Connectivity
4. Key Takeaways and Q&A
@gmaciel_ca
VDS & NSX Architecture and Components
- Cloud Consumption: self-service portal via vRealize Automation, VMware Integrated OpenStack (VIO), or a custom CMP
- Management Plane: NSX Manager; single configuration portal; REST API entry point; provides registration of 3rd-party services
- Control Plane: NSX Controllers and DLR Control VM; the control-plane protocol (netcpa) provides separation of control and data plane; L2/L3 data plane programming (VXLAN, DLR)
- Data Plane: ESXi hypervisor kernel modules (logical switch, distributed logical router, distributed firewall) with host agents (netcpa, vsfwd), plus NSX Edge services such as VPN; high-performance, scale-out distributed forwarding model running on top of the physical network
VLAN Backed VDS PortGroup
- One dvportgroup maps to one VLAN on the physical infrastructure
- This couples the virtual and physical networks and can prevent full network-services automation
- There are also end-to-end Layer 2 network limitations
- Example: dvpg-a backed by VLAN A and dvpg-b backed by VLAN B require the physical infrastructure between Host A and Host B to be configured with VLANs A & B end to end
NSX and Virtual Extensible LAN (VXLAN)
VXLAN is an industry-standard IP overlay technology used to tunnel Layer 2 traffic over an IP infrastructure.
- Why an IP encapsulation? NSX leverages VXLAN to decouple its data plane from the physical network: basic IP connectivity is enough to provide distributed logical switching (VXLAN) on NSX
- Why an additional VXLAN header? It carries the VXLAN Network Identifier (VNI); more details on the next slide
- On the wire: VTEP1 encapsulates the original L2 frame (src IP: VTEP1, dst IP: VTEP2, UDP/VXLAN, L2 frame) and VTEP2 de-capsulates it (VTEP: Virtual Tunnel End Point)
NSX and Virtual Extensible LAN (VXLAN): Encapsulated Frame Format
- Outer Ethernet header (14 bytes): outer destination MAC, outer source MAC, optional 802.1Q C-tag (outer VLAN ID), EtherType (IP version)
- Outer IP header (20 bytes): version, IHL, TOS, length, ID, IP protocol (UDP), header checksum, outer source IP, outer destination IP
- Outer UDP header (8 bytes): source port (a hash of the inner L2/L3/L4 headers of the original frame), destination port (4789), UDP length, UDP checksum
- VXLAN header (8 bytes): flags (8 bits, with the 'I' bit set to mark a valid VNI), 24 reserved bits, 24-bit VXLAN ID (VNI), 8 reserved bits
- Inner Ethernet frame: inner destination MAC, inner source MAC, optional 802.1Q C-tag (inner VLAN ID), EtherType, original Ethernet payload, FCS
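The 8-byte VXLAN header described above can be sketched in a few lines of Python. This is a minimal illustration of the RFC 7348 layout, not NSX code; the function names are invented for this example.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP destination port

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags ('I' bit set), 24 reserved
    bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    # Word 1: flags in the top byte, 24 reserved bits zeroed
    # Word 2: VNI in the top 24 bits, low 8 bits reserved
    return struct.pack("!II", flags << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header (what the far VTEP does)."""
    word1, word2 = struct.unpack("!II", header)
    assert (word1 >> 24) & 0x08, "'I' flag not set"
    return word2 >> 8

hdr = pack_vxlan_header(5001)  # 8 bytes; round-trips back to VNI 5001
```

Note how the 24-bit VNI gives roughly 16 million logical segments, versus the 4094 usable IDs of a 12-bit VLAN tag.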
Virtual Distributed Switch (VDS) and VXLAN
- A logical switch is an L2 broadcast domain implemented using VXLAN
- A dvportgroup is created for each logical switch; it provides local switching and isolation
- VXLAN traffic uses a vmknic which provides Virtual Tunnel End Point (VTEP) functionality (encapsulation/de-capsulation)
- There might be several VTEPs on an ESXi host, but a single dvportgroup is created for all VTEPs
- VXLAN logical switches can also span multiple Virtual Distributed Switches
Traffic Flowing on a VXLAN Backed VDS PortGroup
In this setup, VM1 and VM2 are on different hosts (Host A and Host B) but belong to the same VXLAN logical switch (Logical SW A). A VXLAN tunnel is established between the VTEPs of the two hosts across the generic IP fabric.
Traffic Flowing on a VXLAN Backed VDS PortGroup
Assume VM1 sends some traffic to VM2:
1. VM1 sends an L2 frame to the local VTEP
2. The VTEP adds the VXLAN, UDP, and IP headers
3. The physical transport network forwards it as a regular IP packet
4. The destination hypervisor's VTEP de-capsulates the frame
5. The L2 frame is delivered to VM2
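Step 2 hinges on a forwarding lookup: the source VTEP maps the destination VM's MAC address (per VNI) to the remote VTEP's IP, then wraps the frame. The sketch below illustrates that decision with an invented table and string "packets"; in NSX the table is populated through the controller cluster, not hand-coded.

```python
# (VNI, destination MAC) -> remote VTEP IP; entries are invented examples
mac_to_vtep = {
    (5001, "00:50:56:aa:00:02"): "10.20.20.11",  # VM2 behind Host B
}

def encapsulate(vni, src_vtep_ip, inner_frame, dst_mac):
    """Look up the remote VTEP for the inner destination MAC and wrap
    the L2 frame in (a string stand-in for) IP/UDP/VXLAN headers."""
    remote = mac_to_vtep.get((vni, dst_mac))
    if remote is None:
        return None  # unknown unicast -> replication/flooding (not shown)
    outer = f"IP {src_vtep_ip}->{remote} | UDP dport 4789 | VNI {vni} | "
    return outer + inner_frame

pkt = encapsulate(5001, "10.10.10.10", "L2 frame to VM2",
                  "00:50:56:aa:00:02")
```

On the receiving side the VTEP simply strips the outer headers and delivers the inner frame to the port matching the inner destination MAC (steps 4 and 5).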
Traffic Flowing on a VXLAN Backed VDS PortGroup
Suppose we configure another dvportgroup (logical switch, Logical SW B) with VM3 and VM4. VM3 and VM4 can communicate over their own VXLAN tunnel, without requiring any change to the physical network!
With NSX You Don't Need End-to-End Layer 2 in Your DC
- The VXLAN service uses the same transport VLAN (VLAN ID X) on the uplinks of every host, but this does not mandate end-to-end L2 connectivity
- Configure VTEP addresses in different IP subnets (e.g., VTEP subnet A for Host A, VTEP subnet B for Host B, routed via R1 and R2 across the generic IP fabric)
- Terminate L2 at the top of rack (ToR) switch
- The same recommendation applies to other vmkernel VLANs (vMotion, HA, etc.): terminate them at the ToR; no end-to-end L2 connectivity is required
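A quick way to see the point above: with VTEPs in different subnets, reaching the remote VTEP is a routing decision at the ToR rather than a stretched-L2 requirement. The addresses below are invented examples.

```python
import ipaddress

# VTEPs in different racks get addresses in different subnets
vtep_a = ipaddress.ip_interface("10.10.10.10/24")  # Host A, rack 1
vtep_b = ipaddress.ip_interface("10.10.20.10/24")  # Host B, rack 2

# Is VTEP B on VTEP A's L2 segment? No: traffic is routed via R1/R2,
# so no VLAN needs to be stretched between the racks.
same_l2 = vtep_b.ip in vtep_a.network
```

The VXLAN tunnel works unchanged either way, because it only needs IP reachability between the two VTEP addresses.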
Visibility & Troubleshooting: NSX Tools
- Central CLI/API access
- New NSX Dashboard!
- Syslog
- VXLAN ping & vmkping ++netstack=vxlan
- Live Flow
- Flow Monitoring (IPFIX)
- Endpoint Monitoring
- Traceflow (new Traceflow capabilities!)
Agenda
1. vSphere Virtual Switches: VSS and VDS
2. NSX Logical Switches (VXLAN) on VDS
3. NSX VDS Uplink Configuration and Connectivity
4. Key Takeaways and Q&A
@gmaciel_ca
Host Connectivity: General Recommendations
- Avoid a single point of failure: connect each host's uplinks (vmnic0, vmnic1) to separate network devices
- When using a limited number of physical uplinks, you generally don't need to dedicate physical uplinks to vmknics; all uplinks can share infrastructure and data traffic
- Enable NIOC! Different classes of service can share the uplinks
- VXLAN hardware-capable NICs are recommended (Emulex, Intel, etc.)
- Configure PortFast and BPDU guard on the physical switch ports; no STP runs on the VDS
- The VDS never bridges traffic between its uplinks, so it cannot loop traffic; a VDS BPDU filter for VMs is also available
VDS Uplink Connectivity
A host is typically connected via several uplinks for redundancy and added bandwidth.
- Added bandwidth: remember that, for example, 2x10Gbps uplinks are not equivalent to a single 20Gbps uplink. They provide anywhere from a theoretical 20Gbps (lucky) down to 10Gbps (unlucky); efficiency depends on even packet load balancing, and the VDS provides several options, detailed later
- Redundancy: the host can sustain the loss of an uplink, but a link failure results in degraded bandwidth (and this might have an impact on operations)
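The "lucky vs. unlucky" range above can be illustrated with a toy simulation: per-flow load balancing hashes each flow onto one 10Gbps uplink, so a single flow never exceeds 10Gbps and the aggregate depends on how evenly the flows hash. The flow tuples and loads are invented, and the hash is a stand-in for whatever the switch actually uses.

```python
import zlib

UPLINK_CAPACITY_GBPS = 10  # each of the two physical uplinks

flows = {  # (src IP, dst IP, dst port) -> offered load in Gbps
    ("10.0.0.1", "10.0.1.1", 443): 8,
    ("10.0.0.2", "10.0.1.2", 443): 8,
    ("10.0.0.3", "10.0.1.3", 443): 8,
}

def uplink_for(flow):
    """Deterministic per-flow hash onto vmnic0 (0) or vmnic1 (1)."""
    return zlib.crc32(repr(flow).encode()) % 2

offered = [0, 0]
for flow, gbps in flows.items():
    offered[uplink_for(flow)] += gbps

# Each uplink is capped at 10Gbps regardless of how flows hash, so the
# 24Gbps offered here can never all get through two 10Gbps links
achieved = sum(min(load, UPLINK_CAPACITY_GBPS) for load in offered)
```

With three 8Gbps flows on two uplinks, at least two flows must share a link, so the achieved total is always below the offered 24Gbps; in the worst hash outcome all three land on one uplink and the host gets only 10Gbps.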
Load Balancing Options in Traditional VLAN Backed PGs
Teaming options only differ in how they spread traffic:
- Explicit Failover (granularity: port group): all traffic to/from a given port group is pinned to one uplink
- Route Based on Originating Virtual Port (granularity: VM virtual port ID): all traffic to/from a particular VM is pinned to one uplink
- LACP (granularity: flow): a particular flow is pinned to one uplink
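The three granularities above can be contrasted in a short sketch. The helper names and the two-uplink layout are illustrative, not VDS internals; each function just shows what the "key" of the pinning decision is.

```python
import zlib

UPLINKS = ["vmnic0", "vmnic1"]

def _pick(key):
    """Stable hash of any key onto one of the uplinks."""
    return UPLINKS[zlib.crc32(repr(key).encode()) % len(UPLINKS)]

def explicit_failover(active="vmnic0"):
    # Port-group granularity: everything uses the single active uplink
    return active

def route_on_originating_virtual_port(vm_port_id):
    # VM granularity: all of one VM's traffic sticks to one uplink
    return _pick(vm_port_id)

def lacp(src_ip, dst_ip, dst_port):
    # Flow granularity: each flow is pinned to an uplink independently,
    # so one VM's flows can spread across both uplinks
    return _pick((src_ip, dst_ip, dst_port))
```

Finer granularity spreads load better but asks more of the physical network, which is exactly the trade-off the next slides walk through for VXLAN.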
VXLAN Backed VDS and Explicit Failover
- All VM traffic originates from the VTEP dvportgroup, so there is no dvportgroup load distribution: Explicit Failover is purely active/standby for VM traffic
- Benefits of Explicit Failover: simple to configure; deterministic in terms of available bandwidth; absolutely no requirements on the IP fabric
VXLAN Backed VDS and Originating Virtual Port
- One VTEP/vmknic is created for each physical adapter
- VM traffic is spread across VTEPs, and thus across physical uplinks, on a per-VM basis
- Note: the VTEPs are assigned IP addresses in the same subnet, so L2 connectivity is required between top of rack switches (end-to-end L2 is not necessary)
- Benefits of Originating Virtual Port: good spread of traffic if there are multiple VMs on the network; simple configuration; no advanced feature required on the physical network
VXLAN Backed VDS and LACP
- Benefit: VXLAN traffic sent/received by the VTEP is spread across uplinks on a per-flow basis
- Requires port channel/LAG configuration on the top of rack switches
- If the host is connected to two top of rack switches (as recommended), this requires advanced features like MLAG/vPC
- Port-channel configuration is required on the physical infrastructure
Summary: NSX VDS Uplink Configuration & Connectivity

Teaming and Failover Mode               | NSX Multi-VTEP Support | Uplink Behavior (2x10Gbps)
Route Based on Originating Virtual Port | Yes                    | Both uplinks active (virtual port granularity)
LACP                                    | No                     | Both uplinks active (flow based)
Explicit Failover Order                 | No                     | One uplink active
Agenda
1. vSphere Virtual Switches: VSS and VDS
2. NSX Logical Switches (VXLAN) on VDS
3. NSX VDS Uplink Configuration and Connectivity
4. Key Takeaways and Q&A
@gmaciel_ca
Key Takeaways and Q&A
- There is no need to continue to use the VSS as the main virtual switch
- The VDS scales to a large number of hosts while at the same time allowing centralized management and configuration
- The VDS is a key component of the vSphere and NSX platforms
- With NSX and VXLAN, the VDS decouples the virtual network from the physical infrastructure, requiring only basic IP connectivity while maintaining advanced visibility, monitoring, and security functionalities
Where to Get Started
Engage and Learn
- Join VMUG for exclusive access to NSX: vmug.com/vmug-join/vmug-advantage
- Connect with your peers: communities.vmware.com
- Find NSX resources: vmware.com/products/nsx
- Network Virtualization Blog: blogs.vmware.com/networkvirtualization
Try the VMworld 2017 Experience
- Dozens of unique NSX sessions: spotlights, breakouts, quick talks & group discussions
- Visit the VMware booth: product overview, use-case demos
- Visit technical partner booths: integration demos spanning infrastructure, security, operations, visibility, and more
- Meet the Experts: join our experts in an intimate roundtable discussion
- Take free Hands-on Labs: test drive NSX yourself with expert-led or self-paced hands-on labs at labs.hol.vmware.com
- Training and Certification: several paths to professional certifications; learn more at the Education & Certification Lounge or vmware.com/go/nsxtraining