Dell EMC VxBlock Systems for VMware NSX 6.2 Architecture Overview


Document revision 1.6
December 2018

Revision history

December: Removed the note about the vSphere 6.0 dependency in the Introduction.
September: Updated the Introduction to include a note about the vSphere 6.0 dependency.
March: Updated the graphic in Logical network for the Cisco UCS B-Series Blade Servers (edge cluster).
November: Fixed the graphic in Logical network for the Cisco UCS B-Series Blade Servers (compute cluster).
June: Added support for VPLEX.
March: Updated the sections "VMware NSX local topology with cross VMware vCenter Server option" and "VMware NSX logical topology with local objects" to show one DLR in the network connectivity illustrations.
January: Initial version.

Contents

Introduction
Part 1: VMware NSX 6.2 on a VxBlock System
Introduction to VMware NSX 6.2 network virtualization
VMware vSphere cluster summary for VMware NSX
Cross VMware vCenter Server option
VMware NSX logical topology with local objects
VMware NSX local topology with universal objects (cross VMware vCenter Server option)
VMware NSX network virtualization with Integrated Data Protection solutions
VMware NSX management cluster
VMware NSX management cluster components
Management cluster specifications
Hardware requirements
VMware vSphere cluster requirements
Custom resource pool requirements
Storage requirements
Networking requirements
VMware NSX edge cluster
Edge cluster components
Edge cluster specifications
Hardware requirements: Cisco UCS C-Series Rack Mount Servers
Hardware requirements: Cisco UCS B-Series Blade Servers
VMware vSphere cluster requirements
Custom resource pool requirements
Storage requirements
Networking requirements
Logical topology of VMware NSX 6.2 local and universal objects
Network requirements for the Cisco UCS Servers
VXLAN tunnel end point
VMware virtual network
Logical network for the Cisco UCS C-Series Rack Mount Servers (edge cluster) using four ESGs
Logical network for the Cisco UCS C-Series Rack Mount Servers (edge cluster) using more than four ESGs
Logical network for the Cisco UCS B-Series Blade Servers (edge cluster)
VMware NSX compute cluster
Hardware requirements
VXLAN tunnel end point
VMware virtual network for the compute cluster
Logical network for the Cisco UCS B-Series Blade Servers (compute cluster)
Part 2: VMware NSX 6.2 with VPLEX on a VxBlock System
Introduction to VMware NSX 6.2 network virtualization with VPLEX
VMware NSX: VMware vCenter Server options for VPLEX
VMware vSphere cluster summary for VMware NSX with VPLEX
Logical topology for a VxBlock System with VMware NSX networking and VPLEX Metro storage
VMware NSX with VPLEX: VMware vSphere non-AMP management cluster
VMware NSX with VPLEX: VMware vSphere cluster components
VMware NSX with VPLEX: VMware vSphere cluster specifications
VMware NSX with VPLEX: AMP hardware requirements
VMware NSX with VPLEX: VMware vSphere cluster requirements
VMware NSX with VPLEX: storage requirements
VMware NSX with VPLEX: networking requirements
Logical network for Cisco UCS B-Series Blade Servers in the primary and secondary sites (stretched management cluster)
VMware NSX with VPLEX: VMware vSphere edge cluster
VMware NSX with VPLEX: VMware vSphere edge cluster components
VMware NSX with VPLEX: VMware vSphere edge cluster specifications
VMware NSX with VPLEX: hardware requirements for Cisco UCS C-Series Rack Mount Servers
VMware NSX with VPLEX: hardware requirements for Cisco UCS B-Series Blade Servers
VMware NSX with VPLEX: VMware vSphere cluster requirements
VMware NSX with VPLEX: custom resource pool requirements
VMware NSX with VPLEX: storage requirements
VMware NSX with VPLEX: networking requirements
VMware NSX with VPLEX: logical topology for edge in a primary site
VMware NSX with VPLEX: logical topology for edge in a secondary site
VMware NSX with VPLEX: VMware vSphere edge service gateway traffic flow
VMware NSX with VPLEX: VXLAN tunnel end points
VMware NSX with VPLEX: network requirements for Cisco UCS Servers
VMware NSX with VPLEX: VMware virtual network
Logical network for the Cisco UCS C-Series Rack Mount Servers in the primary site
Logical network for the Cisco UCS C-Series Rack Mount Servers in a secondary site
Logical network for the Cisco UCS B-Series Blade Servers in a primary site
Logical network for the Cisco UCS B-Series Blade Servers in a secondary site
VMware NSX with VPLEX: compute cluster
VMware NSX with VPLEX: production workloads in a stretched cluster
VMware NSX with VPLEX: VXLAN tunnel end points

Introduction

This document describes the high-level design of VMware NSX network virtualization technologies for stand-alone VxBlock Systems and for a multi-site option with VPLEX support on VxBlock Systems. This document can be used as a reference and read in any order.

This document covers VMware NSX with VMware vSphere running on Cisco UCS C-Series Rack Mount Servers and B-Series Blade Servers for the edge VMware vSphere cluster. Refer to the Release Certification Matrix for more information about supported hardware and software with VMware NSX.

The target audience for this document includes sales engineers, field consultants, and advanced services specialists who want to deploy a virtualized infrastructure using VMware NSX. The Glossary provides related terms, definitions, and acronyms.

Part 1: VMware NSX 6.2 on a VxBlock System

Introduction to VMware NSX 6.2 network virtualization

VMware NSX network virtualization is part of the software-defined data center that offers cloud computing on VMware virtualization technologies. With VMware NSX, virtual networks are programmatically provisioned and managed independently of the underlying hardware. VMware NSX reproduces the entire network model in software, enabling a network topology to be created and provisioned in seconds. Network virtualization abstracts Layer 2 switching and Layer 3 routing operations from the underlying hardware, similar to what server virtualization does for processing power and operating systems.

VMware vSphere cluster summary for VMware NSX

The following table describes the VMware vSphere clusters for VMware NSX:

Management: Includes the VMware NSX Manager appliance and three VMware NSX controllers, which reside in the second generation of the Advanced Management Platform (AMP-2HA Performance or AMP-2S). The VMware NSX controllers provide failover and scalability. The management cluster also includes the VMware vCenter Server, which attaches to the edge and compute clusters and provides the following functionality:
- Allows the VMware NSX Manager appliance and the VMware NSX controllers to be deployed into the management cluster. Deploying VMware NSX on the AMP-2S is supported only with the Windows version of VMware vCenter Server 6.0. The VMware vCenter Server 6 appliance is not supported.
- Allows the VMware NSX edge services gateways (ESGs) and VMware NSX distributed logical routers (DLRs) to be deployed in the edge cluster, which resides on either the Cisco UCS C-Series Rack Mount Servers or the B-Series Blade Servers.

Edge: Includes the ESGs that provide external connectivity to the physical network and the DLRs that provide routing and bridging.

Compute: Includes the production VMs. There can be more than one compute cluster.

The following illustration shows the components and functions in each VMware NSX cluster. More than one compute cluster can exist in the VMware vCenter Server.

Cross VMware vCenter Server option

VMware NSX 6.2 running on VMware vSphere 6.0 allows the data plane to span multiple VMware vCenter Server instances in the same domain. Universal objects (transport zone, logical switches, DLRs, and transit network logical switch) are created on the primary VMware vCenter Server instance and replicated across the secondary VMware vCenter Servers (up to seven). For more information on the cross VMware vCenter Server option, refer to the following sections of the Cross-vCenter NSX Installation Guide:
- Benefits of cross-vCenter NSX
- Support Matrix for NSX Services in cross-vCenter NSX

Local objects (as opposed to universal objects) reside on a single VMware vCenter Server. Local objects do not replicate across multiple VMware vCenter Servers, but they can use features such as Layers 4 through 7 services and Layer 2 bridging.

VMware NSX logical topology with local objects

This local VMware vCenter Server environment supports Layers 4 through 7 services and Layer 2 bridging.

The following illustration shows the local objects (DLRs and the local transit network logical switch) integrated in the VMware NSX single VMware vCenter Server topology.

VMware NSX local topology with universal objects (cross VMware vCenter Server option)

The following diagram illustrates a cross VMware vCenter Server topology with universal objects (DLRs and the universal transit network logical switch) integrated in the VMware NSX topology. This topology cannot support Layers 4 through 7 services and Layer 2 bridging. However, the universal production logical switch spans VMware vCenter Servers A and B.

VMware NSX network virtualization with Integrated Data Protection solutions

VMware NSX supports the following Integrated Data Protection solution:
- VPLEX, which deploys a single datastore that stretches between two sites within the same VMware vSphere cluster.

VMware NSX is supported with the Windows version of VMware vCenter Server 6.0. The VMware vCenter Server 6 appliance has not been tested and is not supported.
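As a practical complement to the local and universal topologies described above, the object types can be checked from the primary VMware NSX Manager through its REST API. The following is a minimal sketch, assuming the NSX-v 6.2 endpoint GET /api/2.0/vdn/scopes for transport zones and an isUniversal field in the XML response; the hostname, credentials, and element names are assumptions and may differ in a given NSX build.

```python
# Hedged sketch: list transport zones on the primary NSX Manager and show which
# are universal versus local. Endpoint, element names, hostname, and credentials
# are assumptions/placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsx-manager.example.local"   # placeholder
AUTH = ("admin", "password")                # placeholder

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/scopes",
    auth=AUTH,
    verify=False,   # lab-only; use a trusted CA certificate in production
)
resp.raise_for_status()

for scope in ET.fromstring(resp.content).findall("vdnScope"):
    name = scope.findtext("name")
    universal = scope.findtext("isUniversal")   # assumed field name
    print(f"Transport zone: {name}, universal: {universal}")
```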

VMware NSX management cluster

The management cluster consists of the management and control planes for VMware NSX. The VMware NSX Manager handles the management plane and the VMware NSX controllers handle the control plane.

VMware NSX management cluster components

The VMware NSX management cluster on AMP-2HA Performance or AMP-2S consists of the following components:

VMware NSX Manager appliance: This central management component of VMware NSX is paired with a dedicated VMware vCenter Server. Use the web client to configure the NSX Manager. A single domain can span multiple VMware vCenter Servers running VMware vSphere 6.0. One primary VMware NSX Manager and up to seven secondary VMware NSX Managers are supported in a single domain.

VMware NSX controllers: Three VMware NSX controllers are joined in a cluster to provide failover and scalability. VMware NSX controllers are deployed in hybrid mode to support both small and large Virtual Extensible LAN (VXLAN) environments. In a cross-vCenter Server configuration, the controllers are deployed only on the primary VMware NSX Manager. The controller metadata is replicated across the secondary VMware NSX Managers to allow synchronization of the control plane across the VMware vCenter Servers.

Universal components: The universal components consist of the universal transport zone, universal logical switches, and universal DLR. These components are created on the primary VMware NSX Manager and not on the secondary VMware NSX Managers.

Management cluster specifications

The following table lists the specifications for the VMware NSX Manager and VMware NSX controllers when VPLEX is not configured:

Quantity: Manager: one to eight VMs (with VMware NSX 6.2, more than one VMware NSX Manager can be deployed in a single domain). Controllers: three VMs.
Location: Management cluster for both.
Hardware: AMP-2HA Performance or AMP-2S with a minimum of three servers for both.
Size: Manager: four vCPU (eight vCPU if there are more than 256 hypervisors), 16 GB RAM (24 GB RAM if there are more than 256 hypervisors), 60 GB disk. Controllers: four vCPU (2048 MHz reservation), 4 GB RAM, 20 GB disk.
Network: vcesys_esx_mgmt (105) for both.

Availability: VMware High Availability for both.
Distribution: Manager: OVA. Controllers: anti-affinity rules enabled on the AMP cluster.

Hardware requirements

VxBlock Systems support VMware NSX virtual networking with AMP-2HA Performance or AMP-2S with a minimum of three servers. No other AMP type is supported with VMware NSX. AMP-2HA Performance or AMP-2S enables the VMware NSX Manager and the three VMware NSX controllers to be dedicated to a VMware vSphere ESXi host for redundancy and scalability. No special cabling is required.

The following illustration shows the VMware NSX Manager and the three VMware NSX controllers.

VMware vSphere cluster requirements

The management cluster requires VMware High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS) to provide VM protection against a VMware vSphere ESXi host failure and to balance VM workloads in the cluster. The following table shows the rules applied to DRS:

DRS affinity rules (same host): For the VMware NSX Manager, affinity rules are not applied to the management cluster, regardless of the number of VMware NSX Managers. For the VMware NSX controllers, affinity rules are not applied to the management cluster, because the controllers should not be on the same VMware vSphere ESXi host.

Anti-affinity rules (separate host): For the VMware NSX Manager, anti-affinity rules are applied to the management cluster if more than one VMware NSX Manager exists. For the VMware NSX controllers, anti-affinity rules are applied to the management cluster to keep each controller on a separate VMware vSphere ESXi host.

Custom resource pool requirements

The VMware NSX management cluster does not require custom resource pools. However, for heavy workloads, create memory reservations for the VMware NSX Manager.
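Although the controller anti-affinity rule described above is normally configured through the VMware vSphere Web Client, it can also be created programmatically. The following pyVmomi sketch is illustrative only: the vCenter address, credentials, cluster name, and controller VM names are placeholder assumptions, and task handling is omitted for brevity.

```python
# Hedged sketch: create a DRS anti-affinity rule that keeps the three NSX
# controller VMs on separate ESXi hosts in the AMP management cluster.
# Cluster, VM names, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())  # lab-only
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "AMP-Management")   # placeholder
controllers = [find_by_name(vim.VirtualMachine, n)
               for n in ("NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3")]

rule = vim.cluster.AntiAffinityRuleSpec(vm=controllers, enabled=True,
                                        name="nsx-controller-anti-affinity")
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)  # wait for the task in real code
Disconnect(si)
```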

Storage requirements

The management cluster does not require a specific disk layout other than the standard disk layout of the AMP-2HA Performance or AMP-2S. The VMware vSphere ESXi hosts that are connected to the management cluster use the VNXe storage array. All the VMware NSX components, including the VMware NSX Manager and controllers, are deployed across three separate data stores to protect against LUN corruption and to improve performance and resilience.

Networking requirements

No special network requirements exist beyond those of the AMP-2HA Performance or AMP-2S. The VMware NSX management traffic (VMware NSX Managers) and control plane traffic (VMware NSX controllers) are on the same network segment as the VMware vSphere ESXi management traffic to improve performance.

VMware NSX edge cluster

The VMware NSX edge cluster connects to the physical network and provides routing and bridging. The edge cluster supports either the Cisco UCS C-Series Rack Mount Servers (recommended) or B-Series Blade Servers.

Edge cluster components

The following table explains the VMware NSX edge cluster components:

ESGs: Provide connectivity to the physical network.
DLRs: Provide routing and bridging.
Universal components: The only universal object in the edge cluster is the DLR. The ESGs support multiple internal connections that connect to a local and a universal DLR at the same time.

The following illustration shows the VMware NSX components that belong to the edge cluster and the components that are assigned to each VMware vSphere ESXi host. The illustration shows the minimum number of four ESGs.

Edge cluster specifications

The following table lists the specifications for the VMware NSX ESGs and the VMware NSX DLRs:

Quantity: ESGs: four VMs (six or eight are optional), active/active with ECMP. DLRs: two VMs, active/active with ECMP.
Location: Edge cluster for both.
Hardware: Cisco UCS C-Series Rack Mount Servers (recommended) or Cisco UCS B-Series Blade Servers for both.
Size: ESG: four vCPU, 1 GB RAM, 2.15 GB disk. DLR: four vCPU, 512 MB RAM, 2.15 GB disk.
Network: ESG: two external interfaces for connections to the north-south physical switches, one internal interface for the connection to the local DLR, and one internal interface for the connection to the universal DLR (if the universal DLR is deployed). DLR: one uplink interface for the connection to the ESG (for local and universal DLRs).
Availability: VMware HA for both.

14 Specification VMware NSX ESG VMware NSX DLR Distribution Anti-affinity rules are enabled to separate the ESG VM pairs in the edge cluster. Affinity rules maintain two ESGs per host, except for the first two hosts. Anti-affinity rules are enabled to keep both DLR VMs separate in the edge cluster. Anti-affinity rules are enabled across the first two hosts in the edge cluster. Hardware requirements: Cisco UCS C-Series Rack Mount Servers The following table describes the hardware requirements for VMware NSX on the Cisco UCS C-Series Rack Mount Servers. Component Cisco UCS C-Series Rack Mount Servers Performance enhancements CPU and memory UCS-FEX Requirements The edge cluster uses four Cisco UCS C-Series M4 Rack Mount Servers, regardless of the number of ESGs. However, the number of CPU sockets and Intel X520 cards and the amount of memory depends on the number of ESGs. Each edge server uses a dual-port 10 GB Intel X520 (SFP+) card to support full line rate speeds for the ESGs (VXLAN offloading).two Intel X520 (SFP+) cards are installed in each edge server if more than four ESGs are deployed. Having a second Intel X520 (SFP+) card adds the additional bandwidth needed for the additional ESGs. Configuration A (Typical for most installations) Four ESGs Single socket CPU per edge server 64 GB of memory per edge server Configuration B Six or eight ESGs Dual socket CPU per edge server 128 GB of memory per edge server Each edge server includes a dual-port Cisco VIC 1227 (SFP+) card to take advantage of the Cisco SingleConnect technology. Each edge server connects directly to the Cisco Fabric Interconnect. Cisco UCS Manager can manage the Cisco UCS C-Series M4 Rack Mount servers using service profiles. Only the VMware vsphere core management infrastructure uses the service profiles. Part 1: VMware NSX 6.2 on a VxBlock System 14

Physical cabling:

Server to Cisco Nexus 9300 Series Switch connectivity: Each edge server uses the Intel X520 (SFP+) card(s) to connect to the Cisco Nexus 9300 Series Switches. The number of ESGs determines whether one or two Intel X520 (SFP+) cards are installed in each server. The physical cable configurations are as follows:

Configuration A (typical for most installations): four ESGs.
- One Intel X520 card per edge server
- Intel X520 port A connected to Cisco Nexus 9300 Series Switch A
- Intel X520 port B connected to Cisco Nexus 9300 Series Switch B
- Two connections per edge server
- Eight connections from all four edge servers to the Cisco Nexus 9300 Series Switches (four per switch)

Configuration B: six or eight ESGs.
- Two Intel X520 cards per edge server
- Intel X520-A port A connected to Cisco Nexus 9300 Series Switch A
- Intel X520-A port B connected to Cisco Nexus 9300 Series Switch B
- Intel X520-B port A connected to Cisco Nexus 9300 Series Switch A
- Intel X520-B port B connected to Cisco Nexus 9300 Series Switch B
- Four connections per edge server
- 16 connections from all four edge servers to the Cisco Nexus 9300 Series Switches (eight per switch)

Server to Cisco UCS Fabric Interconnect connectivity: Each edge server uses the Cisco VIC 1227 (SFP+) card to connect to the Cisco Fabric Interconnects. The physical cable configuration is as follows:
- VIC 1227 port A connects to Cisco Fabric Interconnect A
- VIC 1227 port B connects to Cisco Fabric Interconnect B
- Two connections per edge server
- Eight connections from all four edge servers to the Cisco UCS Fabric Interconnects (four per Cisco Fabric Interconnect)

Hardware requirements: Cisco UCS B-Series Blade Servers

The following table describes the hardware requirements for VMware NSX on the Cisco UCS B-Series Blade Servers.

Cisco UCS B-Series Blade Servers: The edge cluster uses four to six Cisco UCS B-Series Blade Servers, depending on the number of ESGs deployed, as follows:
Configuration A (typical for most installations): four ESGs, four Cisco UCS B-Series Blade Servers.
Configuration B: six ESGs, five Cisco UCS B-Series Blade Servers.
Configuration C: eight ESGs, six Cisco UCS B-Series Blade Servers.

Performance enhancements: Cisco does not offer VIC cards that provide full line rate speeds. Each edge Cisco UCS B-Series Blade Server uses a Cisco VIC 1340 and a VIC 1380 card to support the VXLAN offloading capabilities, which provide VXLAN TCP checksum and segmentation to improve CPU performance. In addition, using two VIC cards allows load balancing of traffic types to provide extra network performance. Using the VXLAN offloading capabilities provides up to half the full line rate, at about 5 Gb/s of bandwidth per ESG. To provide full line rate speeds, use the Cisco UCS C-Series Rack Mount Servers for the edge cluster. The VXLAN offload feature is not enabled due to a driver limitation; in the meantime, RSS is enabled on the card to provide the best performance.

Physical cabling: Each edge server uses the Cisco VIC 1340 and VIC 1380 cards to virtually connect to the Cisco Fabric Interconnects. Physical 10 Gb/s cables connect the Cisco Fabric Interconnects to the top-of-rack (ToR) Cisco Nexus 9300 Series Switches in a non-virtual port channel used for external traffic. (A Cisco limitation exists where dynamic IP routing with peers on the virtual port channel (vPC) VLAN is not supported.) The number of ESGs determines the number of 10 Gb/s cable links, as follows:

Configuration A (typical for most installations): four ESGs. Four 10 Gb/s cables connect Cisco Fabric Interconnect A to Cisco Nexus 9300 Series Switch A, and four 10 Gb/s cables connect Cisco Fabric Interconnect B to Cisco Nexus 9300 Series Switch B. A total of eight connections exist from the Cisco Fabric Interconnects to the Cisco Nexus 9300 Series Switches (four per switch).

Configuration B: six ESGs. Six 10 Gb/s cables connect Cisco Fabric Interconnect A to Cisco Nexus 9300 Series Switch A, and six 10 Gb/s cables connect Cisco Fabric Interconnect B to Cisco Nexus 9300 Series Switch B. A total of 12 connections exist from the Cisco Fabric Interconnects to the Cisco Nexus 9300 Series Switches (six per switch).

Configuration C: eight ESGs. Eight 10 Gb/s cables connect Cisco Fabric Interconnect A to Cisco Nexus 9300 Series Switch A, and eight 10 Gb/s cables connect Cisco Fabric Interconnect B to Cisco Nexus 9300 Series Switch B. A total of 16 connections exist from the Cisco Fabric Interconnects to the Cisco Nexus 9300 Series Switches (eight per switch).

VMware vSphere cluster requirements

The edge cluster requires VMware HA and VMware vSphere DRS to provide VM protection against a VMware vSphere ESXi host failure and to balance VM workloads in the cluster. The following table provides the DRS rules:

Affinity rules (same host): For the ESGs, affinity rules are applied to the edge cluster to allow each pair of VMware NSX ESGs to be assigned to its own VMware vSphere ESXi host. For the DLRs, DRS affinity rules do not need to be configured, because only two DLRs exist in HA mode; DRS affinity rules are applied to the edge cluster to allow each DLR of the pair to be assigned to its own VMware vSphere ESXi host. For the VMware NSX controllers, affinity rules are not applied to the edge cluster, because the controllers should not be on the same VMware vSphere ESXi host.

Anti-affinity rules (separate host): For the ESGs, anti-affinity rules are applied to the edge cluster so that each pair of VMware NSX ESGs does not cross VMware vSphere ESXi hosts. For the DLRs, DRS anti-affinity rules are applied to the edge cluster so that each VMware NSX DLR pair does not cross VMware vSphere ESXi hosts. For the VMware NSX controllers, anti-affinity rules are applied to the edge cluster to keep each controller on a separate VMware vSphere ESXi host.

Custom resource pool requirements

The edge cluster does not require custom resource pools.

Storage requirements

The edge cluster has the following storage requirements:

Data stores: The VMware vSphere ESXi hosts that are connected to the edge cluster use the VNX, XtremIO, or VMAX storage arrays. The VMware NSX components, including the ESGs and DLRs, are deployed across two separate data stores to protect against LUN corruption and to improve performance and resilience.

Disk layout: No specific disk layout is necessary. VMware NSX supports the standard disk layout of the VNX, XtremIO, or VMAX storage arrays.

Networking requirements

The following table describes the network connectivity for the VMware NSX edge components:

External: North-south connectivity exists between the ESGs and the Cisco Nexus 9300 Series Switches. Each ESG uses two separate uplink interfaces that connect independently to an uplink edge distributed port group on the edge VMware VDS.

Internal and uplink: The internal connectivity between the ESG and the DLR is as follows:
- Each ESG uses one internal interface to connect to the internal VXLAN local transit logical switch to reach the local DLR. If NSX uses the cross-vCenter Server option, a second internal interface connects to the internal VXLAN universal transit logical switch to reach the universal DLR. (By default, the local and universal DLRs are deployed only in the primary VMware vCenter Server.)
- The local DLR uses an uplink interface to connect to the VXLAN local transit logical switch to reach the ESG. If NSX uses the cross-vCenter Server option, an uplink interface connects the universal DLR to the uplink VXLAN universal transit logical switch to reach the ESG.

Dynamic routing and Layer 3 termination: The ESGs use Equal Cost Multi-Pathing (ECMP) to increase bandwidth by load balancing traffic across equal-cost multiple paths and to provide fault tolerance for failed paths. The ESGs use eBGP to peer with the ToR Cisco Nexus 9300 Series Switches. On the Cisco Nexus 9300 Series Switches, the two edge VLANs have switch virtual interfaces (SVIs): the Edge01 SVI on switch A and the Edge02 SVI on switch B. The DLRs use iBGP to peer with the ESGs. All other VLANs internal to the VxBlock System terminate at the ToR Cisco Nexus 9300 Series Switches.

Layer 2 bridging: Layer 2 bridging is an optional configuration to support VXLAN-to-VLAN connectivity. Layer 2 bridging works with a local DLR. Layer 2 bridging is not supported with a universal DLR.
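To confirm that an ESG is actually peering over eBGP with the ToR switch SVIs as described above, the BGP configuration can be read back from the VMware NSX Manager REST API. This is a hedged sketch that assumes the NSX-v 6.2 endpoint GET /api/4.0/edges/{edge-id}/routing/config/bgp and the XML element names shown; the NSX Manager address, edge ID, and credentials are placeholders.

```python
# Hedged sketch: read back the BGP configuration of one ESG to confirm eBGP
# peering with the ToR Cisco Nexus 9300 SVIs. Endpoint and element names are
# assumptions; hostname, edge ID, and credentials are placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsx-manager.example.local"   # placeholder
EDGE_ID = "edge-1"                          # placeholder ESG identifier
AUTH = ("admin", "password")                # placeholder

resp = requests.get(
    f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/bgp",
    auth=AUTH,
    verify=False,   # lab-only
)
resp.raise_for_status()

bgp = ET.fromstring(resp.content)
print("BGP enabled:", bgp.findtext("enabled"))
print("Local AS:", bgp.findtext("localAS"))
for neighbour in bgp.iter("bgpNeighbour"):   # assumed element name
    print("Peer:", neighbour.findtext("ipAddress"),
          "remote AS:", neighbour.findtext("remoteAS"))
```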

Logical topology of VMware NSX 6.2 local and universal objects

The NSX edge cluster uses the local DLR and the cross VMware vCenter Server option, as shown in the following illustration.

Network requirements for the Cisco UCS Servers

The following table describes the network requirements for Cisco UCS C-Series Rack Mount Servers and Cisco UCS B-Series Blade Servers:

VLAN IDs (both server types): VMware NSX requires three VLANs/SVIs on the Cisco Nexus 9300 Series Switches: two external edge VLANs/SVIs are used for external traffic (on/off ramp), and one transport VLAN is used to pass VXLAN traffic. The external traffic traverses north-south between the edge servers and the Cisco Nexus 9300 Series Switches. The transport VLAN is Layer 2 and does not have an SVI on the Cisco Nexus 9300 Series Switches. With Cisco UCS C-Series Rack Mount Servers, the external edge traffic VLAN IDs do not need to be created in the Cisco UCS Manager; however, because the compute blades pass VXLAN traffic, the VXLAN transport VLAN ID must be added to the Cisco UCS Manager. With Cisco UCS B-Series Blade Servers, the external edge traffic and VXLAN transport traffic VLAN IDs must be created in the Cisco UCS Manager.

VXLAN Tunnel End Points (VTEPs): The number of VTEPs deployed to each VMware vSphere ESXi host depends on the number of dvUplinks configured on the VMware VDS that has the transport distributed port group created. Because there is more than one VTEP on each host, Load Balance SRCID mode is enabled to load balance VXLAN traffic.

Cisco UCS C-Series Rack Mount Servers: Although the Link Aggregation Control Protocol (LACP) is supported on the Cisco UCS C-Series Rack Mount Servers, Load Balance SRCID mode ensures a consistent compute blade configuration. The number of ESGs determines the number of uplinks (one or two Intel X520 cards) created on the edge VMware VDS:
Configuration A (typical for most installations): four ESGs, one Intel X520 card per host (two dvUplinks), two VTEP/VMkernel distributed port groups per edge host.
Configuration B: six or eight ESGs, two Intel X520 cards per host (four dvUplinks), four VTEP/VMkernel distributed port groups per edge host.

Cisco UCS B-Series Blade Servers: LACP is not supported for the Cisco UCS B-Series Blade Servers. Regardless of the number of ESGs, two uplinks are created on the edge VMware VDS:
Configuration A (typical for most installations): four ESGs, four Cisco UCS B-Series Blade Servers.
Configuration B: six ESGs, five Cisco UCS B-Series Blade Servers.
Configuration C: eight ESGs, six Cisco UCS B-Series Blade Servers.

VXLAN tunnel end point

The number of VXLAN Tunnel End Points (VTEPs) deployed to each VMware vSphere ESXi host depends on the number of dvUplinks configured on the VMware VDS that has the transport distributed port group. Because more than one VTEP is on each host, Load Balance SRCID mode is enabled to load balance VXLAN traffic. For the Cisco UCS C-Series Rack Mount Servers, LACP is supported; however, Load Balance SRCID mode ensures a consistent compute blade configuration. The number of ESGs determines the number of uplinks (one or two Intel X520 cards) created on the edge VMware VDS:

Configuration A (typical for most installations): four ESGs, one Intel X520 card per host (two dvUplinks), two VTEP/VMkernel distributed port groups per edge host.
Configuration B: six or eight ESGs, two Intel X520 cards per host (four dvUplinks), four VTEP/VMkernel distributed port groups per edge host.

VMware virtual network

For Cisco UCS C-Series Rack Mount Servers, two VDS are created for the edge cluster. For Cisco UCS B-Series Blade Servers, three VDS are created for the edge cluster. The following table describes each VDS:

Cisco UCS C-Series Rack Mount Servers:
- DVswitch01-Edge-Core manages the VMware vSphere ESXi management and VMware vSphere vMotion traffic types. VMware vSphere ESXi management is on a VMware VDS instead of a VMware VSS to improve VMware NSX network performance.
- DVswitch02-Edge-NSX manages the external edge (Edge01 and Edge02), transport, and optional Layer 2 bridging traffic types. For four ESGs, the VMware VDS requires two dvUplinks to connect to the Cisco Nexus 9300 Series Switches, which creates two VTEP VMkernel distributed port groups per edge host. For more than four ESGs, the VMware VDS uses four dvUplinks to connect to the Cisco Nexus 9300 Series Switches, which creates four VTEP VMkernel distributed port groups per edge host. Jumbo frames (9000) are enabled on the DVswitch02-Edge-NSX switch and on the VXLAN transport distributed port group for VXLAN transport traffic.

Cisco UCS B-Series Blade Servers:
- DVswitch01-Edge-Core manages the VMware vSphere ESXi management and VMware vSphere vMotion traffic types. VMware vSphere ESXi management is on a VMware VDS instead of a VMware VSS to improve VMware NSX network performance.
- DVswitch02-Edge-NSX-Core manages the VXLAN transport (east-west) and optional Layer 2 bridging traffic types. Jumbo frames (9000) are enabled on the DVswitch02-Edge-NSX-Core switch and on the VXLAN transport distributed port group for VXLAN transport traffic.
- DVswitch03-NSX-External manages edge traffic for north-south connectivity.
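The jumbo frame requirement called out above can be verified (or corrected) programmatically. The following pyVmomi sketch is a minimal example, assuming the VDS name DVswitch02-Edge-NSX used in this design; the vCenter address and credentials are placeholders, and task completion handling is omitted.

```python
# Hedged sketch: verify, and if needed raise, the MTU on the edge NSX VDS so
# that VXLAN transport traffic can use jumbo frames. VDS name, vCenter address,
# and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())  # lab-only
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "DVswitch02-Edge-NSX")  # placeholder name
view.Destroy()

print("Current MTU:", dvs.config.maxMtu)
if dvs.config.maxMtu < 9000:
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion, maxMtu=9000)
    dvs.ReconfigureDvs_Task(spec=spec)   # wait for the task in real code

Disconnect(si)
```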

Logical network for the Cisco UCS C-Series Rack Mount Servers (edge cluster) using four ESGs

The following illustration shows the network layout for the hosts running on Cisco UCS C-Series Rack Mount Servers in the edge cluster with four ESGs.

Logical network for the Cisco UCS C-Series Rack Mount Servers (edge cluster) using more than four ESGs

Adding more than four ESGs to the design requires a second Intel X520 card in each of the edge Cisco UCS C-Series Rack Mount Servers. This means that four uplinks, instead of two, are added to the VMware vSphere Distributed Switch. The number of VTEPs is based on the number of uplinks on the VMware vSphere Distributed Switch, which is four VTEPs in this design. The following illustration shows the network layout for the hosts running on Cisco UCS C-Series Rack Mount Servers in the edge cluster with six or eight ESGs.

Logical network for the Cisco UCS B-Series Blade Servers (edge cluster)

The following illustration shows the VMware virtual network layout for the hosts running on Cisco UCS B-Series Blade Servers in the edge cluster.

VMware NSX compute cluster

The VMware NSX compute cluster contains all the production VMs.

Hardware requirements

For best performance, make sure that each NSX compute cluster has network adapters that support VXLAN offload capabilities, such as the Cisco VIC 1340 and VIC 1380.

VXLAN tunnel end point

The number of VXLAN Tunnel End Points (VTEPs) deployed to each VMware vSphere ESXi host depends on the number of dvUplinks configured on the VMware VDS that has the transport distributed port group created. Because more than one VTEP is on each host, Load Balance SRCID mode is enabled to load balance VXLAN traffic. The Cisco UCS B-Series Blade Server design has two VTEPs deployed to each VMware vSphere ESXi host within the compute cluster. LACP is not supported. Regardless of the number of ESGs deployed, the number of dvUplinks is two.

VMware virtual network for the compute cluster

More than one compute cluster can exist in the VMware vCenter Server. A single VMware VDS spans multiple compute clusters; however, additional VMware VDS can be deployed for a compute cluster or a set of compute clusters. The single compute VDS manages the VMware vSphere vMotion, NFS, and VXLAN transport traffic types. The VXLAN transport and NFS port groups and the VMware VDS are configured for jumbo frames (MTU 9000). By default, the VMware vSphere ESXi management traffic resides on the VMware standard switch. However, you can put the VMware vSphere ESXi management traffic on the VMware VDS.
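To confirm that each compute host received its two VTEP VMkernel interfaces with the expected jumbo MTU, the VTEP adapters can be listed per host. The following pyVmomi sketch assumes that NSX-v places the VTEP VMkernel adapters on a dedicated "vxlan" TCP/IP stack; the vCenter address and credentials are placeholders.

```python
# Hedged sketch: list the VTEP vmkernel interfaces on each host to confirm the
# expected addressing and MTU. The "vxlan" netstack key is an assumption about
# how NSX-v tags VTEP vmkernel ports; vCenter details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())  # lab-only
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        # VTEP vmkernel ports are expected on the "vxlan" TCP/IP stack.
        if vnic.spec.netStackInstanceKey == "vxlan":
            print(host.name, vnic.device, vnic.spec.ip.ipAddress, "MTU:", vnic.spec.mtu)
view.Destroy()
Disconnect(si)
```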

Logical network for the Cisco UCS B-Series Blade Servers (compute cluster)

The following illustration shows the VMware virtual network layout for the hosts in the compute cluster.

Part 2: VMware NSX 6.2 with VPLEX on a VxBlock System

Introduction to VMware NSX 6.2 network virtualization with VPLEX

On a VxBlock System, VMware NSX 6.2 supports a VPLEX multi-site solution using stretched VMware vSphere clusters. The VMware vSphere clusters stretch across two sites, which is called a VMware vSphere Metro Storage Cluster (vMSC). The first site is the primary site and the second site is the secondary site.

With VMware NSX networking and VPLEX Metro storage on a VxBlock System, the core VMware vSphere and NSX component VMs do not reside on the AMP. Instead, they reside in the stretched management cluster that uses Cisco UCS B-Series Blade Servers. Therefore, you can use AMP-2HA Performance or AMP-2S (two node) with VMware NSX with VPLEX.

Deploying VPLEX with VMware NSX provides the following benefits:
- Business continuity
- Logical networking and security (flexibility, automation, consistent networking across sites without spanning physical VLANs, and microsegmentation)

The design for VPLEX on a VxBlock System does not change with VMware NSX. Refer to the Integrated Data Protection Design Guide for additional details of the VPLEX design.

VMware NSX: VMware vCenter Server options for VPLEX

Deploying VMware NSX 6.2 on a VxBlock System supports the following four VPLEX options, based on existing VMware vCenter Server designs. The following table outlines the available options and requirements:

Option 1 (default): Uses AMP-2S (two node) or AMP-2HA Performance. A stretched VMware vSphere cluster with hosts from both sites is required for all hosts within the compute and stretched management clusters. Neither the edge nor the AMP VMware vSphere clusters are stretched between sites; instead, the AMP carries only the non-vSphere VMs (out-of-band VMs). AMP-2RP is not supported.
Requirements: Both VxBlock Systems share one VMware vCenter Server. Both AMPs run only out-of-band VMs. SQL, the PSCs, and VUM are moved to the VPLEX distributed volume. If using AMP-2S, VMware NSX 6.2 supports only the Windows version of VMware vCenter Server.

Option 2: Uses AMP-2RP. A stretched VMware vSphere cluster with hosts from both sites is required for all hosts within the compute and stretched management clusters. Neither the edge nor the AMP VMware vSphere clusters are stretched between sites; instead, the AMP carries only the non-vSphere VMs (out-of-band VMs).
Requirements: Supported only with AMP-2RP. A VMware vCenter Server appliance runs on each local AMP. The Windows version of VMware vCenter Server (SQL, PSCs, and VUM) from the primary site is moved to a VPLEX distributed volume. The Windows version of VMware vCenter Server and its associated VMs from the second site are deleted. All compute hosts in both sites are managed by the Windows version of VMware vCenter Server.

Option 3: Uses AMP-2P (single server option). A stretched VMware vSphere cluster with hosts from both sites is required for all hosts within the compute and stretched management clusters. Neither the edge nor the AMP VMware vSphere clusters are stretched between sites; instead, the AMP carries only the non-vSphere VMs (out-of-band VMs).
Requirements: AMP-2P must be supported on the VxBlock System. Only the out-of-band workloads, which support less critical applications, exist within the AMP. This option accommodates the lowest-cost AMP and six additional Cisco UCS B-Series Blade Servers to support the critical management workloads. Only the core VMware vSphere and VMware NSX component VMs move to the VPLEX distributed volume. A single VMware vCenter Server manages the compute hosts in both sites.

Option 4: A mixed environment of stretched and non-stretched VMware vSphere clusters.
Requirements: VMware vCenter Server (Microsoft SQL Server, PSCs, VUM) runs on both local AMPs and manages the local AMP and the compute hosts that are not participating in a VPLEX stretched cluster. A third VMware vCenter Server (SQL, PSCs, and VUM) runs on a VPLEX distributed volume and manages the compute hosts on both sites that are participating in the VMware vSphere stretched cluster.

VMware vSphere cluster summary for VMware NSX with VPLEX

The functionality of the management, edge, and compute clusters does not change from the standard design without VPLEX. A new management cluster, separate from the AMP management cluster, is created for all the core vSphere and NSX Manager/controller VMs. Only the compute and non-AMP management VMware vSphere clusters require a dedicated stretched cluster that crosses both sites. The following illustration shows the components and functions in each VPLEX stretched cluster. More than two VMware vSphere compute clusters can exist in the VMware vCenter Server.

Logical topology for a VxBlock System with VMware NSX networking and VPLEX Metro storage

The following illustration shows a high-level logical topology of VMware NSX networking and VPLEX Metro storage. (The diagram shows Fibre Channel for WAN replication; VPLEX supports both Fibre Channel and IP for WAN replication.)

VMware NSX with VPLEX: VMware vSphere non-AMP management cluster

The management (non-AMP) cluster consists of the management and control planes for VMware NSX and all the core vSphere VMs. The VMware NSX Manager handles the management plane and the VMware NSX controllers handle the control plane.

VMware NSX with VPLEX: VMware vSphere cluster components

VMware NSX with VPLEX removes the limitation of using a single AMP model. Regardless of which VPLEX option you choose, the core VMware vSphere components (VMs) are located in a stretched management cluster that resides on Cisco UCS B-Series Blade Servers. This is also the standard design for VPLEX on VxBlock Systems without VMware NSX.

VMware NSX with VPLEX: VMware vSphere cluster specifications

The following table lists the specifications for the VMware NSX Manager and VMware NSX controllers when VPLEX is part of the VxBlock System:

Quantity: Manager: one VM (with VMware NSX 6.2 and VPLEX, more than one VMware NSX Manager is not necessary because the VMware vSphere clusters are stretched across sites). Controllers: three VMs.
Location: VMware vSphere stretched non-AMP management cluster (VPLEX distributed volume) for both.
Hardware: Cisco UCS B-Series Servers (non-AMP, only with VPLEX) for both.
Network: vcesys_management (136) for both (not the same VLAN as VMware ESXi management).
Availability: VMware HA and DRS for both.

VMware NSX with VPLEX: AMP hardware requirements

Stretching the AMP VMware vSphere clusters between sites is not supported due to hardware limitations. This configuration is not supported with VMware NSX.

VMware NSX with VPLEX: VMware vSphere cluster requirements

The VMware vSphere stretched non-AMP management cluster runs in an active/standby configuration. The VMware NSX Manager, controllers, and core vSphere components run only on VMware vSphere ESXi hosts within the primary site. If all three VMware vSphere ESXi hosts fail within the primary site, VMware HA restarts them on the VMware ESXi hosts that reside in the secondary site within the stretched management cluster.
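The primary-site placement described above is typically expressed as a DRS "should run on hosts in group" rule built from a VM group and a host group. The following pyVmomi sketch is illustrative only: the cluster, host, and VM names are placeholder assumptions, and the rule is created as non-mandatory so that VMware HA can still restart the VMs on secondary-site hosts.

```python
# Hedged sketch: create DRS VM and host groups plus a non-mandatory
# "should run on hosts in group" rule that keeps the NSX Manager, controllers,
# and core vSphere VMs on the primary-site hosts of the stretched management
# cluster. Cluster, host, and VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())  # lab-only
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Stretched-Mgmt")         # placeholder
primary_hosts = [find_by_name(vim.HostSystem, n)
                 for n in ("mgmt-esx01-siteA", "mgmt-esx02-siteA", "mgmt-esx03-siteA")]
nsx_vms = [find_by_name(vim.VirtualMachine, n)
           for n in ("NSX_Manager", "NSX_Controller_1", "NSX_Controller_2", "NSX_Controller_3")]

vm_group = vim.cluster.VmGroup(name="nsx-mgmt-vms", vm=nsx_vms)
host_group = vim.cluster.HostGroup(name="primary-site-hosts", host=primary_hosts)
rule = vim.cluster.VmHostRuleInfo(name="nsx-mgmt-on-primary", enabled=True,
                                  mandatory=False,                     # "should", not "must"
                                  vmGroupName="nsx-mgmt-vms",
                                  affineHostGroupName="primary-site-hosts")

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[vim.cluster.GroupSpec(operation="add", info=vm_group),
               vim.cluster.GroupSpec(operation="add", info=host_group)],
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)   # wait for the task in real code
Disconnect(si)
```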

The VMware vSphere ESXi management VLAN is Layer 3 and therefore is not stretched across sites. The following table shows the rules applied to DRS:

DRS affinity rules (same host): For the VMware NSX Manager, affinity rules are not applied to the management cluster, because only a single NSX Manager is supported for a multi-site option with VPLEX. For the VMware NSX controllers, affinity rules are not applied to the management cluster, because the controllers should not be on the same VMware vSphere ESXi host.

Anti-affinity rules (separate host): For the VMware NSX Manager, anti-affinity rules are not applied to the management cluster, because only a single NSX Manager is supported for a multi-site option with VPLEX. For the VMware NSX controllers, anti-affinity rules are applied to the management cluster to keep each controller on a separate VMware vSphere ESXi host within the primary site.

The following illustration shows the VMware NSX Manager and the three VMware NSX controllers where the DRS rules are configured on the stretched management cluster.

VMware NSX with VPLEX: storage requirements

The VMware vSphere management cluster uses the management VPLEX distributed volume. Storage requirements for VMware NSX with VPLEX are the same as the storage requirements for VPLEX on VxBlock Systems.

VMware NSX with VPLEX: networking requirements

The VMware NSX management and control traffic shares the same network segment (VLAN) as the VMware vCenter Server components (SQL, PSC, vCenter, VUM). However, this traffic is not on the same network segment (VLAN) as the VMware vSphere ESXi hosts. This is only the case when VPLEX is configured with VMware NSX.

All VMkernel interfaces (VMware vSphere ESXi management, vMotion, FT, and VTEPs) are configured as Layer 3 interfaces, which means that none of the network segments (VLANs) are stretched across the two VxBlock Systems. By default, the management VLAN is enabled only within the ToR switches at the primary site; the secondary site always has the management VLAN disabled within the ToR switches. If the primary site fails, the management VLAN is enabled on the ToR switches in the secondary site.

The VMware vSphere ESXi hosts on the Cisco UCS B-Series Blade Servers are configured similarly to the compute hosts. The VMware vSphere Standard Switch is used for the VMware vSphere ESXi management traffic, and the VMware vSphere Distributed Switch is used for Layer 3 vMotion. The Management VM port group is created on the local VMware vSphere Standard Switch.

Logical network for Cisco UCS B-Series Blade Servers in the primary and secondary sites (stretched management cluster)

The following diagrams show how the virtual networking is configured on the VMware vSphere ESXi hosts in the primary and secondary sites within the stretched management cluster. Each site uses a different VLAN for the management VMkernel port group.

Logical network for Cisco UCS B-Series Blade Servers in the primary site of a stretched management cluster

The following diagram shows the logical network for Cisco UCS B-Series Blade Servers in the primary site of a stretched management cluster.

Logical network for Cisco UCS B-Series Blade Servers in the secondary site of a stretched management cluster

VMware NSX with VPLEX: VMware vSphere edge cluster

The VMware NSX edge cluster connects to the physical network and provides routing and bridging. The edge cluster supports either the Cisco UCS C-Series Rack Mount Servers (recommended) or Cisco UCS B-Series Blade Servers. Edge components are spread across the primary and secondary sites and are not part of any VPLEX datastore volume.

VMware NSX with VPLEX: VMware vSphere edge cluster components

The VMware NSX with VPLEX edge components are split across VMware vSphere ESXi hosts between the primary and secondary sites. Four edge services gateways (ESGs) reside at each site (eight total), and all egress traffic is active within the primary site. The distributed logical router (DLR) is split between sites: the active appliance resides in the primary site and the standby appliance resides in the secondary site. The following illustration shows the edge cluster.

VMware NSX with VPLEX: VMware vSphere edge cluster specifications

The following table lists the specifications for the VMware NSX ESGs and the VMware NSX DLRs:

Quantity: ESGs: eight VMs (four per site) with ECMP enabled; by default with VPLEX, the four ESGs in the primary site are always active while the other four ESGs in the secondary site are passive. DLRs: two VMs (one per site) with ECMP enabled; by default, both DLRs are active in both sites.
Location: Non-stretched edge cluster (VPLEX distributed volume) for both.
Hardware: ESGs: Cisco UCS C-Series Rack Mount Servers (recommended), four servers per site, or Cisco UCS B-Series Blade Servers, four servers per site. DLRs: Cisco UCS C-Series Rack Mount Servers (recommended) or Cisco UCS B-Series Blade Servers.
Workload: ESGs: four ESGs are spread across two of the three servers per site. DLRs: one DLR is placed on one server at each site.
Network: ESGs: one external interface for the connection to the north-south physical switches, one internal interface for the connection to the local DLR, and one internal interface for the connection to the universal DLR (if the universal DLR is deployed). DLRs: one uplink interface for the connection to the ESG.
Availability: VMware HA and DRS for both.

VMware NSX with VPLEX: hardware requirements for Cisco UCS C-Series Rack Mount Servers

The following table describes the changes in hardware requirements for VMware NSX and VPLEX Metro storage on the Cisco UCS C-Series Rack Mount Servers for edge.

Cisco UCS C-Series Rack Mount Servers: The edge cluster always uses three Cisco UCS C-Series M4 Rack Mount Servers per site, for a total of six servers.

Physical cabling:
- Server to Cisco Nexus 9300 Series Switch connectivity: The connections are no different from the standard configuration of a single VxBlock System without VPLEX. Each edge server uses the Intel X520 (SFP+) card(s) to connect to the Cisco Nexus 9300 Series Switches. In a multi-site option with VPLEX, there are by default always four edge services gateways at each site, so the physical cabling does not change.
- Server to Cisco UCS Fabric Interconnect connectivity: The connections are no different from the standard configuration of a single VxBlock System without VPLEX. Each edge server uses the Cisco VIC 1227 (SFP+) card to connect to the Cisco Fabric Interconnects.

VMware NSX with VPLEX: hardware requirements for Cisco UCS B-Series Blade Servers

The following table describes the changes in hardware requirements for VMware NSX in a multi-site option using VPLEX on the Cisco UCS B-Series Blade Servers for edge.

Cisco UCS B-Series Blade Servers: The edge cluster always uses three Cisco UCS B-Series Blade Servers per site.

Physical cabling: The connections are no different from the standard configuration of a single VxBlock System without VPLEX. In a multi-site option with VPLEX, there are by default always four edge services gateways at each site, so the physical cabling does not change.

VMware NSX with VPLEX: VMware vSphere cluster requirements

The edge cluster requires VMware HA and VMware vSphere DRS to provide VM protection against a VMware vSphere ESXi host failure and to balance VM workloads in the cluster. The following table explains the requirements for the VMware vSphere cluster:

Affinity rules (same host): For the ESGs, affinity rules are applied to the edge cluster to allow each pair of VMware NSX ESGs to be assigned to its own VMware vSphere ESXi host. For the local DLRs, DRS affinity rules do not need to be configured, because only two DLRs exist in HA mode and they should not run on the same host. Universal objects are not configured as part of the multi-site option with VPLEX.

Anti-affinity rules (separate host): For the ESGs, anti-affinity rules are applied to the edge cluster so that each pair of VMware NSX ESGs does not cross VMware vSphere ESXi hosts or sites. For the local DLRs, DRS anti-affinity rules are applied to the edge cluster so that the VMware NSX DLRs do not cross VMware vSphere ESXi hosts; they are split between the primary and secondary sites. Universal objects are not configured as part of the multi-site option with VPLEX.

VMware NSX with VPLEX: custom resource pool requirements

The edge cluster does not require custom resource pools.

VMware NSX with VPLEX: storage requirements

VMware NSX with VPLEX has the following storage requirement: the stretched datastore volume is not created within the VMware vSphere edge cluster. This means that the ESGs within each site are pinned with DRS rules and do not migrate between sites.

VMware NSX with VPLEX: networking requirements

The following table describes the network connectivity for the VMware NSX edge components:

External: North-south connectivity exists between the ESGs and the Cisco Nexus 9300 Series Switches. Each ESG (VM) uses a single edge uplink interface to establish BGP peering relationships with both top-of-rack switches.

Internal and uplink: The internal connectivity between the ESG and the DLR does not change from the standard VxBlock System when VPLEX is added to the solution.

Dynamic routing and Layer 3 termination: In a VMware NSX 6.2 multi-site option with VPLEX, the ESGs require Equal Cost Multi-Pathing (ECMP) and eBGP to peer with the ToR Cisco Nexus 9300 Series Switches. However, instead of using two separate Layer 3 edge VLAN IDs, each site requires only one. Each site uses a locally significant VLAN ID for the edge VLANs on the Cisco Nexus 9300 Series Switches, as follows: the Edge01 SVI on switch A (primary site) and the Edge02 SVI on switch B (secondary site). The DLRs are split between the primary and secondary sites. In a normal state, egress traffic goes through the ESGs within the primary site for route predictability, control, and policy management. Ingress traffic goes through any ESG within either site. If all ESGs within the primary site fail, the DLR in the secondary site has the best metrics to handle traffic. All other VLANs internal to the VxBlock System terminate at the ToR Cisco Nexus 9300 Series Switches.

Layer 2 bridging: Layer 2 bridging is an optional configuration to support VXLAN-to-VLAN connectivity in a multi-site option with VPLEX. However, it works locally at each site and not across sites.

VMware NSX with VPLEX: logical topology for edge in a primary site

The following illustration shows the edge network connectivity for the primary site in a multi-site option with VPLEX environment.

VMware NSX with VPLEX: logical topology for edge in a secondary site

The following illustration shows the edge network connectivity for the secondary site in a multi-site option with VPLEX environment.

VMware NSX with VPLEX: VMware vSphere edge service gateway traffic flow

There are various options for ingress and egress traffic flow on the edge services gateways. The active distributed logical router in the primary site routes outbound (egress) traffic to the edge services gateways in the primary site until the primary site fails. Inbound (ingress) traffic flows through any edge services gateway in either the primary or secondary site. The following illustration shows the default outbound (egress) and inbound (ingress) traffic flows.

Default outbound traffic route

The default outbound (egress) traffic route flows through the edge services gateways within the primary site, as shown in the following topology, which uses a Layer 3 DCI network topology with dark fiber (IP based is also supported).

Default outbound traffic route when the primary site fails

If the primary site fails, the default outbound (egress) traffic route flows through the edge services gateways within the secondary site, as shown in the following topology, which uses a Layer 3 DCI network topology with dark fiber (IP based is also supported).

Default inbound traffic route

The default inbound (ingress, return) traffic route is asymmetrical and can flow through any of the edge services gateways in either the primary or secondary site, as shown in the following topology, which uses a Layer 3 DCI network topology with dark fiber (IP based is also supported).

Default inbound traffic route when the primary site fails

If the primary site fails, the default inbound (ingress, return) traffic route is asymmetrical and can flow through the edge services gateways in the secondary site, as shown in the following topology, which uses a Layer 3 DCI network topology with dark fiber (IP based is also supported).

VMware NSX with VPLEX: VXLAN tunnel end points

The number of VXLAN Tunnel End Points (VTEPs) deployed to each VMware vSphere ESXi host depends on the number of dvUplinks configured on the VMware VDS that carries the transport distributed port group. Because more than one VTEP is on each host, the Load Balance SRCID mode is enabled to load balance VXLAN traffic. In a VMware NSX multi-site option with VPLEX, the Cisco UCS C-Series Rack Mount Servers have two VXLAN VTEPs deployed to each VMware vSphere ESXi host within the edge cluster. Each site uses the same transport VLAN ID but requires a different Layer 3 subnet. For this reason, a separate DHCP server (Linux appliance) is required at each site to automatically assign appropriate IP addresses to the VTEP interfaces on each ESXi host within the edge and compute clusters. Using the same DHCP server for both sites does not work because the IP address helper configured on the ToR switches must be on the same subnet for that particular site.
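A minimal Python sketch of the per-site VTEP addressing rule described above: both sites share the same transport VLAN ID, each site has its own Layer 3 subnet served by its own DHCP scope, and each edge ESXi host receives two VTEP interfaces. The VLAN ID, subnets, and host names are hypothetical placeholders, not values from the design.

```python
import ipaddress

TRANSPORT_VLAN_ID = 150  # same transport VLAN ID at both sites (placeholder)

# One Layer 3 subnet (and therefore one DHCP scope) per site (placeholders).
SITE_VTEP_SUBNETS = {
    "primary": ipaddress.ip_network("192.168.10.0/24"),
    "secondary": ipaddress.ip_network("192.168.20.0/24"),
}

VTEPS_PER_EDGE_HOST = 2  # two dvUplinks -> two VTEP vmkernel interfaces per host


def vtep_addresses(site, host_count):
    """Yield (host, vtep_index, ip) the way a per-site DHCP scope would hand
    out addresses to the VTEP interfaces on each ESXi host at that site."""
    pool = SITE_VTEP_SUBNETS[site].hosts()
    for h in range(host_count):
        for v in range(VTEPS_PER_EDGE_HOST):
            yield f"{site}-edge-esxi{h + 1}", v, next(pool)


if __name__ == "__main__":
    for host, idx, ip in vtep_addresses("primary", host_count=2):
        print(f"{host} vmk-vtep{idx}: {ip}/24 on VLAN {TRANSPORT_VLAN_ID}")
```

Because each scope sits in a different subnet, the DHCP relay (IP address helper) on each site's ToR switches can only point at that site's own DHCP server, which is why a single shared DHCP server across both sites does not work.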

VMware NSX with VPLEX: network requirements for Cisco UCS Servers

The following table describes the network requirements for Cisco UCS C-Series Rack Mount Servers and Cisco UCS B-Series Blade Servers:

Component: VLAN IDs
VMware NSX requires three VLAN/SVI IDs when deploying the VMware NSX multi-site option with VPLEX: Transport, Edge01 (primary site), and Edge02 (secondary site). The difference from the standard design without VPLEX is that a single edge VLAN ID per site is used instead of two edge VLAN IDs on the Cisco Nexus 9300 Series Switches at each site: one external edge VLAN/SVI is used for external traffic (on/off ramp) per site (two total), and one transport VLAN is used to pass VXLAN traffic. Each site uses the same transport VLAN ID but requires a different Layer 3 subnet. The external traffic traverses north-south between the edge servers and the Cisco Nexus 9300 Series Switches. The transport VLAN is Layer 3 and requires an SVI on the Cisco Nexus 9300 Series Switches.
Cisco UCS C-Series Rack Mount Servers: The external edge traffic VLAN IDs do not need to be created in Cisco UCS Manager. However, because the compute blades pass VXLAN traffic, the VXLAN transport VLAN ID must be added to Cisco UCS Manager at each site.
Cisco UCS B-Series Blade Servers: The external edge traffic and VXLAN transport traffic VLAN IDs must be created in Cisco UCS Manager for each site.

Component: VXLAN Tunnel End Points (VTEPs)
Cisco UCS C-Series Rack Mount Servers: The teaming and balancing mode does not change from the standard VxBlock System. The Load Balance SRCID mode is enabled to load balance VXLAN traffic. However, a default of one Intel X520 card is required in each edge server.
Cisco UCS B-Series Blade Servers: The teaming and balancing mode does not change from the standard VxBlock System. Two uplinks are created on the edge VMware VDS because a static number of ESGs (eight in total, four per site) is deployed.

VMware NSX with VPLEX: VMware virtual network

In a VMware NSX multi-site option with VPLEX, the VMware vSphere Distributed Switch design changes in only the following ways:
- Two edge VLAN IDs are no longer required. Instead, a single edge VLAN ID is used per site location.
- The two edge port groups run teaming and balancing in Active/Active mode with Route based on originating virtual port (non-LACP/vPC), instead of the Active/Unused mode with Route based on IP hash used when VPLEX is not deployed (a sketch of this change follows this list).
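The teaming change called out above can be summarized in a small Python sketch that contrasts the edge port group settings with and without VPLEX. It is an illustration of the settings only, not a vCenter API call; the uplink names and structure are assumptions for this example.

```python
# Edge port group teaming settings, with and without VPLEX (illustration only).
EDGE_PORTGROUP_TEAMING = {
    "multi_site_with_vplex": {
        "load_balancing": "Route based on originating virtual port",  # non-LACP/vPC
        "uplinks": {"dvUplink1": "active", "dvUplink2": "active"},
    },
    "standard_without_vplex": {
        "load_balancing": "Route based on IP hash",
        "uplinks": {"dvUplink1": "active", "dvUplink2": "unused"},
    },
}


def teaming_for(vplex_deployed):
    """Return the edge port group teaming settings for the chosen design."""
    key = "multi_site_with_vplex" if vplex_deployed else "standard_without_vplex"
    return EDGE_PORTGROUP_TEAMING[key]


if __name__ == "__main__":
    for vplex in (True, False):
        print("VPLEX deployed" if vplex else "No VPLEX", "->", teaming_for(vplex))
```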

Logical network for the Cisco UCS C-Series Rack Mount Servers in the primary site

In a multi-site option with VPLEX environment, a single edge port group is used instead of two edge port groups. Also, the ESXi management and Layer 3 vMotion traffic is on a separate network from the secondary site. The following illustration shows the VMware virtual network layout for the hosts running on Cisco UCS C-Series Rack Mount Servers within the primary site for the multi-site option with VPLEX.

Logical network for the Cisco UCS C-Series Rack Mount Servers in a secondary site

In a multi-site option with VPLEX environment, a single edge port group is used instead of two edge port groups. Also, the ESXi management and Layer 3 vMotion traffic is on a separate network from the primary site.

The following illustration shows the VMware virtual network layout for the hosts running on Cisco UCS C-Series Rack Mount Servers in the secondary site for the multi-site option with VPLEX.

Logical network for the Cisco UCS B-Series Blade Servers in a primary site

In a multi-site option with VPLEX environment, a single edge port group is used instead of two edge port groups. Also, the ESXi management and Layer 3 vMotion traffic is on a separate network from the secondary site.

The following illustration shows the VMware virtual network layout for the hosts running on Cisco UCS B-Series Blade Servers within the primary site for the multi-site option with VPLEX.

Logical network for the Cisco UCS B-Series Blade Servers in a secondary site

In a multi-site option with VPLEX environment, a single edge port group is used instead of two edge port groups. Also, the ESXi management and Layer 3 vMotion traffic is on a separate network from the primary site.

The following illustration shows the VMware virtual network layout for the hosts running on Cisco UCS B-Series Blade Servers within the secondary site for the multi-site option with VPLEX.

VMware NSX with VPLEX: compute cluster

The VMware NSX compute cluster contains all the production VMs.
