Design Guide to Run VMware NSX for vSphere with Cisco ACI

White Paper

Design Guide to Run VMware NSX for vSphere with Cisco ACI

First published: January

Contents

Introduction
Goal of this document
Cisco ACI fundamentals
Cisco ACI policy model
Cisco ACI policy-based networking and security
Cisco ACI VMM domains
VMware NSX fundamentals
NSX for vSphere
NSX for vSphere network requirements
Running vSphere infrastructure as an application with Cisco ACI
vSphere infrastructure
Physically connecting ESXi hosts to the fabric
Mapping vSphere environments to the Cisco ACI network and policy model
Obtaining per-cluster visibility in APIC
Securing vSphere infrastructure
VMware vSwitch design and configuration considerations
Option 1. Running NSX-v security and virtual services using a Cisco ACI integrated overlay for network virtualization
Using NSX Distributed Firewall and Cisco ACI integrated overlays
Option 2. NSX-v overlays as an application of the Cisco ACI fabric
NSX-v VXLAN architecture
NSX VXLAN: Understanding BUM traffic replication
NSX transport zones
NSX VTEP subnetting considerations
Running NSX VXLAN on a Cisco ACI fabric
Bridge domain and EPG design when using NSX hybrid replication
Bridge domain and EPG design when using NSX unicast replication
Providing visibility of the underlay for the vCenter and NSX administrators
Virtual switch options: Single VDS versus dual VDS
NSX Edge clusters
NSX routing and Cisco ACI
Introduction to NSX routing
Connecting ESG with NAT to the Cisco ACI fabric
ESG routing through the fabric
ESG peering with the fabric using L3Out
Bridging between logical switches and EPGs
Conclusion
Do you need NSX when running a Cisco ACI fabric?

3 THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. ( This product includes software written by Tim Hudson (tjh@cryptsoft.com). Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R) 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 3 of 64

Introduction

With the launch of the Cisco Application Centric Infrastructure (Cisco ACI) solution in 2013, Cisco continued the trend of providing best-in-class solutions for VMware vSphere environments. Cisco ACI is a comprehensive Software-Defined Networking (SDN) architecture that delivers a better network, implementing distributed Layer 2 and Layer 3 services across multiple sites using integrated Virtual Extensible LAN (VXLAN) overlays. Cisco ACI also enables distributed security for any type of workload and introduces policy-based automation with a single point of management.

The core of the Cisco ACI solution, the Cisco Application Policy Infrastructure Controller (APIC), provides deep integration with multiple hypervisors, including VMware vSphere, Microsoft Hyper-V, and Red Hat Virtualization, and with modern cloud and container cluster management platforms, such as OpenStack and Kubernetes. The APIC not only manages the entire physical fabric but also manages the native virtual switching offering for each of the hypervisors or container nodes.

Since its introduction, Cisco ACI has seen incredible market adoption and is currently deployed by thousands of customers across all industry segments. In parallel, some vSphere customers may choose to deploy hypervisor-centric SDN solutions, such as VMware NSX, often as a means of improving security in their virtualized environments. This leads customers to wonder how best to combine NSX and Cisco ACI. This document is intended to help those customers by explaining the design considerations and options for running VMware NSX with a Cisco ACI fabric.

Goal of this document

This document explains the benefits of Cisco ACI as a foundation for VMware vSphere, as well as how it makes NSX easier to deploy, more cost effective, and simpler to troubleshoot when compared to running NSX on a traditional fabric design. As Cisco ACI fabrics provide a unified overlay and underlay, two possible NSX deployment options are discussed (Figure 1):

Option 1. Running NSX-V security and virtual services with a Cisco ACI integrated overlay: In this model, Cisco ACI provides overlay capability and distributed networking, while NSX is used for distributed firewalling and other services, such as load balancing, provided by NSX Edge Services Gateway (ESG).

Option 2. Running the NSX overlay as an application: In this deployment model, the NSX overlay is used to provide connectivity between vSphere virtual machines, and the Cisco APIC manages the underlying networking, as it does for vMotion, IP storage, or Fault Tolerance traffic.

Figure 1. VMware NSX deployment options

These two deployment options are not mutually exclusive. While Cisco ACI offers substantial benefits in both of these scenarios when compared to a traditional device-by-device managed data center fabric, the first option is recommended, because it allows customers to avoid the complexities and performance challenges associated with deploying and operating NSX ESGs for north-south traffic and eliminates the need to deploy any VXLAN-to-VLAN gateway functions.

Regardless of the chosen option, using Cisco ACI as a fabric for vSphere with NSX brings significant advantages, including:

Best-in-class performance: Cisco ACI builds on best-in-class Cisco Nexus 9000 Series Switches to implement a low-latency fabric that uses Cisco Cloud Scale smart buffering and provides the highest performance on a leaf-and-spine architecture.

Simpler management: Cisco ACI offers a single point of management for the physical fabric with full FCAPS 1 capabilities, thus providing for a much simpler environment for running all required vSphere services with high levels of availability and visibility.

Simplified NSX networking: Because of the programmable fabric capabilities of the Cisco ACI solution, customers can deploy NSX VXLAN Tunnel Endpoints (VTEPs) with minimal fabric configuration, as opposed to device-by-device subnet and VLAN configurations. In addition, customers can optimize, reduce, or completely eliminate the need for certain functions such as NSX ESGs. This contributes to requiring fewer computing resources and simplifying the virtual topology.

1 FCAPS is the ISO Telecommunications Management Network model and framework for network management. FCAPS is an acronym for fault, configuration, accounting, performance, and security: the management categories the ISO model uses to define network management tasks.

Operational benefits: The Cisco ACI policy-based model with a single point of management facilitates setting up vSphere clusters while providing better visibility, enhanced security, and easier troubleshooting of connectivity within and between clusters. Furthermore, Cisco ACI provides many built-in network management functions, including consolidated logging with automatic event correlation, troubleshooting wizards, software lifecycle management, and capacity management.

Lower total cost of ownership: Operational benefits provided by Cisco ACI and the savings in resources and licenses from enabling optimal placement of NSX ESG functions, along with faster time to recovery and easier capacity planning, add up to reduced costs overall.

This document is intended for network, security, and virtualization administrators who will deploy NSX on a vSphere environment running over a Cisco ACI fabric. We anticipate that the reader is familiar with NSX for vSphere and with Cisco ACI capabilities. General networking knowledge is presupposed as well.

Cisco ACI fundamentals

Cisco ACI is the industry's most widely adopted SDN for data center networking. Cisco ACI pioneered the introduction of intent-based networking in the data center. It builds on a leaf-and-spine fabric architecture with the APIC acting as the unifying point of policy and management.

The APIC implements a modern object model to provide a complete abstraction of every element in the fabric. This model includes all aspects of the physical devices, such as interfaces or forwarding tables, as well as logical elements, like network protocols and all connected virtual or physical endpoints. The APIC extends the principles of Cisco UCS Manager software and its service profiles to the entire network: everything in the fabric is represented in an object model at the APIC, enabling declarative, policy-based provisioning for all fabric functions and a single point of management for day-2 operations.

Networks are by nature distributed systems. This distributed characteristic has brought significant challenges when managing fabrics: if a network administrator wishes to modify a certain network attribute, touching discrete switches or routers is required. This necessity complicates deploying new network constructs and troubleshooting network issues. Cisco ACI fixes that problem by offloading the management plane of network devices to a centralized controller. This way, when provisioning, managing, and operating a network, the administrator only needs to access the APIC.

It is very important to note that in the Cisco ACI architecture, centralizing the management and policy planes in the APIC does not impose scalability bottlenecks in the network, as the APIC fabric management functions do not operate in the data plane of the fabric. Both the control plane (intelligence) and data plane (forwarding) functions are performed within the switching layer by intelligent Nexus 9000 Series Switches, which use a combination of software and hardware features.

A centralized management and policy plane also does not mean that the network is less reliable or has a single point of failure. As the intelligence function stays at the switches, the switches can react to any network failure without having to ask the controller what to do. Because a highly available scale-out cluster of at least three APIC nodes is used, any controller outage does not diminish the capabilities of the network. In the unlikely event of a complete controller cluster outage, the fabric can still react to such events as the addition of new endpoints or the movement of existing endpoints across hypervisors (for instance, when performing virtual machine vMotion operations).

7 The Cisco ACI policy model provides a complete abstraction from the physical devices to allow programmable deployment of all network configurations. Everything can be programmed through a single, open API, whether it is physical interface settings, routing protocols, or application connectivity requirements inclusive of advanced network services. The Cisco ACI fabric is a VXLAN-based leaf-and-spine architecture that provides Layer 2 and Layer 3 services with integrated overlay capabilities. Cisco ACI delivers integrated network virtualization for all workloads connected, and the APIC can manage not only physical devices but also virtual switches. Virtual and physical endpoints can connect to the fabric without any need for gateways or additional per-server software and licenses. The Cisco ACI solution works with all virtualized compute environments, providing tight integration with leading virtualization platforms like VMware vsphere, Microsoft System Center VMM, or Red Hat Virtualization. APIC also integrates with the leading open source cloud management solution, OpenStack, by having APIC program distributed services on Open vswitch using OpFlex. Finally, the Cisco ACI declarative model for defining application connectivity also goes hand in hand with modern frameworks for running Linux containers, and Cisco ACI has the same level of integration with Kubernetes, OpenShift or Cloud Foundry clusters. Figure 2. APIC declarative model enables intent-based networking This configuration provides an enormous operational advantage because the APIC has visibility into all the attached endpoints and has automatic correlation between virtual and physical environments and their application or tenant context. Integration with virtualization solutions is implemented by defining virtual machine manager domains in APIC Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 7 of 64
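As a minimal sketch of this open API, the following Python fragment authenticates to a hypothetical APIC (apic.example.com, placeholder credentials) with the standard aaaLogin call and reads the switch inventory and the endpoints the fabric has learned; the fabricNode and fvCEp class names come from the ACI object model and should be verified against the API reference for your APIC release.

# Minimal sketch: query the APIC REST API with Python 'requests'.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False                       # lab only; use proper certificates in production

# aaaLogin returns a token that the session keeps as a cookie
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Every switch registered in the fabric
for obj in s.get(f"{APIC}/api/class/fabricNode.json").json()["imdata"]:
    node = obj["fabricNode"]["attributes"]
    print(node["name"], node["role"], node["serial"])

# Every endpoint (virtual or physical) the fabric has learned
endpoints = s.get(f"{APIC}/api/class/fvCEp.json").json()["imdata"]
print("Endpoints learned by the fabric:", len(endpoints))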

8 Cisco ACI policy model The Cisco ACI solution is built around a comprehensive policy model that manages the entire fabric, including the infrastructure, authentication, security, services, applications, and diagnostics. A set of logical constructs, such as Virtual Route Forwarding (VRF) tables, bridge domains, Endpoint Groups (EPGs), and contracts, define the complete operation of the fabric, including connectivity, security, and management. At the upper level of the Cisco ACI model, tenants are network-wide administrative folders. The Cisco ACI tenancy model can be used to isolate separate organizations, such as sales and engineering, or different environments such as development, test, and production, or combinations of both. It can also be used to isolate infrastructure for different technologies or fabric users, for instance, VMware infrastructure versus OpenStack versus big data, Mesos, and so forth. The use of tenants facilitates organizing and applying security policies to the network and providing automatic correlation of statistics, events, failures, and audit data. Cisco ACI supports thousands of tenants that are available for users. One special tenant is the common tenant, which can be shared across all other tenants, as the name implies. Other tenants can consume any object that exists within the common tenant. Customers that choose to use a single tenant may configure everything under the common tenant, although in general it is best to create a dedicated user tenant and keep the common for shared resources. Cisco ACI policy-based networking and security Cisco ACI has been designed to provide a complete abstraction of the network. As shown in Figure 3, each Cisco ACI tenant contains various network constructs: Layer 3 contexts known as VRF tables: These provide routing isolation and enable running overlapping address spaces between tenants, or even within a single tenant, and can contain one or more bridge domains. Layer 2 flooding domains, called bridge domains: These provide scoping for Layer 2 flooding. Bridge domains belong to a particular VRF table and can contain one or more subnets. External bridged or routed networks: These are referred to as L2Out or L3Out interfaces and connect to other networks, such as legacy spanning-tree or Cisco FabricPath networks, or simply to data center routers Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 8 of 64
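As a minimal sketch of the tenancy model, assuming a hypothetical APIC address and credentials, the following fragment creates a dedicated user tenant with a VRF table through the REST API. The Virtual-Infrastructure tenant name anticipates the example used later in this document; the fvTenant and fvCtx class names should be verified against your APIC release.

# Minimal sketch: create a user tenant and a VRF through the APIC REST API.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

tenant = {
    "fvTenant": {
        "attributes": {"name": "Virtual-Infrastructure"},
        "children": [
            # Layer 3 context (VRF) that the infrastructure bridge domains will use
            {"fvCtx": {"attributes": {"name": "Infra-VRF"}}}
        ],
    }
}
r = s.post(f"{APIC}/api/mo/uni.json", json=tenant)
print(r.status_code, r.text)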

9 Figure 3. Cisco ACI tenancy model The Cisco ACI tenancy model facilitates the administrative boundaries of all network infrastructure. Objects in the common tenant can be consumed by any tenant. For instance, in Figure 3, Production and Testing share the same VRF tables and bridge domains from the common tenant. Figure 4 provides a snapshot of the tenant Networking constructs from the APIC GUI, showing how the VRF tables and bridge domains are independent of the topology and can be used to provide connectivity across a number of workloads, including vsphere, Hyper-V, OpenStack, bare metal, and IP storage solutions. Figure 4. Flexibility of the Cisco ACI networking model 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 9 of 64

10 Endpoints that require a common policy are grouped together into endpoint groups (EPGs); an Application Network Profile (ANP) is a collection of EPGs and the contracts that define the connectivity required between them. By default, connectivity is allowed between the endpoints that are part of the same EPG (intra-epg connectivity). This default can be changed by configuring isolated EPGs (in which connectivity is not allowed), or by adding intra-epg contracts. Also, by default, communication between endpoints that are members of different EPGs is allowed only when contracts between them are applied. These contracts can be compared to traditional Layer 2 to Layer 4 firewall rules from a security standpoint. In absence of a contract, no communication happens between two EPGs. Contracts not only define connectivity rules but also include Quality of Service (QoS) and can be used to insert advanced services like load balancers or Next- Generation Firewalls (NGFWs) between any two given EPGs. Contracts are tenant-aware and can belong to a subset of the tenant resources (to a VRF table only) or be shared across tenants. The EPGs of an ANP do not need to be associated to the same bridge domain or VRF table, and the definitions are independent of network addressing. For instance, contracts are not defined based on subnets or network addresses, making policy much simpler to configure and automate. In this sense, ANPs can be seen as constructs that define the application requirements and consume network and security resources, including bridge domains, VRF tables, contracts, or L3Out interfaces. Figure 5 shows an example of a three-tier application with three environments (Production, Testing, and Development) where the Production Web EPG (Web-prod) allows only Internet Control Message Protocol (ICMP) and SSL from the external network accessible through an L3Out interface. Other similar contracts govern connectivity between tiers. Because there are no contracts between the Development and Testing or Production EPGs, the environments are completely isolated regardless of the associated bridge domain or VRF table or IP addresses of the endpoints. Figure 5. Example of an application network profile for a three-tier application with three environments The ANP model is ideal for use in combination with cloud management and container frameworks. An application design stack like that provided by Cisco CloudCenter software or VMware vrealize can directly map into Cisco ACI ANPs Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 10 of 64
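A minimal sketch of how a design like Figure 5 could be expressed through the API is shown below, assuming a hypothetical tenant, bridge domain, and APIC address: a filter allowing only ICMP and SSL, a contract built from that filter, and a Web-prod EPG inside an application profile that provides the contract. In a complete configuration, the external EPG of the L3Out would consume the same contract; class names should be verified against your APIC release.

# Minimal sketch: filter + contract + application profile with a Web EPG.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

config = {
    "fvTenant": {
        "attributes": {"name": "Production"},          # hypothetical tenant
        "children": [
            {"vzFilter": {"attributes": {"name": "icmp-ssl"}, "children": [
                {"vzEntry": {"attributes": {"name": "icmp", "etherT": "ip", "prot": "icmp"}}},
                {"vzEntry": {"attributes": {"name": "ssl", "etherT": "ip", "prot": "tcp",
                                            "dFromPort": "443", "dToPort": "443"}}}]}},
            {"vzBrCP": {"attributes": {"name": "web-access"}, "children": [
                {"vzSubj": {"attributes": {"name": "allow-icmp-ssl"}, "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "icmp-ssl"}}}]}}]}},
            {"fvAp": {"attributes": {"name": "three-tier-app"}, "children": [
                {"fvAEPg": {"attributes": {"name": "Web-prod"}, "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "Web-BD"}}},      # hypothetical BD
                    {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-access"}}}]}}]}},
        ],
    }
}
s.post(f"{APIC}/api/mo/uni.json", json=config)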

11 From a networking point of view, the fabric implements a distributed default gateway for all defined subnets. This ensures optimal traffic flows between any two workloads for both east-west and north-south traffic without bottlenecks. At the simplest level, connectivity can be modeled with an ANP using VLANs and related subnets. A simple ANP can contain one or more VLANs associated with an environment represented as one EPG per VLAN associated to a single bridge domain. This still provides the benefit of using the distributed default gateway, eliminating the need for First-Hop Redundancy Protocol and providing better performance, while using contracts for inter-vlan security. In this sense, Cisco ACI provides flexibility and options to maintain traditional network designs while rolling out automated connectivity from a cloud platform. Workloads are connected to the fabric using domains that are associated to the EPGs. Bare metal workloads are connected through physical domains, and data center routers are connected as external routing domains. For virtualization platforms, Cisco ACI uses the concept of Virtual Machine Management (VMM) domains. Cisco ACI VMM domains Cisco ACI empowers the fabric administrator with the capability of integrating the APIC with various VMM solutions, including VMware vcenter (with or without NSX), Microsoft System Center Virtual Machine Manager (SCVMM), Red Hat Virtualization (RHV), and OpenStack. 2 This integration brings the benefit of consolidated visibility and simpler operations, because the fabric has a full view of physical and virtual endpoints and their location, as shown in Figure 6. APIC can also automate provisioning of virtual networking within the VMM domain. Figure 6. Once a VMM domain is configured, the APIC has a full view of physical and virtual endpoints 2 In Cisco ACI 3.1 the VMM domain tab is changed from VM Networking to Virtual Networking because the VMM domain concept now also extends to containers, not just virtual machines. Cisco ACI supports this level of integration with Kubernetes and OpenShift, with Pivotal Cloud Foundry and other integrations planned for a future release Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 11 of 64
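A minimal sketch of that consolidated visibility is shown below: it lists the hypervisors and virtual machines the APIC learns through VMM integration. The APIC address and credentials are placeholders, and the compHv and compVm class names are taken from the ACI compute object model as the author understands it; verify them against your APIC release.

# Minimal sketch: list hypervisors and VMs discovered through VMM integration.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

for obj in s.get(f"{APIC}/api/class/compHv.json").json()["imdata"]:
    print("hypervisor:", obj["compHv"]["attributes"]["name"])

for obj in s.get(f"{APIC}/api/class/compVm.json").json()["imdata"]:
    print("virtual machine:", obj["compVm"]["attributes"]["name"])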

The fabric and vSphere administrators work together to register the APIC with vCenter using proper credentials. In addition, it is also possible to install a Cisco ACI plug-in for the vSphere web client. The plug-in is registered with the APIC and uses the APIC's Representational State Transfer (REST) API. The plug-in allows the fabric administrator to provide vCenter administrators with the ability to see and configure tenant-level elements, such as VRF tables, bridge domains, EPGs, and contracts, as shown in Figure 7, where we see the entry screen of the Cisco ACI plug-in and can observe the various possible functions exposed on the right-hand side.

In vSphere environments, when a VMM domain is created in the APIC, it automatically configures a vSphere Distributed Switch (VDS) through vCenter with the uplink settings that match the corresponding Cisco ACI fabric port configurations, in accordance with the configured interface policies. This provides for automation of interface configuration on both ends (ESXi host and Cisco ACI Top-of-Rack [ToR] switch, referred to as the Cisco ACI leaf), and ensures consistency of Link Aggregation Control Protocol (LACP), Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol, and other settings. In addition, once the VMM domain is created, the fabric administrator can see a complete inventory of the vSphere domain, including hypervisors and virtual machines. If the Cisco ACI vCenter plug-in is used, the vSphere administrator can also have a view of the relevant fabric aspects, including non-vSphere workloads (such as bare metal servers, virtual machines running on other hypervisors, or even Linux containers).

Figure 7. The Cisco ACI plug-in for vCenter

It is important to note that using a VMM domain enables consolidated network provisioning and operations across physical and virtual domains while using standard virtual switches from the hypervisor vendors. 3 For instance, in the case of Microsoft Hyper-V, APIC provisions logical networks on the native Hyper-V virtual switch, using the open protocol OpFlex to interact with it. Similarly, in OpenStack environments, Cisco ACI works with Open vSwitch using OpFlex and the Neutron ML2 or Group-Based Policy plug-in within OpenStack. 4

3 Cisco also offers the Cisco ACI Virtual Edge (AVE) solution as a virtual switch option for vSphere environments. This offers additional capabilities, but its use is entirely optional.
4 For more details about the Group-Based Policy model on OpenStack, refer to this link:
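A rough sketch of registering a vCenter VMM domain through the API follows, assuming hypothetical names for the domain, VLAN pool, vCenter address, and datacenter; the vmmDomP, vmmCtrlrP, and vmmUsrAccP classes and their attributes should be confirmed in the APIC API reference for your version rather than taken as a definitive recipe.

# Minimal sketch: register a vCenter VMM domain through the APIC REST API.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

vmm_domain = {
    "vmmDomP": {
        "attributes": {"name": "vSphere-VMM"},                      # hypothetical domain name
        "children": [
            # Dynamic VLAN pool the APIC will draw from for dvportgroups
            {"infraRsVlanNs": {"attributes": {"tDn": "uni/infra/vlanns-[vSphere-pool]-dynamic"}}},
            {"vmmUsrAccP": {"attributes": {"name": "vcenter-creds",
                                           "usr": "administrator@vsphere.local",
                                           "pwd": "password"}}},
            {"vmmCtrlrP": {"attributes": {"name": "vcenter1",
                                          "hostOrIp": "vcenter.example.com",  # hypothetical vCenter
                                          "rootContName": "DC1"},             # vCenter datacenter
                           "children": [
                {"vmmRsAcc": {"attributes": {
                    "tDn": "uni/vmmp-VMware/dom-vSphere-VMM/usracc-vcenter-creds"}}}]}},
        ],
    }
}
s.post(f"{APIC}/api/mo/uni/vmmp-VMware/dom-vSphere-VMM.json", json=vmm_domain)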

Specifically in the context of this white paper, which focuses on VMware environments, the APIC provisions a native vSphere Distributed Switch (VDS) and can use either VLAN or VXLAN 5 encapsulation as attach points to the fabric. The APIC does not need to create the VDS. When defining a vCenter VMM domain, APIC can operate on an existing VDS. In that case, APIC expects the existing VDS to be placed in a folder with the same name as the VDS. Since Cisco ACI 3.1 it is also possible to define a VMM domain to vCenter in read-only mode. In read-only mode, APIC will not provision dvportgroups in the VDS, but the fabric administrator can leverage the added visibility obtained by VMM integration.

VMware NSX fundamentals

NSX for vSphere

NSX for vSphere is an evolution of VMware vCloud Networking and Security. At the heart of the system is NSX Manager, which is similar to the former vShield Manager and is instrumental in managing all other components, from installation to provisioning and upgrading. NSX for vSphere has the following components:

NSX Manager: Provides the northbound API and a graphical management interface through a plug-in to the vCenter web client. It also automates the installation of all other components and is the configuration and management element for the NSX distributed firewall.

NSX Controller: An optional component that is required if VXLAN overlays are used with a unicast control plane. Running the VXLAN overlay with multicast flood and learn does not require NSX Controllers, and NSX Controllers are not required to run the NSX distributed firewall.

ESXi kernel modules: A set of virtual installation bundles that are installed on each ESXi host during the NSX host-preparation process to provide distributed firewall, distributed routing, and VXLAN capabilities, plus optional data security (known as guest introspection).

Distributed Logical Router (DLR) control virtual machines: One or more virtual machines deployed from the NSX Edge image. When deployed this way, the virtual machine provides the control plane for distributed logical routers.

NSX Edge Services Gateways (ESGs): One or more virtual machines deployed from the NSX Edge image. When deployed as an ESG, the virtual machine provides the control plane and data plane for Edge features, including the north-south routing that is required to communicate from the VXLAN overlay to external networks or between different VXLAN overlay environments (between independent NSX Managers or between NSX transport zones under a single NSX Manager). The ESGs also provide services such as Network Address Translation (NAT), load balancing, and VPN.

Some customers may adopt NSX for its network virtualization capabilities, while others are interested only in its security features. With the exception of the guest introspection security features that are required for the integration of certain technology partner solutions, Cisco ACI provides equivalent functionality to NSX and in many cases offers a superior feature set that better meets real-world customer requirements.

5 The APIC can program the VDS to use VXLAN when working with vShield environments or when interacting with NSX Manager as a vShield Manager.

14 NSX for vsphere network requirements Hypervisor-based network overlays like those provided by VMware NSX are intended to provide network virtualization over any physical fabric. Their scope is limited to the automation of VXLAN-based virtual network overlays created between software-based VXLAN tunnel endpoints (VTEPs) running at the hypervisor level. In the case of NSX-v, these software VTEPs are only created within the vsphere Distributed Switch by adding NSX kernel modules that enable VXLAN functionality. The virtual network overlay must run on top of a physical network infrastructure, referred to as the underlay network. The NSX Manager and NSX Controller do not provide any level of configuration, monitoring, management, or reporting for this physical layer. 6 VMware s design guide for implementing NSX-v describes a recommended Layer 3 routed design of the physical fabric or underlay network. The following points summarize the key design recommendations: Fabric design: Promotes a leaf-and-spine topology design with sufficient bisectional bandwidth. When network virtualization is used, mobility domains are expected to become larger and traffic flows between racks may increase as a consequence. Therefore, traditional or legacy designs may be insufficient in terms of bandwidth. Suggests a Layer 3 access design with redundant ToR switches, limiting Layer 2 within the rack. Recommends using IP Layer 3 equal-cost multipathing (ECMP) to achieve fabric load balancing and high availability. Per-rack (per ToR switch pair) subnets for vsphere infrastructure traffic, including Management, vmotion, IP Storage (Small Computer System Interface over IP [iscsi], network file server [NFS]) and NSX VTEP pools. QoS implemented by trusting Differentiated Services Code Point (DSCP) marking from the VDS. Server-to-fabric connectivity: Redundant connections between ESXi hosts and the ToR switches is configured at the VDS level, using either LACP, routing based on originating port, routing based on source MAC, or active/standby. The Layer 3 access fabric design imposes constraints on the NSX overlay design. For example, the NSX Edge services gateways require connecting to VLAN-backed port groups to route between the NSX overlay and any external networks. On a traditional Layer 3 ToR design, those VLANs will not be present across the infrastructure but instead will be limited to a small number of dedicated servers on a specific rack. This is one reason the NSX architecture recommends dedicating vsphere clusters to a single function: running NSX Edge virtual machines in so-called edge clusters. Another example is that the VTEP addressing for VXLAN VMkernel interfaces needs to consider per-rack subnets. That requirement can be accomplished by using Dynamic Host Configuration Protocol (DHCP) with option 82 but not by using simpler-to-manage NSX Manager IP pools. 6 Some literature states that when using the Open vswitch Database protocol the NSX controller can manage the underlay hardware VTEPs. In fact, this protocol enables only provisioning of basic Layer 2 bridging between VXLAN and VLANs, and it serves only point use cases, as opposed to full switch management Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 14 of 64

15 This design recommendation also promotes a legacy device-by-device operational model for the data center fabric that has hindered agility for IT organizations. Furthermore, it assumes that the network fabric will serve only applications running on ESXi hosts running NSX. But for most customer environments, the network fabric must also serve bare metal workloads, applications running in other hypervisors, or core business applications from a variety of vendors, such as IBM, Oracle, and SAP. vsphere infrastructure traffic (management, vmotion, virtual storage area network [VSAN], fault tolerance, IP storage, and so forth) that is critical to the correct functioning of the virtualized data center is not considered. Neither is it viewed or secured through NSX, and as a result it remains the sole responsibility of the physical network administrator. From a strategic perspective, the physical fabric of any modern IT infrastructure must be ready to accommodate connectivity requirements for emerging technologies for instance, clusters dedicated to container-based applications such as Kubernetes, OpenShift, and Mesos. Finally, just as traffic between user virtual machines needs to be secured within the NSX overlay, traffic between subnets for different infrastructure functions must be secured. By placing the first-hop router at each ToR pair, it is easy to hop from the IP storage subnet to the management subnet or the vmotion network and vice versa. The network administrator will need to manage Access Control Lists (ACLs) to prevent this from happening. This means configuring ACLs in all access devices (in all ToR switches) to provide proper filtering of traffic between subnets and therefore ensure correct access control to common services like Domain Name System (DNS), Active Directory (AD), syslog, performance monitoring, and so on. Customers deploying NSX on top of a Cisco ACI Fabric will have greater flexibility to place components anywhere in the infrastructure, and may also avoid the need to deploy of NSX Edge virtual machines for perimeter routing functions, potentially resulting in significant cost savings on hardware resources and on software licenses. They will also benefit from using Cisco ACI contracts to implement distributed access control to ensure infrastructure networks follow a white-list zero-trust model. Running vsphere Infrastructure as an application with Cisco ACI vsphere infrastructure vsphere Infrastructure has become fundamental for a number of customers because so many mission-critical applications now run on virtual machines. At a very basic level, the vsphere infrastructure is made up of the hypervisor hosts (the servers running ESXi), and the vcenter servers. vcenter is the heart of vsphere and has several components, including the Platform Services Controller, the vcenter server itself, the vcenter SQL database, Update Manager, and others. This section provides some ideas for configuring the Cisco ACI fabric to deploy vsphere infrastructure services. The design principles described here apply to vsphere environments regardless of whether they will run NSX. The vsphere infrastructure generates different types of IP traffic, as illustrated in Figure 8, including management between the various vcenter components and the hypervisor host agents, vmotion, storage traffic (iscsi, NFS, VSAN). This traffic is handled through kernel-level interfaces (VMkernel network interface cards, or VMKNICs) at the hypervisor level Cisco and/or its affiliates. All rights reserved. 
This document is Cisco Public Information. Page 15 of 64

16 Figure 8. ESXi VMkernel common interfaces From a data center fabric perspective, iscsi, NFS, vmotion, fault tolerance, and VSAN are all just application traffic for which the fabric must provide connectivity. It is important to note that these applications must also be secured, in terms of allowing only the required protocols from the required devices where needed. Lateral movement must be restricted at the infrastructure level as well. For example, from the vmotion VMkernel interface there is no need to access management nodes, and vice versa. Similarly, VMkernel interfaces dedicated to connecting iscsi targets do not need to communicate with other VMkernel interfaces of other hosts in the cluster. And only authorized hosts on the management network need access to vcenter, or to enterprise configuration management systems like a puppet master. In traditional Layer 3 access fabric designs, where each pair of ToR switches has dedicated subnets for every infrastructure service, it is very difficult to restrict lateral movement. In a Layer 3 access design, every pair of ToRs must be the default gateway for each of the vsphere services and route toward other subnets corresponding to other racks. Restricting access between different service subnets then requires ACL configurations on every access ToR for every service Layer 3 interface. Limiting traffic within a service subnet is even more complicated and practically impossible. Cisco ACI simplifies configuring the network connectivity required for vsphere traffic. It also enables securing the infrastructure using Cisco ACI contracts. The next two sections review how to configure physical ports to redundantly connect ESXi hosts and then how to configure Cisco ACI logical networking constructs to enable secure vsphere traffic connectivity. Physically connecting ESXi hosts to the fabric ESXi software can run on servers with different physical connectivity options. Sometimes physical Network Interface Cards (NICs) are dedicated for management, storage, and other functions. In other cases, all traffic is placed on the same physical NICs, and traffic may be segregated by using different port groups backed by different VLANs. It is beyond the scope of this document to cover all possible options or provide a single prescriptive design recommendation. Instead, let s focus on a common example where a pair of physical NICs is used to obtain redundancy for ESXi host-to-fabric connectivity. In modern servers, these NICs could be dual 10/25GE or even dual 40GE. When using redundant ports, it is better to favor designs that enable active/active redundancy to maximize the bandwidth available to the server. For instance, when using Cisco ACI EX or FX leaf models, all access ports support 25 Gbps. With modern server NICs also adding 25GE support, it becomes affordable to have 50 Gbps of bandwidth available to every server Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 16 of 64

In Cisco ACI, interface configurations are done using leaf policy groups. For redundant connections to a pair of leaf switches, a VPC policy group is required. VPCs are configured using VPC policy groups, under Fabric Access Policies. Within a VPC policy group, the administrator can select multiple policies to control the interface behavior. Such settings include storm control, control plane policing, ingress or egress rate limiting, LLDP, and more. These policies can be reused across multiple policy groups.

For link redundancy, port-channel policies must be set to match the configuration on the ESXi host. Table 1 summarizes the options available in vSphere Distributed Switches and the corresponding settings recommended for Cisco ACI port-channel policy configuration.

Table 1. Cisco ACI port-channel policy configuration

vSphere Distributed Switch (VDS) teaming and failover configuration | Redundancy expected with dual vmnic per host | ACI port-channel policy configuration
Route Based on Originating Port | Active/Active | MAC Pinning
Route Based on Source MAC Hash | Active/Active | MAC Pinning
LACP (802.3ad) | Active/Active | LACP Active: Graceful Convergence, Fast Select Hot Standby Ports (remove the Suspend Individual Port option)
Route Based on IP Hash | Active/Active | Static Channel Mode On
Explicit Failover Order | Active/Standby | MAC Pinning or access policy group

Of the options shown in Table 1, we do not recommend Explicit Failover Order, as it keeps only one link active. We recommend LACP as the best option, for the following reasons:

LACP is an IEEE standard (802.3ad) implemented by all server, hypervisor, and network vendors.

LACP is easy to configure, well understood by network professionals, and enables active/active optimal load balancing.

LACP enables very fast convergence on the VMware VDS, independent of the number of virtual machines or MAC addresses learned.

LACP has extensive support in vSphere, beginning with version 5.1.

Another key setting of the interface policy groups is the Attachable Entity Profile (AEP). AEPs can be considered the "where" of the fabric configuration and are used to group domains with similar requirements. AEPs are tied to interface policy groups. One or more domains (physical or virtual) can be added to an AEP. Grouping domains into AEPs and associating them enables the fabric to know where the various devices in the domain live, and the APIC can push and validate the VLANs and policy where they need to go. AEPs are configured in the Global Policies section in Fabric Access Policies.

Specific to vSphere, ESXi hosts can be connected to the fabric as part of a physical domain or a Virtual Machine Manager (VMM) domain, or both. An AEP should be used to identify a set of servers with common access requirements. It is possible to use anything from a single AEP for all servers to one AEP per server at the other extreme. The best design is to use an AEP for a group of similar servers, such as one AEP per vSphere cluster or perhaps one AEP per NSX transport zone. Figure 9, for instance, shows a VPC policy group associated with an AEP for a particular vSphere environment that has both a VMM and a physical domain associated.
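A minimal sketch of the Table 1 recommendation expressed programmatically is shown below, with hypothetical policy, AEP, and policy group names: an LACP active port-channel policy with graceful convergence and fast select hot standby ports, and a VPC interface policy group that references it and the AEP. The lacpLagPol and infraAccBndlGrp attributes should be verified against your APIC release.

# Minimal sketch: LACP active policy plus a VPC interface policy group.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Port-channel policy: LACP active with graceful convergence and fast select hot
# standby ports; 'susp-individual' is intentionally left out of the control flags.
lacp_policy = {"lacpLagPol": {"attributes": {
    "name": "lacp-active",
    "mode": "active",
    "ctrl": "graceful-conv,fast-sel-hot-stdby"}}}
s.post(f"{APIC}/api/mo/uni/infra/lacplagp-lacp-active.json", json=lacp_policy)

# VPC interface policy group (lagT='node') tied to the LACP policy and the AEP
vpc_policy_group = {"infraAccBndlGrp": {
    "attributes": {"name": "esxi-host01-vpc", "lagT": "node"},
    "children": [
        {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "lacp-active"}}},
        {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-vSphere-AEP"}}},  # hypothetical AEP
    ],
}}
s.post(f"{APIC}/api/mo/uni/infra/funcprof/accbundle-esxi-host01-vpc.json",
       json=vpc_policy_group)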

Each domain type will have associated encapsulation resources, such as a VLAN or VXLAN pool. Having interface configurations such as VPC policy groups associated to AEPs simplifies several tasks, for instance:

Network access that must be present for all servers of a particular kind can be implemented by associating the relevant EPGs to the AEP directly. For example, all ESXi hosts in a particular vSphere environment require connections to vMotion, management, or IP storage networks. Once the corresponding EPGs are attached to the AEP for the ESXi hosts, all leaf switches with connected ESXi hosts are automatically configured with those EPG network encapsulations and the required bridge domains, Switch Virtual Interfaces (SVIs), and VRF tables, without any further user intervention.

Encapsulation mistakes can be identified and flagged by the APIC. If the network administrator chooses a VLAN pool for a group of servers or applications, the VLAN ID pool will be assigned to the corresponding domain and, by means of the AEP, associated to the relevant interfaces. If a VLAN from the wrong pool is then chosen by mistake for a port or VPC connecting to a particular server type (as identified by the AEP), the APIC will flag an error on the port and the EPG.

Figure 9. ESXi host connected using a VPC policy group
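As a sketch of the first point above, an EPG can also be attached to the AEP through the API. The example below uses hypothetical tenant, EPG, AEP, and VLAN names and the infraGeneric/infraRsFuncToEpg objects, which is how the author understands the AEP-to-EPG association is modeled; verify the object names for your APIC release.

# Minimal sketch: attach a vMotion EPG to an AEP so that every leaf with ESXi
# hosts behind that AEP gets the EPG encapsulation automatically.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

binding = {
    "infraGeneric": {
        "attributes": {"name": "default"},
        "children": [
            {"infraRsFuncToEpg": {"attributes": {
                "tDn": "uni/tn-Virtual-Infrastructure/ap-vSphere/epg-vMotion",  # hypothetical EPG
                "encap": "vlan-1611",                                           # hypothetical VLAN
                "mode": "regular"}}}
        ],
    }
}
s.post(f"{APIC}/api/mo/uni/infra/attentp-vSphere-AEP/gen-default.json", json=binding)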

19 Mapping vsphere environments to Cisco ACI network and policy model The Cisco ACI solution provides multiple options for achieving various levels of isolation for applications. You can use different VRF tables to achieve Layer 3 level isolation, use different bridge domains and EPGs for Layer 2 isolation, and use contracts to implement Layer 2 4 access control between or within EPGs. In addition, the fabric administrator can also leverage the Cisco ACI tenancy model to create administrative isolation on top of logical network isolation. This can be useful for customers that have development or testing environments that must be completely isolated from production, or in environments where fabric resources are shared by multiple organizations. For an IT organization without specific requirements for multi-tenancy, vsphere infrastructure traffic is probably well served by keeping it under the common tenant in Cisco ACI. As mentioned earlier, the common tenant facilitates sharing resources with other tenants. Also, because the common tenant is one of the APIC system tenants it cannot be deleted. That said, for administrative reasons, it may be desirable to keep a dedicated tenant for infrastructure applications such as vsphere traffic. Within this tenant, connectivity for vsphere and other virtualization platforms and associated storage is configured using Cisco ACI application profiles. This method allows the fabric administrator to benefit from APIC automatically correlating events, logs, statistics, and audit data specific to the infrastructure. Figure 10 shows a tenant called Virtual-Infrastructure and indicates how the fabric administrator can see at a glance the faults that impact the tenant. In this view, faults are automatically filtered out to show only those relevant to infrastructure traffic. Figure 10. View of tenant-level fault aggregation in APIC 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 19 of 64
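The tenant-scoped fault view of Figure 10 can also be retrieved through the API. The sketch below, with a placeholder APIC address and the Virtual-Infrastructure tenant of this example, queries the faultInst class and filters on the fault DN; adjust the filter to your tenant name.

# Minimal sketch: list faults scoped to one tenant.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

resp = s.get(f"{APIC}/api/class/faultInst.json",
             params={"query-target-filter":
                     'wcard(faultInst.dn,"tn-Virtual-Infrastructure")'})
for obj in resp.json()["imdata"]:
    fault = obj["faultInst"]["attributes"]
    print(fault["severity"], fault["code"], fault["descr"])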

20 From a networking perspective, we recommend using different bridge domains for different vsphere traffic types. Figure 11 shows for a configuration example with a bridge domain and an associated subnet for each of the main types of traffic: management, vmotion, IP storage, hyper-converged storage (for example, VMware vsan or Cisco HyperFlex server nodes). Each bridge domain can be configured with large subnets, and can expand across the fabric, serving many clusters. In its simplest design option, all VMkernel interfaces for specific functions are grouped into a single EPG by traffic type. Figure 11. Configuration example with a bridge domain and associated subnet for each main traffic type Within each Bridge Domain, traffic sources are grouped into EPGs. Each ESXi host therefore represents not one but a number of endpoints, with each VMkernel interface being an endpoint in the fabric with different policy requirements. Within each of the infrastructure bridge domains, the VMkernel interfaces for every specific function can be grouped into a single EPG per service as shown in Figure 11. Obtaining per-cluster visibility in APIC The previous model of one EPG or bridge domain per vsphere service enables the simplest configuration possible. It is similar to legacy deployments where a single VLAN is used for all vmotion VMKNICs, for instance, albeit with the benefits of a distributed default gateway, Layer 2 and Layer 3 ECMP, and so forth. Although simple is usually good, such an approach limits the understanding of the vsphere infrastructure at the network level. For example, if you have 10 vsphere clusters, each of which has 32 servers, you will have 320 endpoints in the vmotion EPG. By looking at any two endpoints, it is impossible to understand if vmotion traffic was initiated by vcenter, as is the case inside a VMware Distributed Resource Scheduler (DRS) cluster, or by an administrator, if it was between two clusters. It may be convenient to represent the vsphere cluster concept in the network configuration. The Cisco ACI EPG model enables fabric administrators to accomplish this without imposing changes in the IP subnetting for the services Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 20 of 64
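A minimal sketch of this design expressed through the API follows, assuming a hypothetical gateway prefix (the text leaves the actual subnets unspecified) and the tenant and VRF names used earlier: one bridge domain for vMotion with its subnet, and a single EPG for all vMotion VMkernel interfaces.

# Minimal sketch: one bridge domain plus one EPG for all vMotion VMkernel interfaces.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

config = {
    "fvTenant": {
        "attributes": {"name": "Virtual-Infrastructure"},
        "children": [
            {"fvBD": {"attributes": {"name": "vmotion_bd",
                                     "arpFlood": "yes",
                                     "unicastRoute": "yes",
                                     "unkMacUcastAct": "proxy"},    # hardware proxy
                      "children": [
                {"fvRsCtx": {"attributes": {"tnFvCtxName": "Infra-VRF"}}},
                {"fvSubnet": {"attributes": {"ip": "10.11.0.1/16"}}}]}},   # hypothetical gateway
            {"fvAp": {"attributes": {"name": "vSphere"}, "children": [
                {"fvAEPg": {"attributes": {"name": "vMotion"}, "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "vmotion_bd"}}}]}}]}},
        ],
    }
}
s.post(f"{APIC}/api/mo/uni.json", json=config)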

For instance, VMkernel interfaces for each function may be grouped on a per-rack or per-vSphere-cluster basis. Again, this does not have to impact subnetting, as multiple EPGs can be associated to the same bridge domain, and therefore all clusters can still have the same default gateway and subnet for a particular traffic type. This approach, shown in Figure 12, enables more granular control and provides a simple and cost-effective way for the fabric administrator to get visibility by cluster. APIC automatically correlates statistics, audit data, events, and health scores at the EPG level, so in this design they represent the per-rack or per-cluster level.

An alternative design can make use of a per-cluster EPG for each traffic type. This option requires extra configuration that is easy to automate at the time of cluster creation. An orchestrator such as Cisco UCS Director or vRealize Orchestrator can create the EPG at the same time the cluster is being set up. This method has no impact on IP address management, as all clusters can still share the same subnet, and it takes advantage of APIC automatic per-application and per-EPG statistics, health scores, and event correlation.

Figure 12. Configuration example with VMkernel interfaces grouped by function and cluster

Let's continue looking at vMotion traffic as an example. Within a vSphere cluster, all VMkernel interfaces for vMotion are grouped into an EPG. EPG traffic statistics now represent vMotion within the cluster. Figure 13 shows how the fabric administrator can look at the tenant-level statistics to view an aggregate of all the traffic for the entire vSphere infrastructure (not virtual machine traffic, only infrastructure traffic), then drill down into vMotion-specific traffic to identify a spike, and finally check that the vMotion activity was within a specific EPG for the COMPUTE-01 cluster.
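A minimal sketch of such an orchestration step is shown below, reusing the hypothetical tenant, application profile, and bridge domain names from the earlier examples: one EPG per infrastructure service is added for the new cluster, all tied to the existing bridge domains so that subnetting is untouched.

# Minimal sketch: create per-cluster EPGs for a newly provisioned vSphere cluster.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

def create_cluster_epgs(cluster: str) -> None:
    """Add one EPG per infrastructure service for the given cluster name."""
    services = {"vmotion": "vmotion_bd", "Storage": "Storage_BD", "MGMT": "MGMT_BD"}
    for service, bd in services.items():
        epg_name = f"{service}_vmknics_{cluster}"
        epg = {"fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd}}}],
        }}
        # Application profile 'vSphere' under tenant 'Virtual-Infrastructure' (hypothetical)
        s.post(f"{APIC}/api/mo/uni/tn-Virtual-Infrastructure/ap-vSphere/epg-{epg_name}.json",
               json=epg)

create_cluster_epgs("COMPUTE-01")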

Figure 13. Monitoring vMotion-specific traffic on a per-cluster basis

The same approach works for iSCSI, Cisco HyperFlex, or VSAN traffic. It is convenient for troubleshooting and capacity planning to be able to correlate traffic volumes to a specific cluster and/or to communications between clusters.

Table 2 summarizes the recommended bridge domains, their settings, and associated EPGs to provide vSphere infrastructure connectivity, whether single or multiple EPGs are used.

Table 2. Recommended bridge domains, settings, and EPGs for vSphere infrastructure connectivity

MGMT_BD
Settings: Hardware Proxy: Yes; ARP Flooding: Yes; L3 Unknown Multicast Flooding: Flood; Multi Destination Flooding: Flood in BD; Unicast Routing: Yes; Enforce Subnet Check: No
Subnet(s): Management subnet (example: /16)
EPGs: MGMT_vmknics, vcenter_servers, vcenter_db

vmotion_bd
Settings: Hardware Proxy: Yes; ARP Flooding: Yes; L3 Unknown Multicast Flooding: Flood; Multi Destination Flooding: Flood in BD; Unicast Routing: Yes; Enforce Subnet Check: No
Subnet(s): vMotion subnet (example: /16)
EPGs: vmotion_vmknics_cluster1, vmotion_vmknics_cluster2, ..., vmotion_vmknics_clustern

Storage_BD
Settings: Hardware Proxy: Yes; ARP Flooding: Yes; L3 Unknown Multicast Flooding: Flood; Multi Destination Flooding: Flood in BD; Unicast Routing: Yes; Enforce Subnet Check: No
Subnet(s): iSCSI subnet (example: /16)
EPGs: Storage_vmknics_Cluster1, Storage_vmknics_Cluster2, ..., Storage_vmknics_ClusterN
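The GUI setting names in Table 2 map to attributes of the fvBD object in the API: Hardware Proxy to unkMacUcastAct, ARP Flooding to arpFlood, L3 Unknown Multicast Flooding to unkMcastAct, Multi Destination Flooding to multiDstPktAct, and Unicast Routing to unicastRoute. The loop below is a sketch of that mapping with hypothetical tenant and VRF names; the enforce-subnet-check setting is left at its default here, so treat this as an assumption to validate rather than a definitive recipe.

# Minimal sketch: create the three infrastructure bridge domains with the
# settings listed in Table 2.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

table2_settings = {
    "unkMacUcastAct": "proxy",      # Hardware Proxy: Yes
    "arpFlood": "yes",              # ARP Flooding: Yes
    "unkMcastAct": "flood",         # L3 Unknown Multicast Flooding: Flood
    "multiDstPktAct": "bd-flood",   # Multi Destination Flooding: Flood in BD
    "unicastRoute": "yes",          # Unicast Routing: Yes
}

for bd_name in ("MGMT_BD", "vmotion_bd", "Storage_BD"):
    bd = {"fvBD": {
        "attributes": dict(name=bd_name, **table2_settings),
        "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": "Infra-VRF"}}}],
    }}
    s.post(f"{APIC}/api/mo/uni/tn-Virtual-Infrastructure/BD-{bd_name}.json", json=bd)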

A common question is whether a subnet must be configured under the bridge domain for services that will not be routed. For instance, for services like vMotion, NFS, or iSCSI you have the option not to configure a subnet on the bridge domain. However, if no subnet is configured, then Address Resolution Protocol (ARP) flooding must be configured when hardware proxy is used. ARP flooding is not strictly required as long as a subnet is configured on the bridge domain, because the Cisco ACI spines will perform ARP gleaning when hardware proxy is configured and IP routing is enabled on the bridge domain.

Securing vSphere infrastructure

The designs outlined above include examples of using a single EPG per service and multiple EPGs per service. When all vMotion VMKNICs are on the same EPG, it is clear that vMotion can work, because the default policy for traffic internal to an EPG is to allow all communications. When using multiple EPGs, however, the default is to block all communication between different EPGs. Therefore, for the model in Figure 12 to work and allow intercluster vMotion, this default must be changed. One way is to place all the EPGs in Figure 12 in the preferred group. Another way is to disable policy enforcement on the VRF table where these EPGs belong. With either of those two approaches, the fabric behaves like a traditional network, allowing traffic connectivity within and between subnets. However, it is safer and recommended to implement a zero-trust approach to infrastructure traffic.

Let's use the vMotion example from the previous section to illustrate how to use the Cisco ACI contract model to secure the vSphere infrastructure using a zero-trust model. In the design where an EPG is used for each service and cluster, intercluster vMotion communication requires inter-EPG traffic. Fabric administrators can leverage the Cisco ACI contract model to ensure that only vMotion traffic is allowed within the vMotion EPG or between vMotion EPGs (or both). Figure 14 shows an example where per-cluster EPGs have been configured for vMotion, all within the same bridge domain (same subnet). A contract called vmotion-traffic is configured that allows only the required ports and protocols for vSphere vMotion. 7 This contract can be associated to all vMotion EPGs. To allow vMotion traffic between clusters, all vMotion EPGs will consume and provide the contract. To allow vMotion traffic within a cluster, but restricted to vMotion ports and protocols, each vMotion EPG will add an intra-EPG contract association. Once this is done, only vMotion traffic is accepted by the network from the vMotion VMkernel interfaces; the fabric will apply the required filters on every access leaf in a distributed way. The same concept can be applied to NFS, iSCSI, management, and so forth: contracts can be used to allow only the required traffic.

Fabric administrators must pay special attention when configuring intra-EPG contracts. Support for intra-EPG contracts requires Cisco ACI 3.0 or later and is provided only on Cisco Nexus EX and FX leaf switches or later models. In addition, when intra-EPG contracts are used, the fabric implements proxy ARP. Therefore, when the contract is applied to an EPG with known endpoints, traffic will be interrupted until the endpoint ARP cache expires or is cleared. Once this is done, traffic will resume for the traffic allowed by the contract filters. (This interruption is not an issue in greenfield deployments.)
7 This VMware Knowledge Base article illustrates the required ports for different vsphere services depending on the ESXi release: Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 23 of 64
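A minimal sketch of the vmotion-traffic contract described above is shown below. It assumes vMotion on its standard TCP port 8000, hypothetical tenant and EPG names, and the fvRsIntraEpg object for the intra-EPG association; validate the ports against the Knowledge Base article referenced in the footnote and the class names against your APIC release.

# Minimal sketch: contract that only permits vMotion (TCP 8000), provided and
# consumed by every per-cluster vMotion EPG and also applied intra-EPG.
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
s = requests.Session()
s.verify = False
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

contract = {
    "fvTenant": {
        "attributes": {"name": "Virtual-Infrastructure"},            # hypothetical tenant
        "children": [
            {"vzFilter": {"attributes": {"name": "vmotion-tcp-8000"}, "children": [
                {"vzEntry": {"attributes": {"name": "tcp8000", "etherT": "ip", "prot": "tcp",
                                            "dFromPort": "8000", "dToPort": "8000"}}}]}},
            {"vzBrCP": {"attributes": {"name": "vmotion-traffic"}, "children": [
                {"vzSubj": {"attributes": {"name": "vmotion"}, "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "vmotion-tcp-8000"}}}]}}]}},
        ],
    }
}
s.post(f"{APIC}/api/mo/uni.json", json=contract)

# Provide, consume, and apply the contract intra-EPG on each per-cluster vMotion EPG
for cluster in ("cluster1", "cluster2"):
    epg_name = f"vmotion_vmknics_{cluster}"
    epg = {"fvAEPg": {
        "attributes": {"name": epg_name},
        "children": [
            {"fvRsProv": {"attributes": {"tnVzBrCPName": "vmotion-traffic"}}},
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "vmotion-traffic"}}},
            {"fvRsIntraEpg": {"attributes": {"tnVzBrCPName": "vmotion-traffic"}}},
        ],
    }}
    s.post(f"{APIC}/api/mo/uni/tn-Virtual-Infrastructure/ap-vSphere/epg-{epg_name}.json",
           json=epg)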

24 Figure 14. Configuration with multiple vsphere DRS clusters using per-cluster vmotion EPGs The fabric administrator has a view of specific VMkernel interfaces, easily mapping the IP address or MAC address to the appropriate access leaf, access port, or VPC with cluster context and filter information, as shown in Figure 14. One primary advantage of the EPG model for vsphere infrastructure is the security enhancement it provides. Another important advantage is operational, as it is easy for each vsphere administrator to view statistics and health scores by cluster, as illustrated in Figure 13. VMware vswitch design and configuration considerations Another important consideration is how to map EPGs to port group configurations in vcenter. The vsphere VDS can be connected to the Cisco ACI leaf switches as a physical domain, as a VMM domain, or both. Using a VMM domain integration provides various benefits. From a provisioning standpoint, it helps ensure that the vswitch dvuplinks are configured in a way that is consistent with the access switch port configuration. It also helps to automate creating dvportgroups with the correct VLAN encapsulation for each EPG and to avoid configuring the EPG on specific access switches or ports. From an operational standpoint, using a VMM domain increases the fabric administrator s visibility into the virtual infrastructure, as shown in Figure Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 24 of 64

Figure 15. Example of data from one hypervisor in one of the VMM domains

When using a VMM domain, the APIC leverages vCenter's northbound API to get access to the VDS, giving the fabric administrator several advantages:

- Ensures that the dvuplinks configuration of the VDS matches that of the fabric.
- Monitors the VDS statistics from the APIC, to view dvuplink, VMKNIC, and virtual machine level traffic statistics.
- Automates dvportgroup configuration by mapping EPGs created on APIC to the VMM domain. APIC creates a dvportgroup on the VDS and automatically assigns a VLAN from the pool of the VMM domain, thus completely automating all network configurations.
- Automates EPG and VLAN provisioning across physical and virtual domains. When the vCenter administrator assigns a VMKNIC to a dvportgroup provisioned by the APIC, the latter automatically configures the EPG encapsulation on the required physical ports of the switch connecting to the server.
- Enables the fabric administrator to have a contextual view of the virtualization infrastructure. APIC provides a view of the vCenter inventory and uses it to correlate virtual elements with the fabric.

Figure 16 represents a VDS with redundant uplinks to a Cisco ACI leaf pair. The VDS can be automatically created by APIC when the VMM domain is configured, or it can be created by the vCenter administrator prior to the VMM configuration. If it was created by the vCenter administrator, the fabric administrator must use the correct VDS name at the moment of creation of the VMM domain. The APIC ensures that the dvuplinks have the correct configuration.

Figure 16. VDS with redundant uplinks to a Cisco ACI leaf pair

When the fabric administrator creates EPGs for vSphere services, as in Figures 11 and 12 earlier in the document, they only need to associate the EPGs to the VMM domain. No further configuration is required; the APIC configures the required dvportgroups and physical switch ports automatically.

If a VMM domain is not used, then the EPGs for vSphere infrastructure must be mapped to a physical domain. The dvportgroups must be configured separately by the vCenter administrator using the VLAN encapsulation communicated by the fabric administrator. In this case, it is best to use statically assigned VLANs. The fabric administrator then needs to configure the required EPGs on the access leaf switches. Although the VMM configuration offers the simplest configuration, Cisco ACI also offers advantages in simplicity and automation compared to traditional fabrics when using physical domains. For instance, going back to the design in Figure 11, which is partially reproduced in Figure 17, we can see that this design approach is extremely simple to configure in Cisco ACI, even for large deployments using physical domains.

Assuming that an Attachable Entity Profile (AEP) has been configured for the ESXi servers in a particular vSphere environment, it is sufficient to associate the EPGs from Figure 11 to the corresponding AEP. The APIC will take care of automatically configuring the correct VLAN encapsulation and the distributed default gateway for the corresponding subnet on every leaf switch where the AEP is present. Adding new ESXi hosts requires only configuring a VPC and selecting the correct AEP, nothing more. As illustrated in Figure 17, associating the service EPGs to the AEP automatically provisions all the required VLAN encapsulations, bridge domains, SVIs, and VRF tables on all required access switches.

Figure 17. Configuring EPGs at the AEP used on all VPC policy groups for all ESXi hosts on the vSphere clusters

We recommend using VMM domains for the simplified configuration and enhanced visibility they provide, as outlined above. Table 3 compares the use of VMM domains to that of physical domains, specifically in the context of vSphere infrastructure.

Table 3. Comparing VMM and physical domains for vSphere infrastructure

- APIC API connection to vCenter for VDS management: Physical domain: not required. VMM domain: required.
- dvuplinks port configuration: Physical domain: manual, by the virtual administrator. VMM domain: automated by APIC.
- dvportgroups configuration: Physical domain: manual, by the virtual administrator. VMM domain: automated by APIC.
- VLAN assignment: Physical domain: static. VMM domain: dynamic or static.
- EPG configuration on leaf switches: Physical domain: static path or EPG mapped to AEP, or both. VMM domain: automated by APIC.
- Virtual Machine Kernel Network Interface Card (VMKNIC) visibility in the fabric: Physical domain: IP and MAC addresses. VMM domain: IP and MAC addresses, hypervisor association and configuration.
- Virtual Machine Network Interface Card (VMNIC) visibility in the fabric: Physical domain: not available. VMM domain: hypervisor association, faults, statistics.
- Virtual machine level visibility: Physical domain: IP and MAC addresses. VMM domain: virtual machine objects, virtual NIC configuration, statistics.
- Consolidated statistics: Physical domain: not available. VMM domain: APIC can monitor VDS statistics.

Starting with Cisco ACI 3.1, it is also possible to configure a read-only VMM domain. In that mode, APIC interfaces with an existing VDS in view-only mode. A read-only VMM domain does not enable automated configuration by APIC but provides many of the visibility capabilities outlined above.
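To illustrate the provisioning difference summarized in Table 3, the following hedged Python sketch associates an existing EPG with a VMware VMM domain through the APIC REST API, which is what triggers APIC to create the matching dvportgroup on the VDS. The tenant, application profile, EPG, and VMM domain names, as well as the APIC address and credentials, are illustrative assumptions.

```python
"""Hedged sketch: mapping an existing EPG to a VMware VMM domain on APIC."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"   # lab assumptions

payload = {
    "fvAEPg": {
        "attributes": {"name": "vMotion-Cluster1"},
        "children": [
            # tDn points at the VMware VMM domain; APIC then creates the dvportgroup
            # and assigns a VLAN from the domain's dynamic pool.
            {"fvRsDomAtt": {"attributes": {
                "tDn": "uni/vmmp-VMware/dom-VDS-Infra",
                "resImedcy": "immediate",      # resolve the policy on the leaf right away
                "instrImedcy": "immediate"}}}
        ]}}

s = requests.Session()
s.verify = False                                                     # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
# Post directly against the EPG's DN inside the tenant and application profile.
s.post(f"{APIC}/api/mo/uni/tn-vSphere-Infra/ap-Infra-Services/epg-vMotion-Cluster1.json",
       json=payload).raise_for_status()
```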

Option 1. Running NSX-v security and virtual services using a Cisco ACI integrated overlay for network virtualization

Some customers are interested in using VMware NSX security capabilities, or perhaps using NSX integration with specific ecosystem partners, but do not want to incur the complexity associated with running the NSX overlay, such as deploying and operating the NSX VXLAN controllers, Distributed Logical Router (DLR) control virtual machines, ESG virtual machines, and so forth. Sometimes separation of roles must also be maintained, with network teams responsible for all network connectivity and virtualization or security teams responsible for NSX security features. These customers can utilize the integrated overlay capabilities of the Cisco ACI fabric for automated and dynamic provisioning of connectivity for applications while using NSX network services such as the ESG load balancer, Distributed Firewall, and other NSX security components and NSX technology partners.

Using NSX Distributed Firewall and Cisco ACI integrated overlays

This design alternative uses supported configurations on both Cisco ACI and VMware NSX. The Cisco ACI fabric administrator works with the VMware vSphere administrator to define a VMM domain on the APIC for vSphere integration, as described in earlier parts of this document. The VMM domain enables APIC to have visibility into the vCenter inventory. When the VMM domain is configured, the APIC uses a standard vSphere Distributed Switch (VDS) that corresponds to the VMM domain on APIC. As this document is being written, only VLAN mode is supported for the VMM integration, and APIC can use any supported VDS version. The APIC VMM domain integration interacts with the VDS through vCenter in the same way as when PowerShell, vRealize Orchestrator, or another orchestration system is used.

The NSX administrator installs the NSX Manager and registers it with the same vCenter8 used for the APIC VMM domain. Then the NSX administrator continues with the NSX installation by preparing hosts in the vSphere clusters to install the NSX kernel modules. However, there is no need to deploy or maintain NSX controllers, nor to configure VXLAN or NSX transport zones. The option to deploy NSX security features without adding the NSX controllers and VXLAN configuration is well known from vCloud Networking and Security. The NSX Distributed Firewall, Service Composer, Guest Introspection, and Data Security features are fully functional and can work without the use of NSX overlay networking features. This type of installation is shown in Figure 18.

8 At the time of this writing there must be a 1:1 mapping between NSX Manager and vCenter. APIC does not have this limitation and can support multiple VMM domains at once, which makes multiple vCenter or other vendor VMM domains possible. Therefore, you can have multiple vCenter and NSX Manager pairs mapped to a single Cisco ACI fabric.

Figure 18. Installation screens for deploying an NSX environment and its security features

In this model of operation, the connectivity requirements for virtual machine traffic are provisioned on APIC by creating bridge domains and EPGs, instead of creating logical switches and DLRs. The required bridge domains and EPGs can be configured on APIC from a variety of interfaces, including the APIC GUI, NX-OS CLI, or APIC REST API; through a cloud management platform; or from the vCenter web client by using the Cisco ACI vCenter plug-in mentioned earlier in this document. APIC also supports out-of-the-box integration with key cloud management platforms, including VMware vRealize Automation, Cisco CloudCenter, and others, to automate these configurations. In a nutshell, instead of deploying NSX ESG and DLR virtual machines and configuring NSX logical switches, the administrator defines VRF tables, bridge domains, and EPGs and maps the EPGs to the VMM domain. Instead of connecting virtual machines to logical switches, the virtual administrator connects virtual machines to the dvportgroups created to match each EPG.

We recommend using a different Cisco ACI tenant for virtual machine data traffic from the one used for vSphere infrastructure connectivity. This method ensures isolation of user data from infrastructure traffic as well as separation of administrative domains. For instance, a network change for a user tenant network can never affect the vSphere infrastructure, and vice versa. Within the user tenant (or tenants), network connectivity is provided using VRF tables, bridge domains, and EPGs with associated contracts to enable Layer 3 and Layer 2 forwarding as described for infrastructure traffic earlier in this document. Virtual machines leverage Cisco ACI distributed default gateway technology in hardware to enable optimal distributed routing without performance compromises, regardless of the nature of the workload (physical, virtual, containers) or traffic flow (east-west and north-south).

We can illustrate this model using a classic example of a three-tier application with web, application, and database tiers, as shown in Figure 19. The web and app tiers are running in vSphere virtual machines. The database tier is running on bare metal. Additional requirements include low-latency access from the WAN, load balancing, and secure connectivity between tiers.

Figure 19. A typical three-tier application with a nonvirtualized database

This setup can be configured using the fabric for distributed routing and switching. Figure 20 shows a VRF table, three bridge domains for the various application tiers, and an L3Out interface to connect to endpoints external to the fabric. Cisco ACI provides distributed routing and switching. Connectivity between bare metal servers and virtual machines is routed internally in the fabric, and connectivity with external endpoints is routed via physical ports that are part of the L3Out.

Figure 20. The Cisco ACI fabric provides programmable distributed routing and switching for all workloads

Figure 21 shows the rest of the logical configuration, with three EPGs corresponding to our application tiers (web, application, database). The DB_Prod EPG is mapped to a physical domain to connect bare metal databases, while the Web_Prod and App_Prod EPGs are mapped to the vSphere VMM domain so that the APIC automatically creates corresponding dvportgroups on the VDS using standard vSphere API calls to vCenter. The port groups are backed by locally significant VLANs that are dynamically assigned by APIC. All routing and switching functionality is done in hardware in the Cisco ACI fabric. The default gateway for virtual machines is implemented by the distributed default gateway on every Cisco ACI leaf switch that has attached endpoints on those EPGs. The EPGs can map to both virtual and physical domains and enable seamless connectivity between virtual machines and nonvirtual devices. In the vSphere domain, each EPG corresponds to a dvportgroup that is automatically created by APIC by means of the VMM domain integration. In the figure, the Web_Prod and App_Prod EPGs are also placed in the Preferred group, allowing unrestricted connectivity between these EPGs, just as if they were logical switches connected to a DLR. The only difference from using NSX DLRs is that if two virtual machines are located on the same ESXi host, traffic is routed at the leaf. However, on a given vSphere cluster only a small percentage of the traffic can ever stay local in the hypervisor, and switching through the leaf provides low-latency, line-rate connectivity.
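As a rough illustration of how compact this model is, the following hedged Python sketch builds the configuration of Figures 20 and 21 through the APIC REST API: one VRF, three bridge domains with their distributed default gateways, and three EPGs, with Web_Prod and App_Prod mapped to the VMM domain and marked as Preferred group members and DB_Prod mapped to a physical domain. Subnets, domain names, credentials, and the preferred-group attribute are illustrative assumptions to validate against your environment; note also that the Preferred group must additionally be enabled at the VRF level.

```python
"""Hedged sketch: the three-tier example expressed as a single APIC REST payload."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"   # lab assumptions

VMM = "uni/vmmp-VMware/dom-VDS-Prod"    # assumption: VMM domain for the vSphere VDS
PHYS = "uni/phys-BareMetal"             # assumption: physical domain for the DB servers


def bd(name: str, gw: str) -> dict:
    """Bridge domain with a pervasive SVI (distributed default gateway)."""
    return {"fvBD": {"attributes": {"name": name, "unicastRoute": "yes"},
                     "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": "Prod-VRF"}}},
                                  {"fvSubnet": {"attributes": {"ip": gw}}}]}}


def epg(name: str, bd_name: str, dom_tdn: str, pref: str = "exclude") -> dict:
    return {"fvAEPg": {"attributes": {"name": name, "prefGrMemb": pref},
                       "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}},
                                    {"fvRsDomAtt": {"attributes": {"tDn": dom_tdn}}}]}}


payload = {"fvTenant": {"attributes": {"name": "Prod"}, "children": [
    {"fvCtx": {"attributes": {"name": "Prod-VRF"}}},
    bd("Web_BD", "10.10.1.1/24"), bd("App_BD", "10.10.2.1/24"), bd("DB_BD", "10.10.3.1/24"),
    {"fvAp": {"attributes": {"name": "3Tier-App"}, "children": [
        # Web and App go in the Preferred group so the NSX DFW governs east-west policy.
        epg("Web_Prod", "Web_BD", VMM, pref="include"),
        epg("App_Prod", "App_BD", VMM, pref="include"),
        # DB stays outside the Preferred group; ACI contracts protect the bare metal tier.
        epg("DB_Prod", "DB_BD", PHYS)]}}]}}

s = requests.Session()
s.verify = False                                                     # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
s.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```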

Figure 21. Basic example of an application network profile, with three EPGs and contracts between them

In this way, the administrator can easily ensure that all EPGs created for connecting virtual machines (mapped to the VMM domain) are placed in the Preferred group and that policy is controlled using the NSX Distributed Firewall (DFW). The advantage of this model of network virtualization is that no gateways are required when a virtual machine needs to communicate with endpoints outside of vSphere. In addition, as the NSX DFW cannot configure security policies for physical endpoints, the DB_Prod EPG and any other bare metal or external EPG can be placed outside of the Preferred group, and Cisco ACI contracts can be used to provide security. This functionality is not limited to communication with bare metal servers. For instance, in the same way, the fabric administrator can enable a service running on a Kubernetes cluster to access a vSphere virtual machine without involving any gateways.

The vSphere administrator can create virtual machines and place them into the relevant dvportgroups using standard vCenter processes, API calls, or both. This process can be automated through vRealize Automation, Cisco CloudCenter, or other platforms. The virtual machines can then communicate with other virtual machines in the different application tiers as per the policies defined in the NSX DFW. All routing and bridging required between virtual machines happens in a distributed way in the fabric hardware using low-latency switching. This means that virtual machines can communicate with bare metal servers or containers on the same or on different subnets without any performance penalties or bottlenecks. Security policies between endpoints that are not inside vSphere can be configured using Cisco ACI contracts. The vSphere hosts have the NSX kernel modules running, and the NSX administrator can use all NSX security features, including NSX Distributed Firewall and Data Security for Guest Introspection, as shown in Figure 22. The administrator can also add antivirus or antimalware partner solutions.

Figure 22. Example of NSX security features: Service Composer is used to create security groups for application and web tiers routed by the Cisco ACI fabric

The approach of combining NSX security features with the Cisco ACI integrated overlay offers the clear advantage of better visibility, as shown in Figure 23. Virtual machines that are part of a security group are also clearly identified as endpoints in the corresponding EPG.

Figure 23. Virtual machines that are part of an NSX security group are visible inside the APIC EPG

It is important to note that, in this model, the use of Cisco ACI contracts is entirely optional. Since we are assuming that the NSX DFW will be used to implement filtering or insert security services, the fabric administrator may choose to disable contract enforcement inside the VRF tables to eliminate the need for contracts. However, instead of completely disabling policy inside the VRF, we recommend placing the EPGs mapped to the VMM domain in the Preferred group to allow open communication between them while allowing the fabric administrator to use contracts for other EPGs inside the same VRF.

This design approach does not limit NSX to security features only. Other services, such as using the ESG for load balancing, can also be used.

Both the NSX Distributed Firewall and Cisco ACI security are limited to Layer 2-4 filtering. Full-featured firewalls are also capable of filtering based on URLs, DNS, IP options, packet size, and many other parameters. In addition, Next-Generation Firewalls (NGFWs) are capable of performing deep packet inspection to provide advanced threat management. It may be desirable to insert more advanced security services to protect north-south or east-west traffic flows, or both. In this context, east-west commonly refers to traffic between virtual machines in vSphere clusters with a common NSX installation. Any other traffic must be considered north-south for NSX, even if it is traffic to endpoints connected in the fabric. The need for service insertion is not limited to NGFWs but extends to other security services such as next-generation network intrusion prevention, or to advanced Application Delivery Controllers (ADCs) that can also perform SSL offload in hardware for high-performance web applications.

Figure 24 illustrates the various insertion points available in this design. Because Cisco ACI provides the overlay where all endpoints are connected, organizations can use Cisco ACI service graphs to insert services such as physical and virtual NGFWs between tiers without affecting NSX service insertion, which does not leverage VXLAN and is independent of the NSX controllers. NSX service insertion is implemented using the NSX API and can be used to insert virtual services between virtual machines running in the vSphere hypervisors.

Figure 24. Advanced services can be added using both Cisco ACI and NSX service partners

Option 2. NSX-v overlays as an application of the Cisco ACI fabric

Using the Cisco ACI integrated overlay offers many advantages, and customers who look at NSX to implement better security for their vSphere environments are encouraged to follow that model. Here we will explain instead how to best configure Cisco ACI in the case where customers also use NSX overlay capabilities.

NSX-v VXLAN architecture

This section describes various alternatives for running NSX-v VXLAN overlay capabilities on a Cisco ACI fabric. Figure 25 shows a general representation of the reference architecture for NSX-v as outlined in the NSX for vSphere Design Guide. In the NSX reference architecture, VMware recommends dedicating compute resources for user applications and for running NSX Edge functions, all connected through a leaf-and-spine fabric to maximize bisection bandwidth. In the figure, servers that cannot be part of the NSX overlay are shown in blue, including bare metal servers, other hypervisors, container platforms, and vSphere clusters without NSX installed.

Figure 25. Compute clusters, management cluster, and edge cluster for a multiple vCenter solution

In the NSX-v architecture, the NSX Manager is the heart of the system and has a 1:1 relationship with a single vCenter. Therefore, organizations that require using multiple vCenter servers also must run and maintain multiple NSX Managers. The controller clusters can be dedicated per NSX Manager or shared among up to eight NSX Managers if the customer can use universal configurations, that is, configurations that are synchronized across NSX Managers. The use of universal configurations may impose limits on the overall system scale and prevent certain features and configurations from being used, such as using rich vCenter object names for defining microsegmentation.

The NSX reference design shown in Figure 25 makes a clear distinction between ESXi compute clusters dedicated to running applications (that is, vSphere clusters running user or tenant virtual machines) and those clusters dedicated to running NSX routing and services virtual machines (that is, NSX Edge services gateway virtual machines). The architecture calls these dedicated clusters compute clusters and edge clusters. VMware Validated Design documents for the Software-Defined Data Center (SDDC) also describe converged designs, in which ESG virtual machines can coexist with user virtual machines, eliminating the need for dedicated Edge clusters. The key implication, as we will see, is that ESG virtual machines that route between the NSX overlay and the rest of the IT infrastructure require that routing configurations and policies be defined on the physical switches they connect to. For this reason, at scale, it becomes complicated, if not impossible, to enable the ESG virtual machines to be placed anywhere in the infrastructure, especially when using a Layer 3 access leaf-and-spine fabric design. This limitation does not occur when using Cisco ACI.

NSX VXLAN: Understanding BUM traffic replication

The NSX network virtualization capabilities rely on implementing a software VXLAN Tunnel Endpoint (VTEP) on the vSphere Distributed Switch (VDS). This is accomplished by adding NSX-v kernel modules to each ESXi host. The hypervisor software's lifecycle management is thereby tied to that of NSX. The NSX kernel modules are programmed from both NSX Manager and the NSX Controller cluster using VMware proprietary protocols.

During the NSX host preparation process for an ESXi cluster, the NSX administrator enables VXLAN on ESXi clusters from the NSX Manager. This process requires selection of a VDS that must already exist during the NSX host preparation. At this time, a new VMkernel interface is created on each ESXi host in the given cluster using a dedicated TCP/IP stack specific to the VTEP traffic. The NSX Manager also creates a new dvportgroup on the VDS. The VXLAN VMKNIC is placed in that dvportgroup and given an IP address: the NSX VTEP address. The NSX VTEP address can be configured through either DHCP or an NSX Manager-defined IP pool. The NSX VTEP IP address management must be considered carefully, because it may have an impact on how NSX handles broadcast, unknown unicast, and multicast traffic (known as BUM traffic). NSX-v can handle BUM replication in three ways:

- Multicast mode: This mode does not require using NSX-v controllers and implies running VXLAN in flood-and-learn mode, with BUM replication accomplished by leveraging IP multicast in the underlay. This mode is essentially equivalent to the legacy vShield implementation and relies on Protocol Independent Multicast (PIM) or Internet Group Management Protocol (IGMP) in the network fabric for efficient multicast transport.
- Unicast mode: This mode requires using NSX-v controllers. In this mode, the ingress ESXi host receiving a BUM packet does head-end replication. For each VTEP subnet, one ESXi host is selected as a unicast VTEP (UTEP) proxy. The ingress ESXi host performs BUM replication by creating a copy for each other interested VTEP in its VTEP subnet, plus one copy for the UTEP proxy of each other VTEP subnet in the transport zone. The UTEP proxy on each VTEP subnet in turn replicates the packets for each VTEP known in its subnet. This is less than optimal in terms of CPU usage and bandwidth utilization if many VTEPs exist in a subnet.
- Hybrid mode: This mode also requires using the NSX-v controllers. It uses a combination of both unicast and multicast to handle flooding of BUM traffic. Within each VTEP subnet, an ESXi host is chosen as a multicast VTEP (MTEP) proxy. The ESXi host receiving a BUM packet sends it to the physical network encapsulated in IP multicast and relies on the network device to replicate it for all VTEPs in the subnet. All the VTEPs in the subnet subscribe to the same multicast group (send an IGMP join) to receive the BUM packet from the network device. The receiving host also sends the BUM traffic as unicast to the MTEP proxy of each of the other VTEP subnets, which in turn sends it to the physical network as multicast as well. In this model, the physical network does not need to route IP multicast, but it is assumed that Layer 2 multicast is supported.

NSX transport zones

As part of the NSX-v VXLAN configuration, the administrator must also define a transport zone to which ESXi clusters are mapped. A transport zone defines the scope of VXLAN segments (called logical switches) in NSX overlays. This mapping determines which virtual machines can access a specific logical network based on the cluster to which they are associated. The replication mode configured for each transport zone defines the default VXLAN control plane mode and can be set to multicast, unicast, or hybrid. More importantly, as mentioned, the transport zone defines the scope of a logical switch. A logical switch is a dvportgroup backed by a VXLAN segment created by NSX and is limited to a single transport zone.
This means that virtual machines in different transport zones cannot be on the same Layer 2 segment or use the same NSX constructs. Scalability limits also define how many vSphere DRS clusters can be part of a transport zone, so the scope of a logical switch is limited not by the physical network but by the supported server count per NSX transport zone (limited to 512 servers in NSX version 6.3). At scale, a typical NSX deployment therefore consists of multiple transport zones, requiring careful consideration of VDS and transport zone design, and of the alignment and connectivity between them. Virtual machines connected to different transport zones must communicate through gateways routing between them.

NSX VTEP subnetting considerations

When considering the NSX replication models explained earlier in this document, it is clear that VTEP subnetting has important implications for the system's behavior. For instance, if we imagine 250 VTEPs in a subnet, when using unicast mode, an ESXi host in the worst-case scenario would have to create around 250 copies of each received BUM packet. The same setup in hybrid mode would require creating a single copy, and the physical network would handle the replication. IGMP snooping can restrict the flooding in the fabric.

When deploying a traditional Layer 3 access fabric, subnetting for NSX VTEPs offers a single choice: one subnet per rack. This statement has three important implications:

- It becomes complicated to use NSX Manager IP pools for VTEP addressing unless vSphere clusters are limited to a single rack.
- Using DHCP for VTEP addressing requires configuring DHCP scopes for every rack, using option 82 as the rack discriminator.
- Applying security to prevent other subnets from accessing the VTEP subnets requires configuring ACLs on every ToR switch.

In contrast, when using a Cisco ACI fabric, subnets can span multiple racks and ToR switches, or even the entire fabric, for greater simplicity and flexibility in deployment. Deploying NSX on Cisco ACI enables customers to use IP pools or DHCP with complete flexibility.

Running NSX VXLAN on a Cisco ACI fabric

NSX VXLAN traffic must be carried on a Cisco ACI bridge domain (BD). NSX VXLAN traffic will primarily be unicast, but depending on the configuration, it may also require multicast. This requirement will influence the Cisco ACI bridge domain configuration; the best configuration will depend on the NSX BUM replication model that is prevalent in the design. Administrators select a BUM replication mode when configuring an NSX transport zone. This configuration may be overridden later by the settings of each logical switch. In other words, a transport zone may be configured to use hybrid replication mode, and subsequently a logical switch in that transport zone can use unicast mode instead. The following two sections cover the recommended Cisco ACI bridge domain and EPG configurations for NSX VTEP traffic for unicast and hybrid replication modes.

Bridge domain and EPG design when using NSX hybrid replication

If an NSX deployment will primarily use hybrid replication, the simplest and best performing design is to use a single bridge domain and one NSX VTEP subnet for each NSX transport zone, as illustrated in Figure 26. This example expands on the design for vSphere infrastructure outlined earlier in this document, using a bridge domain for each of the vSphere services: vMotion, IP storage, and so forth. A bridge domain is added with a subnet for the NSX VXLAN traffic. This configuration allows multiple vSphere clusters to share the same NSX VTEP pool while allowing complete flexibility in how clusters are deployed to racks. In this configuration, NSX administrators can use either IP pools or DHCP for VTEP addressing. Clusters can be deployed tied to a rack as shown in Figure 27, or they can expand across racks.
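To put rough numbers on the replication comparison above (including the 250-VTEP example), the following small Python sketch estimates the number of copies the ingress ESXi host itself has to create in unicast and hybrid modes. The formulas are simplified assumptions that ignore optimizations such as per-logical-switch VTEP pruning.

```python
"""Back-of-the-envelope sketch of head-end replication load (simplified assumptions)."""


def unicast_copies(vteps_in_local_subnet: int, other_vtep_subnets: int) -> int:
    # Ingress host replicates to every other VTEP in its subnet,
    # plus one copy to the UTEP proxy of each remote VTEP subnet.
    return (vteps_in_local_subnet - 1) + other_vtep_subnets


def hybrid_copies(other_vtep_subnets: int) -> int:
    # One Layer 2 multicast copy handed to the fabric for the local subnet,
    # plus one unicast copy to the MTEP proxy of each remote VTEP subnet.
    return 1 + other_vtep_subnets


if __name__ == "__main__":
    # The 250-VTEP subnet mentioned above, with three additional VTEP subnets.
    print("unicast:", unicast_copies(250, 3))   # 252 copies made by the ingress ESXi host
    print("hybrid :", hybrid_copies(3))         # 4 copies; the fabric replicates the rest
```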

Figure 26. Example of an NSX VTEP pool matching a Cisco ACI BD and EPG created for the transport zone

Since hybrid mode is used, the fabric must take care of NSX BUM traffic replication using Layer 2 multicast. To minimize unnecessary flooding, we recommend configuring the bridge domain with IGMP snooping and enabling IGMP querier functionality.
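A hedged sketch of what this single bridge domain and EPG might look like when pushed through the APIC REST API follows. The class and attribute names used for the IGMP snooping policy (igmpSnoopPol, fvRsIgmpsn, and the querier control flag) are assumptions to be verified against your APIC release, and all names, VRF, and subnet values are illustrative.

```python
"""Hedged sketch: single BD and EPG for the NSX VTEP subnet of one transport zone."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"   # lab assumptions

payload = {"fvTenant": {"attributes": {"name": "vSphere-Infra"}, "children": [
    # IGMP snooping policy with the querier enabled (assumed class/attribute names).
    {"igmpSnoopPol": {"attributes": {"name": "NSX-VTEP-Snoop",
                                     "adminSt": "enabled", "ctrl": "querier"}}},
    {"fvBD": {"attributes": {"name": "NSX-VTEP-BD", "unicastRoute": "yes"}, "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": "Infra-VRF"}}},
        {"fvSubnet": {"attributes": {"ip": "172.16.100.1/22"}}},     # VTEP pool gateway
        {"fvRsIgmpsn": {"attributes": {"tnIgmpSnoopPolName": "NSX-VTEP-Snoop"}}}]}},
    {"fvAp": {"attributes": {"name": "Infra-Services"}, "children": [
        {"fvAEPg": {"attributes": {"name": "NSX-VTEP"}, "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "NSX-VTEP-BD"}}},
            # VTEP VMKNICs always attach through a physical domain, not the VMM domain.
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-ESXi-Hosts"}}}]}}]}}]}}

s = requests.Session()
s.verify = False                                                     # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
s.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```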

Figure 27. Example showing a single BD and single EPG for all NSX VTEP interfaces

In this design model, if maximum simplicity is desired, a single EPG can be used for the NSX VTEP VMkernel IP addresses of all clusters. It is important to note that the EPG (or EPGs) dedicated to NSX VTEP VMKNIC interfaces must always be mapped to a physical domain. As explained earlier, the EPG can be mapped directly at the AEP level, thereby simplifying the configuration of the fabric.

However, as explained earlier for other vSphere services, it may be desirable to use a per-cluster EPG for the NSX VXLAN VTEP interfaces. This configuration is possible when using a single subnet for NSX VTEP addresses, as shown in Figure 28, and offers the fabric administrator better visibility and understanding of the virtual environment.

Figure 28. Example using a single BD for the NSX transport zone with per-cluster EPGs for NSX VTEP interfaces

Using one or multiple EPGs for the NSX VTEP VMkernel configuration does not complicate the NSX configuration for VXLAN. In all cases, the NSX VTEP is on the same subnet, associated to a Cisco ACI bridge domain, and can use the same VLAN encapsulation in the NSX host VXLAN configuration. However, for this to work, two considerations are important:

- If using multiple EPGs per cluster, the fabric administrator must configure local VLAN significance on the interface policy group to reuse the same VLAN encapsulation as for other EPGs.
- It is not possible to use the same VLAN for two EPGs mapped to the same bridge domain on the same leaf. Therefore, this model, if using a single bridge domain, works only if clusters do not expand onto multiple racks. If clusters expand onto multiple racks, different VLANs must be used per EPG to ensure that no VLAN conflicts occur.

Figure 29 illustrates again the advantages of per-cluster EPGs. In this case, the NSX VTEP VMkernel interfaces are grouped by cluster, and Cisco ACI automatically allows the fabric administrator to keep track of per-cluster health scores, events, and statistics.

Figure 29. The NSX VTEP interfaces are viewed at the EPG level, and VXLAN-specific traffic statistics are automatically correlated

Bridge domain and EPG design when using NSX unicast replication

It is possible to use a single bridge domain and a single subnet for all NSX VTEPs when using unicast mode replication for BUM traffic. However, in that case, the head-end replication performed by NSX at the ingress hypervisor may have a significant negative impact on the performance of the NSX overlay, especially for environments with a larger number of ESXi hosts and consequently a larger number of VTEPs. For this reason, when using NSX unicast replication mode, we recommend using one bridge domain and subnet per vSphere cluster or per rack. In Layer 3 fabrics, it is not possible to do this per cluster if the clusters expand across multiple racks. This fact is yet another reason to use Cisco ACI as the underlay for NSX, as opposed to a traditional Layer 3 fabric. The capability of Cisco ACI to extend a bridge domain wherever needed across the fabric allows for maximum flexibility: vSphere clusters can be local to a rack or they can expand across racks. But it is better to tie the subnet to the cluster, not the rack. By tying the VTEP subnet to the cluster, the NSX administrator can still use NSX IP pools to address the VTEP interfaces, defining them by cluster, and is not forced to use DHCP as the only option. Alternatively, DHCP relay can be configured at the bridge domain level, if that is the selected option for NSX VTEP IP address management.
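The per-cluster pattern described above is easy to generate programmatically. The following hedged Python sketch loops over a set of assumed cluster names and VTEP subnets and builds one bridge domain and one EPG per cluster in a single APIC REST payload; all names, subnets, and the physical domain are illustrative assumptions.

```python
"""Hedged sketch: one BD, one VTEP subnet, and one EPG per vSphere cluster."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"    # lab assumptions

CLUSTERS = {                      # cluster name -> VTEP subnet gateway (assumed values)
    "Compute-1": "172.16.101.1/24",
    "Compute-2": "172.16.102.1/24",
    "Edge-1":    "172.16.103.1/24",
}

children = []
for cluster, gw in CLUSTERS.items():
    children.append({"fvBD": {"attributes": {"name": f"NSX-VTEP-{cluster}-BD",
                                             "unicastRoute": "yes"},
                              "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": "Infra-VRF"}}},
        {"fvSubnet": {"attributes": {"ip": gw}}}]}})

epgs = [{"fvAEPg": {"attributes": {"name": f"NSX-VTEP-{cluster}"},
                    "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": f"NSX-VTEP-{cluster}-BD"}}},
            # VTEP VMKNICs attach through a physical domain, as noted above.
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-ESXi-Hosts"}}}]}}
        for cluster in CLUSTERS]
children.append({"fvAp": {"attributes": {"name": "Infra-Services"}, "children": epgs}})

payload = {"fvTenant": {"attributes": {"name": "vSphere-Infra"}, "children": children}}

s = requests.Session()
s.verify = False                                                      # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
s.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```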

Figure 30 illustrates how using one bridge domain and EPG per cluster allows the same VLAN to be used for all of the VMKNIC NSX VXLAN configurations. This method requires configuring local VLAN significance on the policy groups if the clusters will expand onto multiple racks, but it raises no problems, because two EPGs can use the same VLAN on the same leaf if the EPGs belong to different bridge domains, as is the case here.

Figure 30. A design using different NSX VTEP subnets per cluster, each with a dedicated BD and EPG in Cisco ACI

Providing visibility of the underlay for the vCenter and NSX administrators

Regardless of the design chosen, much of the operational information in APIC can be made available to the NSX or vCenter administrator by leveraging the APIC API. Cisco ACI offers a vCenter plug-in that is included with the APIC license. The Cisco ACI vCenter plug-in is extremely lightweight and uses the APIC API to provide visibility of the Cisco ACI fabric directly from within vCenter, as shown in Figure 31. Using this tool, the NSX administrator can leverage the vCenter web client to confirm that the fabric has learned the correct IP or MAC address for NSX VTEPs, to view security configurations in Cisco ACI, to troubleshoot fabric-level VTEP-to-VTEP communication, and more. The vSphere administrator can also use the plug-in to configure or view tenants, VRF tables, bridge domains, and EPGs on APIC. The figure shows the EPGs related to an NSX transport zone, including the VXLAN contract and NSX VTEPs inside an EPG.

Figure 31. A view of the NSX VTEP EPGs for various clusters as seen from the Cisco ACI vSphere plug-in
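The same endpoint information surfaced by the vCenter plug-in can also be retrieved directly from the APIC API. The following hedged Python sketch runs a class query for a learned endpoint (fvCEp) filtered by an NSX VTEP IP address; the APIC address, credentials, and VTEP address are illustrative assumptions.

```python
"""Hedged sketch: query APIC for where a given NSX VTEP IP has been learned."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"     # lab assumptions
VTEP_IP = "172.16.100.21"                                             # VTEP to look up

s = requests.Session()
s.verify = False                                                      # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()

# fvCEp objects represent learned endpoints; filter the class query by IP address.
resp = s.get(f"{APIC}/api/class/fvCEp.json",
             params={"query-target-filter": f'eq(fvCEp.ip,"{VTEP_IP}")'})
resp.raise_for_status()

for obj in resp.json().get("imdata", []):
    attrs = obj["fvCEp"]["attributes"]
    # The dn encodes tenant/application profile/EPG; mac and encap confirm the learning.
    print(attrs["dn"], attrs["mac"], attrs.get("encap"))
```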

The Cisco ACI fabric also provides excellent visibility and troubleshooting tools built into the APIC. One example is the troubleshooting wizard available on the APIC to monitor details of the endpoints in the topology. Let's imagine that there are connectivity problems between any two given ESXi hosts. The fabric administrator can use the Cisco ACI Visibility and Troubleshooting tool to have APIC draw the topology between those two hosts. From this tool, we can quickly pull all the statistics and packet drops for every port (virtual or physical) involved in the path between the referred endpoints. We can also pull all events, configuration change logs, or failures that are specific to the devices involved in the path during the time window specified, and we can also pull in the statistics of the contract involved. Other troubleshooting tools accessible from the same wizard help the fabric administrator configure a specific SPAN session, inject traffic to simulate the protocol conversations end to end across the path, and the like. This visibility is available on the APIC for any two endpoints connected to the fabric, whether they are virtual or physical.

Virtual switch options: Single VDS versus dual VDS

Previous sections describe how to best provide NSX connectivity and security requirements in terms of using Cisco ACI constructs such as VRF tables, bridge domains, application network profiles, EPGs, and contracts. As explained earlier, the NSX VXLAN settings require an existing VDS. One design approach is to have one VDS dedicated solely to NSX logical switches and their corresponding dvportgroups and to have another VDS dedicated to vSphere infrastructure traffic, such as vMotion, IP storage, and management. This approach is illustrated in Figure 32. The VDS used for infrastructure traffic is managed by APIC through vCenter using a VMM domain. The other VDS connects to Cisco ACI using a physical domain and is dedicated to NSX traffic. The NSX VTEP dvportgroup is created on this second VDS by NSX Manager, and all logical switch port groups are created on it as well.

Figure 32. Dual-VDS design with separate VDS for vSphere infrastructure and for NSX

We recommend using a Cisco ACI VMM domain for managing and monitoring the VDS for infrastructure traffic, as described in the previous sections, and keeping a separate domain for the NSX VXLAN VMkernel interface. Configuring a VMM domain in APIC for the vCenter gives the fabric administrator maximum visibility into the vSphere infrastructure, while the vSphere administrator retains full visibility through the use of native vCenter tools, and deploying vSphere clusters becomes much easier.

The downside of dedicating one VDS to infrastructure and another to virtual machine data is that more uplinks are required on the ESXi host. However, considering the increasing relevance of hyperconverged infrastructure and IP storage in general, it is not a bad idea to have separate 10/25GE uplinks for user data and storage. Alternatively, to reduce the number of uplinks required, customers can run the NSX VXLAN VMkernel interface on the same VDS used for vSphere infrastructure traffic. This way, APIC can have management access to the same VDS through the vCenter API. Cisco has tested and validated this configuration. The VMM domain configuration on Cisco ACI and the corresponding VDS must be provisioned before NSX host preparation. The configuration of the dvportgroup used for the VXLAN VMKNIC is done by NSX Manager in any case, and therefore should not be done by mapping the VXLAN EPG to the VMM domain. All logical switch dvportgroups are created on the same VDS. The NSX VXLAN VMkernel interface must always be attached to an EPG mapped to a physical domain.

Figure 33. Single-VDS design using the same VDS for vSphere infrastructure and NSX

Using VMM domains may also be convenient when considering the Edge clusters. The compute resources required for running NSX Edge virtual machines need to participate in the NSX overlay, but they also need communication with the physical network. The connectivity between the NSX ESG virtual machines and the physical network must be done using standard dvportgroups backed by VLANs. These ESG-uplink dvportgroups are not automated through NSX, nor are they created from NSX Manager. As we will see in the following sections, the dvportgroups used for connecting NSX ESG virtual machine uplinks to the physical world can be configured by creating EPGs that provide the required connectivity and policy and then mapping those EPGs to the specific VMM domain. This method helps in automating the connection of the ESG to the physical world and provides additional benefits in terms of mobility, visibility, and security.

NSX Edge Clusters

NSX routing and Cisco ACI

Introduction to NSX routing

When using NSX VXLAN overlays, traffic that requires connecting to subnets outside of the overlay must be routed through NSX Edge virtual machines, also known as Edge Services Gateways (ESGs). NSX does not perform any automation of the physical network. However, the ESG virtual machines require connection to the physical network for routing. Since legacy networks lack programmability, to minimize physical network configurations, some VMware documentation recommends concentrating all ESG virtual machines in dedicated clusters. This way, network administrators do not need to know which ToR switch to configure for ESG network connectivity, as all ESGs will appear in the same rack.

This approach can lead to suboptimal use of compute resources. On one hand, it is possible for the vSphere clusters dedicated to running ESG virtual machines to be underprovisioned in capacity, in which case application performance will suffer. On the other hand, the opposite may happen, and Edge clusters could be overprovisioned, wasting compute resources and associated software licenses. This section discusses how best to connect ESG virtual machines to the Cisco ACI fabric to minimize or eliminate these constraints. Edge clusters follow the same designs presented in previous sections in terms of NSX VTEP configuration. Specifically, a VXLAN VMKNIC is created for NSX VTEP addresses, and it must be connected to the required EPG in the Cisco ACI fabric.

NSX Edge is a virtual appliance that can be deployed in two different personalities: an Edge services gateway or a Distributed Logical Router (DLR) control virtual machine. The ESG virtual machine is a Linux-based basic IP router. Each ESG runs its own control plane and data plane, which operate independently of the NSX controllers. In fact, the ESG can run without the NSX controllers and can be connected to Cisco ACI provisioned port groups as well. In addition to IPv4 routing, the ESG can also be configured to perform functions such as Network Address Translation (NAT), basic Layer 2-4 firewalling, IP Security (IPsec) or SSL VPN, and basic load-balancing services similar to those offered by open source HAProxy.

The DLR uses an NSX Edge image deployed as a control virtual machine. When using a DLR, the routing data plane is distributed to the hypervisor kernel modules, but the control plane for routing protocols runs on the DLR control virtual machine. In other words, protocols like Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) run on the DLR control virtual machine, which communicates routing prefixes to the NSX Controller cluster, which in turn programs them on the hypervisor DLR kernel modules.

In most NSX overlay designs, both ESG and DLR control virtual machines are required. The DLR is recommended for routing traffic between virtual machines in the NSX-enabled ESXi clusters. The ESG is used for routing traffic between virtual machines in those clusters and subnets external to the overlay. It is important to note that traffic between different NSX transport zones also needs to be routed through ESG virtual machines. The ESG virtual machines may also be used for implementing NAT or DHCP services or for running basic load-balancing services.

Typically, a DLR may have various logical switches connected, each representing a different subnet inside the NSX overlay. To route traffic toward ESGs, the DLR uses a VXLAN logical switch that serves as transit between the DLR and the ESG. This two-tier routing is illustrated in Figure 34. The DLR control virtual machine requires a second virtual machine (not depicted) to implement control plane redundancy. The ESG may be deployed in Active/Standby mode, as in the figure, or in Active/Active mode, leveraging ECMP between the DLR and ESG sets. In the latter case, the ESG can only do IPv4 dynamic routing: NAT, firewall, and load balancing are not supported in ECMP mode. As the ESG and DLR have no concept of VRFs, when routing isolation is required, additional DLR-ESG sets are required as well. It can be said that implementing the equivalent of a VRF requires at least four virtual machines: two ESGs and two DLR control virtual machines (assuming redundancy is required).

Figure 34. Typical two-tier minimum logical routing design with NSX for vSphere

In some deployments, the ESG does not need to run a dynamic routing protocol, most often when the subnets on the DLR use RFC 1918 private address space and the ESG performs Network Address Translation (NAT) to a given routable address. In such scenarios, some NSX designs propose adding a third routing tier inside the overlay, such as that shown in Figure 35. In that model, different tenants may get RFC 1918 subnets for their applications and route outside towards an ESG that performs NAT. Those ESGs connect using another transit logical switch to an ECMP ESG set that performs routing to external subnets.

Figure 35. Typical scalable three-tier overlay routing design with NSX

The sections that follow describe how best to connect ESGs to the Cisco ACI fabric for these three scenarios:

- Connecting an ESG running NAT to the Cisco ACI fabric. The ESG does not do dynamic routing.
- ESG running dynamic routing through the fabric. The ESG peers with another router, not with the Cisco ACI fabric.
- ESG running dynamic routing, peering with the fabric. The ESG peers with the Cisco ACI fabric border leaf switches.

Regardless of the routing scenario, the ESG will always be a virtual machine with at least two virtual NICs (vNICs), as illustrated in Figure 36. One vNIC connects to the overlay network, typically to the transit logical switch; this is a downlink interface. The other vNIC is an uplink and may connect to a dvportgroup backed by a VLAN or sometimes also to another transit logical switch. In the figure, the downlink vNIC is connected to a logical switch and the DLR, while the uplink connects to a dvportgroup backed by a VLAN.

Figure 36. Representation of the ESG virtual machine with two vNICs, one connected to the overlay and one to the physical network

Connecting ESG with NAT to the Cisco ACI fabric

In certain cloud solutions it is common to use RFC 1918 private address space for tenant subnets, for instance, in certain topologies deployed with vCloud Director or VMware Integrated OpenStack. In those cases, the private subnets are configured at the DLR, and the ESG has one or more routable addresses that it uses to translate addresses for the DLR private subnets.

From a Cisco ACI fabric perspective, an ESG performing NAT does not require routing configuration, since it appears as a regular endpoint in an EPG. The simplest design in Cisco ACI is to create a bridge domain for the subnet that will be used to provide addresses for the NAT pool of the ESGs. The ESG default route will point to the bridge domain default gateway IP address. The uplink dvportgroup for the ESG will correspond with an EPG where the ESG appears connected. Figure 36 shows an ESG with two vNICs, one connected to a VLAN-backed dvportgroup that in turn connects to a Cisco ACI EPG.

In this design, the additional ESGs shown in Figure 35 to provide routing using ECMP between the NSX tenant ESG and the rest of the world are not required. The fabric can route between different NSX tenant ESGs, toward other subnets connected to the same Cisco ACI fabric, or, as shown in Figure 37, toward subnets external to the Cisco ACI fabric through an L3Out interface.
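A hedged sketch of the fabric side of this NAT design follows: the EPG that holds the tenant ESG uplink vNICs consumes a contract provided by the external EPG of an assumed, pre-existing L3Out, so that NAT'ed traffic is routed by the fabric toward external destinations. All object names are illustrative assumptions, and the referenced bridge domain and L3Out are assumed to exist already.

```python
"""Hedged sketch: NAT ESG uplink EPG consuming a contract provided by the L3Out."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"      # lab assumptions

payload = {"fvTenant": {"attributes": {"name": "Prod"}, "children": [
    # Simple permit-IP contract between the ESG uplink EPG and the external EPG.
    {"vzFilter": {"attributes": {"name": "permit-ip"}, "children": [
        {"vzEntry": {"attributes": {"name": "ip", "etherT": "ip"}}}]}},
    {"vzBrCP": {"attributes": {"name": "esg-to-outside"}, "children": [
        {"vzSubj": {"attributes": {"name": "all"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "permit-ip"}}}]}}]}},
    # EPG where the NAT ESG uplink vNICs land (BD and VMM domain assumed to exist).
    {"fvAp": {"attributes": {"name": "NSX-Edges"}, "children": [
        {"fvAEPg": {"attributes": {"name": "ESG-NAT-Uplinks"}, "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "ESG-NAT-BD"}}},
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "esg-to-outside"}}}]}}]}},
    # Existing L3Out: its external EPG provides the contract toward the WAN.
    {"l3extOut": {"attributes": {"name": "WAN"}, "children": [
        {"l3extInstP": {"attributes": {"name": "all-external"}, "children": [
            {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}},
            {"fvRsProv": {"attributes": {"tnVzBrCPName": "esg-to-outside"}}}]}}]}}]}}

s = requests.Session()
s.verify = False                                                        # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
s.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```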

Figure 37. NSX tenant ESGs perform NAT and connect to a Cisco ACI EPG. The Cisco ACI fabric provides all routing required between ESG virtual machines and any other subnet.

In this case, with Cisco ACI it is very simple to eliminate the need for the second tier of ESGs by leveraging the L3Out model in the fabric, as depicted in Figure 37. This configuration is simple to implement and to automate. It also has great benefits in terms of performance and availability. Performance-wise, any traffic switched from a NAT ESG toward an endpoint attached to the fabric benefits from optimal routing, directly from the first connected leaf switch. Traffic toward addresses external to the fabric goes through Cisco ACI L3Outs that use 10GE or 40GE links with hardware-based, low-latency and low-jitter line-rate throughput. The design in Figure 37 is far better than routing through a second tier of ESGs as shown in Figure 35. It is also more cost-effective, as no additional servers are required for a second tier of ESGs.

Availability-wise, as of the time of this writing it is impossible to guarantee subsecond convergence in reaction to any failure vector affecting an ESG router (be it a link, switch, server, or ESG failure). However, by reducing or eliminating reliance on ESG routing and performing it in the fabric, customers achieve better availability, both by eliminating failure vectors and because Cisco ACI provides subsecond convergence on switch or link failures. Finally, this configuration can also help customers benefit financially. By reducing the number of ESGs required, customers require fewer servers, fewer vSphere and NSX licenses, and fewer network ports to connect them.

ESG routing through the fabric

When not doing NAT, more often than not the ESG virtual machines run a dynamic routing protocol with external routers and with the DLR control virtual machine. The DLR announces connected subnets to the ESG, which propagates them to the routers outside of the NSX overlay. The DLR provides a distributed default gateway that runs on the hypervisor distributed router kernel module and provides routing between virtual machines connected to the logical switches attached to the DLR. However, for external connectivity, traffic is sent to the ESG, which removes the VXLAN header and routes the packets into a VLAN to reach the external routers.

Typically, the data center routers are physically connected to the Cisco ACI fabric. If it is expected that each ESG will route only to subnets external to the Cisco ACI fabric, the ESG virtual machines may peer with the data center routers. In this case, it is only necessary to provide a Layer 2 path between the router interfaces and the ESG external vNIC. In such a scenario, the Cisco ACI fabric does Layer 2 transit between the ESG and the data center routers using regular EPGs. Route peering is configured between the ESG and the external routers, as illustrated in Figure 38, which shows the ESG uplink port group corresponding to an EPG for the ESG external vNICs on the same bridge domain as the EPG used to connect the WAN router ports. Note that the ESG and the data center routers are connected to different leaf switches simply for illustration purposes. They could be connected to the same leaf switches or not.

Figure 38. The ESG uplink connects to a dvportgroup mapped to an EPG, where the Cisco ACI fabric provides transit at Layer 2 towards the data center WAN routers

It is possible to use a single EPG for connecting the ESG uplink port group and the physical WAN router ports. However, keeping them on separate EPGs facilitates visibility and allows the fabric administrator to leverage Cisco ACI contracts between the ESGs and the WAN routers, offering several benefits:

- The fabric administrator can control the protocols allowed toward the NSX overlay, and vice versa, using Cisco ACI contracts.
- If required, the Cisco ACI contracts can drop undesired traffic in hardware, thus preventing the NSX ESG from having to use compute cycles on traffic that will be dropped anyway.
- The Cisco ACI fabric administrator can use service graphs and automate perimeter NGFWs or intrusion detection and prevention, whether using physical or virtual firewalls.
- The Cisco ACI fabric administrator can use tools such as Switched Port Analyzer (SPAN), Encapsulated Remote Switched Port Analyzer (ERSPAN), or Copy Service to capture in hardware all the traffic to or from specific ESGs.

In addition, by creating different EPGs for different tenant ESG external vNICs, the NSX and fabric administrators benefit from the isolation provided within Cisco ACI. For instance, imagine a scenario in which you want to prevent traffic from some networks behind one ESG from reaching other specific ESGs. With NSX, the only way to block that traffic is by configuring complex route filtering or firewall policies to isolate tenants, assuming all ESGs are peering through the same logical switch or external uplink port group. With the Cisco ACI EPG model, however, if two ESG vNICs are on different EPGs, by default those ESGs cannot communicate. This separation could also be accomplished by creating a single isolated EPG for all ESG external vNICs.

Figure 39 shows an example of this configuration option. A new bridge domain is created to provide a dedicated flooding domain for the Layer 2 path between the ESGs and the external routers. Although this bridge domain does not require a subnet to be associated with it, it is always best to configure one and enable routing, so that the fabric handles ARP flooding in the most optimal way and can learn the IP addresses of all endpoints to enhance visibility, facilitate troubleshooting, and so forth. In Figure 39 we see two NSX tenants, each with dedicated ESG and DLR sets, as well as redundant ESG virtual machines. The redundant DLR control virtual machine is not shown, but would be expected. The ESG external vNICs are mapped to dedicated Cisco ACI EPGs for each tenant as well, ensuring isolation without requiring complex routing policies. In this basic example, the contract between the ESG vNIC EPG and the router port EPG is a Permit Any contract. This contract could be used for more sophisticated filtering, too, or to insert an NGFW using a service graph.
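As a hedged example of the first benefit in the list above, the following Python sketch defines a contract that permits only routing protocols (BGP on TCP/179 and OSPF as an IP protocol) between an assumed ESG uplink EPG and an assumed WAN router EPG. The EPGs are assumed to exist already with their bridge domain and domain bindings, and the OSPF protocol keyword ("ospfigp") as well as the exact bidirectional behavior of the filter should be verified against your APIC release.

```python
"""Hedged sketch: contract restricting ESG-to-WAN-router traffic to routing protocols."""
import requests

APIC, USER, PWD = "https://apic.example.com", "admin", "password"       # lab assumptions

payload = {"fvTenant": {"attributes": {"name": "Prod"}, "children": [
    {"vzFilter": {"attributes": {"name": "routing-protocols"}, "children": [
        {"vzEntry": {"attributes": {"name": "bgp", "etherT": "ip", "prot": "tcp",
                                    "dFromPort": "179", "dToPort": "179"}}},
        {"vzEntry": {"attributes": {"name": "ospf", "etherT": "ip",
                                    "prot": "ospfigp"}}}]}},
    {"vzBrCP": {"attributes": {"name": "esg-wan-routing"}, "children": [
        {"vzSubj": {"attributes": {"name": "routing"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "routing-protocols"}}}]}}]}},
    # Both EPGs are assumed to exist already with their BD and domain bindings;
    # posting them again only merges in the contract relations.
    {"fvAp": {"attributes": {"name": "NSX-Edges"}, "children": [
        {"fvAEPg": {"attributes": {"name": "ESG-Uplinks"}, "children": [
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "esg-wan-routing"}}}]}},
        {"fvAEPg": {"attributes": {"name": "WAN-Routers"}, "children": [
            {"fvRsProv": {"attributes": {"tnVzBrCPName": "esg-wan-routing"}}}]}}]}}]}}

s = requests.Session()
s.verify = False                                                         # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": USER, "pwd": PWD}}}).raise_for_status()
s.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```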

Figure 39. ESG to WAN over EPGs: Isolating NSX routing domains using different EPGs and contracts

In addition to facilitating tenant isolation, mapping the vNIC external connectivity to an EPG has additional benefits in terms of both automation and mobility. This EPG, like any other, can be defined programmatically and mapped to a VMM domain directly from a cloud platform at the same time the ESG is instantiated, for instance, by leveraging the Cisco ACI plug-in for VMware vRealize Automation and Orchestrator. Additionally, the EPG is not tied to a specific leaf or rack. The ESG in this sense is like any other virtual machine connecting to the fabric. If the virtual machine moves, the fabric keeps the virtual machine attached to the right EPG and policy anywhere it roams within the fabric, so the Edge clusters are no longer tied to specific racks. Administrators can now create an Edge cluster anywhere in the topology and add or remove servers to a cluster as they see fit. This flexibility contributes to reducing costs, because there is no longer a need to overprovision the Edge clusters or to risk underprovisioning them and consequently suffering from additional performance problems on top of those already inherent to using software routing in the data center.

Using this approach, the NSX Edge virtual machine can run in any cluster; there is no need to run clusters dedicated solely to Edge virtual machines. However, note that it is better to follow VMware recommendations and keep dedicated compute resources for ESGs because of the difficulty in providing any performance guarantees today for routing functions on a virtual machine. The advantage in any case is that any vSphere cluster can be used for Edge services.

Using Cisco ACI as a Layer 2 fabric to facilitate ESG peering with the data center WAN routers provides some very simple and flexible design options. However, traffic between virtual machines in the NSX overlay communicating with endpoints in subnets connected to the Cisco ACI fabric needs to wind its way through the ESG and then through the WAN routers as well, as illustrated in Figure 40. Note that this is also the case if traffic flows between different NSX transport zones.

Figure 40. Traffic from a virtual machine in the NSX overlay that needs to communicate with an endpoint connected to the Cisco ACI fabric must traverse the WAN routers if the fabric is used at Layer 2 only

ESG peering with the fabric using L3Out

In the designs presented earlier in this document, routing is always done from the NSX ESG to an external data center router, with the fabric used to provide a programmable Layer 2 path between them. If the traffic from the NSX overlays is all going to subnets external to the fabric, and not to subnets routed within the fabric, this solution is optimal, given its simplicity. However, if there is a frequent need for the virtual machines in the NSX overlays to communicate with endpoints attached to the fabric, such as bare metal servers, IP storage, or backup systems, routing through the external routers is not efficient. In these cases, routing can happen directly with the Cisco ACI fabric to obtain more optimal paths, better performance, and lower latency. This method essentially means using L3Outs between the fabric leaf switches and the ESG virtual machines.

An L3Out interface is a configuration abstraction that enables routing between Cisco ACI fabric nodes and external routers. A Cisco ACI node with an L3Out configuration is commonly called a border leaf. There are various ways to configure L3Outs in Cisco ACI, including using subinterfaces, physical ports, and Switch Virtual Interfaces (SVIs). When peering with virtual routers, using SVI L3Outs is often the most convenient way, given that the virtual router may not be tied to a specific hypervisor and therefore may be connecting to different leaf switches throughout its lifetime. An L3Out can span up to eight border leaf switches, enabling a fairly large mobility range for virtual routers peering with Cisco ACI.
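For orientation, the following Python sketch outlines the object tree of an SVI L3Out posted through the APIC REST API, in the shape used later in this discussion (BGP peering over a VPC pair). All names, IP addresses, node IDs, and AS numbers are placeholders; the VRF, external routed domain, and VPC interface policy group are assumed to exist already; and the payload is a simplified skeleton rather than a complete, validated configuration.

```python
import requests

APIC = "https://apic.example.com"                      # hypothetical
s = requests.Session(); s.verify = False               # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Skeleton of an SVI L3Out on a VPC pair (node-101/102) with one BGP peer.
l3out = {
  "l3extOut": {
    "attributes": {"name": "ESG-L3Out"},
    "children": [
      {"l3extRsEctx": {"attributes": {"tnFvCtxName": "NSX-VRF"}}},
      {"l3extRsL3DomAtt": {"attributes": {"tDn": "uni/l3dom-ESG-L3Dom"}}},
      {"bgpExtP": {"attributes": {}}},                 # enable BGP on this L3Out
      {"l3extLNodeP": {"attributes": {"name": "borderleafs"}, "children": [
        {"l3extRsNodeL3OutAtt": {"attributes": {"tDn": "topology/pod-1/node-101",
                                                "rtrId": "1.1.1.101"}}},
        {"l3extRsNodeL3OutAtt": {"attributes": {"tDn": "topology/pod-1/node-102",
                                                "rtrId": "1.1.1.102"}}},
        {"l3extLIfP": {"attributes": {"name": "svi-vlan312"}, "children": [
          {"l3extRsPathL3OutAtt": {"attributes": {
              "tDn": "topology/pod-1/protpaths-101-102/pathep-[ESXi-VPC-PolGrp]",
              "ifInstT": "ext-svi", "encap": "vlan-312"},
            "children": [
              {"l3extMember": {"attributes": {"side": "A", "addr": "10.0.0.2/29"}}},
              {"l3extMember": {"attributes": {"side": "B", "addr": "10.0.0.3/29"}}},
              {"bgpPeerP": {"attributes": {"addr": "10.0.0.1"},   # ESG uplink address
                            "children": [{"bgpAsP": {"attributes": {"asn": "65001"}}}]}}]}}]}}]}},
      {"l3extInstP": {"attributes": {"name": "nsx-overlay-networks"},
                      "children": [{"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}}]}}
    ]
  }
}
s.post(f"{APIC}/api/mo/uni/tn-NSX-Edge.json", json=l3out).raise_for_status()
```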

There are multiple ways to connect NSX ESGs to Cisco ACI L3Outs. When deciding on the best option, we consider the following design principles:

- Consistency: The design is better if it allows consistent configurations with other ESXi nodes. For instance, VDS configurations should be the same on an ESXi node whether it runs ESGs or not.
- Simplicity: The design should involve the fewest possible configuration elements.
- Redundancy: The fabric should provide link and switch node redundancy, delivering the fastest convergence possible.

Figure 41 shows a high-level design for connecting ESGs to Cisco ACI SVI L3Outs. The fundamental aspects of this configuration align with the design principles:

- Consistency: Use Link Aggregation Control Protocol (LACP) Virtual Port Channels (VPCs) between leaf switches and ESXi hosts. LACP is standards-based and simple. This way the VDS uplink configuration is always the same whether an ESXi host is part of a compute cluster, an edge cluster, or a converged design. (A configuration sketch for the VPC policy group is shown below.)
- Simplicity: Use a single ESG uplink port group. This minimizes configuration, as a single SVI L3Out is required and a single uplink is required on the ESG.
- Redundancy: The ESG peers with all leaf switches in the L3Out (up to eight if needed). Load balancing happens at the LACP level between every ESXi host and the fabric border leaf switches. Link failure convergence does not rely on routing protocols, but rather on LACP. VDS LACP provides subsecond convergence for link-failure scenarios between leaf and ESXi host.

Figure 41. ESG to Cisco ACI routing design with a single SVI L3Out and a single uplink port group; link redundancy is achieved with VPC and LACP

Figure 41 highlights the fact that, while the ESG will see two routing peers, one per Cisco ACI leaf switch, all prefixes resolve to the same next-hop MAC address corresponding to the SVI L3Out. Traffic is load balanced through the LACP bundle between the ESXi host and the border leaf switches and is always routed optimally.
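On the APIC side, the LACP and VPC configuration called out in the consistency principle above corresponds to a port-channel policy and a VPC interface policy group. The sketch below shows one possible way to create them through the REST API; the policy, policy group, and AEP names are placeholders, and the interface and switch profiles that reference the policy group are omitted for brevity.

```python
import requests

APIC = "https://apic.example.com"                      # hypothetical
s = requests.Session(); s.verify = False               # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# LACP-active port-channel policy plus a VPC interface policy group
# (lagT="node" makes the bundle a VPC across two leaf switches).
infra = {
  "infraInfra": {
    "attributes": {},
    "children": [
      {"lacpLagPol": {"attributes": {"name": "LACP-Active", "mode": "active"}}},
      {"infraFuncP": {"attributes": {}, "children": [
        {"infraAccBndlGrp": {"attributes": {"name": "ESXi-VPC-PolGrp", "lagT": "node"},
                             "children": [
          {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "LACP-Active"}}},
          {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-ESXi-AEP"}}}]}}]}}
    ]
  }
}
s.post(f"{APIC}/api/mo/uni/infra.json", json=infra).raise_for_status()
```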

To understand this concept better, let's look at Figure 42, which shows an SVI L3Out using VLAN 312 spanning two racks (four Cisco ACI leaf switches, assuming redundant leaf switches per rack). The configuration is very simple at the Layer 2 level, since all VDS uplinks and the corresponding Cisco ACI VPC policy groups always use the same redundant configuration, based on standard LACP.

Figure 42. A Cisco ACI SVI L3Out can span multiple leaf switches, allowing ESG virtual machines to maintain routing peerings even when running on clusters that span multiple racks

Although the figure refers to BGP, the concept is the same if the subnet is advertised using OSPF. Both ESG-01 and ESG-02 will have a single uplink and four BGP peers. Note that this example uses four Cisco ACI leaf switches to illustrate that the ESG can run on a cluster stretching multiple racks, and therefore the administrator could employ vMotion to move an ESG across racks.

Figure 43 shows these same two ESGs learning a /24 subnet configured on a Cisco ACI bridge domain. This bridge domain has an association with the L3Out, and the subnet is flagged as Public so that it is advertised. Both ESG-01 and ESG-02 have four possible routing next hops for the subnet in question in their routing table, one per Cisco ACI leaf. However, the forwarding table on the ESG resolves all four routes to the same router MAC address. Therefore, when either of the ESGs needs to send traffic toward that /24 subnet, it may choose to send it to leaf-01, leaf-02, leaf-03, or leaf-04, but the traffic will always leave the ESXi host with the same VLAN encapsulation (VLAN 312) and the same destination MAC address (00:22:BD:F8:19:FF). When any of the four leaf switches receives packets to that destination MAC address, it performs a route lookup on the packet and routes it optimally to the final destination.

For instance, ESG-01 is running on the ESXi host physically connected to leaf-01 and leaf-02. If ESG-01 sends a packet that hits the entry pointing to leaf-03, the packet reaches the VDS, where LACP load balances it across the two links connecting the ESXi host to leaf-01 and leaf-02. One of these two switches then performs a lookup on the packet, because it is sent to the L3Out router MAC address, and forwards the packet to the right destination. In other words, at all times, all traffic sent from ESG-01 uses the links toward leaf-01 and leaf-02, the traffic from ESG-02 uses the links toward leaf-03 and leaf-04, and all leaf switches do a single lookup to route to the final destination. The return traffic from the /24 subnet is load balanced across the four leaf switches.
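Flagging the bridge domain subnet as advertised externally and associating the bridge domain with the L3Out, as described above for Figure 43, maps to two child objects of the bridge domain. The following sketch shows the corresponding REST call; the tenant, bridge domain, subnet address, and L3Out name are placeholder assumptions, not values from this guide.

```python
import requests

APIC = "https://apic.example.com"                      # hypothetical
s = requests.Session(); s.verify = False               # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Mark the BD subnet as "Advertised Externally" (scope=public) and associate
# the BD with the L3Out so the prefix is advertised to the peering ESGs.
bd = {
  "fvBD": {
    "attributes": {"name": "Servers-BD"},
    "children": [
      {"fvSubnet": {"attributes": {"ip": "192.168.10.1/24", "scope": "public"}}},
      {"fvRsBDToOut": {"attributes": {"tnL3extOutName": "ESG-L3Out"}}}
    ]
  }
}
s.post(f"{APIC}/api/mo/uni/tn-Prod.json", json=bd).raise_for_status()
```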

Figure 43. An example of the routing and forwarding tables for ESG-01 and ESG-02 when peering with a Cisco ACI SVI L3Out

One advantage of this design is shown in Figure 44, where ESG-01 has moved to the ESXi host connected to leaf-03 and leaf-04, on another rack. ESG-01 can be moved with a live vMotion, and the routing adjacency will not drop, nor will traffic be impacted. Of course, in the scenario in this figure, traffic from ESG-01 and ESG-02 flows only through leaf-03 and leaf-04. Load balancing continues, but now both ESGs send and receive traffic through two leaf switches instead of four.

Figure 44. ESG-01 migrated to another host without impacting the BGP peering status or the routed traffic

Using VMM integration also provides benefits in this design scenario. When connecting the ESG toward an L3Out, the VMM domain is not used to configure the ESG uplink dvportgroups, which must be mapped to an external routed domain. But if the VMM domain is configured and monitoring the VDS, the fabric administrator can identify ESG endpoints using the same semantics as the NSX administrator.
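As an example of what using the same semantics means in practice, the sketch below queries the APIC VMM inventory for a virtual machine by its vCenter name using a class-level REST query. The VM name and APIC details are placeholders, and the query is shown only as an illustration of the kind of lookup that the GUI performs.

```python
import requests

APIC = "https://apic.example.com"                      # hypothetical
s = requests.Session(); s.verify = False               # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Look up a VM by its vCenter name in the APIC VMM inventory (compVm objects
# are populated by the VMM domain integration). "ESG-01" is a placeholder.
query = (APIC + "/api/node/class/compVm.json"
         "?query-target-filter=eq(compVm.name,\"ESG-01\")")
for obj in s.get(query).json().get("imdata", []):
    attrs = obj["compVm"]["attributes"]
    print(attrs["name"], attrs["dn"])
```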

For instance, Figure 45 shows partial screenshots of the APIC GUI, where the administrator can find an ESG by its name, identify the ESXi host where it is running (and consequently the fabric ports), and monitor the ESG traffic statistics after verifying that the vNICs are connected.

Figure 45. The VMM domain gives the fabric administrator greater visibility and simplifies troubleshooting workflows; the display shows details of an ESG virtual machine and its reported traffic

Figure 46 illustrates the final result: Edge clusters can be designed across multiple racks, and ESG virtual machines can move within a rack or across racks without impacting routing status or traffic forwarding. Because all ESXi hosts can connect using the same VDS link redundancy mode, optimal link utilization is achieved while keeping cluster configurations consistent. This facilitates converged designs where ESG and user virtual machines share the same clusters.

Figure 46. Using SVI L3Outs delivers the fastest possible convergence for link-failure scenarios and enables mobility of ESGs within and across racks

It is important to understand that the scenario illustrated in Figure 46 does not remove the potential bottleneck that the ESG may represent. The DLR load balancing across multiple ESGs is based on source and destination IP address, and the ESG will in turn use a single virtual CPU (vCPU) for a single flow. The maximum performance per flow is therefore limited by the capacity of that vCPU, and because a flow is defined by its source and destination IP addresses, this can create a bottleneck for any two communicating endpoints. In addition, there is a high probability that multiple source-destination IP pairs will be sent to the same ESG and the same vCPU.

Bridging between logical switches and EPGs

Sometimes virtual machines and bare metal servers are required to be on the same network subnet. When running an NSX VXLAN overlay for the virtual machines and a VLAN trunk, or untagged access port, for the bare metal servers, you need to bridge from the VXLAN-encapsulated traffic to VLAN-encapsulated traffic. This bridging allows the virtual machines and bare metal servers running on the same subnet to communicate with each other. NSX offers this functionality in software through NSX Layer 2 bridging, which allows virtual machines to be connected at Layer 2 to a physical network through VXLAN-to-VLAN ID mapping. This bridging functionality is configured in the NSX DLR and occurs on the ESXi host that is running the active DLR control virtual machine. The ESXi host running the backup DLR control virtual machine is the backup bridge in case the primary ESXi host fails.

In Figure 47, the database virtual machine running on the ESXi host is on the same subnet as the bare metal database server, and they communicate at Layer 2. After the traffic leaves the virtual machine, it is encapsulated in VXLAN by the NSX VTEP in the hypervisor and sent to the VTEP address of the ESXi host with the active DLR control virtual machine. That ESXi host removes the VXLAN header, adds the appropriate VLAN header, and forwards the frame to the physical leaf switch. The return traffic goes from the bare metal database server, encapsulated with a VLAN header, to the Cisco ACI leaf. The leaf forwards the packets to the ESXi host running the DLR bridging, because the MAC address of the database virtual machine is learned on the ports connecting to that host. That host removes the VLAN header, puts the appropriate VXLAN header on the packet, and sends it back to the Cisco ACI leaf, this time to be forwarded on the VTEP EPG. The VXLAN packet is then delivered to the appropriate ESXi VTEP for forwarding to the virtual machine.

Figure 47. Bare metal database server and database virtual machine running on the ESXi host share the same subnet

To configure NSX bridging, a dedicated dvportgroup is created on the VDS, as shown in Figure 48. This is the dvportgroup that will be used for the VLAN side of the bridge; a different dvportgroup is needed for each VXLAN Network Identifier (VNI) that will be bridged. Note the VLAN on the dvportgroup, because it will be used in the Cisco ACI configuration (a scripted example of this step is sketched below).

Figure 48. Example of a dvportgroup configured on the VDS, backed by a VLAN connected to an EPG

Once a dvportgroup is created for the bridge, the bridge can be configured on the DLR. This is where the logical switch (VNI) is mapped to a VLAN using the correct dvportgroup. In the example in Figure 49, a bridge is mapped between the DB-LS-1 logical switch and the Bridge-For-DB port group, which is backed by the VLAN configured on that dvportgroup.

Figure 49. Detail of the NSX Manager configuration showing the bridge between a logical switch and a dvportgroup

Now we can create an EPG and, using static port bindings with the VLAN tag configured in the dvportgroup, map it to the ports of the two ESXi hosts running the active and standby DLR control virtual machines (as these are the hosts that can perform the bridging operations) and to any bare metal servers that need to be on the same subnet. One EPG is created per bridged VNI-VLAN pair. Figure 50 shows the two EPGs that are being used for bridging.
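The dvportgroup creation step shown in Figure 48 can also be scripted. The following pyVmomi sketch creates an early-binding dvportgroup backed by a VLAN on an existing VDS; the vCenter address, credentials, VDS name, port group name, and VLAN ID are placeholder assumptions, and the VLAN ID must match whatever VLAN is used in the Cisco ACI static bindings for the bridge EPG.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter details; do not disable certificate checks in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VDS by name ("Edge-VDS" is a placeholder).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "Edge-VDS")
view.Destroy()

# One dvportgroup per bridged VNI; VLAN 1001 is an example value only.
vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=1001, inherited=False)
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(vlan=vlan)
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(name="Bridge-For-DB",
                                                      type="earlyBinding",
                                                      numPorts=8,
                                                      defaultPortConfig=port_cfg)
task = dvs.AddDVPortgroup_Task([spec])   # returns a Task object; wait on it as needed
Disconnect(si)
```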

Figure 50. Illustration of two EPGs configured to bridge to specific logical switches in NSX

Figure 51 shows the static binding configuration for an EPG using the bridged VLAN. While it is also possible to configure this mapping at the AEP level, for EPGs connected to NSX Layer 2 bridges it is a good practice to use static path bindings configured only for the servers running the DLR bridge.

Figure 51. A sample of the static path binding configuration. Each path represents ports connected to the DLR bridge ESXi hosts
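A minimal sketch of the corresponding APIC configuration is shown below: one EPG per bridged VNI-VLAN with static path bindings only toward the hosts that can run the DLR bridge. The tenant, application profile, bridge domain, physical domain, path names, and VLAN values are placeholder assumptions for illustration.

```python
import requests

APIC = "https://apic.example.com"                      # hypothetical
s = requests.Session(); s.verify = False               # lab only
s.post(f"{APIC}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# EPG for one bridged VNI/VLAN with static path bindings limited to the two
# ESXi hosts that can run the active/standby DLR bridge.
bridge_epg = {
  "fvAEPg": {
    "attributes": {"name": "DB-Bridge-EPG"},
    "children": [
      {"fvRsBd": {"attributes": {"tnFvBDName": "DB-BD"}}},
      {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-ESXi-PhysDom"}}},
      {"fvRsPathAtt": {"attributes": {
          "tDn": "topology/pod-1/protpaths-101-102/pathep-[ESXi-Host1-VPC]",
          "encap": "vlan-1001", "instrImedcy": "immediate", "mode": "regular"}}},
      {"fvRsPathAtt": {"attributes": {
          "tDn": "topology/pod-1/protpaths-103-104/pathep-[ESXi-Host2-VPC]",
          "encap": "vlan-1001", "instrImedcy": "immediate", "mode": "regular"}}}
    ]
  }
}
s.post(f"{APIC}/api/mo/uni/tn-Prod/ap-DB-App.json", json=bridge_epg).raise_for_status()
```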
