White Paper

Cisco HyperFlex Systems: Install and Manage Cisco HyperFlex Systems in a Cisco ACI Environment

First published: January 2017
Updated: March 2018

Note: This document contains material and data with multiple dependencies. The information may be updated as necessary and is subject to change without notice.

Contents

Executive Summary
Technology Overview
    Cisco HyperFlex System
    Cisco Nexus 9000 Series Application-Centric Infrastructure
Physical Topology
    Cisco HyperFlex System Network Connectivity
    Cisco HyperFlex and VMware vSphere Network Configuration
HyperFlex Cluster Installation
Cisco Application Centric Infrastructure Design
    Virtual Port Channel Configuration
    Cisco ACI Tenants
    Enabling Management Access Through the Common Tenant
    Foundation Tenant Configuration for Cisco HyperFlex Infrastructure VLAN Setup
ACI Fabric Settings
    Enforce Subnet Check for IP and MAC Learning
    Limit IP Learning to Subnet
    ARP Flooding
    GARP-Based Detection
    COS Preservation
    Jumbo Frames and MTU
Virtual Machine Networking
    Virtual Machine Manager Domains
    Virtual Network Provisioning Considerations
Conclusion
For More Information
Appendix: High-Level Overview of Implementation Activities
Copyright and trademarks

Executive Summary

Today's applications have complex network requirements, and to be competitive in the market, you must be able to meet these requirements in a timely manner. The Cisco Application Centric Infrastructure (Cisco ACI) solution is designed to automate and orchestrate physical and virtual network infrastructure with applications in focus. Cisco ACI uses a combination of tightly coupled hardware and software components to provide advantages not possible with other solutions.

The Cisco HyperFlex system is a hyperconverged infrastructure solution that offers end-to-end software-defined computing, network, and storage infrastructure. It is fast to deploy, simple to manage, and easy to scale, and it is ready to provide a unified pool of resources to power applications as customer business needs dictate. The built-in networking of the HyperFlex system can be integrated with Cisco ACI to implement unified software-defined networking across your organization.

This document shows how to connect a newly purchased Cisco HyperFlex system to an existing Cisco ACI fabric and manage HyperFlex networking using the Cisco Application Policy Infrastructure Controller (APIC). Cisco ACI can manage the networking aspect of a Cisco HyperFlex virtual environment using an APIC-controlled VMware vSphere Distributed Switch (vDS) or Cisco ACI Virtual Edge (AVE). AVE is supported as a vLeaf switch as of the Cisco APIC 3.1(1i) release with VMware ESXi 6.0. This document demonstrates the use of a vDS managed by Cisco ACI with Cisco HyperFlex infrastructure.

Technology Overview

A Cisco HyperFlex and Cisco ACI design consists of the infrastructure components shown in Figure 1.

Figure 1. Cisco HyperFlex with Cisco ACI: Components

This solution is validated with Cisco HyperFlex HX220c nodes, but HX240c nodes can be used without any changes to the solution.

Table 1 lists the hardware and software versions used for the validation described in this document.

Table 1. Hardware and Software Versions Used

Component                                     Version
Cisco APIC appliance                          Release 2.0(2h)
Cisco Nexus 9336 and 9396 platform switches   Release 12.0(2h)
Cisco Unified Computing System (Cisco UCS)    Release 3.1(2b)
Cisco HyperFlex Installer                     Release 1.8.1b
VMware vCenter                                Release 6.0.0
VMware ESXi                                   Release 6.0.0

Cisco HyperFlex System

The Cisco HyperFlex hyperconverged infrastructure solution combines software-defined computing in the form of Cisco UCS servers, software-defined storage with the Cisco HyperFlex HX Data Platform software, and software-defined networking with the Cisco UCS fabric, which integrates smoothly with Cisco ACI.

The Cisco HyperFlex solution ships as a cluster of three or more Cisco HyperFlex HX-Series nodes that are integrated into a single system by a pair of Cisco UCS 6200 Series Fabric Interconnects. The fabric interconnects provide a single point of connectivity and hardware management for the cluster. They also provide low-latency, high-bandwidth, 10-Gbps connectivity for all system components.

The Cisco HyperFlex HX Data Platform is a purpose-built, high-performance, distributed file system with a wide array of enterprise-class data management services. The data platform is implemented using an HX Data Platform controller that runs on each node. It consolidates the storage on each node into a single storage pool that is visible to all the nodes in a Cisco HyperFlex cluster.

As of this writing, the Cisco HyperFlex solution supports the VMware vSphere virtualization platform. Each HX-Series node ships with a preinstalled VMware ESXi server to speed up the cluster installation process. Cisco HyperFlex cluster installation and expansion is fully automated using the Cisco HyperFlex Installer application. The HX Data Platform is administered through a vSphere web client plug-in. This plug-in allows administrators to perform general storage administration tasks such as creating volumes, monitoring data platform health, and managing resource use.

The solution is validated using one Cisco HyperFlex cluster consisting of eight Cisco HX220c nodes connected and managed through a pair of Cisco UCS 6248 Fabric Interconnects.

Cisco Nexus 9000 Series Application-Centric Infrastructure

Cisco Nexus 9000 Series Switches support two modes of operation: standalone mode (NX-OS mode) and fabric mode (ACI mode). In standalone mode, the switch performs like a typical Cisco Nexus switch running the Cisco NX-OS Software operating system, with increased port density, low latency, and 40-Gbps connectivity. In fabric mode, the administrator can take advantage of Cisco ACI capabilities.

Cisco ACI is a data center architecture designed to address the requirements of today's traditional networks as well as emerging demands from new computing trends and business factors. Cisco ACI not only addresses the challenges of agility and network programmability that software-based overlay networks are trying to address, but it also presents a solution to the new challenges that software-defined networking (SDN) technologies are currently unable to address.

In a Cisco ACI fabric, switches connect in a spine-and-leaf topology. Leaf switches provide physical connectivity for servers, storage, and other network elements, and they enforce Cisco ACI policies. Spine switches provide the mapping database function and the connectivity among leaf switches. All leaf switches connect to all spine switches; spine switches don't connect to other spine switches, and leaf switches don't connect to other leaf switches directly. No special port-channel configuration is required between spine and leaf switches. Cisco ACI fabric setup is automatic and is performed by the APIC.

The APIC is a physical appliance. It is responsible for pushing the network policies and configurations defined by the user to the physical and virtual network devices in the Cisco ACI fabric. For high availability and performance, you should deploy a cluster of three APIC appliances to manage the Cisco ACI fabric. To set up the proposed solution, at least two spine switches and two leaf switches are required, along with at least one APIC appliance. Figure 2 shows a sample fabric.

Figure 2. Cisco ACI Fabric

Validation of this solution is performed using Cisco Nexus 9336 (spine) and Nexus 9396 (leaf) switches managed by three APIC appliances.
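The fabric configuration described in this document can be performed through the APIC GUI, and everything the GUI does is also exposed through the APIC REST API. As a minimal illustration, the following Python sketch authenticates to an APIC and lists the registered fabric nodes; the controller address and credentials are hypothetical placeholders for your environment.

```python
# Sketch: authenticate to the APIC REST API and list the fabric nodes.
# Controller address and credentials are hypothetical placeholders.
import requests

apic = "https://apic.example.com"
s = requests.Session()
s.verify = False  # lab convenience only; validate certificates in production

# A successful login returns a token cookie that the session reuses.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
s.post(f"{apic}/api/aaaLogin.json", json=login).raise_for_status()

# Query the fabricNode class: every spine, leaf, and controller in the fabric.
nodes = s.get(f"{apic}/api/node/class/fabricNode.json").json()
for obj in nodes["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(attrs["id"], attrs["role"], attrs["name"])
```

The later configuration sketches in this document follow the same pattern: a JSON representation of a managed object is posted to the object's distinguished name under /api/mo.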

Physical Topology

Figure 3 shows the high-level topology of the proposed solution. This solution meets high-availability design requirements and is physically redundant across the computing, network, and storage stacks. All the common infrastructure services required by this solution, such as Microsoft Active Directory, Domain Name System (DNS), Network Time Protocol (NTP), and VMware vCenter, are hosted on common management infrastructure outside the Cisco HyperFlex system.

Figure 3. Cisco HyperFlex with Cisco ACI: Physical Topology

Cisco HyperFlex System Network Connectivity

Cisco HyperFlex nodes connect to the enterprise network through the fabric interconnects. The Cisco UCS fabric interconnects are configured with two port channels, one from each fabric interconnect to the Cisco Nexus 9396 leaf switches, for uplink connectivity. The Cisco Nexus 9396 switches are configured for virtual port channels (vPCs) using the APIC to provide device-level redundancy.

As shown in Figure 3, the proposed design uses two uplinks from each fabric interconnect to the leaf switches, for an aggregate bandwidth of 40 Gbps. The number of uplinks can be increased based on customer data-throughput requirements. Fabric interconnects and host networking are automatically configured by the Cisco HyperFlex Installer during cluster installation.

Table 2 lists the VLANs used for the validation described in this document.

Table 2. VLANs Used in This Validation

VLAN ID            Purpose
16                 In-band management VLAN on the existing management infrastructure
116                In-band management VLAN configured for the path between the Cisco ACI leaf switches and the fabric interconnects
3092               Cisco HyperFlex data VLAN that carries Network File System (NFS) traffic between the Cisco HyperFlex controller virtual machines and the ESXi hosts in the Cisco HyperFlex cluster
3093               VMware vMotion VLAN
2100 through 2120  VLAN pool for the APIC to use to set up virtual machine networking

Cisco HyperFlex and VMware vSphere Network Configuration

On all the ESXi hosts in a Cisco HyperFlex cluster, the Cisco HyperFlex system segregates the various types of infrastructure and virtual machine traffic using different VMware standard vSwitches. Two physical adapters are allocated to every standard vSwitch.

Note: The physical adapters visible to the ESXi host are virtual network interface cards (vNICs) on the converged network adapters of the Cisco HyperFlex node.

Figure 4 shows the default ESXi host networking configuration automatically set up by the Cisco HyperFlex Installer during cluster installation.

Figure 4. Cisco HyperFlex and VMware ESXi Host Default Networking

When the Cisco HyperFlex system is integrated with Cisco ACI, virtual machine network provisioning on the system is automated by the APIC. The APIC provisions and controls the vSphere Distributed Switch (vDS). After the vDS is provisioned, all the hosts in the Cisco HyperFlex cluster should be migrated to the vDS with the following considerations (a verification sketch follows this list):

- ESXi management, vMotion, and Cisco HyperFlex storage data traffic remains on the VMware standard vSwitch.
- Cisco HyperFlex storage controller virtual machines are part of the Cisco HyperFlex core infrastructure and remain on the VMware standard vSwitch.
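To confirm the migration state from vCenter, a script can compare the standard vSwitches and vDS attachments on each host. The following sketch uses the open-source pyVmomi library against the vSphere 6.0 API; the vCenter address and credentials are hypothetical placeholders.

```python
# Sketch: list standard vSwitches and vDS attachments per ESXi host to verify
# which traffic still rides the standard vSwitch. vCenter details are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    std = [vs.name for vs in host.config.network.vswitch]         # standard vSwitches
    dvs = [ps.dvsName for ps in host.config.network.proxySwitch]  # vDS attachments
    print(host.name, "standard:", std, "distributed:", dvs)
view.Destroy()
Disconnect(si)
```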

Figure 5 provides an overview of ESXi networking after the vDS migration.

Figure 5. Cisco HyperFlex and VMware ESXi Host Networking After vDS Migration

Migration of the ESXi host virtual machine networking from the standard vSwitch to the vDS is performed from vCenter. Refer to the VMware documentation (see the For More Information section at the end of this document) for migration best practices.

Note: After the physical adapters for ESXi host virtual machine networking are moved from the standard vSwitch to the vDS, delete the standard vSwitch to avoid placing virtual machines on it by mistake.

HyperFlex Cluster Installation

Cisco HyperFlex cluster installation is performed using the Cisco HyperFlex Installer application. Review the Cisco HyperFlex installation documentation (see the For More Information section at the end of this document) for detailed instructions on the Cisco HyperFlex Installer wizard. When you run the wizard in a Cisco ACI environment, note the following requirements:

- After integration with Cisco ACI, the virtual machine network will be created and managed by the APIC, so during Cisco HyperFlex cluster installation, you should provide a temporary virtual machine network VLAN ID to the installer wizard.
- In a Cisco ACI environment, in-band management communication between the preexisting management switch and Cisco UCS uses two different VLAN IDs. When invoking the Cisco HyperFlex Installer wizard, be sure to use the in-band management VLAN ID configured for the path between the Cisco ACI leaf switches and the Cisco UCS fabric interconnects (VLAN 116 in Table 2).

Cisco Application Centric Infrastructure Design

The following sections provide an overview of the Cisco ACI configuration required to integrate the Cisco HyperFlex system with Cisco ACI.

Virtual Port Channel Configuration

A vPC allows Ethernet links that are connected to two different Cisco Nexus 9000 Series Switches to appear as a single port channel. Unlike in an NX-OS mode design, a vPC configuration in ACI mode does not require a vPC peer link to be explicitly connected and configured between the peer devices (leaf switches); the peer communication is carried over the 40-Gbps connections through the spine switches. Any two leaf switches in the Cisco ACI fabric can be used for a vPC, and the vPC configuration is performed from the APIC.

To set up this Cisco HyperFlex and Cisco ACI solution, you need to configure the following vPCs on the Cisco ACI leaf switches:

- A vPC between the Cisco ACI leaf switches and the in-band management switches, to carry in-band management traffic from the existing management infrastructure to the Cisco ACI fabric. The in-band management VLAN should be enabled on this vPC.
- A vPC between the Cisco ACI leaf switches and the Cisco UCS fabric interconnects, to carry traffic between the Cisco ACI fabric and the Cisco HyperFlex system. All Cisco HyperFlex infrastructure and virtual machine networking VLANs should be enabled on this vPC.

Figure 6 shows all the vPC-enabled connections in this design.

Figure 6. vPC-Enabled Connections

Cisco ACI Tenants

A Cisco ACI tenant is a logical container that represents an actual tenant, an organization, an application, or simply a construct used to organize information. From a policy perspective, a tenant represents a unit of isolation. A Cisco ACI tenant consists of the following constructs:

- Virtual Routing and Forwarding (VRF): VRF instances provide a way to further separate the organizational and forwarding requirements of a given tenant. Because each VRF uses a separate forwarding instance, IP addressing can be duplicated across VRF instances for multitenancy. In the design discussed here, each tenant typically uses a single VRF instance.
- Bridge domain: In Cisco ACI, a bridge domain represents a broadcast domain. A bridge domain has global scope, whereas VLANs do not. A bridge domain can have one or more subnets associated with it, and one or more bridge domains together form a tenant network.
- Application profile: An application profile models application requirements and contains one or more endpoint groups (EPGs) as needed to provide the application capabilities.
- Endpoint group: An EPG is a collection of physical and virtual endpoints that require common services and policies.

- EPG mapping: In this design, network traffic is associated with an EPG in one of two ways:
  - By statically mapping a path or VLAN to an EPG (Figure 7). This design uses static mapping for the in-band management, vMotion, and storage data VLANs (a configuration sketch follows this list).
  - By associating an EPG with a virtual machine manager (VMM) domain, thereby allocating a VLAN dynamically from a predefined pool in the APIC (Figure 8). In this design, VMM domain mapping is used to deploy virtual machines in multitier applications requiring one or more EPGs.

Figure 7. Cisco ACI: Static Path Binding

Figure 8. Cisco ACI: EPG Assigned to Virtual Machine Manager

- Contracts: Contracts define the way an EPG can communicate with other EPGs. Contracts consist of inbound and outbound traffic filters, quality-of-service (QoS) rules, and Layer 4 to Layer 7 redirect policies. Contracts are defined using provider-consumer relationships: one EPG provides a contract, and another EPG consumes that contract.
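To make static mapping concrete, the following hedged sketch pairs two leaf switches into an explicit vPC protection group and then statically binds the in-band management VLAN (VLAN 116 in Table 2) to an EPG on the resulting vPC path. The node IDs, vPC policy group name, and the tenant, application profile, and EPG names are hypothetical, and the underlying vPC interface policy group and interface profiles are assumed to already exist.

```python
# Sketch: static path binding of VLAN 116 to an EPG over a vPC.
# All names (vpc-101-102, vpc-mgmt-sw, Foundation, HX-Infra, hx-inband-mgmt)
# are hypothetical; adjust to your environment.
import requests

apic = "https://apic.example.com"  # hypothetical controller address
s = requests.Session()
s.verify = False  # lab convenience only
s.post(f"{apic}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Pair leaf switches 101 and 102 into an explicit vPC protection group.
vpc_group = {"fabricExplicitGEp": {
    "attributes": {"name": "vpc-101-102", "id": "10"},
    "children": [
        {"fabricNodePEp": {"attributes": {"id": "101"}}},
        {"fabricNodePEp": {"attributes": {"id": "102"}}},
    ]}}
s.post(f"{apic}/api/mo/uni/fabric/protpol.json", json=vpc_group)

# Statically map VLAN 116 on the vPC path to the in-band management EPG.
path_binding = {"fvRsPathAtt": {"attributes": {
    "tDn": "topology/pod-1/protpaths-101-102/pathep-[vpc-mgmt-sw]",
    "encap": "vlan-116",
    "mode": "regular"}}}  # "regular" = trunked VLAN
s.post(f"{apic}/api/mo/uni/tn-Foundation/ap-HX-Infra/epg-hx-inband-mgmt.json",
       json=path_binding)
```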

This Cisco HyperFlex and Cisco ACI solution requires the creation of an infrastructure tenant, called Foundation, to establish connectivity for all the Cisco HyperFlex infrastructure VLANs from the fabric interconnects to the Cisco ACI fabric. This design also uses the predefined common tenant to provide in-band management infrastructure connectivity to other Cisco ACI tenants.

Enabling Management Access Through the Common Tenant

In this solution, the common tenant is configured to provide in-band management connectivity as a service to other tenants. The Foundation infrastructure tenant consumes this service to provide management connectivity to the Cisco HyperFlex and ESXi hosts. The Foundation tenant and the common tenant communicate using Cisco ACI inter-tenant contracts: the common tenant provides the contract, and the Foundation tenant consumes it.

Figure 9 provides an overview of the activities involved in configuring the common tenant for management access. VLAN values documented in this flow diagram are for example purposes only, and you must replace them with the correct VLAN values for your environment during implementation.

Figure 9. Activities Involved in Configuring the Common Tenant

The common tenant may already be configured in existing Cisco ACI fabrics to provide in-band management access to other tenants. Figure 10 provides an overview of the connectivity details and the relationships between the various Cisco ACI elements for the common tenant.

Figure 10. Cisco ACI: Enabling Management Access Through the Common Tenant

Foundation Tenant Configuration for Cisco HyperFlex Infrastructure VLAN Setup

A tenant named Foundation is created to enable the Cisco HyperFlex infrastructure VLANs between the Cisco ACI leaf switches and the Cisco UCS fabric interconnects. The Foundation tenant enables the connection between computing and storage resources for the Cisco HyperFlex NFS data stores. It also gives ESXi hosts and virtual machines access to the existing management infrastructure and gives ESXi hosts access to the vMotion network.

The Foundation tenant consists of a single bridge domain, called bd-foundation-internal, that is shared by all EPGs in the Foundation tenant. Because there are no overlapping IP address requirements, the Foundation tenant consists of a single VRF instance, called Foundation.

Note: This design maps the different HyperFlex infrastructure and virtual machine networks (management, vMotion, storage data, and virtual machine data) to different EPGs in the same bridge domain of the Foundation tenant. Alternatively, customers can use multiple bridge domains, one for each HyperFlex network, to connect into the ACI fabric.

Figure 11 provides an overview of the activities involved in the configuration of the Foundation tenant; a REST sketch of the core tenant objects follows the figure. VLAN values documented in this flow diagram are for example purposes only, and you must replace them with the correct VLAN values for your environment during implementation.

Figure 11. Activities Involved in Configuring the Foundation Tenant
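As an illustration of the objects in Figures 9 through 11, the following sketch creates the Foundation tenant with its VRF instance and bridge domain, plus an application profile and an in-band management EPG that consumes a contract assumed to be provided by the common tenant. The application profile, EPG, and contract names are hypothetical.

```python
# Sketch: create the Foundation tenant with its VRF, bridge domain, and an
# EPG that consumes an in-band management contract from the common tenant.
# Names other than "Foundation" and "bd-foundation-internal" are hypothetical.
import requests

apic = "https://apic.example.com"  # hypothetical controller address
s = requests.Session()
s.verify = False  # lab convenience only
s.post(f"{apic}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

foundation = {"fvTenant": {"attributes": {"name": "Foundation"}, "children": [
    # Single VRF instance for the tenant.
    {"fvCtx": {"attributes": {"name": "Foundation"}}},
    # Single bridge domain shared by all Foundation EPGs.
    {"fvBD": {"attributes": {"name": "bd-foundation-internal"}, "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": "Foundation"}}}]}},
    # Application profile with an in-band management EPG that consumes the
    # contract assumed to be provided by the common tenant (hypothetically
    # named "allow-inband-mgmt" and visible fabric-wide).
    {"fvAp": {"attributes": {"name": "HX-Infra"}, "children": [
        {"fvAEPg": {"attributes": {"name": "hx-inband-mgmt"}, "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "bd-foundation-internal"}}},
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "allow-inband-mgmt"}}},
        ]}}]}},
]}}
resp = s.post(f"{apic}/api/mo/uni.json", json=foundation)
resp.raise_for_status()
```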

Figure 12 provides an overview of the connectivity details and the relationships between the various Cisco ACI elements for the Foundation tenant.

Figure 12. Cisco ACI: Foundation Tenant Configuration for Cisco HyperFlex Infrastructure VLANs

After you have finished setting up the Foundation tenant, the Cisco HyperFlex Installer on the existing in-band management network can access the Cisco HyperFlex infrastructure components, and the automated installation of the Cisco HyperFlex cluster can begin.

ACI Fabric Settings

The following ACI fabric settings are recommended in a Cisco HyperFlex deployment:

- Enforce Subnet Check (optional; fabric wide)
- Limit IP Learning to Subnet (bridge domain level; on by default in newer APIC releases)
- GARP-Based Detection for EP Move Detection Mode (bridge domain level)
- ARP Flooding (bridge domain level)

The implementation of these features can vary depending on the generation of ACI leaf switches used in the deployment. It is therefore important to determine whether a customer deployment uses first-generation switches, second-generation switches, or a combination of both. Examples of first- and second-generation Cisco ACI leaf switches are provided below; see the Cisco product documentation for a complete list.

- First-generation Cisco ACI leaf switches include the Cisco Nexus 9332PQ, 9372PX-E, 9372TX-E, 9372PX, 9372TX, 9396PX, 9396TX, 93120TX, and 93128TX Switches.
- Second-generation Cisco ACI leaf switches include the Cisco Nexus 9300-EX and 9300-FX platform switches.

Enforce Subnet Check for IP and MAC Learning

This feature changes the endpoint address learning behavior of the ACI fabric. Enabling it disables endpoint address learning on subnets that are outside the VRF instance, so addresses are learned only when the source IP address belongs to one of the configured subnets for that VRF instance. For local endpoint learning, enabling this feature disables learning of both the MAC and the IP address if the source IP address does not belong to one of the VRF subnets. This feature is disabled by default. To enable it, see Figure 13, based on APIC release 3.1(1i).

Figure 13. Cisco ACI Fabric Settings: Enforce Subnet Check

Note the following caveats with this feature:

- It is available only on second-generation leaf switches. In a mixed environment with first- and second-generation leaf switches, the first-generation switches ignore this feature.
- Enabling this feature enables it system-wide, across all VRF instances, though the impact of the feature is on a per-VRF basis.
- It is available in APIC releases 2.2(2q), 3.0(2h), and higher.
- In APIC releases 3.0(2h) and higher, this setting is available under System > System Settings > Fabric Wide Setting. In earlier APIC releases, it is available under Fabric > Access Policies > Global Policies > Fabric Wide Setting Policy.

Limit IP Learning to Subnet

This feature changes the endpoint IP address learning behavior of the ACI fabric. Enabling it disables IP address learning on subnets that are not part of the bridge domain subnets, so IP addresses are learned only when the source IP address belongs to one of the configured subnets for that bridge domain. A bridge domain can have multiple IP subnets, and enabling this feature limits IP address learning to those subnets rather than all subnets in the ACI fabric. This feature is enabled by default as of APIC releases 2.3(1e) and 3.0(1k). To change or verify the status of the feature, see Figure 14, based on APIC release 3.1(1i). To configure this feature as the bridge domain is being created, go to the L3 Configurations section.

Figure 14. Cisco ACI Fabric Settings: Limit IP Learning to Subnet

Note the following caveats with this feature:

- It is available on first- and second-generation ACI leaf switches.
- It is not required if Enforce Subnet Check is enabled, because that feature supersedes it. Enable this feature if you have first-generation leaf switches or a mixed environment with both first- and second-generation leaf switches.
- Prior to APIC release 3.0(1k), toggling this feature with unicast routing enabled could result in an impact of 120 seconds: ACI flushed all endpoint addresses and suspended learning on the bridge domain for 120 seconds. The behavior in releases 3.0(1k) and later is to flush only the endpoint IP addresses that are not part of the bridge domain subnets, with no suspension of address learning.

ARP Flooding

By default, with unicast routing enabled, the ACI fabric treats ARP requests like unicast packets and forwards them using the target IP address in the ARP packets; it does not flood ARP traffic to all the leaf nodes in the bridge domain. This feature changes that default behavior and floods ARP traffic across the fabric to all the leaf nodes in a given bridge domain. It is required in deployments that use Gratuitous ARP (GARP) to indicate an endpoint move, so that ACI can detect the move when it occurs on the same interface and EPG. In a Cisco HyperFlex deployment, this feature must be enabled to support GARP-based learning of endpoint addresses. This feature is disabled by default. To change or view its status, see Figure 15, based on APIC release 3.1(1i).

Figure 15. Cisco ACI Fabric Settings: ARP Flooding

GARP-Based Detection

This feature enables ACI to detect an endpoint IP address move from one MAC address to another when the new MAC address is on the same interface and EPG as the old MAC address. ACI can detect endpoint address moves across different ports, switches, EPGs, or bridge domains, but not when the move occurs on the same interface and EPG. With this feature, ACI uses Gratuitous ARP (GARP) packets to learn and trigger an endpoint move when the GARP packets are received on the same interface and EPG as the old MAC address. This feature is disabled by default. To change or view its status, see Figure 16, which follows the configuration sketch below (based on APIC release 3.1(1i)).
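The GUI paths described above are one way to apply these settings; they can also be set through the REST API. The following sketch enables the fabric-wide Enforce Subnet Check setting and the three bridge-domain-level settings on the Foundation bridge domain. The class and attribute names are believed to match the APIC 3.x object model but should be verified against your APIC release.

```python
# Sketch: apply the recommended ACI fabric settings through the REST API.
# Believed correct for the APIC 3.x object model; verify against your release.
import requests

apic = "https://apic.example.com"  # hypothetical controller address
s = requests.Session()
s.verify = False  # lab convenience only
s.post(f"{apic}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Fabric-wide setting: Enforce Subnet Check (second-generation leaves only).
s.post(f"{apic}/api/mo/uni/infra/settings.json",
       json={"infraSetPol": {"attributes": {"enforceSubnetCheck": "yes"}}})

# Bridge-domain-level settings on bd-foundation-internal in tenant Foundation:
# limit IP learning to the bridge domain subnets, flood ARP traffic, and use
# GARP-based endpoint move detection (which requires ARP flooding and
# unicast routing, as noted in the caveats below).
bd_settings = {"fvBD": {"attributes": {
    "name": "bd-foundation-internal",
    "limitIpLearnToSubnets": "yes",
    "arpFlood": "yes",
    "unicastRoute": "yes",
    "epMoveDetectMode": "garp"}}}
s.post(f"{apic}/api/mo/uni/tn-Foundation/BD-bd-foundation-internal.json",
       json=bd_settings)
```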

Figure 16. Cisco ACI Fabric Settings: GARP-Based Detection

Note the following caveats with this feature:

- ARP Flooding must be enabled on the bridge domain for this feature to be available; otherwise, the feature is not visible in the GUI.
- Unicast routing must also be enabled on the bridge domain to enable GARP-based detection.

COS Preservation

The Cisco HyperFlex Installer deploys a Cisco HyperFlex system with the Cisco UCS QoS classes shown in Figure 17 enabled. The different types of Cisco HyperFlex traffic are classified and marked using COS values of 5, 4, 2, and 1.

Figure 17. Cisco ACI: QoS Classes Enabled

When traffic with nonzero COS values is received by the ACI fabric, the fabric remarks the COS value to 0 before forwarding it. COS 0 is normally used for the Best Effort class, so any traffic forwarded through the ACI fabric is classified and queued as Best Effort traffic. To prevent the ACI fabric from remarking the COS values of the different types of HyperFlex traffic, the Preserve COS feature should be enabled as shown in Figure 18. This is a fabric-wide feature that preserves the COS value of any traffic forwarded through the ACI fabric.

Figure 18. Cisco ACI Fabric: Preserve COS Enabled

The Cisco UCS QoS policy also determines the MTU for a given class of HyperFlex traffic. The storage and vMotion classes are enabled for jumbo frames, but the remaining classes have a default MTU of 1500 bytes. If the fabric remarks storage and vMotion traffic as COS 0, this traffic will be dropped by the Best Effort class when the MTU is larger than 1500 bytes. To prevent this, the Preserve COS feature should be enabled in a HyperFlex deployment (a configuration sketch follows Figure 19).

Figure 19. Cisco ACI Fabric: Preserve COS Feature
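Preserve COS can also be enabled through the REST API. A minimal sketch follows, assuming the default fabric-wide QoS instance policy at uni/infra/qosinst-default; verify the attribute against your APIC release.

```python
# Sketch: enable the fabric-wide Preserve COS (dot1p preserve) setting.
# Assumes the default QoS instance policy at uni/infra/qosinst-default.
import requests

apic = "https://apic.example.com"  # hypothetical controller address
s = requests.Session()
s.verify = False  # lab convenience only
s.post(f"{apic}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Setting ctrl to "dot1p-preserve" stops the fabric from remarking COS to 0.
s.post(f"{apic}/api/mo/uni/infra/qosinst-default.json",
       json={"qosInstPol": {"attributes": {"ctrl": "dot1p-preserve"}}})
```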

Jumbo Frames and MTU

Traditional switching fabrics typically use a 1500-byte MTU and must be explicitly configured to support jumbo frames. The ACI fabric, however, by default uses an MTU of 9150 bytes on the core-facing ports of leaf and spine switches and 9000 bytes on the access ports of leaf switches. Therefore, no configuration is necessary to support jumbo frames on an ACI fabric.

Virtual Machine Networking

The APIC automates networking for all virtual workloads, including access policies and Layer 4 through Layer 7 services. When connected to vCenter through a VMM domain, the APIC controls the virtual distributed switching as described in the following section. The Cisco HyperFlex cluster must be successfully installed using the Cisco HyperFlex Installer before Cisco ACI can control virtual machine networking.

Virtual Machine Manager Domains

A VMM domain contains virtual machine controllers such as vCenter and the credentials required for the APIC to interact with the virtual machine controller. The VMM domain for a Cisco HyperFlex system needs to be set up using Cisco HyperFlex and vCenter. As part of the VMM domain creation, the APIC controls the creation and configuration of the vDS.

Note: With the vDS, Cisco ACI uses VLANs to segregate port-group traffic.

After the vDS is deployed in vCenter, all the ESXi hosts in the Cisco HyperFlex cluster should be migrated to the vDS following the guidelines presented earlier in this document in the section Cisco HyperFlex and VMware vSphere Network Configuration.

The APIC communicates with the vDS to publish network policies that are applied to the virtual workloads, including the creation of port groups for virtual machine association. VLANs for virtual machine port groups are allocated from the VLAN pool identified during VMM domain creation. EPGs in the APIC translate to vDS port groups in the VMware virtual infrastructure; a VMM domain can contain multiple EPGs, and hence multiple port groups. To position an application, the application administrator deploys the virtual machines and places each vNIC in the port group defined by the APIC for the appropriate application tier. Figure 20 provides an overview of the relationship between EPGs in Cisco ACI and port groups on the vDS.

Figure 20. Cisco ACI: Attaching Application EPGs with VMware vDS

When you create the vDS controlled by the APIC during VMM domain creation, use the settings listed in Table 3 and illustrated in Figure 21; a REST sketch of the VMM domain creation follows Table 3.

Table 3. Recommended vDS Settings

Item               Setting
Port-channel mode  MAC Pinning-Physical-NIC-load
vSwitch policy     Cisco Discovery Protocol
Firewall mode      Disabled
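The following hedged sketch shows roughly equivalent REST calls: it creates a dynamic VLAN pool matching Table 2, creates a VMware VMM domain pointing at vCenter, and associates a hypothetical application EPG with the domain. The pool, domain, data center, credential, tenant, and EPG names are placeholders, and the Table 3 vSwitch policies (MAC pinning and CDP) are normally selected in the GUI wizard and are omitted here for brevity.

```python
# Sketch: create a dynamic VLAN pool and a VMware VMM domain whose vDS the
# APIC pushes to vCenter. All names and credentials are hypothetical.
import requests

apic = "https://apic.example.com"  # hypothetical controller address
s = requests.Session()
s.verify = False  # lab convenience only
s.post(f"{apic}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Dynamic VLAN pool covering the virtual machine networking range in Table 2.
vlan_pool = {"fvnsVlanInstP": {
    "attributes": {"name": "hx-vmm-pool", "allocMode": "dynamic"},
    "children": [{"fvnsEncapBlk": {"attributes": {
        "from": "vlan-2100", "to": "vlan-2120"}}}]}}
s.post(f"{apic}/api/mo/uni/infra.json", json=vlan_pool)

# VMware VMM domain: vCenter controller, credentials, and the VLAN pool.
vmm_domain = {"vmmDomP": {"attributes": {"name": "HX-vDS"}, "children": [
    {"infraRsVlanNs": {"attributes": {
        "tDn": "uni/infra/vlanns-[hx-vmm-pool]-dynamic"}}},
    {"vmmUsrAccP": {"attributes": {"name": "vcenter-creds",
        "usr": "administrator@vsphere.local", "pwd": "password"}}},
    {"vmmCtrlrP": {"attributes": {"name": "vcenter1",
        "hostOrIp": "vcenter.example.com",
        "rootContName": "HX-Datacenter"},  # vCenter data center name
     "children": [{"vmmRsAcc": {"attributes": {
        "tDn": "uni/vmmp-VMware/dom-HX-vDS/usracc-vcenter-creds"}}}]}},
]}}
s.post(f"{apic}/api/mo/uni/vmmp-VMware.json", json=vmm_domain)

# Associate an application EPG with the VMM domain; the APIC then creates a
# matching port group on the vDS and allocates a VLAN from the dynamic pool.
epg_assoc = {"fvRsDomAtt": {"attributes": {
    "tDn": "uni/vmmp-VMware/dom-HX-vDS",
    "resImedcy": "immediate", "instrImedcy": "immediate"}}}
s.post(f"{apic}/api/mo/uni/tn-App-Tenant/ap-WebApp/epg-web.json",
       json=epg_assoc)
```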

Figure 21. Cisco ACI: VMM Domain Creation

Virtual Network Provisioning Considerations

When the vDS is provisioned using the APIC, a pool of VLANs is defined for on-demand use in virtual machine networking. VLANs from this pool are dynamically assigned to the EPGs mapped to the VMM domain. Because the APIC does not manage or configure Cisco UCS, the VLANs from the virtual machine networking VLAN pool also need to be configured on Cisco UCS. For the validation described in this document, all the virtual machine networking VLANs were preconfigured on the fabric interconnects and the vNIC templates of the Cisco HyperFlex nodes from Cisco UCS Manager.

Note: You can use Cisco UCS Director to automate virtual machine network provisioning across Cisco ACI and Cisco HyperFlex systems. Instead of adding all the virtual machine networking VLANs on Cisco UCS up front, you can use Cisco UCS Director to provision the VLANs when the EPGs are provisioned on Cisco ACI.

Conclusion

As described in this document, the Cisco HyperFlex system transparently integrates with Cisco ACI. After integration, Cisco ACI enables automated, policy-based network deployment on the Cisco HyperFlex system. It secures applications and tenants from each other by placing each in a microsegmented environment with application- and tenant-specific policy-based network control.

For More Information

Instructions for vSwitch-to-vDS migration: https://kb.vmware.com/selfservice/microsites/search.do?language=en_us&cmd=displaykc&externalid=1010612

Cisco ACI 2.0 design guide: http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737909.html

Cisco HyperFlex installation documentation: http://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/hyperflex_hx_dataplatformsoftware/gettingstartedguide/1-8/b_hyperflexsystems_gettingstartedguide_1_8_c.html

Cisco HyperFlex solution overview: http://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/hyperflex-hx-series/solution-overview-c22-736815.pdf

Appendix: High-Level Overview of Implementation Activities

Figure 22 provides a high-level overview of the activities involved in the proposed solution. Refer to this figure to understand the flow of activities while reading this document.

Figure 22. Overview of Implementation Activities

Copyright and trademarks

Cisco routinely authors white papers, implementation guides, and technology overviews. Cisco does not generally license these materials for reproduction or distribution for commercial purposes (for example, in for-profit books or compilations) or permit local hosting of these materials on third-party webpages. Instead, Cisco encourages and specifically authorizes parties to "deep link" to webpages on www.cisco.com, without additional permission from Cisco. This policy ensures that all links are to the most current versions of the works.

Printed in USA C11-738327-02 03/18

© 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.