Deploy Application Load Balancers with Source Network Address Translation in Cisco DFA

White Paper

Last Updated: 1/27/2016

Contents

- Introduction
- Target Audience
- Prerequisites
- Placing the Application Load Balancer in the Fabric
- Choosing the Load Balancer Deployment Type
- Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached to Fabric
  - Data Traffic Path in the Fabric
  - Configuring Autoconfiguration Profiles
- Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic Routing between Load Balancer and Fabric
  - Data Traffic Path in the Fabric
  - Configuring Autoconfiguration Profiles
- Deployment Scenario 3: Application Load Balancer with Static Routing Between Load Balancer and Fabric
  - Data Traffic Path in the Fabric
  - Configuring Autoconfiguration Profiles
- Deployment Scenario 4: Shared Hardware-Accelerated Application Delivery Controller with VIP Address Directly Attached to Fabric
  - Data Traffic Path in the Fabric
  - Configuring Autoconfiguration Profiles
- Deployment Considerations for vpc+ Dual-Attached Appliances
- Appendix: CLI Configurations for the Profiles

Introduction

The primary goal of this document is to provide guidelines about how to implement application load balancers in the data center using Cisco Dynamic Fabric Automation (DFA). Readers will learn how to integrate load balancers into the DFA fabric using network autoconfiguration on Cisco Nexus Family switches. The network integration deployment scenarios covered in this document are not specific to any vendor and can accommodate any application load balancer available on the market today.

Target Audience

This document is written for network architects; network design, planning, and implementation teams; and application services and maintenance teams.

Prerequisites

This document assumes that the reader is already familiar with the mechanisms of the DFA autoconfiguration feature. The reader should be familiar with mobility domain, virtual switch interface (VSI) Discovery and Configuration Protocol (VDP), network profile, and services-network profile configurations. Please refer to the following configuration guide for more information: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration.html.

Placing the Application Load Balancer in the Fabric

Load-balancer appliances can be connected in several places in the network. Network autoconfiguration on Cisco Nexus switches allows dynamic instantiation of the necessary configuration on leaf nodes, so the recommended approach is to connect load balancers at the leaf level. Spine nodes do not contain any classical Ethernet (CE) host ports and should not be used as service attachment points.

With the dynamic autoconfiguration feature, load balancers, in both hardware and virtual machine form factors, can be connected anywhere in the network. Network utilization and forwarding can be optimized when the relevant service appliances are attached to a single pair of leaf nodes, referred to as the service leaf. The logical role of the service leaf does not change the configuration or enable additional features on this set of leaf nodes; it is used essentially as a central location for attaching service nodes.

If your organization chooses to use the service leaf and needs to use virtual load balancers or virtual appliances, you will need to follow certain guidelines. With automated or orchestrated virtual-services deployment mechanisms, the automation or orchestration tool must help ensure the placement of deployed virtual services and virtual machines. For example, in Cisco UCS Director, you can specify the set of hypervisors on which virtual services can be created. Attaching this set of hypervisors to the service leaf will help ensure the location of deployed services in the network.

Choosing the Load Balancer Deployment Type

In a network, a load balancer can be deployed in the following scenarios:

- One or more load balancers for a given tenant: Load balancers can be virtual or physical.
- One or more load balancers shared across multiple tenants: Here, the load balancer is most likely a hardware platform, and depending on the vendor and software, the load balancer may provide built-in virtualization features, such as traffic domains, Virtual Routing and Forwarding (VRF) functions, and virtual contexts.

- One or more hardware offload appliances shared across multiple tenants: This appliance would primarily be used with SSL offload or other resource-intensive applications.

This document focuses on deployment scenarios in which a given load balancer is used by a single tenant. The availability of multitenancy mechanisms allows you to easily expand the single-tenant scenarios described here to multitenant deployments by using VLAN and VRF separation.

Deployment Scenario 1: Application Load Balancer with Virtual IP Address Directly Attached to Fabric

This scenario walks through a one-arm application load balancer. The virtual IP (VIP) address of the load balancer is directly attached to the switch and will be visible in a similar way to an end host in the fabric. This very general and frequently seen use case is shown in Figure 1.

Figure 1. Logical Schema of One-Arm Load Balancer, Web Servers, and Clients Internal and External to Fabric

For this and all other deployment scenarios in this document, the load balancer is configured with Source Network Address Translation (SNAT) to facilitate the server return path through the load balancer.

The load balancer is configured with one or more VIP addresses, depending on the application requirements. These addresses have their respective default gateways on the Leaf-1 node, which maintains the Address Resolution Protocol (ARP) cache for all directly attached IP addresses. Each VIP address entry in the ARP cache of the leaf node is then converted to a /32 IP address prefix and is distributed throughout the fabric using the fabric control plane (Multiprotocol Border Gateway Protocol [MP-BGP]). The default gateway for the VIP subnet is a switch virtual interface (SVI), which is automatically configured with the autoconfigure feature of the fabric.

Network segments that host web servers and clients internal to the fabric are configured with their respective autoconfiguration profiles and can use the expedited forwarding or traditional forwarding mode.

Data Traffic Path in the Fabric

Clients that access the load-balanced application can be located within the fabric or external to the fabric. Figures 2 and 3 show how application data traffic is load-balanced in the network fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through VIP1.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the real web servers on the configuration list. The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes back through the load balancer. The packet is then forwarded to the real server. In most deployment scenarios, VIP addresses and real web servers reside on different subnets.

Figure 2. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is subjected to SNAT. This process helps ensure that the client maintains the TCP session of a current web transaction or the User Datagram Protocol (UDP) data stream of a given application.
4. The load balancer then forwards the return traffic back to the client.

Figure 3. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path
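To make the address rewriting concrete, the following sketch traces one request through the SNAT steps above. Only the 172.16.10.0/24 VIP subnet matches the addressing used later in this scenario; the client address and the web-server subnet are hypothetical values chosen for illustration.

Client -> VIP:       src 192.0.2.50,   dst 172.16.10.10 (VIP1)
LB -> web server:    src 172.16.10.13 (SNAT), dst 172.16.20.11 (selected real server)
Web server -> LB:    src 172.16.20.11, dst 172.16.10.13 (return path forced through the LB)
LB -> client:        src 172.16.10.10 (VIP1), dst 192.0.2.50 (original session preserved)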

Configuring Autoconfiguration Profiles

You can use the autoconfiguration feature of the Cisco Nexus switches and the related fabric to dynamically instantiate the necessary configuration wherever end hosts or service appliances are attached to the fabric.

In this deployment scenario, the load balancer, as a service appliance, is configured so that the VIP address of the load-balanced service is in the same subnet as the physical load-balancer network interface. The VIP address is seen directly in the ARP table of the switch and redistributed to the fabric as a host /32 prefix. Moreover, there is no need for any static or dynamic routing adjacency in this case. The load balancer must be properly configured in the IP subnet (with the correct default gateway IP address). The autoconfiguration profile defaultnetworkuniversaltfprofile (the CLI command details for this profile can be found in the appendix) will be used here to attach the load balancer in exactly the same way as you attach regular hosts. With the autoconfiguration feature, you can attach a load-balancer appliance from any vendor to the fabric.

Note: This example does not cover out-of-band (OOB) management-port configuration. If an OOB management interface is connected to the fabric and needs to be configured, you also need to create a separate autoconfiguration profile in Cisco Prime Data Center Network Manager (DCNM).

First, you need to determine which tenant will be hosting the load balancer (Figures 4 and 5). If the organization and partition for the tenant do not exist, you will need to define them in DCNM. When you create the partition, note that with DCNM and Cisco NX-OS Software Release 7.1 and later, you can use universal autoconfiguration profiles. For this and the next deployment scenarios, use vrf-common-universal-dynamic-lb-es (the CLI command details for this profile can be found in the appendix) as the partition profile. This specific partition profile is needed to facilitate the redistribution of leaf-local routing information to the fabric.
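The appendix reproduces the full profile definitions as figures. As a rough, abridged textual sketch of what this partition profile expands to on a leaf, reconstructed from the instantiated configuration shown later in this document (not the verbatim profile shipped with DCNM, and with $vrfName and $asn assumed as the parameter names):

configure profile vrf-common-universal-dynamic-lb-es
  vrf context $vrfName
    rd auto
    address-family ipv4 unicast
      route-target both auto
  router bgp $asn
    vrf $vrfName
      address-family ipv4 unicast
        ! host (/32) and subnet routes learned on this leaf are pushed into MP-BGP
        redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
        redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
        maximum-paths ibgp 2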

Figure 4. Organization Creation

Figure 5. Partition Creation

Next, you need to provision the autoconfiguration profile to which you intend to attach the load balancer (Figure 6). Note that the functions described in this deployment scenario are verified only for matching network and partition autoconfiguration profiles. You should use the traditional forwarding mode profile, defaultnetworkuniversaltfprofile, to help ensure that VIP addresses are discovered throughout the fabric and do not go silent, which may happen as a result of various vendor implementations.

Also note the VLAN and mobility domain being used. You will need to use this exact VLAN ID in the load-balancer configuration. In the example used here, the global mobility domain is used to uniquely derive the virtual network ID (VNI) value for the bridge domain to which the load balancer is attached. However, customers can use the multiple-mobility-domain feature, which allows the choice of a value from the drop-down menu for the network profile configuration. If a virtual appliance with a VDP-capable virtual switch is used (for example, the Cisco Nexus 1000V Switch or the Kernel-based Virtual Machine [KVM] Open Virtual Switch [OVS]), the mobility domain is not needed. Please refer to the configuration guide for details: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/dfa/configuration/b-dfa-configuration/auto_configuration.html.

Figure 6. Autoconfiguration Profile Creation for Load-Balancer VIP-Attached Subnet

After you plug in your load-balancer appliance or, in the case of a virtual appliance, spin up the virtual machine and launch a service, the SVI default gateway is instantiated on the leaf node using autoconfiguration. Then the VIP address for a configured service is learned on the leaf node, along with the IP address of the main interface of a load balancer in one-arm mode.

The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer is attached:

show fabric database host detail

Active Host Entries
flags: L - Locally inserted, V - vpc+ inserted, R - Recovered, X - xlated Vlan

VLAN  VNI    STATE           FLAGS  PROFILE(INSTANCE)
100   30003  Profile Active  L      defaultnetworkuniversaltfprofile(instance_def_100_1)

Displaying Data Snooping Ports

Interface  Encap  Flags  State
Eth1/1     100    L      Profile Active
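Although this document shows the expanded SVI configuration only for the static-routing scenario later on, the anycast-gateway SVI instantiated here should look similar. The following is a sketch by analogy with that output, using the VRF, gateway address, and tag from this example; the exact rendering depends on the profile version:

show running-config interface vlan 100 expand-port-profile

interface Vlan100
  no shutdown
  vrf member OrganizationABC:PartitionABC
  ip address 172.16.10.1/24 tag 12345
  fabric forwarding mode anycast-gateway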

VIP addresses configured on the load balancer are learned and can be seen from the MAC address table on the leaf node:

show mac address-table vlan 100
Legend:
  * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
  age - seconds since last seen, + - primary entry using vpc Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 100      2020.0000.00aa    static   0        F      F      sup-eth2
* 100      d867.d903.f345    dynamic  0        F      F      Eth1/1

As the configuration of the load balancer dictates, all VIP addresses use the same subnet and terminate on the leaf node:

show ip arp vrf OrganizationABC:PartitionABC

Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context OrganizationABC:PartitionABC
Total number of entries: 4
Address         Age       MAC Address     Interface
172.16.10.10    00:02:11  d867.d903.f345  Vlan100
172.16.10.11    00:03:02  d867.d903.f345  Vlan100
172.16.10.12    00:03:02  d867.d903.f345  Vlan100
172.16.10.13    00:03:02  d867.d903.f345  Vlan100

The leaf node converts each of the ARP entries for the corresponding VIP addresses to /32 IP address prefixes and shares them with the fabric:

sh ip route vrf OrganizationABC:PartitionABC

IP Route Table for VRF "OrganizationABC:PartitionABC"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:13:50, bgp-65510, internal, tag 65510, segid 50003

172.16.10.0/24, ubest/mbest: 1/0, attached
    *via 172.16.10.1, Vlan100, [0/0], 00:14:01, direct, tag 12345,
172.16.10.1/32, ubest/mbest: 1/0, attached
    *via 172.16.10.1, Vlan100, [0/0], 00:14:01, local, tag 12345,
172.16.10.10/32, ubest/mbest: 1/0, attached
    *via 172.16.10.10, Vlan100, [190/0], 00:06:18, hmm
172.16.10.11/32, ubest/mbest: 1/0, attached
    *via 172.16.10.11, Vlan100, [190/0], 00:06:18, hmm
172.16.10.12/32, ubest/mbest: 1/0, attached
    *via 172.16.10.12, Vlan100, [190/0], 00:06:18, hmm
172.16.10.13/32, ubest/mbest: 1/0, attached
    *via 172.16.10.13, Vlan100, [190/0], 00:06:18, hmm

sh ip bgp vrf OrganizationABC:PartitionABC

BGP routing table information for VRF OrganizationABC:PartitionABC, address family IPv4 Unicast
BGP table version is 10, local router ID is 172.16.10.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop     Metric  LocPrf  Weight  Path
*>i0.0.0.0/0          10.201.4.21          100     0       i
*>r172.16.10.0/24     0.0.0.0      0       100     32768   ?
*>r172.16.10.10/32    0.0.0.0      0       100     32768   ?
*>r172.16.10.11/32    0.0.0.0      0       100     32768   ?
*>r172.16.10.12/32    0.0.0.0      0       100     32768   ?
*>r172.16.10.13/32    0.0.0.0      0       100     32768   ?

The load balancer's network connectivity is now provisioned. The load balancer is now ready for further service policy configuration, which can be performed through its CLI or GUI, depending on the vendor of the load balancer in use. Such configuration is beyond the scope of this document.

Deployment Scenario 2: Application Load Balancer with Host Route Injection and Dynamic Routing between Load Balancer and Fabric

In this scenario, the virtual or physical load-balancer appliance is directly attached to a leaf switch. However, the VIP address for the load-balanced application appears to be attached behind a virtual router inside the load balancer. The reachability information about the configured VIP addresses is shared with the fabric using the Open Shortest Path First (OSPF) dynamic routing protocol. The load balancer establishes dynamic routing protocol peering with the leaf device to facilitate the exchange of route information (Figure 7).

Figure 7. Logical Schema Showing Dynamic Routing Adjacency Between the Load Balancer and the Fabric

Just as in deployment scenario 1, the load balancer is configured with SNAT to facilitate the server return path through the load balancer.

Using the OSPF dynamic routing protocol, the load balancer shares reachability information about the entire subnet on which the VIP addresses reside. When the leaf node receives this reachability information, it is redistributed to the MP-BGP control plane and shared throughout the fabric. As a result, the entire fabric will know how to reach the VIP addresses for the applications.

Note: Configuration of the dynamic routing protocol and peering is handled using the autoconfiguration profile and is discussed later in this document.

Data Traffic Path in the Fabric

Scenario 2 is similar in many ways to scenario 1. Figures 8 and 9 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer and shared with the fabric, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configuration list. The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.

Figure 8. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is subjected to NAT. This process helps ensure that the client maintains the TCP session of a current web transaction.
4. The load balancer then forwards the return traffic back to the client.

Figure 9. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path

Configuring Autoconfiguration Profiles

In this deployment scenario, the fabric needs to establish dynamic routing adjacency with the load balancer. In other words, the leaf node must automatically establish OSPF routing adjacency with the load balancer, receive prefixes from the load balancer, and then redistribute the prefixes to the BGP control plane of the fabric. In contrast to the first scenario, there is no need to configure a distributed anycast gateway when establishing dynamic routing protocol adjacency between the load balancer and the leaf node.

The network autoconfiguration profile that meets this requirement and that is created specifically for such a scenario is servicenetworkuniversaldynamicroutinglbprofile (the CLI command details for this profile can be found in the appendix). Note that this autoconfiguration profile must be deployed in the partition defined with the vrf-common-universal-dynamic-lb-es partition profile (the CLI command details for this profile can be found in the appendix). Using these two profiles in parallel facilitates the redistribution of the correct route information between the fabric and the load balancer (Figures 10 and 11).

Figure 10. Configuring the Partition Using the vrf-common-universal-dynamic-lb-es Profile

Figure 11. Configuring the Network Segment Used for Dynamic Routing Peering Between the Fabric and Load Balancer

The OSPF routing protocol configuration on the load balancer itself needs to be specified separately, using either the load balancer's CLI or GUI. The following options need to be configured (a syntax sketch follows the list):

- Peering with the fabric using backbone area 0 (equivalent to area 0.0.0.0)
- Default route (0.0.0.0/0) with the next hop pointing to the gateway: in the example here, 10.10.15.1
- OSPF router ID, according to the load-balancer-specific syntax
- Advertisement of the VIP addresses in OSPF
- VLAN ID value that matches the value configured in the autoconfiguration profile in DCNM: in the example here, 301
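Because the appliance-side syntax is vendor specific, the following is only an illustrative sketch in FRRouting-style syntax. The router ID 10.10.15.2 matches the OSPF neighbor shown later in this scenario, and 172.16.10.0/24 matches the VIP subnet used in this document; your platform's commands will differ.

! Illustrative appliance-side routing configuration (FRRouting-style syntax)
! Default route toward the fabric gateway on the peering segment (VLAN 301)
ip route 0.0.0.0/0 10.10.15.1
router ospf
 ospf router-id 10.10.15.2
 ! Peer with the leaf in backbone area 0
 network 10.10.15.0/24 area 0.0.0.0
 ! Advertise the VIP subnet into OSPF
 network 172.16.10.0/24 area 0.0.0.0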

After the load balancer is connected to the fabric, the leaf node will detect data traffic tagged with VLAN ID 301 on the host port. This detection will trigger the instantiation of the autoconfiguration profile. The following configuration is instantiated on the leaf or added to the existing configuration as part of the autoconfiguration process:

show run ospf

feature ospf

router ospf 5
  vrf OrganizationA:PartitionA
    router-id 10.10.15.1

interface Vlan301
  ip router ospf 5 area 0.0.0.0

sh run bgp

router bgp 65510
  vrf OrganizationA:PartitionA
    address-family ipv4 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      redistribute ospf 5 route-map ospfmap
      maximum-paths ibgp 2
    address-family ipv6 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-V6HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      maximum-paths ibgp 2

vrf context OrganizationA:PartitionA
  rd auto
  address-family ipv4 unicast
    route-target import 65510:9999
    route-target both auto
  address-family ipv6 unicast
    route-target import 65510:9999
    route-target both auto

show run int vlan 301 expand-port-profile

interface Vlan301
  no shutdown
  vrf member OrganizationA:PartitionA
  ip address 10.10.15.1/24 tag 12345
  ip router ospf 5 area 0.0.0.0

Note the redistribute ospf 5 command in the BGP configuration. This command helps ensure that all VIP address prefixes received from the load balancers are redistributed to the fabric BGP control plane and shared with the rest of the fabric: that is, the entire fabric will learn these prefixes through BGP.

The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer is attached:

sh fabric database host detail

Active Host Entries
flags: L - Locally inserted, V - vpc+ inserted, R - Recovered, X - xlated Vlan

VLAN  VNI    STATE           FLAGS  PROFILE(INSTANCE)
301   30001  Profile Active  L      servicenetworkuniversaldynamicroutinglbprofile(instance_def_301_1)

Displaying Data Snooping Ports

Interface  Encap  Flags  State
Eth1/1     301    L      Profile Active

As seen in the following CLI output, the load balancer successfully established a routing adjacency with the fabric leaf:

sh ip ospf neighbors vrf OrganizationA:PartitionA

OSPF Process ID 5 VRF OrganizationA:PartitionA
Total number of neighbors: 1
Neighbor ID     Pri State            Up Time  Address         Interface
10.10.15.2        1 FULL/DR          00:00:03 10.10.15.2      Vlan301

The next CLI output confirms that the leaf received valid /32 IP routes through OSPF. Here, each such IP route represents a VIP address configured on the load balancer:

sh ip route vrf OrganizationA:PartitionA

IP Route Table for VRF "OrganizationA:PartitionA"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:45:15, bgp-65510, internal, tag 65510, segid 50005
10.10.15.0/24, ubest/mbest: 1/0, attached
    *via 10.10.15.1, Vlan301, [0/0], 00:45:28, direct, tag 12345,
10.10.15.1/32, ubest/mbest: 1/0, attached
    *via 10.10.15.1, Vlan301, [0/0], 00:45:28, local, tag 12345,
172.16.10.10/32, ubest/mbest: 1/0
    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.11/32, ubest/mbest: 1/0

    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.12/32, ubest/mbest: 1/0
    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra
172.16.10.13/32, ubest/mbest: 1/0
    *via 10.10.15.2, Vlan301, [110/41], 00:18:42, ospf-5, intra

The following CLI output shows that redistribution from OSPF to BGP works as expected:

sh ip bgp vrf OrganizationA:PartitionA

BGP routing table information for VRF OrganizationA:PartitionA, address family IPv4 Unicast
BGP table version is 35, local router ID is 10.10.15.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop     Metric  LocPrf  Weight  Path
*>i0.0.0.0/0          10.201.4.21          100     0       i
*>r10.10.15.0/24      0.0.0.0      0       100     32768   ?
*>r172.16.10.10/32    0.0.0.0      41      100     32768   ?
*>r172.16.10.11/32    0.0.0.0      41      100     32768   ?
*>r172.16.10.12/32    0.0.0.0      41      100     32768   ?
*>r172.16.10.13/32    0.0.0.0      41      100     32768   ?

In addition, the next two sets of CLI output show the MAC address and the respective ARP entry of the load balancer's interface:

sh mac address-table vlan 301
Legend:
  * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
  age - seconds since last seen, + - primary entry using vpc Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 301      d867.d903.f345    dynamic  10       F      F      Eth1/1

sh ip arp vrf OrganizationA:PartitionA

Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context OrganizationA:PartitionA
Total number of entries: 1
Address         Age       MAC Address     Interface
10.10.15.2      00:16:51  d867.d903.f345  Vlan301

As a summary, Figure 12 depicts the logical routing topology of this scenario.

Figure 12. Logical Routing Topology

Deployment Scenario 3: Application Load Balancer with Static Routing Between Load Balancer and Fabric

This scenario is very similar to scenario 2: that is, the VIP address for the load-balanced application is configured on the load balancer. However, in scenario 3 the load balancer does not establish dynamic routing protocol adjacency with the leaf node in the fabric. Instead, the reachability information about VIP addresses is configured on the leaf node and the load balancer using static routes (Figure 13).

Figure 13. Logical Schema Showing the Static Routing Between the Load Balancer and the Fabric

Just as in the previous deployment scenarios, the load balancer is configured with SNAT to facilitate the server return path through the load balancer.

Static routes toward VIP addresses need to be configured on a directly attached leaf node: in the example here, on Leaf-1. The next hop for these prefixes should point to the load balancer's interface IP address: in the example here, 10.10.20.2. In addition, these static routes must be redistributed to the MP-BGP control plane of the fabric to facilitate fabricwide reachability to VIP addresses. Static routes to VIP addresses, together with their redistribution, are configured in DCNM as part of the autoconfiguration profile and are dynamically instantiated when the load balancer is attached to the network. As a result, the entire fabric will know how to reach VIP addresses for the respective applications.

Please note that automated configuration of the static routes happens as part of the partition-profile autoconfiguration. This means that any network autoconfiguration profile associated with such a partition profile or VRF will also trigger automated configuration of static routes on a given leaf node.

Data Traffic Path in the Fabric

Figures 14 and 15 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the web application, which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer. Static routes to the VIP addresses are configured on the Leaf-1 node and are redistributed to the fabric control plane, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. On the basis of the algorithm configured for the load balancer, the received request is prepared for forwarding to one of the web servers on the configuration list. The load balancer performs a NAT operation: it swaps out the client's source IP address in the packet header and swaps in the VIP1 address. This process helps ensure that the return traffic passes through the load balancer. The packet is then forwarded to the web server selected earlier.

Figure 14. Data Traffic Path in the Fabric: Client to Load Balancer to Web Server Path

3. When the load balancer receives the return traffic from the web server, the traffic is subjected to NAT. This process helps ensure that the client maintains the TCP session of a current web transaction or the UDP data stream of a given application.
4. The load balancer then forwards the return traffic back to the client.

Figure 15. Return Data Traffic Path in the Fabric: Web Server to Load Balancer to Client Return Path

Configuring Autoconfiguration Profiles

In this deployment scenario, the fabric needs to configure static routes to VIP addresses on the load-balancer-attached Leaf-1. In addition, Leaf-1 needs to redistribute these static routes to the MP-BGP control plane of the fabric. The network autoconfiguration profile that fits this requirement and is specifically created for such a scenario is servicenetworkuniversaltfstaticroutingprofile (the CLI command details for this profile can be found in the appendix). Note that this autoconfiguration profile must be deployed in the partition defined with the vrf-common-universal-static partition profile (the CLI command details for this profile can be found in the appendix). Using these two profiles in parallel facilitates redistribution of the correct route information between the fabric and the load balancer.

When configuring the vrf-common-universal-static partition profile in DCNM, you must specify the static route to the subnet in which the VIP addresses are located. The next hop for this route should point to the interface IP address of the load balancer (Figures 16 and 17).

Figure 16. Configuring the Partition Profile: n00, n01, n02, and So On Signify the Subnet of the Static Route; nh00, nh01, nh02, and So On Signify the Next-Hop IP Address
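The appendix reproduces the profile itself as a figure. As a rough sketch of how the n00/nh00 parameter pairs likely expand on the leaf, reconstructed from the instantiated output shown later in this scenario (not the verbatim profile, and with $vrfName and $asn assumed as parameter names):

configure profile vrf-common-universal-static
  vrf context $vrfName
    ! e.g., ip route 172.16.10.0/24 10.10.20.2 for the VIP subnet
    ip route $n00 $nh00
    ! additional VIP subnets, if defined
    ip route $n01 $nh01
  router bgp $asn
    vrf $vrfName
      address-family ipv4 unicast
        redistribute static route-map staticmap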

Figure 17. Configuring the Network Segment Used for Attaching a Load Balancer to the Fabric

The static route configuration on the load balancer itself needs to be specified separately, using either the load balancer's CLI or GUI. In most deployment cases, only the static default route will be required. The following options need to be configured (a syntax sketch follows the list):

- Default route (0.0.0.0/0) with the next hop pointing to the gateway: in the example here, 10.10.20.1
- VLAN ID that matches the value configured in the autoconfiguration profile in DCNM: in the example here, 331
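As in scenario 2, the appliance-side syntax is vendor specific. In FRRouting-style syntax, offered purely as an illustration, the routing configuration reduces to a single static default route (the VLAN 331 tagging is configured on the appliance's network interface and is not shown here):

! Illustrative appliance-side configuration (FRRouting-style syntax)
! Default route toward the fabric anycast gateway
ip route 0.0.0.0/0 10.10.20.1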

After the load balancer is connected to the fabric, the leaf node will detect data traffic tagged with VLAN ID 331 on the host port. This detection will trigger the instantiation of the autoconfiguration profiles. The following configuration is dynamically instantiated on the leaf or added to the existing configuration as part of the autoconfiguration process:

sh run int vlan 331 expand-port-profile

interface Vlan331
  no shutdown
  vrf member OrganizationA:PartitionC
  ip address 10.10.20.1/24 tag 12345
  fabric forwarding mode anycast-gateway

sh run bgp

router bgp 65510
  vrf OrganizationA:PartitionC
    address-family ipv4 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      redistribute static route-map staticmap
      maximum-paths ibgp 2
    address-family ipv6 unicast
      redistribute hmm route-map FABRIC-RMAP-REDIST-V6HOST
      redistribute direct route-map FABRIC-RMAP-REDIST-SUBNET
      maximum-paths ibgp 2

vrf context OrganizationA:PartitionC
  rd auto
  address-family ipv4 unicast
    route-target import 65510:9999
    route-target both auto
  address-family ipv6 unicast
    route-target import 65510:9999
    route-target both auto

sh run ip

vrf context OrganizationA:PartitionC
  ip route 172.16.10.0/24 10.10.20.2

Note the redistribute static command in the preceding configuration. This command helps ensure that the static route pointing to VIP addresses is redistributed to the fabric BGP control plane and shared with the rest of the fabric.

The instantiated autoconfiguration profile can be checked from the CLI of the leaf node to which the load balancer is attached:

sh fabric database host detail

Active Host Entries
flags: L - Locally inserted, V - vpc+ inserted, R - Recovered, X - xlated Vlan

VLAN  VNI    STATE           FLAGS  PROFILE(INSTANCE)
331   30031  Profile Active  L      servicenetworkuniversaltfstaticroutingprofile(instance_def_331_4)

Displaying Data Snooping Ports

Interface  Encap  Flags  State
Eth1/1     331    L      Profile Active

The following two sets of CLI output show that static route redistribution into BGP works as expected:

sh ip route vrf OrganizationA:PartitionC

IP Route Table for VRF "OrganizationA:PartitionC"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

0.0.0.0/0, ubest/mbest: 1/0
    *via 10.201.4.21%default, [200/0], 00:03:18, bgp-65510, internal, tag 65510, segid 50007
10.10.20.0/24, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan331, [0/0], 00:03:32, direct, tag 12345,
10.10.20.1/32, ubest/mbest: 1/0, attached
    *via 10.10.20.1, Vlan331, [0/0], 00:03:32, local, tag 12345,
172.16.10.0/24, ubest/mbest: 1/0
    *via 10.10.20.2, [1/0], 00:03:31, static

sh ip bgp vrf OrganizationA:PartitionC

BGP routing table information for VRF OrganizationA:PartitionC, address family IPv4 Unicast
BGP table version is 10, local router ID is 10.10.20.1
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network            Next Hop     Metric  LocPrf  Weight  Path
*>i0.0.0.0/0          10.201.4.21          100     0       i
*>r10.10.20.0/24      0.0.0.0      0       100     32768   ?
*>r10.10.20.2/32      0.0.0.0      0       100     32768   ?
*>r172.16.10.0/24     0.0.0.0      0       100     32768   ?

In addition, the next two sets of CLI output show the MAC address and the respective ARP entry of the load balancer's interface:

sh mac address-table vlan 331
Legend:
  * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
  age - seconds since last seen, + - primary entry using vpc Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY   Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 331      2020.0000.00aa    static   0        F      F      sup-eth2
* 331      d867.d903.f345    dynamic  400      F      F      Eth1/1

sh ip arp vrf OrganizationA:PartitionC

IP ARP Table for context OrganizationA:PartitionC
Total number of entries: 1
Address         Age       MAC Address     Interface
10.10.20.2      00:00:09  d867.d903.f345  Vlan331

Deployment Scenario 4: Shared Hardware-Accelerated Application Delivery Controller with VIP Address Directly Attached to Fabric

Scenario 4 deploys an application load balancer along with a hardware-accelerated application delivery controller (ADC). The ADC is equipped with hardware-accelerated encryption offload mechanisms and can be used as a shared resource among multiple applications (Figure 18). The load balancer can be either a physical appliance or a virtual appliance. In the latter case, it should be able to offload SSL encryption from the virtual load balancer to a physical hardware-accelerated ADC.

Figure 18. Logical Schema Showing the Load Balancer and Hardware-Accelerated ADC Connection to the Service Leaf

The load balancer and hardware-accelerated ADC can be deployed using any of the first three scenarios discussed in this document, depending on the requirements and each platform's capabilities. However, to keep this scenario simple, the deployment uses scenario 1 to deploy both the load balancer and the hardware-accelerated ADC. Also, each of the two devices in the network needs to perform NAT operations to enforce the return traffic path.

Network autoconfiguration on Cisco Nexus switches dynamically instantiates autoconfiguration profiles anywhere in the fabric, so the load balancer or hardware-accelerated ADC can be placed anywhere in the fabric. However, to optimize fabric utilization, you should enforce a single location for the placement of service nodes. This can be done by designating a single leaf, or a pair of leaf nodes bundled in a virtual PortChannel (vpc+), as the service leaf and then attaching the most demanded and utilized service nodes there. This approach helps ensure that back-end traffic is locally switched on a service leaf and does not need to traverse the fabric across spines. Note that Layer 3 route peering over vpc+ is supported on the Cisco Nexus 6000 Series and Nexus 5600 platform switches, as well as on the Nexus 7000 and 7700 Series with NX-OS Release 7.2.1 or later. Also note that the service leaf designation does not pertain to any special leaf configuration; rather, it is an administrative designation.

Data Traffic Path in the Fabric

Figures 19 and 20 show how application data traffic is load-balanced in the DFA fabric in this deployment scenario.

1. Clients external or internal to the fabric request data from the SSL-encrypted web application (TCP port 443), which can be reached through the VIP address (VIP1). The VIP addresses are already configured on the load balancer and shared with the fabric, so any workload or device attached to the fabric in the same VRF instance will be able to reach the desired VIP address.
2. The load balancer is configured to forward any received SSL-encrypted traffic to the hardware-accelerated ADC. The load balancer will perform a SNAT operation to enforce the return path.
3. Upon receipt of the traffic, the hardware-accelerated ADC will decrypt the web traffic, select one of the web servers according to the configured algorithm, and forward the data. Upon forwarding the data, the ADC will perform a NAT operation and also change the destination TCP port from 443 to 81. This process enforces the return path and helps ensure that the web server recognizes that the traffic received on port 81 is decrypted traffic.

Figure 19. Data Traffic Path in the Fabric: Client to Load Balancer to Hardware-Accelerated ADC

4. The return traffic from the web server is sent to the hardware-accelerated ADC.
5. The ADC SSL-encrypts the traffic, performs a NAT operation, and forwards the traffic to the load balancer.
6. The load balancer receives the encrypted web traffic, performs a NAT operation, and then forwards the traffic back to the client.

Figure 20. Return Data Traffic Path in the Fabric

Configuring Autoconfiguration Profiles

Refer to deployment scenario 1 of this document for details about how to create the autoconfiguration profiles. Note that both the load balancer and the hardware-accelerated ADC must be members of the same partition to successfully communicate within the fabric.

Deployment Considerations for vpc+ Dual-Attached Appliances

If the load-balancer appliance needs to be dual homed, additional network autoconfiguration profile configuration is required for deployment scenario 2. Deployment scenarios 1 and 3 require no additional changes. In deployment scenario 2, the load balancer needs to establish and maintain OSPF dynamic routing adjacency with both vpc+ peer switches: Leaf-1 and Leaf-2 (Figure 21).

Figure 21. vpc+ Dual-Attached Load Balancer Establishes OSPF Routing Adjacency with Both vpc+ Peer Switches

The OSPF routing protocol requires a unique IP address on each peer to establish such routing adjacency. That is why the network autoconfiguration profiles need to include an additional detail. Figure 22 shows the Secondary Gateway IPv4 Address field. The autoconfiguration process will use the IP address specified in this field to configure the SVI IP address on the secondary vpc+ peer. The primary vpc+ peer is configured with the value specified in the gatewayIpAddress field.
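The result, sketched below by analogy with the instantiated output from scenario 2, is one SVI per vpc+ peer with distinct addresses in the same subnet. The secondary address 10.10.15.3 is an assumption chosen for illustration.

! Leaf-1 (primary vpc+ peer): SVI uses the gatewayIpAddress value
interface Vlan301
  vrf member OrganizationA:PartitionA
  ip address 10.10.15.1/24 tag 12345
  ip router ospf 5 area 0.0.0.0

! Leaf-2 (secondary vpc+ peer): SVI uses the Secondary Gateway IPv4 Address
! value (10.10.15.3 is hypothetical)
interface Vlan301
  vrf member OrganizationA:PartitionA
  ip address 10.10.15.3/24 tag 12345
  ip router ospf 5 area 0.0.0.0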

Figure 22. Configuration of the IP Address for the SVI of the Secondary vpc+ Peer

No additional configuration is needed on the load balancer. Note that the load balancer establishes and maintains routing adjacencies with both vpc+ peers: Leaf-1 and Leaf-2.

Appendix: CLI Configurations for the Profiles

Figures 23 through 27 provide the CLI configurations for the profiles used in this document.

Note: This appendix is provided for reference only.

Figure 23. Network Profile defaultnetworkuniversaltfprofile

Figure 24. Partition Profile vrf-common-universal-dynamic-lb-es

Figure 25. Network Profile servicenetworkuniversaldynamicroutinglbprofile

Figure 26. Network Profile servicenetworkuniversaltfstaticroutingprofile

Figure 27. Partition Profile vrf-common-universal-static