Integration of Hypervisors and L4-7 Services into an ACI Fabric Azeem Suleman, Principal Engineer, Insieme Business Unit
Agenda Introduction to ACI Review of ACI Policy Model Hypervisor Integration Layer 4-7 Services Integration Conclusion
Introduction to ACI
Cisco ACI: Logical Network Provisioning of Stateless Hardware
[Diagram: Web, App, and DB tiers plus an outside connection (tenant VRF), joined by QoS/filter/service policies; the Application Policy Infrastructure Controller (APIC) drives a scale-out, penalty-free-overlay ACI fabric.]
ACI Nomenclature
[Diagram: spine nodes, leaf nodes, and the AVS; EPGs (Internet, Files, Users) acting as service producers and consumers.]
Review of the ACI Policy Model
Bridge Domain (BD)
- A unique layer 2 (L2) or layer 3 (L3) forwarding domain
- Can contain one or more subnets (if unicast routing is enabled)
- Each bridge domain must be linked to a context (VRF)
Equivalent network constructs:
- If a BD is configured as an L2 forwarding domain, it has one or more associated VLANs, and each VLAN is equivalent to an EPG
- If a BD is configured as an L3 forwarding domain, it is equivalent to an SVI with one or more subnets per BD
NOTE: a BD can span multiple switches
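The BD rules above (a BD must be linked to a VRF; subnets are only meaningful when unicast routing is enabled; a BD may carry more than one subnet) can be sketched as a small illustrative Python model. This is not the APIC API, just the relationships as the slide states them:

```python
from dataclasses import dataclass, field

@dataclass
class VRF:
    name: str

@dataclass
class BridgeDomain:
    name: str
    vrf: VRF                      # every BD must be linked to a context (VRF)
    unicast_routing: bool = True
    subnets: list = field(default_factory=list)

    def add_subnet(self, cidr):
        # subnets only make sense when the BD is an L3 forwarding domain
        if not self.unicast_routing:
            raise ValueError("an L2-only BD cannot carry subnets")
        self.subnets.append(cidr)

prod = VRF("prod")
bd = BridgeDomain("bd-web", vrf=prod)
bd.add_subnet("10.10.10.1/24")
bd.add_subnet("10.10.20.1/24")   # a BD can contain more than one subnet
print(bd.subnets)
```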
Object Relationship
[Diagram: a tenant contains contexts (VRFs); each context contains bridge domains; each BD contains one or more subnets.]
End Point Group (EPG)
- A set of hosts that behave the same: all hosts representing an application or application component, independent of other network constructs
[Diagram: hosts offering HTTP and HTTPS services are grouped into EPG Web in the policy model.]
Application Network Profile (ANP)
- Application Network Profiles are groups of EPGs and the policies that define the communication between them
[Diagram: an Application Network Profile containing EPG Web, EPG App, and EPG DB, connected by inbound/outbound policies.]
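The ANP idea (EPGs plus the contracts that define which pairs may communicate, with everything else dropped) can be illustrated with a toy whitelist model. The class and filter names below are hypothetical, chosen only to mirror the three-tier example on the slide:

```python
# Hypothetical mini-model of an Application Network Profile: EPGs plus
# contracts that whitelist which EPG pairs may talk, and on which filter.
class ANP:
    def __init__(self, name):
        self.name = name
        self.contracts = set()   # (consumer_epg, provider_epg, filter)

    def add_contract(self, consumer, provider, flt):
        self.contracts.add((consumer, provider, flt))

    def allowed(self, src, dst, flt):
        # In ACI, traffic not explicitly permitted by a contract is dropped.
        return (src, dst, flt) in self.contracts

app = ANP("three-tier")
app.add_contract("web", "app", "tcp/8080")
app.add_contract("app", "db", "tcp/1433")
print(app.allowed("web", "app", "tcp/8080"))  # permitted by contract
print(app.allowed("web", "db", "tcp/1433"))   # no direct contract: dropped
```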
Integration with Multiple Hypervisors
Hypervisor Integration Agenda: Hypervisor Integration Overview; VMware vCenter Integration; Microsoft SCVMM & Azure Pack Integration; OpenStack Integration
Hypervisor Interaction with ACI: Two Modes of Operation
Non-Integrated Mode: the ACI fabric acts as an IP/Ethernet transport; encapsulations (e.g. VLAN 10) are manually allocated; separate policy domains for physical and virtual.
Integrated Mode: the ACI fabric acts as a policy authority; encapsulations (VLAN 10, VXLAN 10000) are normalised and dynamically provisioned; integrated policy domains across physical and virtual.
Hypervisor Integration with ACI: Control Channel (VMM Domains)
- A relationship is formed between the APIC and a Virtual Machine Manager (VMM); multiple VMMs are likely on a single ACI fabric
- Each VMM and its associated virtual hosts are grouped within the APIC into a VMM domain (e.g. vCenter DVS, vCenter AVS, SCVMM)
- There is a 1:1 relationship between a virtual switch and a VMM domain
Hypervisor Integration Agenda: Hypervisor Integration Overview; VMware vCenter Integration; Microsoft SCVMM & Azure Pack Integration; OpenStack Integration
VMware Integration: Three Different Options
1. Distributed Virtual Switch (DVS): encapsulation VLAN; native installation; VM discovery via LLDP; requires vCenter with an Enterprise+ license
2. vCenter + vShield: encapsulations VLAN and VXLAN; native installation; VM discovery via LLDP; requires vCenter with an Enterprise+ license plus vShield Manager with a vShield license
3. Application Virtual Switch (AVS): encapsulations VLAN and VXLAN; installed as a VIB through VUM or the console; VM discovery via OpFlex; requires vCenter with an Enterprise+ license
ACI Basics: APIC EPG to vSphere Port Group
[Diagram: EPG policies Web, App, and DB on the APIC map to port groups Web (VXLAN 5001), App (VXLAN 5002), and DB (VXLAN 5003) on the virtual distributed switch.]
ACI Hypervisor Integration VMware
Hypervisor Integration with ACI: Endpoint Discovery
Virtual endpoints are discovered for reachability and policy purposes via two methods:
- APIC control-plane learning: an out-of-band handshake (vCenter APIs) or an in-band handshake (OpFlex-enabled hosts such as AVS and Hyper-V)
- Data-path learning: distributed switch learning; LLDP resolves the virtual host ID to the attached port on the leaf node (non-OpFlex hosts)
ACI Hypervisor Integration: VMware DVS/vShield
1. Cisco APIC and VMware vCenter initial handshake
2. APIC creates the VDS
3. VI/server admin attaches the hypervisor to the VDS
4. APIC learns the location of the ESX host through LLDP
5. APIC admin creates the application policy (e.g. an Application Network Profile with EPG Web, EPG App, and EPG DB plus FW/LB functions)
6. APIC automatically maps EPGs to port groups
7. Port groups (Web, App, DB) are created on the VDS
8. VI/server admin instantiates VMs and assigns them to port groups
9. Policy is pushed to the ACI fabric
Application Virtual Switch (AVS) Integration Overview
- OpFlex control protocol: control channel; VM attach/detach and link-state notifications
- VEM extension to the fabric; vSphere 5.0 and above
- BPDU Filter/BPDU Guard, SPAN/ERSPAN, port-level statistics collection
[Diagram: VMs on the N1KV VEM under vSphere, controlled through the southbound OpFlex API.]
ACI Hypervisor Integration: AVS
1. Cisco APIC and VMware vCenter initial handshake
2. APIC creates the AVS VDS
3. VI/server admin attaches the hypervisor to the VDS
4. APIC learns the location of the ESX host through OpFlex (an OpFlex agent runs on each hypervisor)
5. APIC admin creates the application policy
6. APIC automatically maps EPGs to port groups
7. Port groups (Web, App, DB) are created on the AVS
8. VI/server admin instantiates VMs and assigns them to port groups
9. Policy is pushed to the ACI fabric
ACI Hypervisor Integration: VMware AVS (VMM domain settings)
- Name of the VMM domain
- Type of vSwitch (DVS or AVS)
- Switching mode (FEX or normal)
- Associated Attachable Entity Profile (AEP)
- VXLAN pool and multicast pool
- vCenter administrator credentials and vCenter server information
Micro-segmentation: VM-Attribute-Based Grouping
- Flexible attribute-based grouping for VMs: guest OS, VM name, VM id, vNIC id, DVS, DVS port group, data centre, MAC, IP address prefix
- Enables micro-segmentation based on VM attributes; supported on vSphere with AVS
- Example: an EPG defined as "VM name contains web"
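The attribute-based grouping above amounts to classifying each VM into the first EPG whose attribute rule matches. The sketch below is an illustrative matcher only; the attribute keys, operators, and EPG names are hypothetical, not the APIC schema:

```python
# Illustrative matcher for attribute-based micro-segmentation: a uSeg EPG
# claims a VM when all of its attribute rules hold.
def matches(vm, rules):
    # each rule is (attribute, operator, value)
    for attr, op, value in rules:
        actual = vm.get(attr, "")
        if op == "contains" and value not in actual:
            return False
        if op == "equals" and value != actual:
            return False
    return True

def classify(vm, useg_epgs, default_epg):
    # first matching uSeg EPG wins; otherwise the VM stays in the base EPG
    for name, rules in useg_epgs:
        if matches(vm, rules):
            return name
    return default_epg

epgs = [("epg-web", [("vm_name", "contains", "web")]),
        ("epg-win", [("guest_os", "contains", "Windows")])]
vm = {"vm_name": "web-01", "guest_os": "Ubuntu"}
print(classify(vm, epgs, "epg-default"))  # matches "VM name contains web"
```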
Hypervisor Integration Agenda: Hypervisor Integration Overview; VMware vCenter Integration; Microsoft SCVMM & Azure Pack Integration; OpenStack Integration
Microsoft Interaction with ACI: Two Modes of Operation
Integration with SCVMM: policy management through APIC; requires Windows Server with Hyper-V and SCVMM; VM discovery via OpFlex; VLAN encapsulation; manual plugin installation.
Integration with Azure Pack (a superset of the SCVMM integration): policy management through APIC or through Azure Pack; requires Windows Server with Hyper-V, SCVMM, and Azure Pack (free); VM discovery via OpFlex; VLAN encapsulation; integrated plugin installation.
ACI Azure Pack Integration
- APIC admin sets up the basic infrastructure
1. The Azure Pack tenant creates the application policy (Web, App, DB)
2. Azure Pack/SPF pushes the network profiles to APIC
3. The VLANs allocated for each EPG are retrieved
4. VM networks are created in SCVMM
5. The tenant instantiates VMs
6. The OpFlex agent on the hypervisor indicates endpoint attach to the attached leaf when the VM starts
7. Policy is pulled on the leaf where the endpoint attaches
Summary
- Micro-segmentation in Microsoft Hyper-V
- Static IP pool automation through SCVMM and Azure Pack
- SCVMM integration and WAP integration
- Multiple BDs in the same VRF (for the WAP virtual private plan)
- Layer 3 out in the user tenant (for the WAP virtual private plan)
Hypervisor Integration Agenda: Hypervisor Integration Overview; VMware vCenter Integration; Microsoft SCVMM & Azure Pack Integration; OpenStack Integration
OpenStack Components: Initial Focus on Networking (Neutron)
OpenStack Neutron Networking Model
[Diagram: per tenant, the L3 + external-net extension provides routers and external networks; the core API provides networks, subnets, and ports; the security-group extension provides security groups and security group rules.]
Cisco ACI Model
[Diagram: a tenant contains outside networks, application profiles with endpoint groups, bridge domains with subnets, contexts (VRFs), and contracts with subjects.]
OpenStack Driver Options
- Neutron API with the Modular Layer 2 (ML2) plug-in: the APIC ML2 driver on the OpenStack controller converts the Neutron model (network, router, security group) into the Cisco APIC policy model
- Group-Based Policy (GBP): the GBP native driver (policy group, firewall rule set, ADC policy) interfaces directly with the APIC policy model
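The conversion an ML2-style driver performs is essentially a mapping from Neutron objects onto ACI policy-model objects. The table below is a simplified, hedged reading of the two model slides, not the driver's actual code; the function name is hypothetical:

```python
# Sketch of the Neutron-to-ACI object translation (illustrative only).
ACI_EQUIVALENT = {
    "tenant":         "Tenant",
    "network":        "EPG + Bridge Domain",
    "subnet":         "Subnet (under the Bridge Domain)",
    "router":         "Context (VRF) + Contract",
    "security_group": "host-local enforcement (IP tables on the hypervisor)",
}

def translate(neutron_objects):
    # return the ACI constructs the driver would create through the APIC API
    return [ACI_EQUIVALENT[obj] for obj in neutron_objects]

print(translate(["network", "subnet", "router"]))
```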
OpFlex Extends Cisco ACI to the Hypervisor
Pre-OpFlex implementation (native Neutron approach using the OVS agent):
- VLAN per network and group to the ToR; VXLAN within Cisco ACI
- Physical domain in Cisco ACI; no Cisco APIC GUI integration
- Supports unmodified OVS and the standard OVS agent
OpFlex and OVS:
- VLAN or VXLAN per network and policy group to the ToR
- The OpFlex proxy runs in the leaf; the OpFlex agent directly manages OVS and integrates with APIC
- Hypervisor-local traffic has policy, switching, and routing handled locally
- VMM domain and GUI integration with APIC
- Distributed support for NAT, metadata server proxies, and DHCP
Summary: OpenStack
- Multiple OpenStack driver options: Cisco APIC native Group-Based Policy, or Neutron ML2
- Operations, troubleshooting, and visibility for physical and virtual: endpoint statistics, health, and faults in APIC
- Hypervisor-local enforcement of security policies: security groups (ML2 driver) through IP address tables; group-based policies through OpenFlow in Open vSwitch
- Distributed NAT support on each computing node: floating IP addresses and source NAT (SNAT, through the hypervisor host IP address)
- Distributed Neutron services per computing node: Layer 3 and anycast gateway, metadata, and Dynamic Host Configuration Protocol (DHCP)
- Multiple Virtual Routing and Forwarding (VRF) instance support
- Support for VLAN and VXLAN to the Cisco ACI fabric
- Solution high availability: support for virtual port channel (vPC) and multiple APICs
Layer 4-7 Services Integration
Challenges with Network Service Insertion (service insertion in traditional networks)
- Configure the network (router, switch) to insert the firewall; configure the firewall's network parameters; configure firewall rules as required by the application
- Configure the load balancer's network parameters; configure the load balancer as required by the application; configure the router to steer traffic to/from the load balancer
- Service insertion takes days: network configuration is time consuming and error prone, and it is difficult to track configuration on service devices
L4-7 Integration Options No integration (same as today) Unmanaged (network-only automation) Managed (full automation)
Network Service Insertion
- A contract provides a mechanism to add network services by associating a Service Graph (e.g. EXTERNAL consumes the Web contract, "HTTP: Accept, Service Graph", provided by WEB, with FW and LB functions in the path)
- A Service Graph identifies the set of network service functions required by an application
- APIC configures network service functions on devices such as firewalls and load balancers through device packages
- A device package can be uploaded to APIC at run time; adding support for a new network service through a device package does not require an APIC reboot
The Advantages of the Service Graph
By using the Service Graph you can define a service, such as a firewall, once and deploy it multiple times in different logical topologies. The benefits of the Service Graph are:
- A configuration template that can be reused multiple times
- Automatic management of VLAN assignments
- Collection of health scores and statistics from the device
- Automatic updating of ACLs and pools upon endpoint discovery
Layer 4-7 Services Integration Do I really need a Service graph?
Two Different Operational Models
Without a Service Graph: the network admin configures the ports and VLANs to connect the FW or LB; the FW admin (day 0) configures ports and VLANs; the FW admin (day 1) configures ACLs and so on. The three configurations are spread over multiple phases and days.
With a Service Graph: the same configurations are performed by the ACI admin through APIC in a single step.
Configurations with the Service Graph
All configurations are performed in a single operation: fabric configuration (bridge domains, VLANs, routing, EPGs) and firewall configuration (VLANs, interfaces, ACLs).
Network-only Stitching
With Network-Only Stitching, ACI Only Configures the Fabric, Not the L4-L7 Device
- Create tenants, VRFs, BDs, and EPGs
- Associate the vNIC or physical port
- Create contracts
- The device itself is not managed by ACI
Network Stitching: Unmanaged L4-L7 Device
Uncheck "Managed" and fill in the information: name (concrete device name), service type (Firewall, ADC, IPS, etc.), device type (physical or virtual), domain, and mode.
Network-Only Stitching
Some customers require that APIC performs only network automation for service devices (for example, the customer has an existing orchestrator or tool for configuring L4-L7 service appliances, or no device package is available for the L4-L7 device). The network-only stitching feature adds the flexibility to use only network automation for the service appliance. The configuration of the L4-L7 device is completed by the L4-L7 admin, so a device package is not required.
1. Configure the ACI fabric for the L4-L7 service appliance
2. The L4-L7 admin configures the L4-L7 service appliance
Service Graph: APIC-to-L4-L7 Communication, the Device Package
APIC Talks to the L4-L7 Device
APIC speaks to the device in the device's own language, through the device's existing API; no new protocols are required.
APIC Requires a Device Package
- Service functions are added to APIC through a device package
- A device package contains a device model (an XML configuration model) and device Python scripts
- The device model defines the service functions and their configuration
- The device scripts translate APIC API callouts into device-specific callouts; a script can interface with the device using REST, SSH, or any other mechanism
[Diagram: the APIC policy manager and script engine drive the device interface (REST/CLI) towards the service devices.]
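To make the translation role of the device scripts concrete, here is a toy script in the spirit of a device package: APIC hands the script a parameter tree, and the script renders device-specific calls. The function name, parameter keys, URLs, and hostname are all hypothetical; this is not the real device package API:

```python
# Toy device script: translate an APIC-style parameter tree into
# device-specific REST calls (all names here are made up for illustration).
def service_create(device, params):
    calls = []
    for intf in params.get("interfaces", []):
        calls.append(("POST", f"https://{device}/api/interface",
                      {"name": intf["name"], "vlan": intf["vlan"]}))
    for acl in params.get("acls", []):
        calls.append(("POST", f"https://{device}/api/acl", acl))
    return calls   # a real script would send these over REST or SSH

calls = service_create("fw1.example.com", {
    "interfaces": [{"name": "outside", "vlan": 110}],
    "acls": [{"action": "permit", "proto": "tcp", "port": 80}],
})
print(len(calls))
```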
Device Package Example
The following functions can be configured through APIC.
Device Information Extracted from the Device Package
- The functions (or services) provided by the service device: SLB, SSL, responder
- Vendor, software version, and model of the service device
- The interface types the appliance has (e.g. inside, outside, and management)
The Only Configuration Needed on the L4-L7 Device is Management Access
- Enable SSH
- Enable HTTP access
- Configure credentials
Terminology:
The guiding principle of the Service Graph is to connect functions, not boxes. E.g. a load balancer can provide various functions: load balancing, SSL offloading, etc. This may be academic, but it is the abstraction that ACI provides.
Key Concepts in Service Insertion
- Concrete Device: represents a single service device, e.g. one load balancer or one firewall
- Logical Device: represents a cluster, e.g. two devices operating in active/standby mode
- Service Graph: defines a sequence of connected functions, e.g. a Checkpoint firewall followed by F5 load balancing
- Logical Device Context: specifies the criteria by which a specific device in the inventory is chosen to render a service graph
- Device Package: defines things such as how to label the connectors of a function and how to translate names from ACI to the specific device; e.g. a load balancer function has predefined connectors called external, internal, and management
ACI Service Graph Definitions
[Diagram: service graph "web-application" with a consumer terminal and a provider terminal; a firewall function followed by SSL-offload and load-balancer functions, joined by connectors (VLANs). Example L4-L7 parameters: firewall "permit tcp any dest-ip <vip> dest-port 80, deny udp any"; load balancer "ip address <vip> port 80, virtual-ip <vip> port 80, lb-algorithm round-robin".]
ACI Rendering a Service Graph
[Diagram: EPG outside consumes the contract webtoapp provided by EPG web; the rendered graph inserts the firewall, SSL-offload, and load-balancer functions, joined by connectors (VLANs).]
L4-L7 Service Graph Template
- A generic representation of the expected traffic flow
- Defines nodes and connection points (connections and terminals)
The Service Graph Template defines the sequence of nodes/functions, for example a load balancer, or a load balancer followed by a firewall.
A Template Must Be Applied for the Graph to Be Rendered
Concrete and Logical Devices
- Concrete Device: represents a single service device, e.g. one load balancer or one firewall; can be physical or virtual
- Logical Device: represents a cluster, e.g. two devices operating in active/standby mode
[Diagram: a service graph function node maps to a logical device (SLB), which maps to its concrete devices.]
Device Selection Policies (or Logical Device Contexts)
Select the right device cluster and interfaces based on selectors: service graph template name, contract name, and node name.
[Diagram: a graph template is rendered/deployed by matching its firewall and load-balancer function nodes against the logical devices, in the context of the contract between EPG outside and EPG web.]
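The selector logic above can be sketched as a first-match lookup over (contract, graph, node) tuples, with "any" as a wildcard. The structure and device names below are illustrative, not the APIC object model:

```python
# Illustrative device selection: pick a logical device for a rendered
# graph node by matching contract, graph-template, and node selectors.
def select_device(policies, contract, graph, node):
    for p in policies:
        if (p["contract"] in (contract, "any") and
            p["graph"] in (graph, "any") and
            p["node"] in (node, "any")):
            return p["logical_device"]
    return None   # no policy matched: the graph cannot be rendered

policies = [
    {"contract": "webtoapp", "graph": "web-application",
     "node": "Firewall", "logical_device": "asa-cluster-1"},
    {"contract": "any", "graph": "any",
     "node": "LoadBalancer", "logical_device": "f5-cluster-1"},
]
print(select_device(policies, "webtoapp", "web-application", "Firewall"))
print(select_device(policies, "other", "other-graph", "LoadBalancer"))
```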
Deployed Graph Instances
L4 L7 Parameters
L4-L7 Parameters
[Diagram: APIC translates L4-L7 parameters (e.g. the external interface IP address) into the L4-L7 device's own language through the device's API.]
L4-L7 Parameters: the Function Profile
Entering the L4-L7 parameters by hand is tedious and error prone; the Function Profile solves this problem. Each Function Profile is a collection of L4-L7 parameters.
Deployment Steps and Data Plane Considerations
Service Insertion Deployment Steps
Preparation:
- Create the necessary physical and virtual domains
- Configure basic management access on the L4-L7 device
- Import the device package
- Create the necessary bridge domains/VRFs
- Create EPGs and contracts
Then:
- Configure the logical and concrete devices
- Create or import a function profile
- Create a graph template (using the function profile), or create a graph template and enter the L4-L7 parameters by hand
Deploy the graph template:
- Create the device selection policy
- Associate the graph with a contract
Basics of ACI Forwarding: How to Create an L2 Domain
- Create a bridge domain, keep unicast routing enabled, and associate the bridge domain with a VRF
- The association with the VRF is required by the object model; the hardware won't program any VRF if the bridge domain is configured only as L2
You Still Need to Create Bridge Domains and VRFs
[Diagram: a consumer-side bridge domain (BD1) and a provider-side bridge domain (BD2), each related to a VRF through the object model.]
ACI Create Tenant, VRF, BD and EPG
Three Main Deployment Modes
- Go-to: the L4-L7 device is the default gateway for the servers
- Go-through: the L4-L7 device is a transparent/L2 device; the next hop or the outside BD provides the default gateway
- One-arm: the BD of the servers is the default gateway
Except for one-arm mode, you need to start with two bridge domains.
[Diagram: Bridge Domain 1 (10.10.10.5) with EPG outside (10.10.10.x) and Bridge Domain 2 (20.20.20.5) with EPG web (20.20.20.x).]
Go-to Mode
[Diagram: within one VRF, an outside (consumer-side) bridge domain with the client EPG and an inside (provider-side) bridge domain with the server EPG, joined by a contract and service graph. Both BDs are set, for consistency with the ACI policy model, to ARP flooding and unknown-unicast flooding with no IP routing. The service device is the default gateway for the servers.]
ACI Behind the Scenes
The user defines one contract between EPG outside and EPG web; internally, APIC creates shadow EPGs for the service device's connectors and internal contracts between them.
VLAN Assignment: Physical Appliance
- VLANs are automatically created on the ACI interfaces
- VLANs are also automatically created on the L4-L7 device, one VLAN per BD it is attached to
VLAN Assignment: Virtual Appliance
- The vNICs are automatically assigned to the shadow port groups
- VLANs are automatically created on the ACI interfaces and on the L4-L7 device
- Because there is no trunking on vNICs, YOU CANNOT REUSE THE SAME GRAPH ON DIFFERENT BDs
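The automatic VLAN handling on the last two slides boils down to allocating one VLAN from a pool per (device, BD) attachment, idempotently. A minimal sketch, with a made-up pool range and device/BD names:

```python
# Minimal model of dynamic VLAN allocation: one VLAN from a pool per BD
# that a device connector attaches to (illustrative, not APIC code).
class VlanPool:
    def __init__(self, start, end):
        self.free = list(range(start, end + 1))
        self.assigned = {}          # (device, bd) -> vlan

    def allocate(self, device, bd):
        key = (device, bd)
        if key not in self.assigned:       # one VLAN per BD attachment
            self.assigned[key] = self.free.pop(0)
        return self.assigned[key]

pool = VlanPool(1000, 1099)
a = pool.allocate("asa1", "BD1")
b = pool.allocate("asa1", "BD2")      # second BD gets a distinct VLAN
print(a != b)
print(pool.allocate("asa1", "BD1") == a)   # re-allocation is idempotent
```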
Create L4-L7 Device / Create Service Graph Template: Select the Path
In this example the ASA (device type: physical) uses one physical interface (E1/9) for both consumer and provider, and a VLAN encap is selected for each interface: VLAN 110 on the consumer side, VLAN 111 on the provider side.
[Diagram: EPG client (BD1, 192.168.1.1/24) - consumer 192.168.1.100 - ASA - provider 192.168.2.100 - (BD2, 192.168.2.1) EPG web.]
Dynamic Endpoint Attach
Dynamic Endpoint Attach with Load Balancers
When APIC dynamically detects a new endpoint, the endpoint is automatically added as a pool member of the VIP.
[Diagram: new provider endpoints 20.20.20.1-20.20.20.3 joining the Web-Pool behind VIP 10.10.10.200.]
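The behavior described above can be modelled as a tiny event handler: when the fabric learns an endpoint in the provider EPG, the load balancer's pool gains a member. Names and addresses below are illustrative; a real setup drives the LB through its device package:

```python
# Toy model of dynamic endpoint attach driving a load-balancer pool.
class Pool:
    def __init__(self, vip):
        self.vip = vip
        self.members = set()

def on_endpoint_attach(pool, epg, endpoint_ip, provider_epg="provider"):
    # only endpoints in the provider EPG become pool members
    if epg == provider_epg:
        pool.members.add(endpoint_ip)

web_pool = Pool(vip="10.10.10.200")
on_endpoint_attach(web_pool, "provider", "20.20.20.1")
on_endpoint_attach(web_pool, "provider", "20.20.20.2")
on_endpoint_attach(web_pool, "consumer", "10.10.10.5")   # ignored
print(sorted(web_pool.members))
```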
You can enable endpoint-attach notification in the graph.
F5 - Endpoints are Automatically Added to the Pool
Multi-context
Multi-Context Support
- Selecting multi-context means the same appliance can be exported to multiple tenants
- This only works with PHYSICAL APPLIANCES
- A virtual appliance may also let you create multiple partitions, but its vNICs cannot be shared across tenants, because a vNIC cannot carry a trunk with multiple VLANs
Multi-Context Support in ASA and in F5
- A single physical ASA can be partitioned into multiple virtual firewalls, known as security/virtual contexts; each context acts as an independent device, with its own security policy, interfaces, and management IP
- ACI doesn't create the ASA contexts; they must be predefined
- With F5, partitions are created automatically, and ACI tenants are automatically mapped to F5 partitions
Data Plane Separation: ACI Configures Sub-Interfaces Automatically
APIC creates sub-interfaces based on VLANs dynamically allocated from a pool, and in the system context it assigns the port-channel sub-interfaces to the appropriate user context.
[Diagram: VLANs 1006, 1040, 1073, and 1074 mapped to contexts 1 and 2.]
Data Plane Separation: ACI Configures the Interfaces as Trunks
[Diagram: VLANs 1006, 1040, 1073, and 1074 mapped to partitions 1 and 2.]
Sharing Service Devices
ACI Shared Services: Tenant Common
ACI lets you configure objects in tenant common that can be used by other tenants (e.g. Sales, Sales2): filters, BDs, VRFs, and also logical and concrete devices. Tenants can, for instance, attach EPGs to these objects.
ACI Shared Services: Tenant Level
You can define logical and concrete devices in tenant common and use them from other tenants (e.g. Sales, Sales2).
Sharing Devices with Multi-Context L4-L7 Devices
With multi-context devices, you can share a device defined in tenant common and use it from more than one tenant (e.g. partition 1 for tenant Sales, partition 2 for tenant Sales2).
How To Undo a Service Graph
How to Undo a Configuration?
If you delete the template, the graph is removed, but there may be stale objects; you would need to remove some of the objects created by the service graph. Alternatively, a wizard deletes all objects created by the Apply wizard: right-click a graph (one created with the template) and select "Remove Related Objects Of Graph Template".
Conclusion
Conclusion
- ACI is a highly flexible, programmable, and integrated data centre network fabric
- ACI allows ease of policy-based connectivity for physical and virtual devices
- ACI allows the automation of tedious tasks such as L4-L7 integration
- ACI has advanced troubleshooting capabilities for the network fabric and connected services
Q & A
Complete Your Online Session Evaluation Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations. Directly from your mobile device on the Cisco Live Mobile App By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/ Visit any Cisco Live Internet Station located throughout the venue T-Shirts can be collected Friday 11 March at Registration Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations. www.ciscoliveapac.com
Thank you