Integrating the Cisco ASA with Cisco Nexus 9000 Series Switches and the Cisco Application Centric Infrastructure


Solution Guide

Data Center Design Opportunities

Modern designs for the highly secure data center concentrate on overcoming the constraints of traditional physical network infrastructures. Network designers strive to optimize physical device insertion points and accommodate emerging virtualized environments and applications. Although virtual computing promotes topological abstraction and supports dynamic logical designs, the underlying network technology must accommodate the computing layer within the limits of physical connections, VLANs, routing protocols, and traditionally fragmented management models. Several features can be viewed as clear opportunities in future data center architectures.

Agile provisioning: Although application flows change dynamically along with business needs, physical network topologies do not. For instance, all transit traffic may be directed through a security device simply because that particular path cannot be easily avoided. Implementing VLAN segregation and dynamic routing protocols for service insertion is a complex task, and it often results in suboptimal paths for time-sensitive application traffic. Computing resources can now be provisioned nearly instantaneously in virtualized environments, and the associated network service devices must be instantiated just as quickly and smoothly anywhere within the topology.

Elastic scalability: As new computing resources and network service devices are added to the network, the availability of switch ports and power becomes a constraint around the critical application farms. Direct physical connections are typically required to insert firewalls, traffic analysis tools, and other network services as close to the application hosts as possible.
The network should decouple the placement of hardware devices from their functions and provide native load-distribution capabilities in order to scale with business needs.

Service virtualization: Traditional network services remain relevant within a virtualized environment, and the physical-insertion model must be complemented with easy-to-deploy virtual appliances. A colocated virtual device can extend firewall, load-balancing, and similar services to application flows contained in the same computing hardware without the need to traverse a physical network. Such virtualized services can be rapidly deployed and retired on demand, increasing the overall scalability and versatility of the architecture.

Unified configuration and visibility: Every network device typically uses its own configuration syntax and interface, and virtualized environments are managed separately from the network infrastructure with minimal shared control of common elements. A single point of network management, service provisioning, flow policy control, and monitoring provides a unified view of the infrastructure and allows the contextual reuse of common elements in an end-to-end design.

Policy set simplification: Even when unified management applications are used to define a common policy rule set, the administrator must either manually select the policy for each network service device or push the same extensive rule set to all of them. As new rules are added to this set, obsolete rules are rarely retired, because the operational decoupling between service devices and application flows leaves stale rules in place. In the end, the configured policy set no longer matches the desired topology. The network must be intelligent enough to apply or retire rules on service devices based on the current application and policy needs. A layered model should classify and filter traffic as close to the point of entry as possible, and each service device should be contextually programmed with only those rules that are relevant to its specific transit flows, creating a truly distributed and simplified policy set.

Although recent developments in software-defined networking (SDN) have produced solutions that mask some of these challenges, none of them solve the underlying problem with the traditional physical network. A typical shortcut is to preconfigure network devices with basic terminal scripts and overlay a complex mesh of virtual tunnels to assist with traffic engineering tasks. Full Layer 2 and Layer 3 connectivity is still required within the underlying data center network, so the complications of VLAN segregation and IP routing continue to apply. As a result, the network administrator must now manage two networks: the physical and the virtual. Many such models remain central to virtual computing, and none of them break away from traditional network limitations or concepts. The simple conclusion is that the network itself must gain the speed and intelligence to adapt to its applications and the associated services.

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

Next-Generation Data Center Fabric Architecture

The Cisco Nexus 9000 Application Centric Infrastructure (ACI) framework revolutionizes the traditional data center model by separating logical and physical network topologies and supporting centralized control, automation, and service orchestration.
The next-generation data center fabric becomes an ultra-high-speed physical network in its own right, so it can dynamically configure and interconnect heterogeneous external devices based on application policy needs. The Cisco Application Policy Infrastructure Controller (APIC) represents a single point of orchestration, distributed policy provisioning, and network intelligence. This new model abstracts the network into the following components:

Fabric nodes are powered by the Cisco Nexus 9000 switch platform. Spine nodes form the core of the intelligent fabric and interconnect the leaf nodes, which connect external physical endpoints to the fabric. The longest path between two leaf interfaces anywhere in the fabric is always through a single spine node. The overhead of such point-to-point connections through the fabric is measured in nanoseconds, which is negligible as far as the hosted applications and network services are concerned. This near-instantaneous connectivity largely eliminates the need for hardware colocation and directly addresses the elastic scalability goal.

Service consumers are simply the endpoints that rely on network services. They can be physical devices or virtual machines. Typical service consumers are the data center application servers or their clients, but they can also include external network connections. Both physical consumer devices and the computing server hosts connect to the fabric through leaf nodes.

Service producers are the typical network service devices, such as firewalls, intrusion detection and prevention systems, network analyzers, SSL accelerators, and other inline traffic-processing systems. Physical service producers connect to the fabric directly through leaf nodes. Virtual service producers link into the fabric logically through port groups that are extended to the leaf node through the underlying physical server hardware; this capability directly supports service virtualization.
Endpoint groups (EPGs) group similar service consumers in terms of application services and usage. For instance, all web servers on the same network segment may be grouped into a single EPG. Each physical or logical fabric leaf port belongs to a particular EPG, and any such ports can be grouped into any number of EPGs. Networks that are external to the fabric are represented as EPGs as well. The fabric controls communication between different EPGs according to a configured policy set, and the default behavior is to drop all inter-EPG traffic.

Application profiles describe network parameters, such as subnets and default IP gateways, for the service consumers in each EPG. The fabric acts as the default gateway for each subnet, and it can assign IP addresses automatically or rely on a pre-existing addressing scheme. Consumers within the same subnet are transparently bridged by the fabric regardless of their physical location. In addition to supporting agile provisioning and elastic scalability, application profiles help enable unified configuration management.

Contracts describe the traffic-filter rules and service requirements for network communication between EPGs. Because EPGs are defined at the fabric port level, contracts allow fine-grained, hardware-accelerated policies at the individual application level. Contracts define which flows should be allowed from a given EPG, which network services should apply to those flows, and whether any fabric prioritization is required. For instance, a contract between Web and Database EPGs may allow only back-end database connections and force such traffic through external firewall and intrusion prevention system (IPS) services. This layered filtering approach paves the way for simplification of the overall policy set.

Service graphs (SGs), or service chains, are ordered processing sequences for inter-EPG traffic through service nodes based on the established contracts. For each allowed flow in a contract, the associated service graph defines the packet path through the service producers. For instance, the administrator may direct all HTTP traffic on TCP port 80 to traverse a stateful firewall policy, then an IPS and a network analysis device.
Service graphs allow a greater level of device-level rule abstraction and reuse, which also supports policy set simplification.

Figure 1 illustrates a fabric view of the network and its physical and virtual devices. Any device attaches to the fabric at any available port at any physical location. The desired topology is created entirely by the fabric itself, so scaling can be extremely elastic. All spine nodes connect directly to every leaf node, which brings inherent reliability and network path optimization. Service consumers in a single EPG or subnet can be located anywhere within the topology and still communicate in a completely transparent fashion. Virtual and physical consumers grouped in a single EPG can use both physical and virtualized services from the producers. Additional virtual service producers can be instantiated and retired by the fabric based on immediate network needs, helping to enable the agile provisioning of services. The fabric can intelligently balance flows across multiple service producers even within a single contract and maintain symmetry for stateful devices.

Figure 1. Fabric Network Setup and Device Attachment

Figure 2 depicts Web and Database EPGs, which receive network settings from their respective application profiles within the fabric. All communication from the Web EPG to the Database EPG must go through the associated policy contract. Based on this contract, the fabric port ASIC (application-specific integrated circuit) immediately denies all Telnet traffic from Web service consumers to any Database consumer and allows all TCP connections from Web consumers to Database consumers on ports 1400 and 1401. All permitted traffic from any Web consumer to any Database consumer is redirected through the service graph. The service graph contains a firewall device that statefully inspects all traffic based on more specific policies. Once the firewall permits a packet, the fabric delivers it to the appropriate consumer in the Database EPG. All other traffic from Web to Database is dropped by the fabric.

This approach to flow forwarding illustrates the policy set simplification aspect of the fabric. Basic IP and transport-port rules are applied directly at the port level, so blocked traffic does not consume network resources. The firewall in the service graph can then apply more detailed rules, including application inspection in which secondary connections are permitted automatically; in that case, the fabric filter must be more permissive.

Figure 2. EPG Flow Abstraction with Application Profiles, Contracts, and Service Graphs
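The Web-to-Database contract above can also be expressed programmatically. The following minimal sketch assembles the JSON payload an administrator might POST to the APIC REST API; the object class names (vzFilter, vzEntry, vzBrCP, vzSubj) follow the APIC management information model, but every object name used here ("db-ports", "web-to-db") is a hypothetical example, not part of this guide's configuration.

```python
# Illustrative sketch: building an APIC REST payload for a contract that
# permits only TCP 1400-1401 between two EPGs. Class names follow the APIC
# object model; all object names are hypothetical placeholders.
import json


def filter_payload(name, dst_from, dst_to):
    """A filter with one entry permitting TCP traffic to a destination port range."""
    return {"vzFilter": {
        "attributes": {"name": name},
        "children": [{"vzEntry": {"attributes": {
            "name": f"tcp-{dst_from}-{dst_to}",
            "etherT": "ip", "prot": "tcp",
            "dFromPort": str(dst_from), "dToPort": str(dst_to)}}}]}}


def contract_payload(name, filter_name):
    """A contract whose single subject references the filter by name."""
    return {"vzBrCP": {
        "attributes": {"name": name},
        "children": [{"vzSubj": {
            "attributes": {"name": f"{name}-subj"},
            "children": [{"vzRsSubjFiltAtt": {
                "attributes": {"tnVzFilterName": filter_name}}}]}}]}}


# The Web EPG would consume, and the Database EPG provide, this contract;
# anything outside ports 1400-1401 falls to the fabric's default inter-EPG deny.
flt = filter_payload("db-ports", 1400, 1401)
con = contract_payload("web-to-db", "db-ports")
print(json.dumps(con, indent=2))
```

Because ACI is a whitelist model, no explicit Telnet deny entry is needed here: any flow not matched by a contract filter, Telnet included, is dropped by the fabric.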

In addition to controlling the network itself, the Cisco APIC is capable of pushing device-specific policies to the service producers through standard APIs. Network service administrators can define such policies on the APIC and expose the associated service instances for insertion into inter-EPG contracts. For instance, a security administrator can create firewall configuration templates that follow the general organizational security policy for each tier of traffic. These templates are then exposed to application and network administrators, who simply define the EPGs, create the contracts for flows between these EPGs, and instantiate firewall services in the service graphs using the predefined templates. This level of abstraction allows contract creation at the application level without knowledge of the specifics of the firewall configuration. Once all of the elements of a service graph are defined, the fabric instantiates the appropriate services on physical or virtual devices and programs the hardware port rules.

Figure 3 illustrates service graphs between multiple EPGs within the fabric. Notice that application traffic from the Users EPG can traverse to either the Web EPG or the Files EPG with both shared and unique service producers in the path. The fabric creates a loop-free path through the service nodes to ensure single-pass processing within each service, such as the SSL accelerator and firewall in this example. This approach optimizes the use of processing resources on the service producers and of the shared device-specific policy templates. As another example of agile provisioning, the fabric model removes any ties between the logical application flow design and the underlying physical hardware. (This service producer sharing capability across multiple service graphs depicts future Cisco ACI functionality.)

Figure 3. Optimal Service Graphs

Even though application, network, and service administrators have different roles, all of their tasks can be accomplished through the Cisco APIC REST API. This makes possible a unified point of orchestration, where all policies are deployed or retired based on the most current application and network configuration that the fabric maintains. All administrators can also monitor and troubleshoot the network from a single interface. Role-based access control (RBAC), change control, and audit capabilities are inherently provided at the network level, because all devices are controlled centrally from the APIC.

Transition Path to a Fabric Environment with ASA

The Cisco ASA 5585-X appliances as well as the new virtualized ASA (ASAv) platform integrate transparently into the intelligent fabric model as service nodes, supporting the full range of stateful security capabilities in the next-generation data center. Depending on the type of deployment and the state of readiness for the data center transformation, multiple paths are available for implementing a highly secure data center with Cisco Adaptive Security Appliances, Cisco Nexus 9000 Series Switches, and the Cisco APIC.
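Every REST workflow against the APIC begins with authentication. As a minimal sketch using only the Python standard library, the standard aaaLogin call could be prepared as follows; the controller URL and credentials are placeholders, not values from this guide.

```python
# Minimal sketch of authenticating to the APIC REST API. The aaaLogin
# endpoint and payload shape follow the documented APIC REST convention;
# the controller address and credentials below are hypothetical.
import json
import urllib.request

APIC = "https://apic.example.com"   # placeholder controller address


def login_request(user, password):
    """Build the aaaLogin POST request; the APIC replies with a session token."""
    body = json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})
    return urllib.request.Request(
        url=f"{APIC}/api/aaaLogin.json",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = login_request("admin", "example-password")
# urllib.request.urlopen(req) would perform the login; it is not executed here.
```

The token returned by this call is then carried (as a cookie) on every subsequent configuration request, which is how RBAC and audit remain centralized on the APIC.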

Standalone Mode

If an immediate migration to the fabric model is not feasible, Cisco Nexus 9000 Series Switches still provide higher interface density at lower per-port cost. The fully interoperable ASA clustering solution supports flexible scaling of firewall forwarding capacity with Cisco ASA 5585-X appliances for north-south flows, and hypervisor-agnostic ASAv instances can effectively protect east-west traffic. The Cisco ASAv performs the same functions as any ASA appliance but resides on a hypervisor. An ASAv instance can be colocated with the protected virtual machines to use traditional bridging and routing capabilities and contain the interapplication flows within the same hardware host. Unlike the Cisco ASA 1000V Cloud Firewall, the ASAv maintains its own data path with no dependency on vPath or the Cisco Nexus 1000V. Like any service VM in the Cisco ACI environment, the Cisco ASAv also works with the Cisco Application Virtual Switch (AVS). This approach prepares the network for a fabric migration at a later point, and Cisco Security Manager can be used for centralized ASA management and monitoring in the interim.

Traditional Mode

Building on the previous case for a gradual migration into the fabric environment, this model adds intelligent service and policy provisioning capabilities while maintaining traditional bridging and routing configuration on the Cisco ASA instances. The applications are abstracted into EPGs, and contracts define the application flow policies between different EPGs. The security administrator creates ASA-specific configuration templates, so firewall services can be instantiated within the service graphs for inter-EPG traffic. This approach adds policy set simplification to the list of benefits. Within the physical domain, the ASA data plane typically connects to leaf fabric nodes in a firewall-on-a-stick fashion.
When operating in a virtual domain, ASAv instances are hosted on physical server hardware that connects to the fabric in a similar manner. This interface carries network application data across service graphs between the fabric and the Cisco ASA, so it is called a carrier interface. Transit packets destined for security inspection in a service graph arrive at the ASA on this interface and leave from the same interface back into the fabric. In addition to this data-plane interface, ASA devices typically have management and utility interfaces as well. The management interface is used to configure and monitor the ASA from the APIC, and the utility interface is used to reach Domain Name System (DNS), authentication, authorization, and accounting (AAA), Network Time Protocol (NTP), and other services that the ASA requires in normal operation. These additional interfaces connect to the fabric as service consumers, so they never carry any transit application traffic.

Figure 4 presents a logical diagram of the carrier interface connection for both an appliance and an ASAv instance. Multiple physical carrier interfaces are supported, but a single instance is typically configured for simplicity and scalability. A fabric-enabled ASA still supports full high-availability and scalability functionality with failover and clustering. Both regular and Spanned EtherChannels can be used as carrier interfaces when additional bandwidth aggregation is necessary.

Figure 4. Cisco ASA Fabric Attachment

The traditional insertion mode is based on configuring the carrier interface as a VLAN trunk. The fabric extends an ingress and egress VLAN pair to each ASA and applies the desired policy to traffic that traverses this pair in either routed or transparent mode. The same approach can be used with multiple physical interfaces. Figure 5 illustrates the VLAN-based insertion process for an ASA running in transparent mode. In this example, each bridge virtual interface (BVI) represents a firewall instance in the corresponding service graph. The associated security policy is defined by the template that is referenced by the inter-EPG contract. All of these decisions are made at the fabric level, and the ASA needs no awareness of the surrounding topology or the traffic flow.

Figure 5. Transparent Mode ASA Insertion in Traditional Mode

Ingress packets for each graph arrive on the trunk carrier interface with an appropriate VLAN tag. After decapsulation, these packets are pushed through the corresponding bridge group, where the associated security policy is applied. Using its regular MAC bridging table, the Cisco ASA sends the packets back to the fabric on the other VLAN in the bridge group. The fabric then delivers these packets to another device in the service graph or directly to the destination consumer. Note that the ASA continues to use regular MAC bridging tables in transparent mode, and routing tables in routed mode, for next-hop discovery in this transitional insertion mode. The APIC is still capable of intelligently stitching the ASA into the traffic path using these methods through a unified orchestration channel.
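As a sketch of what the fabric-rendered configuration amounts to on a transparent-mode ASA, one VLAN pair bridged through a single bridge group might look like the following. The interface names, VLAN IDs, nameif values, and address are illustrative placeholders only; in a deployed fabric, the APIC generates the equivalent configuration.

```
! Illustrative transparent-mode VLAN pair on a trunked carrier interface
interface TenGigabitEthernet0/8
 no shutdown
!
interface TenGigabitEthernet0/8.201
 vlan 201
 nameif web-side
 bridge-group 1
 security-level 100
!
interface TenGigabitEthernet0/8.202
 vlan 202
 nameif db-side
 bridge-group 1
 security-level 0
!
interface BVI1
 ip address 192.0.2.10 255.255.255.0
```

Traffic entering on VLAN 201 is inspected by the policies bound to the bridge group and, if permitted, is bridged back out on VLAN 202 toward the fabric, exactly as described above.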

Multiple Cisco ASA appliances can be connected to different leaf nodes to create distributed firewall service farms with ASA clustering. This capability allows the reuse of existing ASA hardware and extends the usable life of previous network infrastructure investments. Because the fabric abstracts network connections through service graphs, the physical positioning of the firewall farm members does not have to change when new application flows are instantiated. This logical abstraction and topological decoupling removes the requirement to supply switch ports and power for firewalls near the application server farms. The fabric supports stateless load sharing between the participating firewall instances while maintaining flow symmetry through the service graph. The service graphs also help open the best possible path through the associated network resources, so all firewalled traffic for a given flow is inspected in a single pass. ASA clustering internally compensates for external load-balancing asymmetry to provide true stateful scalability with up to 16 ASA 5585-X appliances in a single logical firewall service farm.

Deployment of the Cisco ASA becomes a streamlined process in a next-generation data center that relies on Cisco ACI:

ASA appliances are cabled, connected to the fabric leaf nodes, configured with basic management information, and registered with the fabric. From that point on, all configuration tasks are performed automatically by the fabric itself. On-demand ASAv provisioning can be instantiated automatically by the Cisco APIC through an API.

Using the APIC, the network administrator configures device-level ASA parameters, such as failover, clustering, and carrier interfaces. The standards-based REST API abstracts fabric-driven management tasks from the traditional ASA configuration syntax and interface.
Using the APIC, the security administrator defines standard templates for firewall rules based on the organizational security policies. The configuration is held within an abstract service graph that is not tied to any particular physical device; it is imposed on an available and compatible firewall instance when the service graph is attached to a specific policy contract. These templates are used to instantiate firewall services from within the contracts without the need to understand ASA operational and configuration models.

Using the APIC, application administrators define EPGs and associate them with policy contracts for interapplication traffic. Standard firewall policy templates are used to attach ASA services to the application flows. The associated policies are transparently instantiated on the available ASA and other service producers that are inserted into the optimized service graphs. When the application flows change or expire, all related policy sets and service instances are automatically decommissioned.

ASA Service Insertion Workflow in APIC

Using the Cisco APIC northbound API, the insertion of Cisco ASA firewall services into Cisco ACI tenants can be fully programmed. The same API objects are equally accessible in the APIC GUI, so administrators can review and refine any configuration through the GUI after initial script deployment. Figure 6 shows a logical workflow composed of the five basic steps needed to add an ASA service and graph it into EPGs. Upon completion, this workflow creates a tenant and renders a service graph in the fabric. Cisco ASA devices are set up to allow registration, and the APIC can then orchestrate the ASA data plane for a given tenant.

Figure 6. Cisco ASA Service Insertion

To address growing workloads, a tenant can define additional ASA service devices and consume them in new service graphs. Furthermore, any service graph can easily be upgraded or downgraded to use any registered ASA devices of different performance levels. Here are the five general steps in creating a tenant with Cisco ASA services:

1. Create the tenant: The tenant must be created along with its application profiles, EPGs, bridge domains (BDs), and contracts before any Layer 4 through Layer 7 (L4-L7) services can be added. Refer to the Cisco APIC documentation for more details on this step.

2. Define the Cisco ASA L4-L7 service device cluster: Allow for virtual (VMM) or physical domain clusters of concrete Cisco ASAv or ASA 5585-X devices.

3. Create the service graph: Create the Cisco ASA security policies for provisioning to any registered ASA device.

4. Define the logical device context: Establish mappings between ASA terminals and BDs, and to their underlying ASA logical clusters.

5. Apply the graph to the contract subject: Verify the Cisco APIC configuration and provision the graph and services in the fabric.

In Step 5, the firewall service is provisioned for the tenant.
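Step 1 can be scripted over the northbound REST API. The sketch below assembles a tenant payload with one bridge domain and one application profile; the class names (fvTenant, fvBD, fvAp) follow the APIC object model, while the tenant, BD, and profile names are hypothetical examples.

```python
# Illustrative sketch of the Step 1 tenant payload for the APIC REST API.
# Class names follow the APIC object model; all object names are placeholders.
import json


def tenant_payload(tenant, bd, app_profile):
    """Tenant containing one bridge domain and one application profile."""
    return {"fvTenant": {
        "attributes": {"name": tenant},
        "children": [
            {"fvBD": {"attributes": {"name": bd}}},
            {"fvAp": {"attributes": {"name": app_profile}}},
        ]}}


payload = tenant_payload("Sales", "Sales-BD", "WebApp")
# POSTing this JSON to /api/mo/uni.json on an authenticated session would
# create the tenant; it is shown here only as the payload Step 1 produces.
print(json.dumps(payload))
```

Steps 2 through 5 follow the same pattern: each is a POST of a further object subtree (device cluster, service graph template, device context, and graph attachment) under the tenant created here.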
The Cisco APIC renders the graph in the Cisco ACI fabric and orchestrates the ASA data plane to insert the ASA into the communication between EPGs. The ASA can serve as a routing or bridging device.

Product Version Compatibility

This integration guide is based on the ASA device package and the versions of the APIC and fabric switch software specified in Table 1. As the Cisco ACI solution evolves, the Cisco APIC GUI will naturally change, prompting corresponding changes in the Cisco ASA device package.

Table 1. Cisco ASA in Cisco ACI: Product Versions

  Product                            Software Version
  Cisco APIC                         1.0.1e
  Cisco Nexus 9000 Series Switches   11.0.1b
  Cisco ASA device package           1.0.1
  Cisco ASA 5585-X                   8.4 or later
  Cisco ASAv                         9.2.1 or later

Implementing Traditional Mode Insertion with ASA Services

The following discussion covers the GUI aspects of adding a Cisco ASA firewall to a tenant's L4-L7 services. The ASA device package imports firewall features into the Cisco APIC and exposes ASA configurations through the GUI or API. The ASA integrates into the Cisco ACI as a service producer in one of these modes:

Go-To: The routed firewall maintains static or dynamic routing and Address Resolution Protocol (ARP) tables.
Go-Through: The transparent firewall bridges VLANs and tracks MAC address tables.

These two modes are selected in the APIC and must match the corresponding firewall mode configured on the ASA.

Importing the ASA Device Package

For the Cisco APIC to understand Cisco ASA capabilities, the ASA device package must be imported. Multiple versions of the package can be installed in the APIC. The steps are as follows:

1. Navigate to L4-L7 SERVICES > PACKAGES.
2. Right-click on L4-L7 Service Device Types and choose Import Device Package.

3. Import and review the latest ASA device package, previously downloaded from Cisco.com.

ASA configuration parameters can now be expanded in the APIC, exposing the features supported by the imported version of the device package.

Preconfiguring the ASA Device

Registering a Cisco ASA device requires a basic configuration on the ASA to support highly secure network connectivity and match the corresponding credentials provided to the Cisco APIC. Before the preconfiguration is applied, the ASA device should be placed in the appropriate firewall mode and, if applicable, the clustering mode. The ASA spanned interface clustering mode can be used in Cisco ACI to achieve the best level of redundancy: existing connections are synchronized throughout the cluster and continue even on an ASA node failure. Further, the appropriate ASA firewall mode must be entered, transparent for Go-Through or routed for Go-To, matching the option selected under the APIC device cluster (the ASA comes up in routed mode by default):

!for Go-To mode
no firewall transparent
!for Go-Through mode
firewall transparent
!for ASA Spanned Cluster Interface mode
cluster interface-mode spanned

Optionally, Cisco ASA running code 9.2.1 or later has a restricted debug-only command-line interface (CLI), commonly used by the Cisco Technical Assistance Center (TAC). This CLI allows a strict fabric management mode, in which only the Cisco APIC is allowed to apply new configuration to the ASA device. When the mode is enabled, the ASA configuration is cleared, and preconfiguration through the CLI is allowed only in order to register the device in the APIC. From this point forward, the configuration can be built on that ASA only through the APIC GUI or API. To bring the ASA out of this mode, one must negate the management-mode CLI and recover full configuration ability in the CLI. To use ASA clustering, administrators must configure it through the CLI or the Cisco Adaptive Security Device Manager (ASDM) before registering the master unit with the APIC.
Consequently, the clustering feature does not support the fabric management mode.

!enable advanced settings
service internal
!enable fabric-controller management-mode
management-mode fabric-controller
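Conversely, to return the ASA to full CLI control, the management-mode command is negated. A minimal sketch, with the command form inferred from the enable sequence above:

```text
!enable advanced settings
service internal
!disable fabric-controller management-mode and restore full CLI configuration access
no management-mode fabric-controller
```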

The following ASA preconfiguration allows the Cisco APIC to register the ASA device under L4-L7 services. The ASA Management0/0 interface is connected to a separate switch, outside the fabric, and is reachable from the APIC.

!Management interface (OOB)
interface Management0/0
 nameif management
 security-level 100
 ip address 10.1.1.36 255.255.0.0
!Credentials for ssl session
aaa authentication http console LOCAL
http server enable
http 0.0.0.0 0.0.0.0 management
username admin password <device_password> encrypted privilege 15
!Route needed to reach APIC
route management 0.0.0.0 0.0.0.0 10.1.0.1 1
!Ensure Crypto key is present on ASA
aci-asa1# sh crypto key mypubkey rsa
Key pair was generated at: 13:45:59 UTC Jul 16 2014
Key name: <Default-RSA-Key>
Usage: General Purpose Key
<snip>

The Go-To ASA Routed Firewall

The routed firewall mode on the Cisco ASA matches the Go-To mode in L4-L7 services. The fabric routes traffic to the Layer 3 ASA device, the security policy is applied, and the ASA then forwards packets back to the fabric on the second interface. Before L4-L7 services are added, a tenant must be created with its basic EPG, BD, and contract constructs.

ASA L4-L7 Service Device Clusters

Service devices in the Cisco APIC are consumed as clusters, associated with either a physical or a virtual (VMM) domain. Each device cluster may contain multiple concrete ASA devices, provided they fit into the appropriate domain: the Cisco ASA 5585 belongs in the physical domain, and the ASAv must be in the VMM domain. Note that the APIC cluster is a generic construct that allows multiple concrete devices to share the application load without tracking state information. An ASA cluster can be a concrete device under an APIC cluster, extending state sharing and greater resiliency to the Cisco ACI fabric.

To create a device cluster, you must select the appropriate tenant and follow these steps:

1. Navigate to the tenant and expand the L4-L7 Services folder.
2. Right-click on Device Clusters and choose Create Device Cluster.
3. For a physical ASA, indicate PHYSICAL under Device Type and add the connectivity details. The ASA must be preconfigured as indicated in the screen below to allow the Cisco APIC to open an SSL connection to the ASA and gain administrator rights.
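The SSL session also requires an RSA key pair on the ASA (see the crypto key check in the preconfiguration earlier). If no key pair is present, one can be generated from the ASA CLI; a minimal sketch, where the 2048-bit modulus is an illustrative choice:

```text
!generate a default RSA key pair for the SSL session (modulus size is an example)
crypto key generate rsa modulus 2048
```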

4. For ASAv device clusters, the Device Type is VIRTUAL, and the appropriate VMM domain ASAv instance is filled in, along with the connectivity and interface mapping parameters.
5. Next, create the L4-L7 Concrete Devices under the cluster. Examples for physical and virtual devices are given below.

The Cisco ASA 5585 concrete device requires selecting path information from a pull-down menu of known leaf switch ports. The Interfaces section shows the mappings of ASA 10 Gigabit Ethernet ports to leaf ports and the logical connectors.

The virtual ASA example includes information on the VM and vCenter names. These names help the Cisco APIC identify the VM and assign vNICs to the service graph port profiles it created on the Cisco AVS or distributed virtual switch (dvSwitch). When mapping vNIC interfaces, Network adapter1 matches the Management0/0 interface on the ASAv, and the remaining data plane interfaces map in order starting from Network adapter2. High-availability and port bundle device parameters are shown in the next window. IP address parameters are added later under the Security Group settings.

Define the Cisco ASA device under the cluster, and then define the ASA parameters to be applied to this device. Finish creating the device cluster, and then verify that the ASA device is registered with the Cisco APIC and is in a stable state: navigate to and select the new cluster under the L4-L7 Services > Device Clusters folder.

Creating a Service Graph

Navigate to the Service Graphs folder under L4-L7 Services and choose Create L4-L7 Service Graph. Drag the Firewall service node onto the graph and connect the internal and external terminals to the Provider EPG and Consumer EPG, respectively. Each ASA connection should be Layer 2 adjacent to its EPG. Select the config function parameters in the service node to define the data plane and security policy parameters.

The Cisco ASA IP and security policy configuration can now be added under the Firewall configuration function parameters. These parameters can later be updated under the Service Graphs folder. For example, consider the IP topology shown in Figure 7, where the ASAv is the Go-To device between two EPGs.

Figure 7. Tenant1 Diagram with ASAv as a Default Gateway

You can update the ASA parameters to define two ASA interfaces and associate them with the external and internal connectors under the Function Config folder. The interface names (i1 and i2) translate to the ASA interface names defined under the nameif CLI.

Next, you can create a second interface with a name, an IP address, and security-level settings. In the Function Config folder, the external and internal interface mapping is established to the newly defined interfaces. The data plane portion is now defined, and all the ASA features exposed in the device package, such as access lists, NAT, and inspection capabilities, can be configured under a security group. Multiple security groups can expose different levels of security policy.
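As an illustration of the kind of policy a security group can carry, the following is a hypothetical access-list configuration of the sort the APIC could render on the ASA; the list name and the rule itself are invented for this sketch:

```text
!permit web traffic through the firewall (illustrative rule)
access-list i1_access_in extended permit tcp any any eq www
!bind the access list inbound on the i1 interface defined in the Function Config folder
access-group i1_access_in in interface i1
```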

Creating a Logical Device Context

Navigate to Device Cluster Selection Policies, under L4-L7 Services, and right-click to Create Logical Device Context. Select the cluster to be used with all service graphs, or with specific service graphs, as shown in the next screen. You can also bind the logical connectors to the appropriate bridge domains. Now we are ready to apply the service graph to a contract.

Applying a Service Graph to a Contract Subject

Navigate to Security Policies > Contracts > Contract, and select the subject. Under the Service Graph pull-down menu, select the graph created earlier and click Submit.

The APIC now renders the graph in the fabric and orchestrates the data plane on the ASA, the leaf switches, and the virtual switches. To verify that the device cluster and service graph have been deployed, make sure they are shown under their respective Deployed folders. Following are Go-To examples of the Cisco ASA 5585 and ASAv data plane interface configurations added by the Cisco APIC. For the ASA 5585, 10 Gigabit Ethernet ports (TenGigabitEthernet0/6 and 0/7) are used as trunks, where VLANs are dynamically selected from the VLAN pool and configured on the ASA.

! ASA5585
interface TenGigabitEthernet0/6.771
 vlan 771
 nameif i1
 security-level 100
 ip address 10.1.1.2 255.255.255.0
!
interface TenGigabitEthernet0/7.738
 vlan 738
 nameif i2
 security-level 0
 ip address 10.2.2.2 255.255.255.0

The ASAv GigabitEthernet0/0 and 0/1 ports are placed in the appropriate virtual switch port profiles and updated with the given parameters.

! ASAv10
interface GigabitEthernet0/0
 nameif outside
 security-level 0
 ip address 10.1.1.2 255.255.255.0
!
interface GigabitEthernet0/1
 nameif inside
 security-level 100
 ip address 10.2.2.2 255.255.255.0
!

The Go-Through ASA Transparent Firewall

The Cisco ASA transparent firewall matches the Go-Through mode in Cisco APIC L4-L7 services. In this mode, it is recommended that flooding be enabled in the BDs that connect to the ASA. Flooding allows the ASA to learn the MAC addresses and to apply security policy to the bridged interfaces. The Go-Through mode setup is similar to the Go-To workflow, except that the endpoints are on the same subnet. In this mode, an ASA bridge group is defined with internal and external interface members and the appropriate security levels. A single IP address in the same subnet is assigned to the BVI, which can be used to verify connectivity to the endpoints. Cisco ASA security policy access control lists (ACLs) and inspections are applied in the same fashion but reflect the given endpoint IP information. Consider the topology in Figure 8, where the ASA bridges two connectors into one broadcast domain.

Figure 8. Example of Tenant1 with ASA 5585 Transparent Firewall

When the Go-Through option is selected under the Device Cluster, you can configure the Cisco ASA transparent firewall under the service graph.

1. Navigate to the BridgeGroupIntf folder and define a bridge group ID, from 1 to 100. Then add a BVI IP address and mask as shown below.
2. Next, add the first interface configuration (i1) and place it in the created bridge group.

3. A second interface is added and assigned to the same bridge group.
4. The Function folder is updated in the same way as in the Go-To mode.

This completes the data plane configuration needed for the Cisco ASA to bridge the two terminals of the service graph in the Go-Through service device mode. The rest of the steps given in the Go-To section apply here in the same fashion. The parameters defined in the Cisco APIC translate to the following ASA configuration:

interface GigabitEthernet0/0
 nameif i1
 bridge-group 2
 security-level 100
!
interface GigabitEthernet0/1
 nameif i2
 bridge-group 2
 security-level 0
!
interface BVI2
 ip address 10.1.1.100 255.255.255.0

Workarounds for Unsupported ASA Features

The Cisco ASA device package for integration with the Cisco ACI fabric keeps improving. Before full support for new product features is available, a hybrid approach can be used. One feature in high demand is support for multiple contexts in the device package for the Cisco ASA 5585-X physical appliance. Configuring virtual ASA contexts in the Cisco ACI without their support in the existing device package involves additional ASA preconfiguration before the Cisco APIC takes over the rest of the policy control. Take the following steps to allow the APIC to register each context as a separate ASA service producer in the ACI fabric:

1. Inside the ASA system context, preconfigure a port-channel VLAN trunk interface toward two leaf switches, forming a one-arm logical interface on the Cisco ASA 5585-X appliance(s). Note that this hybrid approach is more difficult to implement without a one-arm port channel. Further, if clustering or failover is used, those settings must be configured in the system context.
2. Using the range of VLANs defined in the VLAN pool of the ASA 5585-X physical domain, preconfigure all VLAN subinterfaces in the system context. Note that this VLAN range must hold an even number of VLANs, since all APIC assignments are made in pairs.
3. Define all user contexts from the system context with the allocated management interface, and assign the port-channel subinterfaces to each user context.
4. Inside each user context, define the firewall mode and a unique management IP address. At a later point, the Cisco APIC will dynamically allocate VLANs from the pool and configure them appropriately inside a user context.
5. Add the remaining preconfiguration to each context so that the APIC can properly register the ASA contexts.

At this point, the Cisco APIC can be instructed to contact and register each ASA context as a separate service producer in the ACI fabric. The APIC still needs to use the port-channel ID (Port-channel2) for each user context. As a result of the hybrid approach, certain configuration commands issued by the APIC fail gracefully, and the rest of the ASA policy and data plane is configured as expected.
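The system-context preconfiguration described in the steps above might look like the following sketch, assuming Port-channel2 as the one-arm trunk; the member ports, VLAN IDs, context name, and config-url file name are invented for illustration:

```text
!one-arm port channel toward the two leaf switches (member ports are examples)
interface TenGigabitEthernet0/6
 channel-group 2 mode active
interface TenGigabitEthernet0/7
 channel-group 2 mode active
!VLAN subinterfaces from the physical-domain pool (the APIC assigns them in pairs)
interface Port-channel2.801
 vlan 801
interface Port-channel2.802
 vlan 802
!user context with its management interface and port-channel subinterfaces
context ctx1
 allocate-interface Management0/0
 allocate-interface Port-channel2.801-Port-channel2.802
 config-url disk0:/ctx1.cfg
```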

The following highlighted configuration commands are not allowed in a user context and generate APIC warnings:

New Model and Future Horizons

The new fabric-based model brings many unique architectural and operational advantages that solve, rather than hide, problems with traditional data center networks. Hardware-accelerated point-to-point connections within the fabric eliminate the need to colocate and dedicate computing and network service devices. A flexible set of flow parameters is used by the ingress fabric ports to block, mark, or redirect traffic toward rapidly provisioned service instances on physical and virtual device farms. You can use additional Cisco ASA 5585-X appliances or ASAv instances to elastically scale a highly secure network as necessary without worrying about specific insertion points, localized port availability, or the classic limitations of VLANs and dynamic IP routing. When Cisco ASA services are applied, every packet of a flow receives the full set of stateful security checks in a single pass. The multiple layers of filtering, with generic service contracts and ASA-specific rules, allow a high degree of policy simplification. As application needs evolve, the appropriate policy sets and firewall instances can be retired or repurposed automatically.

The Cisco APIC creates a unified orchestration and visibility point for the fabric, service nodes, and individual consumers, with built-in RBAC support, distributed policy deployment, health visibility, and inherent change-control capabilities. Even though the entire network is orchestrated from a single interface, this model maintains a high degree of abstraction among application, network, security, and other network services. A firewall administrator configures generic templates that describe the Cisco ASA policy for each application flow group, and the application and network administrators instantiate these templates for specific flows without having to learn ASA functionality.
The interoperability of all interconnected fabric devices is validated by Cisco and backed by award-winning Cisco TAC support. Problems are bound to happen with any product or service, and having a team of experts available around the clock makes all the difference when every second of downtime counts. There is no need to be bounced between different vendors when everything can be resolved with one phone call.

In addition to its multitude of immediate benefits, the fabric architecture paves the way for many future features and opportunities to create more intelligent networks. Because every network device integrates into the fabric, you can use bidirectional feedback loops to create adaptable and contextual application policies based on flow behavior and endpoint reputation. Various metadata can be transparently attached to transit packets within the fabric to carry and extend this additional intelligence between service devices without complex out-of-band overlays. These capabilities could be used to create an entirely new approach to path optimization and traffic engineering. The possibilities are nearly endless, and this is the true power of a network that is smart by design.

Printed in USA C11-733540-00 12/14