Cisco ACI for Red Hat Virtualization Environments


White Paper

Cisco ACI for Red Hat Virtualization Environments

First Published: April 2018

Americas Headquarters: Cisco Systems, Inc., 170 West Tasman Drive, San Jose, CA, USA. Tel: NETS (6387) Fax:

2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Contents

1. Executive summary
2. Introducing Red Hat Virtualization
3. Introducing Cisco ACI
4. The challenges of virtual networking with traditional data center fabrics
5. Cisco ACI: The next-generation data center networking
   Cisco ACI physical and virtual domains
6. Running Red Hat Virtualization with Cisco ACI
   6.1. Introducing the Red Hat VMM domain
   6.2. Completing virtual and physical network automation
   Augmenting visibility and simplifying operations with the Cisco ACI Red Hat VMM domain
   Implementing distributed Layer 2-to-Layer 4 security policies
   Extending RHV networks across data centers
   Migrating from physical to VMM domains
Conclusions
Appendix A: Connecting RHV hosts to the Cisco ACI leafs
Appendix B: Using ACI distributed networking without contracts with preferred groups

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only.
Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. This product includes software written by Tim Hudson (tjh@cryptsoft.com).

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

2018 Cisco Systems, Inc. All rights reserved.

Deploying Red Hat Virtualization with Cisco ACI

1. Executive summary

This document describes the benefits that Cisco Application Centric Infrastructure (Cisco ACI) brings to environments running Red Hat Virtualization (RHV). Cisco ACI Virtual Machine Manager (VMM) domain integration with RHV, introduced with Cisco Application Policy Infrastructure Controller (APIC) Release 3.1, brings industry-leading Software-Defined Network (SDN) capabilities to customers deploying the most popular open-source Kernel-based Virtual Machine (KVM) virtualization platform.

This document describes the Cisco ACI solution and the details of the integration with Red Hat Virtualization, and addresses the tangible benefits it brings for customers, including:

- Faster infrastructure provisioning
- Fabricwide security and enhanced segmentation for virtual workloads
- Simpler operations and enhanced collaboration between network and virtualization teams

This document is for data center architects, network engineers, security engineers, and virtualization engineers with an interest in SDN and RHV. No prior knowledge of Cisco ACI is assumed.

2. Introducing Red Hat Virtualization

RHV is the leading open-source x86 virtualization technology, building on the widely deployed Red Hat Enterprise Linux (RHEL) operating system. IT organizations use RHV to virtualize a variety of workloads running on RHEL and other operating systems, including Java applications, Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), databases, and more. The key components of RHV include the Red Hat Virtualization Hosts (RHVH), the Red Hat Virtualization Manager (RHVM), and the Storage Domains, which can support a variety of choices, including open-source and commercial solutions.

3. Introducing Cisco ACI

Cisco ACI is the industry's most widely deployed SDN for data center networking. Cisco pioneered the introduction of intent-based networking with Cisco ACI in the data center.
Cisco ACI implements a programmable data center Virtual Extensible LAN (VXLAN) fabric that delivers distributed networking and security for any workload, regardless of its nature (virtual, physical, container, etc.). Cisco ACI has four key components:

- Cisco APIC: Cisco APIC is a clustered software application that represents the heart of the system. The Cisco ACI design was inspired by promise theory, and Cisco APIC implements a declarative-model approach to SDN. Contrary to other solutions in the industry, Cisco APIC is not concerned with the low-level details of programming network data planes. The application offloads low-level data-plane programming to a distributed fabric and communicates network and policy intent to data-plane elements using the open OpFlex protocol. OpFlex agents then program the data plane of hardware switches, software switches, or both to fulfill the intent programmed on Cisco APIC.

- The data center fabric: The data center fabric builds on the Cisco Nexus 9000 family to implement a leaf-spine architecture. The Cisco Nexus 9000 delivers nonblocking, penalty-free VXLAN fabrics that can operate at 40 and 100 Gigabit Ethernet leaf-to-spine. It also delivers flexible access speeds for connecting to external devices, including Fast Ethernet and 1, 10, 25, or 40 Gigabit Ethernet.

- The virtual access layer: The Cisco APIC declarative model allows it to manage many different virtual switches, including the Cisco ACI Virtual Edge and several third-party virtual switches. Cisco APIC can use OpFlex to control open-source virtual switching platforms, such as Open vSwitch, and uses northbound open Application Programming Interfaces (APIs) to manage a VMware Virtual Distributed Switch (VDS).

- The Cisco ACI policy model: Cisco APIC implements an object model that abstracts the entire data center fabric, including physical and virtual elements as well as network constructs. The policy model facilitates the automation of provisioning and operational tasks and enables intent-based networking.

With Cisco ACI, customers can take advantage of a single SDN solution to apply consistent network and security policies across the heterogeneous environments common in most IT organizations. Figure 1 illustrates how Cisco APIC enables customers to use intent-based networking with comprehensive network and policy scope across virtual, physical, and container environments.

Figure 1. The Cisco APIC declarative approach to SDN enables an intent-based network across a heterogeneous data center

As also illustrated in the figure, Cisco ACI fabrics can have leaf-spine Points of Delivery (PODs) running in different locations using Cisco ACI multipod capabilities. These capabilities facilitate extending networks and security policies across different server rooms or different data centers. Finally, Cisco APIC integrates with other domain controllers, including virtualization managers, container or Platform-as-a-Service (PaaS) environments, and more, to extend networking into the respective virtual switching environments.

4. The challenges of virtual networking with traditional data center fabrics

Before covering the details of Cisco ACI, it is helpful to review the existing state of affairs for many organizations. Data center networking and compute virtualization have long operated as independent silos, with limited or no collaboration between the teams operating the respective domains. In most organizations, when the virtualization team requires connectivity to deploy Virtual Machines (VMs) for a new application, they request a service from the network team through a ticketing system and wait for the information that the network team will provide them to provision logical networks on the virtual clusters.

For the network team, the creation of new networks may include Layer 2 and Layer 3 aspects as well as security and network services. For a new application or application tier, the network team may select a VLAN ID and an IP subnet for the new service, update the required IP Address Management (IPAM) tools (such as Cisco Prime Network Registrar) and/or Configuration Management Databases (CMDBs), and then configure the network accordingly. This configuration is performed box by box to create the VLAN on every required switch, enabling the right VLAN ID on every access port of every switch where a hypervisor is connected.
Routing is then configured for the selected subnet on Switched Virtual Interfaces (SVIs), setting up first-hop redundancy protocols such as Hot Standby Router Protocol (HSRP) or Virtual Router Redundancy Protocol (VRRP), updating routing protocol filters to announce the new subnet, and so on. After the configuration is complete, the network team can tell their peers on the virtualization team which VLAN ID to use for the new subnet.

In RHV, networking for virtual machines is implemented by configuring logical networks. The virtualization team configures a logical network on the RHVM using the VLAN ID provided by the network team. The team then associates the logical network with the RHV clusters and finally configures the logical network on the uplinks of every host in all the clusters that may run virtual machines that should have access to that network.

Figure 2 illustrates the tasks outlined previously in a very simplified way, showing only a small number of devices.

Figure 2. Fabric and RHV administrators have too many management touch points in a traditional deployment.

The red dotted arrows indicate the many provisioning touch points involved for both network and virtualization administrators, to give an idea of the complexity of the tasks involved. In Figure 2, the blue line identifies a VLAN dedicated to connecting to central IP storage, for instance, the Network File System (NFS). The NFS VLAN must be configured on every switch port connecting Red Hat Virtualization Hosts (RHVHs), and it must have a corresponding logical network configured with the right network role in the RHVM (such as storage, virtual-machine data, or migration). The purple and yellow lines identify virtual-machine logical networks for different applications, application tiers, or both. Every such VLAN requires a corresponding logical network.

The process described in Figure 2 is a simplification of reality, because additional services such as security and application load balancing often must also be configured. This is a cumbersome, nontrivial job to automate when running traditional networks. In addition, after all these tasks are accomplished, the network administrator lacks proper visibility into the connected virtual endpoints. Beyond seeing IP and MAC addresses that appear on the network, the network administrator is left without any context about which device type an IP or MAC address belongs to, which virtual machine it is, which RHV cluster the virtual machine runs on, or which application it is part of.

5. Cisco ACI: The next-generation data center networking

Cisco ACI implements a leaf-spine VXLAN fabric that uses a distributed control plane built on standard technologies such as Intermediate System-to-Intermediate System (IS-IS), the Council of Oracles Protocol (COOP), and Multiprotocol Border Gateway Protocol (MP-BGP). The fabric provides distributed networking and security to a variety of endpoints. Traffic between connected endpoints, such as physical servers, virtual machines, containers, or routers, and the Cisco ACI leafs can use VLAN or VXLAN encapsulation, whereas traffic between Cisco ACI leafs is always VXLAN encapsulated. Cisco APIC automatically programs and maintains the necessary control-plane elements, so fabric administrators do not need to configure IS-IS or COOP, or set up VXLAN tunnels. The control plane is distributed across the fabric to maximize scale and availability. In addition, Cisco APIC automatically assigns the required VXLAN Virtual Network IDs (VNIDs).

To enable intent-based programming of network services, the Cisco ACI model provides abstractions that allow network and security to be configured on the fabric without the administrator having to be concerned with box-by-box device configuration or with the physical or virtual topology. These abstractions are implemented in a multitenant object model, so a single fabric can be shared by multiple administrative domains. For the purpose of this paper, the more relevant components of the object model are the following:

- Tenant: At the upper level of the Cisco ACI model, tenants are fabricwide administrative folders that contain logical network and policy elements. You can use the Cisco ACI tenancy model to isolate separate organizations, such as sales and engineering, or different environments, such as development, test, and production, or combinations of both.
You also can use the model to isolate infrastructure for different technologies or fabric users, such as RHV infrastructure versus OpenStack versus Big Data, Mesos, etc. The use of tenants facilitates organizing and applying security policies to the network and provides automatic correlation of statistics, events, failures, and audit data.

- VRF (Layer 3 context): VRF is a well-known networking concept that enables the definition of isolated routing spaces. Each Virtual Routing and Forwarding (VRF) instance maintains a dedicated routing and forwarding table. A VRF instance must be part of a tenant. It also constitutes a policy domain.

- Bridge domain: A bridge domain implements a Layer 2 broadcast domain, effectively implemented as a VXLAN segment. A bridge domain may have one or more IP subnets associated with it. When a subnet is configured on a bridge domain, Cisco APIC enables corresponding SVIs distributed on the required Cisco ACI leafs. Cisco ACI automatically implements distributed routing and a distributed default gateway for each configured subnet. In addition, Cisco ACI provides numerous features to minimize or eliminate Layer 2 flooding. A bridge domain must be associated with a VRF.

- External bridged or routed networks: These networks are referred to as L2Out or L3Out interfaces that connect to other networks. They could be Layer 2 legacy Spanning Tree Protocol or FabricPath networks, or they could be routed networks that use standard routing protocols, such as Border Gateway Protocol (BGP), IS-IS, or Open Shortest Path First (OSPF), to peer with the Cisco ACI fabric.

- Endpoint groups (EPGs): An EPG is a group of endpoints that have the same connectivity requirements. An EPG can therefore represent a VLAN if all the endpoints in a broadcast domain have the same connectivity. However, the EPG also can be used to segment broadcast domains into smaller segments. This segmentation allows multiple EPGs to be associated with a single bridge domain and subnet. The association of an endpoint with an EPG may be based on a locally significant encapsulation (VLAN or VXLAN). Alternatively, the association can be based on endpoint attributes, such as IP or MAC address, or metadata, such as virtual-machine name, guest OS, or Kubernetes annotations. This feature allows extremely granular, dynamic classification of endpoints into EPGs.

- Contracts: Contracts enable administrators to define connectivity requirements, including Layer 2-to-Layer 4 access control, Quality-of-Service (QoS) settings, or redirection to Layer 4-to-Layer 7 devices. EPGs can consume or provide contracts to control traffic to and from other EPGs, or can associate with contracts to restrict traffic internal to the EPG. The use of contracts enables administrators to implement granular, distributed security and QoS policies.

- Network domains: Domains provide an abstraction level to tie together physical and Layer 2 network configurations associated with specific types of endpoints. For instance, a domain has associated encapsulation spaces, such as VLAN or VXLAN pools. These are the main types of domain profiles: physical, Virtual Machine Manager (VMM), bridged external (L2Ext), routed external (L3Ext), and Fibre Channel.

- Attachable Entity Profiles (AEPs): AEPs can be considered the "where" of fabric access configurations. They group domains with similar requirements. For instance, an AEP may be dedicated to identifying a group of servers running a hypervisor as part of a cluster. An AEP is associated with one or more domain profiles.
- Interface policy groups: Interface-level configurations, such as use of Cisco Discovery Protocol, storm control, rate limiting, etc., are defined as interface policies in Cisco ACI so they can be reused. An interface policy group creates an object to represent the interface policies required to implement access configurations for a given type of connected device. For instance, an interface policy group defines all required protocols and settings to connect a Red Hat Enterprise Linux server with redundant connections. Interface policy groups are associated with AEPs in order to link them with domains and to allow Cisco APIC to validate Layer 2 encapsulation.

By using VRF instances, bridge domains, EPGs, and contracts, administrators can describe network and security intent with a declarative model. The administrator does not need to identify the specific network devices on which specific VRF instances, route targets, SVIs, or access control lists should be configured. Cisco APIC configures all the required networks and policies when an EPG is enabled on a specific port.
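To make the declarative model concrete, the sketch below builds the kind of JSON document that could be posted to the APIC REST API to express a tenant with a VRF, a bridge domain with a subnet, and an EPG. The object class names (fvTenant, fvCtx, fvBD, fvAp, fvAEPg) are standard APIC managed-object classes; all instance names (T1, VRF1, BD1, AP1, Web) and the subnet are illustrative assumptions, and in a real deployment the payload would be posted to an authenticated APIC session (for example, to /api/mo/uni.json). This is a minimal sketch, not a complete provisioning script.

```python
import json

def tenant_payload(tenant, vrf, bd, subnet_gw, ap, epg):
    """Build an APIC REST payload (fvTenant tree) expressing the
    tenant -> VRF -> bridge domain -> EPG intent described above."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                # VRF (Layer 3 context) inside the tenant
                {"fvCtx": {"attributes": {"name": vrf}}},
                # Bridge domain tied to the VRF, with a gateway subnet
                {"fvBD": {
                    "attributes": {"name": bd},
                    "children": [
                        {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                        {"fvSubnet": {"attributes": {"ip": subnet_gw}}},
                    ],
                }},
                # Application profile containing an EPG bound to the BD
                {"fvAp": {
                    "attributes": {"name": ap},
                    "children": [
                        {"fvAEPg": {
                            "attributes": {"name": epg},
                            "children": [
                                {"fvRsBd": {"attributes": {"tnFvBDName": bd}}},
                            ],
                        }},
                    ],
                }},
            ],
        }
    }

payload = tenant_payload("T1", "VRF1", "BD1", "10.0.1.1/24", "AP1", "Web")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that a single document carries the whole intent: the administrator never names the leafs or ports on which the VRF, SVIs, or EPG are instantiated; Cisco APIC resolves that when the EPG is attached to a domain.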

Figure 3 shows how the fabric administrator implements new application tiers by provisioning new EPGs to Cisco APIC. In most instances, particularly when using VMM integrations, this provisioning needs no awareness of the physical topology or of the specific fabric elements that will be involved.

Figure 3. An administrator can enable connectivity for applications running on a virtual machine or bare metal across the fabric by creating new EPGs.

In this example, the administrator creates new web and app tiers, each with its specific network connectivity, and the fabric provides a distributed default gateway. These EPGs can enable connectivity for both physical and virtual endpoints. In fact, virtual machines, bare-metal servers, and even containers can coexist in the same EPG. The administrator does not need to configure the required VRF, bridge domain, or the routing for the subnet on the various physical or virtual switches: Cisco APIC automatically provisions these constructs on the required network elements. This provisioning allows the administrator to express network intent: "I need web and app tiers, and I need the web tier to have access to the WAN."

Furthermore, thanks to Cisco APIC, extending the new networks to other data centers is extremely easy. Figure 4 depicts a Cisco ACI fabric deployed with multiple PODs (multipod) controlled by a single distributed Cisco APIC cluster.

Figure 4. Cisco ACI multipod makes implementing stretched virtualization clusters and endpoint mobility extremely simple.

Extending networks requires no Data Center Interconnect (DCI) configuration in this case. Virtual workloads connected to the web or app EPG can move from hypervisors connected on one Cisco ACI POD to another. They remain connected to the same network, routing to the same Cisco ACI distributed default gateway, and they have the same security policies applied. Cisco ACI also provides solutions such as GOLF and Policy-Based Redirect service graphs to localize ingress and egress traffic and Layer 4-to-Layer 7 services to the specific data center as the endpoints move. Cisco ACI can also work in multisite configurations (Cisco ACI Multi-Site), that is, involving multiple independent Cisco APIC clusters.

Cisco ACI physical and virtual domains

The first benefit that Cisco ACI brings to fabric administrators is a single point of management for the creation of new segments, subnets, security, and network services, as well as for day-2 operations. In addition, Cisco ACI implements abstractions to represent many connected devices as domains. Domains represent the nature of the devices connected, and Cisco APIC automatically manages encapsulation pools (VLAN or VXLAN) for different domains on behalf of the fabric administrator. For instance, RHV clusters can be considered as one or more domains connected to the fabric, and the Cisco APIC administrator can assign a pool of VLAN resources up front to simplify VLAN space management.

Various types of domains are defined in Cisco ACI, including physical domains, virtual domains, and external domains. Any type of device, virtual or physical, can be connected to the fabric as part of a physical domain. The fabric knows endpoints that are part of a physical domain only by their network identity, such as IP or MAC address.

You can connect devices running on virtualization platforms to Cisco ACI as part of virtual domains by using VMMs. This connection allows Cisco ACI to match the network identity of the endpoint (IP or MAC address) with the virtual identity of the endpoint, such as the virtual machine or pod name. EPGs are associated with one or more domains, depending on the need to connect physical endpoints, virtual endpoints, or both to the group. Domains also have an association with the AEP and, by consequence, with the switch ports. In a Cisco ACI fabric, every provisioned port has an associated policy group that provides much of its configuration. For instance, a policy group of type Virtual Port Channel enables redundant configuration toward server ports. One of the parameters of the interface policy group is the AEP, which allows Cisco APIC to know the domains available on a port and, by consequence, the VLAN or VXLAN IDs that are valid on that port.

Cisco ACI allows the fabric administrator to integrate Cisco APIC with various VMM solutions, including VMware vCenter, Microsoft System Center Virtual Machine Manager (SCVMM), RHV, and OpenStack. The same concept is used to integrate with container cluster managers, such as Kubernetes, OpenShift, and Cloud Foundry. When using a VMM domain, Cisco APIC interfaces with the virtualization domain manager using open northbound APIs to obtain an inventory of virtual resources. Cisco APIC can then correlate that inventory with the fabric resources and facilitate virtual-switch provisioning and many network operational tasks. This integration brings the benefit of consolidated visibility and simpler operations, because the fabric has a complete view of physical and virtual endpoints and their locations, both in the fabric and in the virtual or container cluster.
Although running RHV hosts connected to Cisco ACI physical domains is a distinct possibility, and it provides advantages over running on traditional box-by-box networks, a much better solution is to use the Red Hat VMM domains available since Cisco APIC Release 3.1.

6. Running Red Hat Virtualization with Cisco ACI

6.1. Introducing the Red Hat VMM domain

Starting with Cisco APIC Release 3.1, Cisco APIC supports a VMM domain option to connect RHV environments to the Cisco ACI fabric. RHV hosts that are part of a Red Hat VMM domain can use the Linux bridge or Open vSwitch. The RHV VMM domain in Cisco APIC is directly associated with a data center object in the RHVM. All the Red Hat clusters under this data center will be considered part of the VMM domain. The Cisco ACI EPG concept is mapped to a logical network in the RHVM.

Figure 5 illustrates the workflow used to configure and use the Red Hat VMM domain integration in Cisco APIC.

Figure 5. Workflow to configure and use Red Hat VMM domain integration in Cisco APIC

1. Cisco APIC establishes an API connection with the RHVM and obtains the inventory of RHV hosts and virtual machines running on the selected RHV data center. The fabric administrator must assign a VLAN pool to the VMM domain. The VMM domain must be associated with the AEP of the RHV hosts.
2. Cisco APIC administrators can define multiple EPGs and associated contracts to express their network and security policies.
3. The EPGs configured on Cisco APIC can be associated with the Red Hat VMM domain. Cisco APIC automatically creates a new logical network to match the EPG. Cisco APIC can dynamically assign a VLAN ID from the domain pool. This VLAN ID is communicated to the RHVM to create the logical network.
4. Cisco APIC pushes the EPG policy (access encapsulation, network, and security contracts) to the Cisco ACI leaf ports connected to the Red Hat clusters that are part of the RHVM data center associated with the VMM domain. At the same time, Cisco APIC creates the logical network on RHVM and automatically assigns it to the clusters in the RHV data center.
5. The logical networks created by Cisco APIC are labeled with a distinctive label and can be automatically added to the uplinks of the RHV hosts that are part of the clusters.
6. The RHVM administrator can instantiate virtual machines and connect their virtual Network Interface Cards (vNICs) to the logical networks.

Thanks to the VMM integration, Cisco APIC can provide complete virtual and physical network automation, thus simplifying the work for both network and virtualization administrators. In addition, as shown in Figure 6, the fabric administrator has access to the RHVM inventory to get context about which endpoint IP or MAC address maps to which virtual machine, which hypervisor, and so on.
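Step 3 of the workflow above, associating an EPG with the Red Hat VMM domain, is itself a single object in the APIC model (an fvRsDomAtt relation under the EPG). The sketch below builds that payload; fvRsDomAtt is a standard APIC class, but the distinguished-name layout shown for the Red Hat VMM domain (the "vmmp-Rhev" provider segment) and all instance names are assumptions for illustration, so verify the actual DN of your VMM domain on APIC before using anything like this.

```python
def epg_vmm_attach(tenant, ap, epg, vmm_dom_dn):
    """Build the payload that associates an existing EPG with a VMM
    domain. Posting an object like this is what triggers steps 3-5:
    APIC picks a VLAN from the domain pool and creates the matching
    logical network on the RHVM."""
    return {
        "fvRsDomAtt": {
            "attributes": {
                # DN of the relation object under the EPG
                "dn": (f"uni/tn-{tenant}/ap-{ap}/epg-{epg}/"
                       f"rsdomAtt-[{vmm_dom_dn}]"),
                # Target DN: the VMM domain being attached
                "tDn": vmm_dom_dn,
            }
        }
    }

# "vmmp-Rhev" is an assumed provider segment; check your APIC inventory.
attach = epg_vmm_attach("T1", "AP1", "Web", "uni/vmmp-Rhev/dom-RHV-DC1")
print(attach["fvRsDomAtt"]["attributes"]["dn"])
```

Deleting the same relation object is what drives the symmetric cleanup described later: APIC removes the logical network from the RHVM and from the host uplinks.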

Figure 6. A view of the RHVM inventory from the APIC GUI

6.2. Completing virtual and physical network automation

With Cisco ACI VMM integration with RHV, tasks that previously required many manual steps are completely automated by Cisco APIC. When the virtualization team needs new network services, Cisco APIC can provide them. (Note: The fabric administrator may also allow restricted access to the Cisco APIC API for RHV administrators to provision connectivity directly, by using either the API or the Cisco ACI Ansible modules.)

As described previously, a Cisco ACI EPG maps to an RHV logical network. The fabric administrator needs only to create the EPG and map it to the RHV VMM domain. When this is done, Cisco APIC automatically provisions the EPG, using the dynamically selected VLAN encapsulation, on every port that has connected RHV clusters of the corresponding VMM domain. The fabric administrator no longer needs to map the EPG to the physical ports on the Cisco ACI leafs connecting to the RHV servers.

In addition, Cisco APIC automatically creates a corresponding logical network with the same VLAN ID on the RHV manager. The logical network is created with a name matching the Cisco APIC provisioning, formed by concatenating the names of the Cisco ACI tenant, application profile, and EPG using a given character delimiter. For example, for a tenant named T1, application profile AP1, and EPG Web, using the underscore character (_) as the delimiter, the logical network is named T1_AP1_Web. The administrator can change the delimiter to other characters supported by Red Hat Virtualization. By default, Cisco ACI uses the pipe character (|) as the delimiter. Because older RHV releases do not support using that character as part of a logical network name, special attention must be given to selecting a different delimiter. In addition, older RHV releases limit the name of a logical network to 15 characters. For these reasons, we recommend using VMM integration with a recent RHV release.
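The naming convention above can be captured in a small helper. This is just a sketch of the rule as the text describes it (tenant, application profile, and EPG joined by the delimiter), with a warning for the 15-character limit of older RHV releases; the function name and warning behavior are our own, not part of the product.

```python
def logical_network_name(tenant, ap, epg, delimiter="_"):
    """Derive the RHVM logical network name that corresponds to an EPG:
    tenant, application profile, and EPG joined by the delimiter."""
    name = delimiter.join((tenant, ap, epg))
    # Older RHV releases capped logical network names at 15 characters;
    # warn rather than silently truncate.
    if len(name) > 15:
        print(f"warning: '{name}' exceeds the 15-character limit "
              "of older RHV releases")
    return name

print(logical_network_name("T1", "AP1", "Web"))  # T1_AP1_Web
```

Running the helper with the pipe delimiter (delimiter="|") yields T1|AP1|Web, which is why the delimiter must be changed on older RHV releases that reject that character.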

Cisco APIC creates the logical network on RHVM with a distinctive label that identifies the Cisco ACI VMM domain (refer to Figure 7).

Figure 7. EPGs map to RHVM logical networks and are created with a distinctive label

The first time that Cisco APIC creates a logical network (that is, the first time an EPG is mapped to the RHV VMM domain), the RHVM administrator has to assign the Cisco ACI label to the host uplinks for every RHV host in the RHV data center. After this assignment is made, as illustrated in Figure 7, every subsequent EPG mapped to the RHV VMM domain automatically triggers the creation of the corresponding logical network with the same Cisco ACI label. As that label is applied, this logical network also is configured on the uplinks of every RHV host in the clusters without any further intervention from the RHV administrator. Similarly, if an EPG is deleted, or its association with the RHV VMM domain is deleted, Cisco APIC automatically deletes the corresponding logical network and removes it from the uplinks of the hosts by virtue of the Cisco ACI label.

The RHV administrator can connect the virtual-machine vNICs to the Cisco APIC-created logical networks using any familiar tool: the RHVM graphical user interface, the Representational State Transfer (REST) API, Ansible modules, the Command-Line Interface (CLI), etc. Cisco ACI VMM integration is not intrusive to the RHV platform, and the RHV administrator can continue using every feature of RHV.

Cisco ACI VMM integration offers several key benefits for the RHV administrator:

- Simplified and faster provisioning: In a single provisioning operation, the fabric administrator can configure a new EPG and logical network across an arbitrary number of Cisco ACI leafs and RHV clusters.
- Configuration consistency: VLAN encapsulation is dynamically chosen and always kept in sync between the virtual and physical domains. In addition, the naming convention for logical networks and EPGs is always consistent.

16 Multiple logical networks can share a single subnet and default gateway, yet enjoy the policy-based isolation provided by Cisco ACI. For instance, logical networks for production and Quality Assurance (QA) can be on the same subnet, yet virtual machines are isolated. This setup simplifies promoting workloads from QA to production because no readdressing is required. These advantages are illustrated in Figure 8. In a single action, the Cisco APIC administrator creates new EPG/logical network pairs that are automatically configured on the physical and virtual infrastructure. Figure 8. Advantages of Cisco ACI VMM Integration If we compare Figure 8 with Figure 2, we can see the many steps that are automated by using ACI with VMM integration. This automation not only results in much faster provisioning and de-provisioning of services, but also dramatically reduces the chances for human error because Cisco APIC ensures correct VLANs for every network. The Cisco ACI and RHV VMM are associated to a RHV data center. Cisco APIC supports multiple VMM domains and can be configured to work with data centers of the same RHVM or a different RHVM. Thus, extending an EPG to multiple RHVMs becomes extremely simple: It is necessary only to map multiple VMM domains. An EPG can also be mapped to VMM domains of various vendors, as well as to multiple physical domains. Therefore, it is possible to have bare-metal and virtual endpoints on the same network. This setup facilitates communication between virtual machines and bare-metal endpoints regardless of subnets, without introducing bottlenecks, and offers the possibility of taking advantage of Cisco ACI contracts to help ensure that connectivity is restricted only to required protocols and ports Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 16 of 31
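Under the hood, each of these provisioning operations is a single call against the Cisco APIC REST API. The following Python sketch shows how such a call's payload might be assembled, using the fvRsDomAtt object of the ACI policy model; this is not an official client, and all object names (T1, AP1, Web, RHV-DC1) are illustrative.

```python
# Hedged sketch (not Cisco tooling): build the REST payload an APIC client
# could POST to associate an existing EPG with a Red Hat VMM domain.
# Verify the exact attribute set against your APIC release.

def epg_to_vmm_payload(tenant: str, ap: str, epg: str, vmm_domain: str):
    """Return (url_path, body) for an EPG-to-RHV-VMM-domain association."""
    epg_dn = f"uni/tn-{tenant}/ap-{ap}/epg-{epg}"
    body = {
        "fvRsDomAtt": {
            "attributes": {
                "tDn": f"uni/vmmp-Redhat/dom-{vmm_domain}",
                # Pre-provision pushes the policy to the leafs before any
                # virtual machine attaches to the logical network.
                "resImedcy": "pre-provision",
                "instrImedcy": "immediate",
            }
        }
    }
    return f"/api/mo/{epg_dn}.json", body

path, body = epg_to_vmm_payload("T1", "AP1", "Web", "RHV-DC1")
```

This is the same association that the Ansible `aci_epg_to_domain` task shown later in this document performs through a module interface.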

6.3. Augmenting visibility and simplifying operations with the Cisco ACI Red Hat VMM domain

Cisco ACI RHV VMM domain integration not only simplifies provisioning and reduces the possibility of configuration mistakes, it also provides benefits to fabric administrators in terms of visibility and operations associated with the RHV platform. This capability enables organizations to improve and streamline operations by facilitating cooperation between network and virtualization administrators.

Using the VMM domain, Cisco APIC can read the RHV inventory. As shown in Figure 9, Cisco APIC has the list of available RHV hosts inside an RHV data center. For each RHV node, Cisco APIC can see the status of physical and virtual NICs, the inventory, virtual machines, and more.

Figure 9. List of available RHV hosts inside an RHV data center

This ability to see the status of physical and virtual NICs allows Cisco APIC to automatically correlate physical and virtual resources. It also enables the fabric administrator to refer to virtual resources using the same semantics as the RHV administrator. Figure 10 shows an example of how the fabric administrator can search Cisco APIC using a virtual-machine name (in the example, test-ubuntu-vm). Cisco APIC immediately finds the virtual machine in its copy of the inventory, allowing the fabric administrator to quickly find out that the virtual machine is running on the hypervisor rhvh-02.nillo.net, see whether or not the virtual machine is powered up, and confirm that it is connected to the EPG T1_AP1_Web logical network.
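The same inventory lookup can be scripted. The sketch below builds (but does not send) an APIC class query for a virtual machine by name, assuming the comp:Vm class of the APIC object model; authentication and transport are intentionally omitted, and the hostname is hypothetical.

```python
# Hedged sketch: construct an APIC class-query URL that looks up a virtual
# machine by name in APIC's copy of the RHV inventory (comp:Vm class).
# Sending the request, with authentication, is left out.

def vm_query_url(apic_host: str, vm_name: str) -> str:
    """APIC class query URL filtering comp:Vm objects on their name."""
    query_filter = f'eq(compVm.name,"{vm_name}")'
    return (f"https://{apic_host}/api/class/compVm.json"
            f"?query-target-filter={query_filter}")

url = vm_query_url("apic1.example.com", "test-ubuntu-vm")
```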

Figure 10. Illustration of how a fabric administrator can find a virtual resource by searching for its name

Cisco APIC also uses this information to automatically correlate virtual endpoints with network information and the physical network topology. Figure 11 illustrates how the fabric administrator can view the virtual machines inside any EPG and identify each of them by name, correlate the specific RHVH node where they are running, and learn the IP or MAC information and physical ports, as well as the encapsulation used to attach to the fabric.

Figure 11. The administrator can see the information of endpoints connected to each EPG, correlating network and virtual identity

6.4. Implementing distributed Layer 2 to Layer 4 security policies

In addition to defining network connectivity in the form of EPGs mapped to logical networks, fabric and security administrators can also control connectivity between those logical networks. In Cisco ACI, EPGs that are not part of the preferred group cannot, by default, communicate with other EPGs, even if they share the same Layer 2 broadcast domain.

In Figure 12, we can see two RHV clusters, one dedicated to production workloads and another to QA testing. The fabric administrator creates two EPGs to generate corresponding logical networks, one for Web-PROD and another for Web-QA. Cisco APIC dynamically chooses different VLAN IDs for each of the EPGs and logical networks. Both EPGs are associated to the same bridge domain (BD Web), which has a /24 subnet associated, with the distributed default gateway address configured on the bridge domain. Because there is no contract between the Web-PROD and Web-QA EPGs, the virtual machines of the two environments are isolated from one another. If at any moment the RHVM administrator wants to promote a workload from QA to production, it is sufficient to migrate the virtual machine from one cluster to the other, placing it on the corresponding logical network.

Figure 12. Multiple EPGs and corresponding logical networks can share the same subnet, thus facilitating moves between QA and production

After a virtual machine is moved from the QA cluster to the PROD cluster, it can be placed on the EPG-Web-PROD logical network. After the virtual machine is connected there, it has the privileges and policies of production virtual machines without any further configuration. The virtual machine does not need to receive a new IP address, and it continues communicating with the same default gateway.

Figure 13 shows another example of the Cisco ACI distributed security policy in place. In this case, the administrator has defined contracts to allow connectivity from external addresses toward the virtual machines in the EPG-Web. In turn, those virtual machines can communicate with the Tomcat application in the EPG-App. Finally, only the virtual machines in EPG-App can reach the Oracle databases, which may be running on bare-metal servers.
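The zero-trust behavior described above can be modeled in a few lines. This is an illustrative model, not ACI code: two EPGs communicate only if a contract binds them as consumer and provider. The EPG names mirror the Web/App/DB example; the contract names and the one-directional consumer-to-provider simplification are assumptions of this sketch.

```python
# Illustrative model of distributed policy: communication is denied unless
# some contract binds the source EPG (consumer) to the destination (provider).
# Contract and EPG names are taken from the example in the text.

contracts = {
    "external-to-web": {"consumer": "L3Out-Ext", "provider": "EPG-Web"},
    "web-to-app":      {"consumer": "EPG-Web",   "provider": "EPG-App"},
    "app-to-db":       {"consumer": "EPG-App",   "provider": "EPG-DB"},
}

def allowed(src_epg: str, dst_epg: str) -> bool:
    """True if some contract lets src_epg (consumer) reach dst_epg (provider)."""
    return any(c["consumer"] == src_epg and c["provider"] == dst_epg
               for c in contracts.values())
```

With this rule set, EPG-Web can reach EPG-App but not EPG-DB, exactly the isolation pattern of Figure 13.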

Figure 13. The Cisco ACI contract model can be used to enforce security between logical networks

6.5. Extending RHV networks across data centers

Cisco ACI offers various options to support dispersed data centers. Customers who have clusters in different data centers can use a Cisco ACI RHV VMM domain to simplify the environment without requiring a data center interconnect solution. For instance, Cisco ACI multipod enables designs with multiple leaf-and-spine deployments (PODs) that can be interconnected over an IP network and managed by a single Cisco APIC cluster. The various PODs can be in different server rooms, or even in different data centers that are many miles or kilometers apart. Each POD works as a different failure domain from a fabric-control-plane perspective, whereas the single Cisco APIC cluster offers a single management and policy plane.

When using Cisco ACI multipod with the RHV VMM domain, it is extremely simple to extend RHV logical networks across physical data centers. This capability enables the following solutions:

RHV stretched clusters: Customers who have data centers with different rooms, or different physical data centers connected through low-latency, high-bandwidth links, can stretch a cluster of hypervisors. Previously, even if storage latency requirements were met, stretching clusters was very complicated because of the difficulty and inefficiency of stretching the VLANs and subnets associated with logical networks. However, with Cisco ACI multipod and the RHV VMM domain, RHV clusters can have hosts physically sitting across different racks, in different rooms, or even in different data centers (refer to Figure 14).

Figure 14. With Cisco ACI multipod, customers can implement stretched RHV clusters

RHV data centers with clusters in different locations: Customers may have different RHV clusters in different physical data centers. In this scenario, the clusters normally have dedicated storage. Common examples are clusters dedicated to production, development, or disaster-recovery designs. Figure 15 illustrates this design, combined again with Cisco ACI multipod, where we see two networks (orange and purple) available to both clusters. This setup is similar to the illustration in Figure 14. Because the networks are available to both clusters, it is very simple to implement disaster-recovery solutions.

Figure 15. Cisco ACI multipod allows extending networks between different RHV clusters on different physical fabrics

Multiple RHV environments: A single Cisco APIC cluster can support multiple VMM domains; therefore, it can interface simultaneously with multiple RHVMs. All that is required to configure a logical network is to associate an EPG with the RHV VMM domain, a very simple task that requires a couple of clicks in the Cisco APIC GUI, a few lines of configuration, or a simple API call. The same EPG can be associated with multiple RHVMs in the same way. Therefore, customers can schedule workloads to be distributed on different clusters, under different RHVMs, in different data centers, but all part of the same application tier, as seen in Figure 16. This feature enables building completely automated application-deployment pipelines with the greatest availability possible.
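At the object-model level, the "simple API call" that extends one EPG to several RHVMs amounts to a single fvAEPg update carrying one fvRsDomAtt child per VMM domain. The builder below is a hedged sketch of that payload; the domain names are hypothetical, and the attribute set should be checked against your APIC release.

```python
# Hedged sketch (not Cisco tooling): one fvAEPg update with one fvRsDomAtt
# child per Red Hat VMM domain, extending the same EPG to multiple RHVMs.

def epg_multi_domain_body(epg_name: str, domains: list):
    """Body mapping a single EPG to every VMM domain in `domains`."""
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [
                {"fvRsDomAtt": {"attributes": {
                    "tDn": f"uni/vmmp-Redhat/dom-{dom}",
                    "resImedcy": "pre-provision"}}}
                for dom in domains
            ],
        }
    }

body = epg_multi_domain_body("Web", ["RHV-DC1", "RHV-DC2"])
```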

Figure 16. It is possible to have multiple VMM domains on a single Cisco ACI multipod environment, thereby extending the same networks to more than one Red Hat Virtualization Manager

7. Migrating from physical to VMM domains

Many customers using Cisco ACI and RHV used physical domains before Cisco APIC Release 3.1. This configuration is illustrated in Figure 17. We see an EPG NETWORK-01 that has been statically mapped to the virtual port channels (vPCs) connecting to two RHV hosts using VLAN 500. A corresponding logical network has been manually configured by the RHVM administrator.

Figure 17. Example of a configuration using physical domains

After customers upgrade the fabric to a release that supports the Red Hat VMM domain, they may want to migrate virtual machines connected on the physical domains to VMM domains. The VMM domain must be created with different VLAN pools than the ones associated to the physical domain. After the VMM domain has been created, the first step of this migration is to map the existing EPG to the RHV VMM domain, as shown in Figure 18.

Figure 18. Existing EPG mapped to the new RHV VMM domain

This mapping creates a new logical network that is named by concatenating the name of the EPG with those of the tenant and application profile, as explained earlier in this document. The new logical network is assigned a dynamic VLAN from the pool of the VMM domain. After this configuration is done, the RHVM administrator can reconfigure the vNIC of the virtual machines to change to the new logical network. Because the endpoint remains on the same EPG, its networking properties do not change, as illustrated in Figure 19.
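The naming convention just described can be sketched as a one-line function. The "|" separator used here mirrors the convention Cisco ACI uses for other VMM integrations and is an assumption of this sketch; confirm the exact separator on your release before scripting against it.

```python
# Sketch of the logical-network naming convention: tenant, application
# profile, and EPG names concatenated. The "|" separator is an assumption.

def logical_network_name(tenant: str, ap: str, epg: str, sep: str = "|") -> str:
    """Name APIC would give the RHVM logical network for this EPG."""
    return sep.join((tenant, ap, epg))

name = logical_network_name("T1", "AP1", "Web")
```

A helper like this is useful when driving the RHVM side of a migration from a script, because the vNIC profile must reference the APIC-generated name exactly.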

Figure 19. Reconnecting the VM vNIC to the new logical network

Finally, as shown in Figure 20, after all virtual machines for a given EPG have been migrated to the logical network managed by the VMM domain, administrators can clean up the configuration by removing the static path bindings, the physical domain association, and the old logical network.

Figure 20. When all vNICs are on the new logical network, cleanup can be done

This process can be easily automated using different configuration-management tools. For instance, using Ansible 2.5, the following tasks would accomplish the previously described process of mapping the existing EPG to the RHV VMM domain and moving a vNIC for a virtual machine to the new logical network:

- name: associate an EPG to the RHV VMM Domain
  aci_epg_to_domain:
    hostname: "{{ inventory_hostname }}"
    private_key: /root/admin.key
    validate_certs: no
    tenant: "{{ tenant_name }}"
    ap: "{{ ap_name }}"
    epg: "{{ epg_name }}"
    domain: "{{ vmm_name }}"
    domain_type: vmm
    vm_provider: redhat
    resolution_immediacy: pre-provision
    deploy_immediacy: immediate
    state: present

- name: change VM NIC to new Logical Network
  ovirt_nics:
    auth: "{{ ovirt_auth }}"
    vm: "{{ inventory_hostname }}"
    name: nic1
    profile: "{{ tenant_name }}|{{ ap_name }}|{{ epg_name }}"
    network: "{{ tenant_name }}|{{ ap_name }}|{{ epg_name }}"

8. Conclusions

By combining Cisco ACI and RHV, customers can design and build highly available and completely automated data center infrastructures. Since Cisco APIC Release 3.1, the integration between Cisco APIC and RHVM with a VMM domain provides great advantages for customers, including:

Simpler and faster network provisioning

Distributed security between Red Hat logical networks

Simplified data center operations

Cisco ACI multipod and Multi-Site capabilities can be combined with RHV VMM integration to build dispersed data centers with seamless network connectivity within and across RHV clusters. Finally, administrators can use the single point of management and the Cisco APIC APIs to offer an automated consumption model for Red Hat environments.

Appendix A: Connecting RHV hosts to the Cisco ACI leafs

In most configurations, RHV hosts connect to the Cisco ACI fabric using multiple physical links. In this appendix, we review the basics of the interface configuration, assuming the RHV host server is connected using redundant 10 or 25 Gigabit Ethernet links (Figure A1).

Figure A1. Common deployment

The figure depicts a common deployment with an RHV host connected using redundant links to a pair of Cisco ACI leafs configured as a virtual-port-channel (vPC) pair:

The RHV host's redundant interfaces are configured as part of a Linux bond using mode 4 to select standard Link Aggregation Control Protocol (LACP).

The Cisco ACI leafs are a vPC pair, and the fabric administrator has configured a vPC policy group that uses LACP Active as the port-channel policy.

The fabric administrator has configured an AEP (AEP-RHV-Cluster-01) to be used for all ports toward the RHV hosts in the particular cluster. The vPC policy group is associated with the AEP configured for the cluster.

The AEP is associated with a physical domain and a Red Hat VMM domain, each managing a pool of VLAN IDs.

In Figure A1, we see the default logical network for RHV management, ovirtmgmt. This network is configured by default upon RHVM installation and is commonly used for the RHVM to communicate with the agents running on the RHV hosts. This network has a corresponding EPG in Cisco ACI. However, because this network is not created by Cisco APIC, its EPG must be mapped to a physical domain. To configure the EPG on the required ports, the fabric administrator has two options:

Configure a static path under the EPG, selecting the vPC toward the RHV host ports.

Associate the EPG directly with the AEP, thereby making the ovirtmgmt network present on every port without any further configuration.

An RHV deployment requires more logical networks, including some for infrastructure traffic such as migration, some for console or IP storage, and many for virtual-machine data traffic. With the exception of the ovirtmgmt logical network, it is possible to create all other logical networks on Cisco APIC and map them to the VMM domain.

Appendix B: Using ACI distributed networking without contracts with preferred groups

Cisco ACI was designed to allow implementing zero-trust networks, where communication is allowed only if explicitly configured. In this sense, by default a Cisco ACI fabric behaves like a single distributed firewall. However, organizations that are not ready to implement a positive security model to define connectivity can still benefit from Cisco ACI distributed networking, network automation, multipod, and so on.

One way to remove the default positive-security filtering is to disable policy enforcement for a given VRF instance. However, it may be desirable to have a more granular approach, where policy is used for some EPGs in the VRF instance but not for others. In this case, the EPGs created for each corresponding network or application tier can be configured in Cisco APIC as part of the preferred group for the VRF. With this configuration option, the EPGs inside the preferred group can communicate freely without requiring contracts. This configuration mimics the behavior of a traditional network, where routing occurs between any two connected subnets (refer to Figure B1). The figure shows four EPGs (EPG-A, B, C, and D). It also shows an external EPG that represents devices that are not connected to the fabric and are accessible through one L3Out interface. The Cisco ACI fabric provides distributed routing and switching between these EPGs without any restrictions or need for contracts. By contrast, EPG-1, EPG-2, and EPG-3 are outside of the preferred group, and communication between them requires contracts.
Figure B1. An illustration of the concept of the EPG preferred group
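At the object-model level, adding an EPG to the preferred group is a one-attribute change. The sketch below builds that payload, assuming the prefGrMemb attribute of the fvAEPg object; note that the preferred group must also be enabled at the VRF level, a step omitted here, and the EPG name is illustrative.

```python
# Hedged sketch: include an EPG in the VRF's preferred group by setting
# prefGrMemb="include" on the fvAEPg object. Enabling the preferred group
# on the VRF itself is a separate step, not shown.

def preferred_group_body(epg_name: str):
    """Body marking an EPG as a preferred-group member."""
    return {"fvAEPg": {"attributes": {"name": epg_name,
                                      "prefGrMemb": "include"}}}

body = preferred_group_body("EPG-A")
```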


Cisco ACI Virtualization Guide, Release 2.2(1)

Cisco ACI Virtualization Guide, Release 2.2(1) First Published: 2017-01-18 Last Modified: 2017-07-14 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Building NFV Solutions with OpenStack and Cisco ACI

Building NFV Solutions with OpenStack and Cisco ACI Building NFV Solutions with OpenStack and Cisco ACI Domenico Dastoli @domdastoli INSBU Technical Marketing Engineer Iftikhar Rathore - INSBU Technical Marketing Engineer Agenda Brief Introduction to Cisco

More information

Layer 4 to Layer 7 Design

Layer 4 to Layer 7 Design Service Graphs and Layer 4 to Layer 7 Services Integration, page 1 Firewall Service Graphs, page 5 Service Node Failover, page 10 Service Graphs with Multiple Consumers and Providers, page 12 Reusing a

More information

Validating Service Provisioning

Validating Service Provisioning Validating Service Provisioning Cisco EPN Manager 2.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes First Published: October 2014 Release 1.0.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408

More information

CPS UDC MoP for Session Migration, Release

CPS UDC MoP for Session Migration, Release CPS UDC MoP for Session Migration, Release 13.1.0 First Published: 2017-08-18 Last Modified: 2017-08-18 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Cisco ACI Multi-Pod and Service Node Integration

Cisco ACI Multi-Pod and Service Node Integration White Paper Cisco ACI Multi-Pod and Service Node Integration 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 68 Contents Introduction... 3 Prerequisites...

More information

Managing Device Software Images

Managing Device Software Images Managing Device Software Images Cisco DNA Center 1.1.2 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts

More information

VXLAN Overview: Cisco Nexus 9000 Series Switches

VXLAN Overview: Cisco Nexus 9000 Series Switches White Paper VXLAN Overview: Cisco Nexus 9000 Series Switches What You Will Learn Traditional network segmentation has been provided by VLANs that are standardized under the IEEE 802.1Q group. VLANs provide

More information

Cisco CloudCenter Solution with Cisco ACI: Common Use Cases

Cisco CloudCenter Solution with Cisco ACI: Common Use Cases Cisco CloudCenter Solution with Cisco ACI: Common Use Cases Cisco ACI increases network security, automates communication policies based on business-relevant application requirements, and decreases developer

More information

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework

Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework White Paper Deploy Microsoft SQL Server 2014 on a Cisco Application Centric Infrastructure Policy Framework August 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

More information

Cisco ACI Virtual Machine Networking

Cisco ACI Virtual Machine Networking This chapter contains the following sections: Cisco ACI VM Networking Supports Multiple Vendors' Virtual Machine Managers, page 1 Virtual Machine Manager Domain Main Components, page 2 Virtual Machine

More information

Cisco Nexus 1000V for VMware vsphere VDP Configuration Guide, Release 5.x

Cisco Nexus 1000V for VMware vsphere VDP Configuration Guide, Release 5.x Cisco Nexus 1000V for VMware vsphere VDP Configuration Guide, Release 5.x First Published: August 12, 2014 Last Modified: November 10, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive

More information

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k)

Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Verified Scalability Guide for Cisco APIC, Release 3.0(1k) and Cisco Nexus 9000 Series ACI-Mode Switches, Release 13.0(1k) Overview 2 General Scalability Limits 2 Fabric Topology, SPAN, Tenants, Contexts

More information

Application Launcher User Guide

Application Launcher User Guide Application Launcher User Guide Version 1.0 Published: 2016-09-30 MURAL User Guide Copyright 2016, Cisco Systems, Inc. Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Cisco UCS Director F5 BIG-IP Management Guide, Release 5.0

Cisco UCS Director F5 BIG-IP Management Guide, Release 5.0 First Published: July 31, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 Text

More information

Cisco Application Centric Infrastructure

Cisco Application Centric Infrastructure Data Sheet Cisco Application Centric Infrastructure What s Inside At a glance: Cisco ACI solution Main benefits Cisco ACI building blocks Main features Fabric Management and Automation Network Security

More information

NetFlow Configuration Guide

NetFlow Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Cisco ACI Virtualization Guide, Release 2.1(1)

Cisco ACI Virtualization Guide, Release 2.1(1) First Published: 2016-10-02 Last Modified: 2017-05-09 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

ACI Fabric Endpoint Learning

ACI Fabric Endpoint Learning White Paper ACI Fabric Endpoint Learning 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 45 Contents Introduction... 3 Goals of this document...

More information

Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide

Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide January 17, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Networking Domains. Physical domain profiles (physdomp) are typically used for bare metal server attachment and management access.

Networking Domains. Physical domain profiles (physdomp) are typically used for bare metal server attachment and management access. This chapter contains the following sections:, on page 1 Bridge Domains, on page 2 VMM Domains, on page 2 Configuring Physical Domains, on page 4 A fabric administrator creates domain policies that configure

More information

Cisco IT Compute at Scale on Cisco ACI

Cisco IT Compute at Scale on Cisco ACI Cisco IT ACI Deployment White Papers Cisco IT Compute at Scale on Cisco ACI This is the fourth white paper in a series of case studies that explain how Cisco IT deployed ACI to deliver improved business

More information

Design Guide for Cisco ACI with Avi Vantage

Design Guide for Cisco ACI with Avi Vantage Page 1 of 23 Design Guide for Cisco ACI with Avi Vantage view online Overview Cisco ACI Cisco Application Centric Infrastructure (ACI) is a software defined networking solution offered by Cisco for data

More information

Multi-Site Use Cases. Cisco ACI Multi-Site Service Integration. Supported Use Cases. East-West Intra-VRF/Non-Shared Service

Multi-Site Use Cases. Cisco ACI Multi-Site Service Integration. Supported Use Cases. East-West Intra-VRF/Non-Shared Service Cisco ACI Multi-Site Service Integration, on page 1 Cisco ACI Multi-Site Back-to-Back Spine Connectivity Across Sites Without IPN, on page 8 Bridge Domain with Layer 2 Broadcast Extension, on page 9 Bridge

More information

Deploying IWAN Routers

Deploying IWAN Routers Deploying IWAN Routers Cisco Prime Infrastructure 3.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Cisco Unified Communications Self Care Portal User Guide, Release

Cisco Unified Communications Self Care Portal User Guide, Release Cisco Unified Communications Self Care Portal User Guide, Release 10.0.0 First Published: December 03, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Cisco ACI Virtualization Guide, Release 2.2(2)

Cisco ACI Virtualization Guide, Release 2.2(2) First Published: 2017-04-11 Last Modified: 2018-01-31 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Cisco UCS C-Series IMC Emulator Quick Start Guide. Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9

Cisco UCS C-Series IMC Emulator Quick Start Guide. Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9 Cisco UCS C-Series IMC Emulator Quick Start Guide Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9 Revised: October 6, 2017, Cisco IMC Emulator Overview About

More information

Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2

Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2 Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2 Software Release Notes First Published: April 2016 Software Version 5.2 Cisco Systems, Inc. 1 www.cisco.com 2 Preface Change History

More information

PSOACI Why ACI: An overview and a customer (BBVA) perspective. Technology Officer DC EMEAR Cisco

PSOACI Why ACI: An overview and a customer (BBVA) perspective. Technology Officer DC EMEAR Cisco PSOACI-4592 Why ACI: An overview and a customer (BBVA) perspective TJ Bijlsma César Martinez Joaquin Crespo Technology Officer DC EMEAR Cisco Lead Architect BBVA Lead Architect BBVA Cisco Spark How Questions?

More information

Cisco ACI vpod. One intent: Any workload, Any location, Any cloud. Introduction

Cisco ACI vpod. One intent: Any workload, Any location, Any cloud. Introduction Cisco ACI vpod One intent: Any workload, Any location, Any cloud Organizations are increasingly adopting hybrid data center models to meet their infrastructure demands, to get flexibility and to optimize

More information

Cisco ACI with Cisco AVS

Cisco ACI with Cisco AVS This chapter includes the following sections: Cisco AVS Overview, page 1 Cisco AVS Installation, page 6 Key Post-Installation Configuration Tasks for the Cisco AVS, page 43 Distributed Firewall, page 62

More information

Cisco IOS Flexible NetFlow Command Reference

Cisco IOS Flexible NetFlow Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Network Virtualization

Network Virtualization Network Virtualization Petr Grygárek 1 Traditional Virtualization Techniques Network Virtualization Implementation of separate logical network environments (Virtual Networks, VNs) for multiple groups on

More information

5 days lecture course and hands-on lab $3,295 USD 33 Digital Version

5 days lecture course and hands-on lab $3,295 USD 33 Digital Version Course: Duration: Fees: Cisco Learning Credits: Kit: DCAC9K v1.1 Cisco Data Center Application Centric Infrastructure 5 days lecture course and hands-on lab $3,295 USD 33 Digital Version Course Details

More information

Cisco ACI Multi-Site, Release 1.1(1), Release Notes

Cisco ACI Multi-Site, Release 1.1(1), Release Notes Cisco ACI Multi-Site, Release 1.1(1), Release Notes This document describes the features, caveats, and limitations for the Cisco Application Centric Infrastructure Multi-Site software. The Cisco Application

More information

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference July 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

More information

Cisco TelePresence FindMe Cisco TMSPE version 1.2

Cisco TelePresence FindMe Cisco TMSPE version 1.2 Cisco TelePresence FindMe Cisco TMSPE version 1.2 User Guide May 2014 Contents Getting started 1 Keeping your FindMe profile up to date 5 Changing your provisioning password 8 Getting started Cisco TelePresence

More information

Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide

Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide Cisco UCS Virtual Interface Card Drivers for Windows Installation Guide First Published: 2011-09-06 Last Modified: 2015-09-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA

More information

Cisco ACI Simulator Release Notes, Release 1.1(1j)

Cisco ACI Simulator Release Notes, Release 1.1(1j) Cisco ACI Simulator Release Notes, This document provides the compatibility information, usage guidelines, and the scale values that were validated in testing this Cisco ACI Simulator release. Use this

More information

Virtual Security Gateway Overview

Virtual Security Gateway Overview This chapter contains the following sections: Information About the Cisco Virtual Security Gateway, page 1 Cisco Virtual Security Gateway Configuration for the Network, page 10 Feature History for Overview,

More information

This document was written and prepared by Dale Ritchie in Cisco s Collaboration Infrastructure Business Unit (CIBU), Oslo, Norway.

This document was written and prepared by Dale Ritchie in Cisco s Collaboration Infrastructure Business Unit (CIBU), Oslo, Norway. Cisco TelePresence Management Suite Provisioning Extension Why upgrade to Cisco TMSPE? White Paper August 01 This document was written and prepared by Dale Ritchie in Cisco s Collaboration Infrastructure

More information

Media Services Proxy Command Reference

Media Services Proxy Command Reference Media Services Proxy Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco Terminal Services (TS) Agent Guide, Version 1.1

Cisco Terminal Services (TS) Agent Guide, Version 1.1 First Published: 2017-05-03 Last Modified: 2017-10-13 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution First Published: 2016-12-21 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Microsegmentation with Cisco ACI

Microsegmentation with Cisco ACI This chapter contains the following sections:, page 1 Microsegmentation with the Cisco Application Centric Infrastructure (ACI) provides the ability to automatically assign endpoints to logical security

More information

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Cisco UCS Director API Integration and Customization Guide, Release 5.4

Cisco UCS Director API Integration and Customization Guide, Release 5.4 Cisco UCS Director API Integration and Customization Guide, Release 5.4 First Published: November 03, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017

MP-BGP VxLAN, ACI & Demo. Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 MP-BGP VxLAN, ACI & Demo Brian Kvisgaard System Engineer, CCIE SP #41039 November 2017 Datacenter solutions Programmable Fabric Classic Ethernet VxLAN-BGP EVPN standard-based Cisco DCNM Automation Modern

More information

Cisco Nexus 1000V for KVM REST API Configuration Guide, Release 5.x

Cisco Nexus 1000V for KVM REST API Configuration Guide, Release 5.x Cisco Nexus 1000V for KVM REST API Configuration Guide, Release 5.x First Published: August 01, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Cisco Jabber IM for iphone Frequently Asked Questions

Cisco Jabber IM for iphone Frequently Asked Questions Frequently Asked Questions Cisco Jabber IM for iphone Frequently Asked Questions Frequently Asked Questions 2 Basics 2 Connectivity 3 Contacts 4 Calls 4 Instant Messaging 4 Meetings 5 Support and Feedback

More information

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

New and Changed Information

New and Changed Information This chapter contains the following sections:, page 1 The following table provides an overview of the significant changes to this guide for this current release. The table does not provide an exhaustive

More information