Ultra M Solutions Guide, Release 5.5.x


Ultra M Solutions Guide, Release 5.5.x

First Published: July 27, 2017
Last Updated: November 29, 2017

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA USA

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

All printed copies and duplicate soft copies are considered uncontrolled copies, and the original online version should be referred to for the latest version.

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, see the trademarks page on the Cisco website. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

© 2017 Cisco Systems, Inc. All rights reserved.

Table of Contents

Conventions
Obtaining Documentation and Submitting a Service Request
Ultra M Overview
    VNF Support
    Ultra M Models
    Functional Components
    Virtual Machine Allocations
        VM Requirements
Hardware Specifications
    Cisco Catalyst Switches
        Catalyst C2960XR-48TD-I Switch
        Catalyst T-S Switch
    Cisco Nexus Switches
        Nexus YC-EX
        Nexus 9236C
    UCS C-Series Servers
        Server Functions and Quantities
        VM Deployment per Node Type
        Server Configurations
        Storage
Software Specifications
Networking Overview
    UCS-C240 Network Interfaces
    VIM Network Topology
    Openstack Tenant Networking
    VNF Tenant Networks
        Supporting Trunking on VNF Service Ports
    Layer 1 Leaf and Spine Topology
        Non-Hyper-converged Ultra M Model Network Topology
        Hyper-converged Ultra M Single and Multi-VNF Model Network Topology
Deploying the Ultra M Solution
    Deployment Workflow
    Plan Your Deployment
        Network Planning
    Install and Cable the Hardware
        Related Documentation
        Rack Layout
        Cable the Hardware
    Configure the Switches
    Prepare the UCS C-Series Hardware
        Prepare the Staging Server/Ultra M Manager Node
        Prepare the Controller Nodes
        Prepare the Compute Nodes
        Prepare the OSD Compute Nodes
        Prepare the Ceph Nodes
    Deploy the Virtual Infrastructure Manager
        Deploy the VIM for Hyper-Converged Ultra M Models
        Deploy the VIM for Non-Hyper-Converged Ultra M Models
    Configure SR-IOV
    Deploy the USP-Based VNF
Event and Syslog Management Within the Ultra M Solution
    Syslog Proxy
    Event Aggregation
    Install the Ultra M Manager RPM
    Restarting the Ultra M Manager Service
        Check the Ultra M Manager Service Status
        Stop the Ultra M Manager Service
        Start the Ultra M Manager Service
    Uninstalling the Ultra M Manager
    Encrypting Passwords in the ultram_cfg.yaml File
Appendix: Network Definitions (Layer 2 and 3)
Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures
    Prerequisites
    Deploying the VIM Orchestrator
        Mount the Red Hat ISO Image
        Install RHEL
        Prepare Red Hat for the VIM Orchestrator Installation
        Install the VIM Orchestrator
    Deploying the VIM
        Node List
        Import Node
        Introspection
        Install the VIM
        Verify the VIM Installation
    Configure SR-IOV
        Creating Flat Networks for Trunking on Service Ports
Appendix: Example ultram_cfg.yaml File
Appendix: Ultra M MIB
Appendix: Ultra M Component Event Severity and Fault Code Mappings
    OpenStack Events
        Component: Ceph
        Component: Cinder
        Component: Neutron
        Component: Nova
        Component: NTP
        Component: PCS
        Component: Services
        Component: Rabbitmqctl
    UCS Server Events
    UAS Events
Appendix: Ultra M Troubleshooting
    Ultra M Component Reference Documentation
        UCS C-Series Server
        Nexus 9000 Series Switch
        Catalyst 2960 Switch
        Red Hat
        OpenStack
        UAS
        UGP
    Collecting Support Information
        From UCS
        From Host/Server/Compute/Controller/Linux
        From Switches
        From ESC (Active and Standby)
        From UAS
        From UEM (Active and Standby)
        From UGP (Through StarOS)
    About Ultra M Manager Log Files
Appendix: Using the UCS Utilities Within the Ultra M Manager
    Perform Pre-Upgrade Preparation
    Shutdown the ESC VMs
    Upgrade the Compute Node Server Software
    Upgrade the OSD Compute Node Server Software
    Restarting the UAS and ESC (VNFM) VMs
    Upgrade the Controller Node Server Software
    Upgrading Firmware on UCS Bare Metal
    Upgrading Firmware on the OSP-D Server/Ultra M Manager Node
    Controlling UCS BIOS Parameters Using ultram_ucs_utils.py Script
Appendix: ultram_ucs_utils.py Help

Conventions

This document uses the following conventions.

bold font - Commands, keywords, and user-entered text appear in bold font.
italic font - Document titles, new or emphasized terms, and arguments for which you supply values are in italic font.
[ ] - Elements in square brackets are optional.
{x | y | z} - Required alternative keywords are grouped in braces and separated by vertical bars.
[x | y | z] - Optional alternative keywords are grouped in brackets and separated by vertical bars.
string - A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.
courier font - Terminal sessions and information the system displays appear in courier font.
< > - Nonprinting characters, such as passwords, are in angle brackets.
[ ] - Default responses to system prompts are in square brackets.
!, # - An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.

Note: Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Caution: Means reader be careful. In this situation, you might perform an action that could result in equipment damage or loss of data.

Warning: IMPORTANT SAFETY INSTRUCTIONS. Means danger. You are in a situation that could cause bodily injury. Before you work on any equipment, be aware of the hazards involved with electrical circuitry and be familiar with standard practices for preventing accidents. Use the statement number provided at the end of each warning to locate its translation in the translated safety warnings that accompanied this device. SAVE THESE INSTRUCTIONS.

Regulatory: Provided for additional information and to comply with regulatory and customer requirements.

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What's New in Cisco Product Documentation.

Subscribe to What's New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed and have content delivered directly to your desktop using a reader application. The RSS feeds are a free service.

Ultra M Overview

Ultra M is a pre-packaged and validated virtualized mobile packet core solution designed to simplify the deployment of virtual network functions (VNFs). The solution combines the Cisco Ultra Service Platform (USP) architecture, Cisco Validated OpenStack infrastructure, and Cisco networking and computing hardware platforms into a fully integrated and scalable stack. As such, Ultra M provides the tools to instantiate and provide basic lifecycle management for VNF components on a complete OpenStack virtual infrastructure manager.

VNF Support

In this release, Ultra M supports the Ultra Gateway Platform (UGP) VNF. The UGP currently provides virtualized instances of the various 3G and 4G mobile packet core (MPC) gateways that enable mobile operators to offer enhanced mobile data services to their subscribers. The UGP addresses the scaling and redundancy limitations of VPC-SI (Single Instance) by extending the StarOS boundaries beyond a single VM. UGP allows multiple VMs to act as a single StarOS instance with shared interfaces, shared service addresses, load balancing, redundancy, and a single point of management.

Ultra M Models

Multiple Ultra M models are available:

Ultra M Small
Ultra M Medium
Ultra M Large
Ultra M Extra Small (XS)

These models are differentiated by their scale in terms of the number of active Service Functions (SFs) (as shown in Table 1), the number of VNFs supported, and architecture. Two architectures are supported: Non-Hyper-Converged and Hyper-Converged. Table 2 identifies which architecture is supported for each Ultra M model.

Non-Hyper-Converged Ultra M models are based on OpenStack 9. This architecture implements separate Ceph Storage and Compute nodes.

Hyper-Converged models are based on OpenStack 10. The Hyper-Converged architecture combines the Ceph Storage and Compute nodes; the converged node is referred to as an OSD Compute node.

Table 1 - Active Service Functions to Ultra M Model Comparison

Ultra M Model      # of Active Service Functions (SFs)
Ultra M XS         6
Ultra M Small      7
Ultra M Medium     10
Ultra M Large      14

Table 2 - Ultra M Model Supported VNF and Architecture Variances

                Non-Hyper-Converged                              Hyper-Converged
Single VNF      Ultra M Small, Ultra M Medium, Ultra M Large     Ultra M XS
Multi-VNF*      NA                                               Ultra M XS

* Multi-VNF provides support for multiple VNF configurations (up to 4 maximum). Contact your Sales or Support representative for additional information.

Functional Components

As described in Hardware Specifications, the Ultra M solution consists of multiple hardware components, including multiple servers that function as controller, compute, and storage nodes. The following functional components that comprise Ultra M are deployed on this hardware:

OpenStack Controller: Serves as the Virtual Infrastructure Manager (VIM). NOTE: In this release, all VNFs in a multi-VNF Ultra M are deployed as a single site leveraging a single VIM.

Ultra Automation Services (UAS): A suite of tools provided to simplify the deployment process:
AutoIT-NFVI: Automates the VIM Orchestrator and VIM installation processes.
AutoIT-VNF: Provides storage and management for system ISOs.
AutoDeploy: Initiates the deployment of the VNFM and VNF components through a single deployment script.
AutoVNF: Initiated by AutoDeploy, AutoVNF is directly responsible for deploying the VNFM and VNF components based on inputs received from AutoDeploy.

Ultra Web Service (UWS): Provides a web-based graphical user interface (GUI) and a set of functional modules that enable users to manage and interact with the USP VNF.

Cisco Elastic Services Controller (ESC): Serves as the Virtual Network Function Manager (VNFM). NOTE: ESC is the only VNFM supported in this release.

VNF Components: USP-based VNFs are comprised of multiple components providing different functions:

Ultra Element Manager (UEM): Serves as the Element Management System (EMS, also known as the VNF-EM); it manages all of the major components of the USP-based VNF architecture.

Control Function (CF): A central sub-system of the UGP VNF, the CF works with the UEM to perform lifecycle events and monitoring for the UGP VNF.

Service Function (SF): Provides service context (user I/O ports) and handles protocol signaling, session processing tasks, and flow control (demux).

Figure 1 - Ultra M Components

Virtual Machine Allocations

Each of the Ultra M functional components is deployed on one or more virtual machines (VMs) based on its redundancy requirements, as identified in Table 3. Some of these component VMs are deployed on a single compute node as described in VM Deployment per Node Type. All deployment models use three OpenStack controllers to provide VIM layer redundancy and upgradability.

Table 3 - Function VM Requirements per Ultra M Model
(Columns: Non-Hyper-Converged - Small, Medium, Large; Hyper-Converged - XS Single VNF, XS Multi-VNF)

OSP-D*: Not applicable | Not applicable | Not applicable | 1 | 1
AutoIT-NFVI: Not applicable | Not applicable | Not applicable | 1 | 1
AutoIT-VNF
AutoDeploy
AutoVNF: per VNF
ESC (VNFM): per VNF
UEM: per VNF
CF: per VNF
SF: per VNF

* OSP-D is deployed as a VM for Hyper-Converged Ultra M models.

VM Requirements

The CF, SF, UEM, and ESC VMs require the resource allocations identified in Table 4. The host resources are included in these numbers.

Table 4 - VM Resource Allocation
(Columns: Virtual Machine, vCPU, RAM (GB), Root Disk (GB))

OSP-D*
AutoIT-NFVI**
AutoIT-VNF
AutoDeploy**
AutoVNF
ESC
UEM
CF
SF

NOTE: 4 vCPUs, 2 GB RAM, and 54 GB of root disk are reserved for the host.

* OSP-D is deployed as a VM for Hyper-Converged Ultra M models. Though the recommended root disk size is 200 GB, additional space can be allocated if available.

** AutoIT-NFVI is used to deploy the VIM Orchestrator (Undercloud) and VIM (Overcloud) for Hyper-Converged Ultra M models. AutoIT-NFVI, AutoDeploy, and OSP-D are installed as VMs on the same physical server in this scenario.
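Because each node must carry the sum of the allocations from Table 4 for the VMs assigned to it (plus the host reservation), a small sizing helper can make the arithmetic explicit. The sketch below is illustrative only: the VM names, per-VM vCPU/RAM/disk values, and VM counts are placeholders and must be replaced with the figures from Tables 3 and 4 for the Ultra M model being deployed.

#!/usr/bin/env python3
"""Rough sizing helper: sum per-VM resource allocations for a deployment.

All values below are placeholders -- substitute the vCPU, RAM, and
root-disk allocations from Table 4 and the VM counts from Table 3.
"""

# Placeholder per-VM allocations: (vCPU, RAM GB, root disk GB).
VM_ALLOCATIONS = {
    "esc": (2, 4, 40),    # hypothetical values -- see Table 4
    "uem": (2, 4, 40),
    "cf":  (8, 16, 40),
    "sf":  (8, 16, 40),
}

# Placeholder VM counts for one VNF -- see Table 3 for the actual figures.
VM_COUNTS = {"esc": 2, "uem": 3, "cf": 2, "sf": 9}

def total_requirements(allocations, counts):
    """Return the aggregate (vCPU, RAM GB, root disk GB) across all VMs."""
    totals = [0, 0, 0]
    for vm, count in counts.items():
        for index, value in enumerate(allocations[vm]):
            totals[index] += value * count
    return tuple(totals)

if __name__ == "__main__":
    vcpu, ram, disk = total_requirements(VM_ALLOCATIONS, VM_COUNTS)
    print(f"Total: {vcpu} vCPU, {ram} GB RAM, {disk} GB root disk")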

Hardware Specifications

Ultra M deployments use the following hardware:

Cisco Catalyst Switches
Cisco Nexus Switches
UCS C-Series Servers

NOTE: The specific component software and firmware versions identified in the sections that follow have been validated in this Ultra M solution release.

Cisco Catalyst Switches

Cisco Catalyst switches provide physical Layer 1 switching for Ultra M components on the management and provisioning networks. One of two switch models is used based on the Ultra M model being deployed:

Catalyst C2960XR-48TD-I Switch
Catalyst T-S Switch

Catalyst C2960XR-48TD-I Switch

The Catalyst C2960XR-48TD-I has 48 10/100/1000 ports.

Table 5 - Catalyst 2960-XR Switch Information

Ultra M Model            Quantity      Software Version    Firmware Version
Ultra M Small            1             IOS 15.2(2)E5       Boot Loader: 15.2(3r)E1
Ultra M Medium           2             IOS 15.2(2)E5       Boot Loader: 15.2(3r)E1
Ultra M Large            2             IOS 15.2(2)E5       Boot Loader: 15.2(3r)E1
Ultra M XS Single VNF    2             IOS 15.2(2)E5       Boot Loader: 15.2(3r)E1
Ultra M XS Multi-VNF     1 per rack    IOS 15.2(2)E5       Boot Loader: 15.2(3r)E1

Catalyst T-S Switch

The Catalyst T-S has 48 10/100/1000 ports.

Table 6 - Catalyst T-S Switch Information

Ultra M Model            Quantity      Software Version    Firmware Version
Ultra M XS Single VNF    2             IOS: E              Boot Loader: 3.58
Ultra M XS Multi-VNF     1 per rack    IOS: E              Boot Loader: 3.58

Cisco Nexus Switches

Cisco Nexus switches serve as top-of-rack (TOR) leaf and end-of-rack (EOR) spine switches, providing out-of-band (OOB) network connectivity between Ultra M components. Two switch models are used for the various Ultra M models:

Nexus YC-EX
Nexus 9236C

Nexus YC-EX

Nexus YC-EX switches serve as network leafs within the Ultra M solution. Each switch has 48 10/25-Gbps Small Form-Factor Pluggable Plus (SFP+) ports and 6 40/100-Gbps Quad SFP+ (QSFP+) uplink ports.

Table 7 - Nexus YC-EX

Ultra M Model            Quantity      Software Version       Firmware Version
Ultra M Small            2             NX-OS: 7.0(3)I2(3)     7.41
Ultra M Medium           4             NX-OS: 7.0(3)I2(3)     7.41
Ultra M Large            4             NX-OS: 7.0(3)I2(3)     7.41
Ultra M XS Single VNF    2             NX-OS: 7.0(3)I5(2)     BIOS: 7.59
Ultra M XS Multi-VNF     2 per rack    NX-OS: 7.0(3)I5(2)     BIOS: 7.59

Nexus 9236C

Nexus 9236C switches serve as network spines within the Ultra M solution. Each switch provides 36 10/25/40/50/100-Gbps ports.

Table 8 - Nexus 9236C

Ultra M Model            Quantity      Software Version       Firmware Version
Ultra M Small            0             NX-OS: 7.0(3)I4(2)     BIOS: 7.51
Ultra M Medium           2             NX-OS: 7.0(3)I4(2)     BIOS:
Ultra M Large            2             NX-OS: 7.0(3)I4(2)     BIOS: 7.51
Ultra M XS Single VNF    2             NX-OS: 7.0(3)I5(2)     BIOS: 7.59
Ultra M XS Multi-VNF     2             NX-OS: 7.0(3)I5(2)     BIOS: 7.59

UCS C-Series Servers

Cisco UCS C240 M4S SFF servers host the functions and virtual machines (VMs) required by Ultra M.

Server Functions and Quantities

Server functions and quantities differ depending on the Ultra M model you are deploying:

Staging Server Node: For non-hyper-converged Ultra M models, this server hosts the Undercloud function responsible for bringing up the other servers that form the VIM.

Ultra M Manager Node: Required only for Ultra M models based on the Hyper-Converged architecture, this server hosts the AutoIT-NFVI VM, the AutoDeploy VM, and the OSP-D VM.

OpenStack Controller Nodes: These servers host the high availability (HA) cluster that serves as the VIM within the Ultra M solution. In addition, they facilitate the Ceph storage monitor function required by the Ceph Storage Nodes and/or OSD Compute Nodes.

Ceph Storage Nodes: Required only for non-hyper-converged Ultra M models, these servers are deployed in an HA cluster, with each server containing a Ceph Object Storage Daemon (OSD) providing storage capacity for the VNF and other Ultra M elements.

OSD Compute Nodes: Required only for Hyper-converged Ultra M models, these servers provide Ceph storage functionality in addition to hosting VMs for the following: the AutoIT-VNF VM, the AutoVNF HA cluster VMs, the Elastic Services Controller (ESC) Virtual Network Function Manager (VNFM) active and standby VMs, the Ultra Element Manager (UEM) VM HA cluster, and the Ultra Service Platform (USP) Control Function (CF) active and standby VMs.

Compute Nodes: For all Ultra M models, these servers host the active, standby, and demux USP Service Function (SF) VMs. However, for non-hyper-converged Ultra M models, these servers also host the VMs pertaining to the: AutoIT-VNF VM

16 Hardware Specifications UCS C-Series Servers AutoDeploy AutoVNF HA cluster VMs Elastic Services Controller (ESC) Virtual Network Function Manager (VNFM) active and standby VMs Ultra Element Manager (UEM) VM HA cluster Ultra Service Platform (USP) Control Function (CF) active and standby VMs Table 9 provides information on server quantity requirements per function for each Ultra M model. Table 9 Ultra M Server Quantities by Model and Function Ultra M Models Server Quantity (max) OSP-D / Staging Server Node Controller Nodes Ceph Storage Nodes OSD Compute Nodes Compute Nodes (max) Additional Specifications Ultra M Small Ultra M Medium Ultra M Large Ultra M XS Single VNF Ultra M XS Multi-VNF Based on node type as described in Table Based on node type as described in Table Based on node type as described in Table Based on node type as described in Table * 38** Based on node type as described in Table 11. * 3 for the first VNF, 2 per each additional VNF. ** Supports a maximum of 4 VNFs. VM Deployment per Node Type Just as the server functions and quantities differ depending on the Ultra M model you are deploying, so does the VM distribution across those nodes as shown in Figure 2, Figure 3, and Figure 4. 16

Figure 2 - VM Distribution on Server Nodes for Non-Hyper-converged Ultra M Models

Figure 3 - VM Distribution on Server Nodes for Hyper-converged Ultra M Single VNF Models

Figure 4 - VM Distribution on Server Nodes for Hyper-converged Ultra M Multi-VNF Models

20 Hardware Specifications UCS C-Series Servers Server Configurations Table 10 Non-Hyper-converged Ultra M Model UCS C240 Server Specifications by Node Type Node Type CPU RAM Storage Software Version Firmware Version Staging Server 2x 2.60 GHz 4x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(1g) CIMC: 2.0(10e) System BIOS: C240M e Controller 2x 2.60 GHz 4x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(1g) CIMC: 2.0(10e) System BIOS: C240M e Compute 2x 2.60 GHz 8x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(1g) CIMC: 2.0(10e) System BIOS: C240M e Storage 2x 2.60 GHz 4x 32GB DDR MHz RDIMM/PC4 2x 300GB 12G SAS 10K RPM SFF HDD MLOM: 4.1(1g) CIMC: 2.0(10e) System BIOS: C240M e Table 11 - Hyper-converged Ultra M Single and Multi-VNF UCS C240 Server Specifications by Node Type Node Type CPU RAM Storage Software Version Firmware Version Ultra M Manager Node* 2x 2.60 GHz 4x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(3a) CIMC: 3.0(3e) System BIOS: C240M c Controller 2x 2.60 GHz 4x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(3a) CIMC: 3.0(3e) System BIOS: C240M c Compute 2x 2.60 GHz 8x 32GB DDR MHz RDIMM/PC4 2x 1.2 TB 12G SAS HDD MLOM: 4.1(3a) CIMC: 3.0(3e) System BIOS: C240M c

Node Type: OSD Compute
CPU: 2x 2.60 GHz
RAM: 8x 32GB DDR MHz RDIMM/PC4
Storage: 4x 1.2 TB 12G SAS HDD, 2x 300 GB 12G SAS HDD, 1x 480 GB 6G SAS SATA SSD
Software/Firmware: MLOM: 4.1(3a), CIMC: 3.0(3e), System BIOS: C240M c

* OSP-D is deployed as a VM on the Ultra M Manager Node for Hyper-Converged Ultra M models.

Storage

Figure 5 displays the storage disk layout for the UCS C240 series servers used in the Ultra M solution.

Figure 5 - UCS C240 Front-Plane

NOTES:

The Boot disks contain the operating system (OS) image with which to boot the server.

The Journal disks contain the Ceph journal file(s) used to repair any inconsistencies that may occur in the Object Storage Disks.

The Object Storage Disks store object data for USP-based VNFs.

Ensure that the HDDs and SSDs used for the Boot Disk, Journal Disk, and object storage devices (OSDs) are available per the Ultra M BoM and installed in the appropriate slots as identified in Table 12.

Table 12 - UCS C240 M4S SFF Storage Specifications by Node Type

Ultra M Manager Node and Staging Server:
2 x 1.2 TB HDD for the Boot OS, configured as a virtual drive in RAID1 and placed in slots 1 and 2.

Controllers and Computes:
2 x 1.2 TB HDD for the Boot OS, configured as a virtual drive in RAID1 and placed in slots 1 and 2.

Ceph Nodes and OSD Computes*:
2 x 300 GB HDD for the Boot OS, configured as a virtual drive in RAID1 and placed in slots 1 and 2.
1 x 480 GB SSD for the Journal Disk, configured as a virtual drive in RAID0 in slot 3 (slots 3-6 are reserved for SSDs for future scaling needs).
4 x 1.2 TB HDD for the OSDs, each configured as a virtual drive in RAID0, in slots 7-10 (slots 7-24 are reserved for OSDs).

* Ceph Nodes are used in non-hyper-converged Ultra M models, while OSD Computes are used in Hyper-converged Ultra M models. Refer to Server Functions and Quantities for details.

Ensure that the RAID virtual drives are sized such that: Boot Disks < Journal Disk(s) < OSDs.

Ensure that FlexFlash is disabled on each UCS C240 M4 (the factory default).

Ensure that all drives are in the Unconfigured Good state under the Cisco SAS RAID Controller (the factory default).
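The sizing rule above (Boot Disks < Journal Disk(s) < OSDs) can be checked mechanically before the VIM is deployed. The sketch below is illustrative only; the virtual-drive sizes are taken from the OSD Compute example in Tables 11 and 12 and should be replaced with the sizes of the drives actually installed in each node.

#!/usr/bin/env python3
"""Sanity-check virtual-drive sizing against the guideline in Table 12:
Boot Disks < Journal Disk(s) < OSDs.

Sizes below are illustrative (OSD Compute node example).
"""

BOOT_VD_GB = 300      # 2 x 300 GB HDD in RAID1 -> ~300 GB usable
JOURNAL_VD_GB = 480   # 1 x 480 GB SSD in RAID0
OSD_VD_GB = 1200      # each 1.2 TB HDD as its own RAID0 virtual drive

def layout_ok(boot_gb, journal_gb, osd_gb):
    """Return True when the Boot < Journal < OSD ordering holds."""
    return boot_gb < journal_gb < osd_gb

if __name__ == "__main__":
    print("Boot < Journal < OSD:",
          "OK" if layout_ok(BOOT_VD_GB, JOURNAL_VD_GB, OSD_VD_GB)
          else "CHECK SIZING")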

Software Specifications

Table 13 - Required Software

Software            Value/Description
Operating System    Red Hat Enterprise Linux 7.3
Hypervisor          Qemu (KVM)
VIM                 Non-Hyper-converged Ultra M models: Red Hat OpenStack Platform 9 (OSP 9 - Mitaka)
                    Hyper-converged Ultra M Single and Multi-VNF models: Red Hat OpenStack Platform 10 (OSP 10 - Newton)
VNFM                ESC
UEM                 UEM
VNF                 USP
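Table 13 fixes the base software for every node. A quick way to confirm a node matches it is sketched below; this is only an illustrative check that assumes it is run directly on the node, and the qemu-kvm-rhev/qemu-kvm package names are assumptions rather than values taken from this guide.

#!/usr/bin/env python3
"""Illustrative check of a node's base software against Table 13."""
import subprocess

def redhat_release():
    """Read the Red Hat release string (expecting a 7.3 release)."""
    with open("/etc/redhat-release") as release_file:
        return release_file.read().strip()

def rpm_version(package):
    """Return the installed version of a package, or None if absent."""
    result = subprocess.run(["rpm", "-q", package],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL,
                            universal_newlines=True)
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    print("OS release:", redhat_release())
    print("KVM package:",
          rpm_version("qemu-kvm-rhev") or rpm_version("qemu-kvm"))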

Networking Overview

This section provides information on Ultra M networking requirements and considerations.

UCS-C240 Network Interfaces

Figure 6 - UCS-C240 Back-Plane

1 - CIMC/IPMI/M: The server's Management network interface, used for accessing the UCS Cisco Integrated Management Controller (CIMC) application and performing Intelligent Platform Management Interface (IPMI) operations. Applicable node types: All.

2 - Intel Onboard: Port 1 is the VIM Orchestration (Undercloud) Provisioning network interface (applicable node types: All). Port 2 is the external network interface for Internet access; it must also be routable to external floating IP addresses on other nodes (applicable node types: Ultra M Manager Node, Staging Server).

3 - Modular LAN on Motherboard (MLOM): VIM networking interfaces used for:
External floating IP network (Controller)
Internal API network (Controller)
Storage network (Controller, Compute, OSD Compute, Ceph)
Storage Management network (Controller, Compute, OSD Compute, Ceph)
Tenant network - virtio only: VIM provisioning, VNF Management, and VNF Orchestration (Controller, Compute, OSD Compute)

4 - PCIe 4: Port 1: With NIC bonding enabled, this port provides the active Service network interfaces for VNF ingress and egress connections (Compute). Port 2: With NIC bonding enabled, this port provides the standby Di-internal network interface for inter-VNF component communication (Compute, OSD Compute).

5 - PCIe 1: Port 1: With NIC bonding enabled, this port provides the active Di-internal network interface for inter-VNF component communication (Compute, OSD Compute). Port 2: With NIC bonding enabled, this port provides the standby Service network interfaces for VNF ingress and egress connections (Compute).

VIM Network Topology

Ultra M's VIM is based on the OpenStack TripleO project ("OpenStack-On-OpenStack"), which is the core of the OpenStack Platform Director (OSP-D). TripleO allows OpenStack components to install a fully operational OpenStack environment. Two cloud concepts are introduced through TripleO:

VIM Orchestrator (Undercloud): The VIM Orchestrator is used to bring up and manage the VIM. Though OSP-D and Undercloud are sometimes referred to synonymously, OSP-D bootstraps the Undercloud deployment and provides the underlying components (e.g., Ironic, Nova, Glance, Neutron) leveraged by the Undercloud to deploy the VIM. Within the Ultra M solution, OSP-D and the Undercloud are hosted on the same server.

VIM (Overcloud): The VIM consists of the compute, controller, and storage nodes on which the VNFs are deployed.
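As a quick way to see the two clouds side by side, the sketch below lists the bare-metal nodes registered with the Undercloud and the Overcloud servers deployed on them. It is illustrative only and assumes the OpenStack command-line client is installed on the OSP-D/Staging Server and that the Undercloud (stackrc) credentials have already been loaded into the environment.

#!/usr/bin/env python3
"""Illustrative view of the Undercloud/Overcloud relationship from the
OSP-D / Staging Server node (stackrc credentials assumed to be sourced).
"""
import subprocess

def openstack(*args):
    """Run an OpenStack CLI command and return its text output."""
    return subprocess.check_output(("openstack",) + args,
                                   universal_newlines=True)

if __name__ == "__main__":
    # Bare-metal nodes registered with the Undercloud (Ironic).
    print(openstack("baremetal", "node", "list"))
    # Overcloud (VIM) servers deployed on those nodes: controller,
    # compute, and storage / OSD compute roles.
    print(openstack("server", "list"))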

26 Networking Overview VIM Network Topology This VIM Orchestrator-VIM model requires multiple networks as identified in Figure 7 and Figure 8. Figure 7 Non-Hyper-converged Ultra M Small, Medium, Large Model OpenStack VIM Network Topology 26

Figure 8 - Hyper-converged Ultra M Single and Multi-VNF Model OpenStack VIM Network Topology

Some considerations for VIM Orchestrator and VIM deployment are as follows:

External network access (e.g., Internet access) can be configured in one of the following ways:

Across all node types: A single subnet is configured for the Controller HA and VIP addresses, the floating IP addresses, and the OSP-D/Staging Server's external interface, provided that this network is data-center routable and able to reach the Internet.

Limited to OSP-D: The External IP network is used by the Controllers for HA and the Horizon dashboard, and later for Tenant floating IP address requirements. This network must be data-center routable. In addition, the External IP network is used only by the OSP-D/Staging Server node's external interface, which has a single IP address. The External IP network must be lab/data-center routable and must also have Internet access to the Red Hat cloud. It is used by the OSP-D/Staging Server for subscription purposes and also acts as an external gateway for all Controller, Compute, and Ceph Storage nodes.

IPMI must be enabled on all nodes (a reachability check sketch follows below).

Two networks are needed to deploy the VIM Orchestrator: the IPMI/CIMC network and the Provisioning network.
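Because IPMI must be enabled and the IPMI/CIMC network must be reachable before the VIM Orchestrator can drive the servers, a simple reachability check such as the following can be run from the Staging Server/Ultra M Manager Node. The node names, addresses, and credentials are placeholders; ipmitool is assumed to be installed.

#!/usr/bin/env python3
"""Illustrative pre-deployment check that IPMI responds on every node."""
import subprocess

NODES = {
    "controller-0": "192.0.2.11",   # placeholder CIMC/IPMI addresses
    "compute-0": "192.0.2.21",
}
IPMI_USER = "admin"                 # placeholder credentials
IPMI_PASS = "password"

def power_status(address):
    """Query the chassis power state over IPMI (lanplus interface)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", address,
           "-U", IPMI_USER, "-P", IPMI_PASS, "chassis", "power", "status"]
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    return result.stdout.strip()

if __name__ == "__main__":
    for name, address in NODES.items():
        print(f"{name:14s} {power_status(address)}")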

The OSP-D/Staging Server must have reachability to both the IPMI/CIMC and Provisioning networks. (The VIM Orchestrator networks need to be routable between each other or must be in one subnet.)

DHCP-based IP address assignment is used for Introspection PXE from the Provisioning network (Range A).

DHCP-based IP address assignment for VIM PXE from the Provisioning network (Range B) must be separate from the Introspection range.

The Ultra M Manager Node/Staging Server acts as a gateway for the Controller, Ceph, and Compute nodes. Therefore, the external interface of this node/server needs to be able to access the Internet. In addition, this interface needs to be routable with the data-center network. This allows the external interface IP address of the Ultra M Manager Node/Staging Server to reach data-center routable floating IP addresses as well as the VIP addresses of Controllers in HA mode.

Prior to assigning floating and virtual IP addresses, make sure that they are not already allocated through OpenStack. If the addresses are already allocated, they must be freed up for use, or you must assign a new IP address that is available in the VIM.

Multiple VLANs are required in order to deploy the OpenStack VIM:
1 for the Management and Provisioning networks interconnecting all of the nodes regardless of type
1 for the Staging Server/OSP-D Node external network
1 for the Compute, Controller, and Ceph Storage or OSD Compute Nodes
1 for the Management network interconnecting the Leafs and Spines

Login to individual Compute nodes is from the OSP-D/Staging Server using the heat user login credentials. The OSP-D/Staging Server acts as a jump server where the br-ctlplane interface address is used to log in to the Controller, Ceph or OSD Compute, and Compute nodes after VIM deployment using the heat-admin credentials.

Layer 1 networking guidelines for the VIM network are provided in Layer 1 Leaf and Spine Topology. In addition, a template is provided in Appendix: Network Definitions (Layer 2 and 3) to assist you with your Layer 2 and Layer 3 network planning.

Openstack Tenant Networking

The interfaces used by the VNF are based on the PCIe architecture. Single root input/output virtualization (SR-IOV) is used on these interfaces to allow multiple VMs on a single server node to use the same network interface, as shown in Figure 9. SR-IOV networking uses the Flat network type under the OpenStack configuration. NIC bonding is used to ensure port-level redundancy for the PCIe cards involved in SR-IOV tenant networks, as shown in Figure 10.
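For reference, the sketch below shows how a flat provider network of the kind used for SR-IOV service ports might be created from the OpenStack CLI. It is illustrative only: the network and subnet names, the physical network label (phys_pcie1_0), and the subnet range are placeholders, and the actual networks for a deployment are created per the Creating Flat Networks for Trunking on Service Ports procedure referenced in this guide.

#!/usr/bin/env python3
"""Illustrative creation of a flat provider network for an SR-IOV service
port, run with admin credentials against the VIM (Overcloud)."""
import subprocess

def openstack(*args):
    """Run an OpenStack CLI command and return its text output."""
    return subprocess.check_output(("openstack",) + args,
                                   universal_newlines=True)

if __name__ == "__main__":
    # Flat (untagged) provider network mapped to an SR-IOV physical network.
    print(openstack("network", "create", "service1",
                    "--provider-network-type", "flat",
                    "--provider-physical-network", "phys_pcie1_0",
                    "--share"))
    # Matching subnet; DHCP is disabled because the guest manages addressing.
    print(openstack("subnet", "create", "service1-subnet",
                    "--network", "service1",
                    "--subnet-range", "203.0.113.0/24",
                    "--no-dhcp"))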

Figure 9 - Physical NIC to Bridge Mappings

Figure 10 - NIC Bonding

VNF Tenant Networks

While specific VNF network requirements are described in the documentation corresponding to the VNF, Figure 11 displays the types of networks typically required by USP-based VNFs.

Figure 11 - Typical USP-based VNF Networks

The USP-based VNF networking requirements and the specific network roles are described here:

Public: External public network. The router has an external gateway to the public network. All other networks (except DI-Internal and ServiceA-n) have an internal gateway pointing to the router, and the router performs secure network address translation (SNAT).

DI-Internal: This is the DI-internal network, which serves as a backplane for CF-SF and CF-CF communications. Since this network is internal to the UGP, it does not have a gateway interface to the router in the OpenStack network topology. A unique DI-internal network must be created for each instance of the UGP. The interfaces attached to these networks use performance optimizations.

Management: This is the local management network between the CFs and other management elements like the UEM and VNFM. This network is also used by OSP-D to deploy the VNFM and AutoVNF. To allow external access, an OpenStack floating IP address from the Public network must be associated with the UGP VIP (CF) address. You can ensure that the same floating IP address is assigned to the CF, UEM, and VNFM after a VM restart by configuring parameters in the AutoDeploy configuration file or the UWS service delivery configuration file.
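The floating IP association described above can also be illustrated with the OpenStack CLI. The sketch below is not the documented procedure (AutoDeploy or UWS normally drives this association); the public network name and the neutron port name carrying the CF VIP are placeholders.

#!/usr/bin/env python3
"""Illustrative association of a Public-network floating IP with the
port that carries the UGP VIP (CF) address."""
import subprocess

def openstack(*args):
    """Run an OpenStack CLI command and return its text output."""
    return subprocess.check_output(("openstack",) + args,
                                   universal_newlines=True)

if __name__ == "__main__":
    # Allocate a floating IP from the external "public" network (placeholder name).
    fip = openstack("floating", "ip", "create", "public",
                    "-f", "value", "-c", "floating_ip_address").strip()
    # Attach it to the port holding the CF VIP; "vnf1-cf-vip-port" is a
    # placeholder, and older clients may require the neutron CLI instead.
    print(openstack("floating", "ip", "set", "--port", "vnf1-cf-vip-port", fip))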

NOTE: Prior to assigning floating and virtual IP addresses, make sure that they are not already allocated through OpenStack. If the addresses are already allocated, they must be freed up for use, or you must assign a new IP address that is available in the VIM.

Orchestration: This is the network used for VNF deployment and monitoring. It is used by the VNFM to onboard the USP-based VNF.

ServiceA-n: These are the service interfaces to the SF. Up to 12 service interfaces can be provisioned for the SF with this release. The interfaces attached to these networks use performance optimizations.

Layer 1 networking guidelines for the VNF network are provided in Layer 1 Leaf and Spine Topology. In addition, a template is provided in Appendix: Network Definitions (Layer 2 and 3) to assist you with your Layer 2 and Layer 3 network planning.

Supporting Trunking on VNF Service Ports

Service ports within USP-based VNFs are configured as trunk ports, and traffic is tagged using the VLAN command. This configuration is supported by trunking to the uplink switch via the sriovnicswitch mechanism driver. This driver supports Flat network types in OpenStack, enabling the guest OS to tag the packets. Flat networks are untagged networks in OpenStack. Typically, these networks are pre-existing infrastructure to which OpenStack guests can be directly attached.

Layer 1 Leaf and Spine Topology

Ultra M implements a leaf and spine network topology. Topology details differ between Ultra M models based on the scale and number of nodes.

NOTE: When connecting component network ports, ensure that the destination ports are rated at the same speed as the source port (e.g., connect a 10G port to a 10G port). Additionally, the source and destination ports must support the same physical medium (e.g., Ethernet) for interconnectivity.

Non-Hyper-converged Ultra M Model Network Topology

Figure 12 illustrates the logical leaf and spine topology for the various networks required for the non-hyper-converged models.

32 Networking Overview Layer 1 Leaf and Spine Topology Figure 12 Non-Hyper-converged Ultra M Model Leaf and Spine Topology As identified in Cisco Nexus Switches, the number of leaf and spine switches differ between the Ultra M models. Similarly, the specific leaf and spine ports used also depend on the Ultra M solution model you are deploying. General guidelines for interconnecting the leaf and spine switches in the non-hyper-converged Large model are provided in Table 14 through Table 21. Use the information in these tables to make appropriate adjustments to your network topology based on your deployment scenario (e.g. number of Compute Nodes). 32

33 Networking Overview Layer 1 Leaf and Spine Topology Table 14 Catalyst Management Switch 1 Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1 Staging Server Management CIMC 2-4 Controller Nodes 5-7 Ceph Storage Nodes Management CIMC 3 sequential ports - 1 per Controller Node Management CIMC 3 sequential ports - 1 per Ceph Storage Node 8 Staging Server Provisioning Mgmt 9-11 Controller Nodes Ceph Storage Nodes 15 C2960XR Switch 2 Provisioning Mgmt 3 sequential ports - 1 per Controller Node Provisioning Mgmt 3 sequential ports - 1 per Ceph Storage Node InterLink 39 Configured as type Trunk 23 Staging Server (OSP- D) External Intel Onboard Port 2 (LAN 2) 24 External Router External Configured as type Trunk 43 Leaf 1 InterLink 48 Configured as type Trunk 44 Leaf 2 InterLink 48 Configured as type Trunk 45 Leaf 1 Management Mgmt 0 46 Leaf 2 Management Mgmt 0 47 Spine 1 Management Mgmt 0 Table 15 Catalyst Management Switch 2 Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1-19 Compute Nodes Management CIMC 19 sequential ports - 1 per Compute Node Provisioning Management Mgmt 19 sequential ports - 1 per Compute Node 33

34 Networking Overview Layer 1 Leaf and Spine Topology From Switch Port(s) To Device Network Port(s) Notes 39 C2960XR Switch 1 InterLink Leaf 3 Management Mgmt 0 46 Leaf 4 Management Mgmt 0 47 Spine 2 Management Mgmt 0 Table 16 - Leaf 1 Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes 2-4 Controller Management & Orchestration (active) MLOM 1 3 sequential ports - 1 per Controller Node 5-7 Ceph Storage Nodes Management & Orchestration (active) MLOM 1 3 sequential ports - 1 per Ceph Storage Node 8-26 Compute Nodes Management & Orchestration (active) MLOM 1 19 sequential ports 1 per Compute Node 48 C2960XR Switch 1 Management & Orchestration Spine 1 Uplink 1-3 Leaf 1 port 49 connects to Spine 1 port 1 Leaf 1 port 50 connects to Spine 1 port 2 Leaf 1 port 51 connects to Spine 1 port Spine 2 Uplink 1-3 Leaf 1 port 52 connects to Spine 2 port 1 Leaf 1 port 53 connects to Spine 2 port 2 Leaf 1 port 54 connects to Spine 2 port 3 34

35 Networking Overview Layer 1 Leaf and Spine Topology Table 17 - Leaf 2 Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes 2-4 Controller Management & Orchestration (redundant) MLOM 2 3 sequential ports - 1 per Controller Node 5-7 Ceph Storage Nodes Management & Orchestration (redundant) MLOM 2 3 sequential ports - 1 per Ceph Storage Node 8-26 Compute Nodes Management & Orchestration (redundant) MLOM 2 19 sequential ports 1 per Compute Node 48 C2960XR Switch 1 Management & Orchestration Spine 1 Uplink 4-6 Leaf 1 port 49 connects to Spine 1 port 4 Leaf 1 port 50 connects to Spine 1 port 5 Leaf 1 port 51 connects to Spine 1 port Spine 2 Uplink 4-6 Leaf 1 port 52 connects to Spine 2 port 4 Leaf 1 port 53 connects to Spine 2 port 5 Leaf 1 port 54 connects to Spine 2 port 6 Table 18 - Leaf 3 Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes 1-37 odd Compute Nodes Di-internal (active) PCIe ports - 1 per Compute Node 2-28 even Compute Nodes Service (active) PCIe ports - 1 per Compute Node Spine 1 Uplink 7-9 Leaf 1 port 49 connects to Spine 1 port 7 Leaf 1 port 50 connects to Spine 1 port 8 Leaf 1 port 51 connects to Spine 1 port 9 35

36 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes Spine 2 Uplink 7-9 Leaf 1 port 52 connects to Spine 2 port 7 Leaf 1 port 53 connects to Spine 2 port 8 Leaf 1 port 54 connects to Spine 2 port 9 Table 19 - Leaf 4 Port Interconnect From Leaf Port(s) To Device Network Port(s) Notes 1-37 odd Compute Nodes Service (redundant) PCIe ports - 1 per Compute Node 2-28 even Compute Nodes Di-internal (redundant) PCIe ports - 1 per Compute Node Spine 1 Uplink Leaf 1 port 49 connects to Spine 1 port 10 Leaf 1 port 50 connects to Spine 1 port 11 Leaf 1 port 51 connects to Spine 1 port Spine 2 Uplink Leaf 1 port 52 connects to Spine 2 port 10 Leaf 1 port 53 connects to Spine 2 port 11 Leaf 1 port 54 connects to Spine 2 port 12 Table 20 - Spine 1 Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes 1-3 Leaf 1 Uplink Spine 1 port 1 connects to Leaf 1 port 49 Spine 1 port 2 connects to Leaf 1 port 50 Spine 1 port 3 connects to Leaf 1 port Leaf 2 Uplink Spine 1 port 4 connects to Leaf 2 port 49 Spine 1 port 5 connects to Leaf 2 port 50 Spine 1 port 6 connects to Leaf 2 port 51 36

37 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes 7-9 Leaf 3 Uplink Spine 1 port 7 connects to Leaf 3 port 49 Spine 1 port 8 connects to Leaf 3 port 50 Spine 1 port 9 connects to Leaf 3 port Leaf 4 Uplink Spine 1 port 10 connects to Leaf 4 port 49 Spine 1 port 11 connects to Leaf 4 port 50 Spine 1 port 12 connects to Leaf 4 port Spine 2 InterLink Spine 1 port 20 connects to Spine 2 port 20 Spine 1 port 21 connects to Spine 2 port 21 Spine 1 port 22 connects to Spine 2 port 22 Spine 1 port 31 connects to Spine 2 port 31 Table 21 - Spine 2 Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes 1-3 Leaf 1 Uplink Spine 2 port 1 connects to Leaf 1 port 49 Spine 2 port 2 connects to Leaf 1 port 50 Spine 2 port 3 connects to Leaf 1 port Leaf 2 Uplink Spine 2 port 4 connects to Leaf 2 port 49 Spine 2 port 5 connects to Leaf 2 port 50 Spine 2 port 6 connects to Leaf 2 port Leaf 3 Uplink Spine 2 port 7 connects to Leaf 3 port 49 Spine 2 port 8 connects to Leaf 3 port 50 Spine 2 port 9 connects to Leaf 3 port Leaf 4 Uplink Spine 2 port 10 connects to Leaf 4 port 49 Spine 2 port 11 connects to Leaf 4 port 50 Spine 2 port 12 connects to Leaf 4 port 51 37

38 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes Spine 2 InterLink Spine 2 port 20 connects to Spine 1 port 20 Spine 2 port 21 connects to Spine 1 port 21 Spine 2 port 22 connects to Spine 1 port 22 Spine 2 port 31 connects to Spine 1 port 31 38

39 Networking Overview Layer 1 Leaf and Spine Topology Hyper-converged Ultra M Single and Multi-VNF Model Network Topology Figure 13 illustrates the logical leaf and spine topology for the various networks required for the Hyper-converged Ultra M models. In this figure, two VNFs are supported. (Leafs 1 and 2 pertain to VNF1, Leafs 3 and 4 pertain to VNF 2). If additional VNFs are supported, additional Leafs are required (e.g. Leafs 5 and 6 are needed for VNF 3, Leafs 7 and 8 for VNF4). Each set of additional Leafs would have the same meshed network interconnects with the Spines and with the Controller, OSD Compute, and Compute Nodes. For single VNF models, Leaf 1 and Leaf 2 facilitate all of the network interconnects from the server nodes and from the Spines. Figure 13 Hyper-converged Ultra M Single and Multi-VNF Leaf and Spine Topology 39

40 Networking Overview Layer 1 Leaf and Spine Topology As identified in Cisco Nexus Switches, the number of leaf and spine switches differ between the Ultra M models. Similarly, the specific leaf and spine ports used also depend on the Ultra M solution model being deployed. That said, general guidelines for interconnecting the leaf and spine switches in an Ultra M XS multi-vnf deployment are provided in Table 22 through Table 31. Using the information in these tables, you can make appropriate adjustments to your network topology based on your deployment scenario (e.g. number of VNFs and number of Compute Nodes). Table 22 - Catalyst Management Switch 1 (Rack 1) Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1, 2, 11 OSD Compute Nodes Management CIMC 3 non-sequential ports - 1 per OSD Compute Node 3-10 Compute Nodes 12 Ultra M Manager Node Management CIMC 6 sequential ports - 1 per Compute Node Management CIMC Management Switch 1 only 13 Controller 0 Management CIMC 21, 22, 31 OSD Compute Nodes Provisioning Mgmt 3 non-sequential ports - 1 per OSD Compute Node Compute Nodes Ultra M Manager Node Provisioning Mgmt 6 sequential ports - 1 per Compute Node Provisioning Mgmt 2 sequential ports 34 Controller 0 Management CIMC 47 Leaf 1 Management 48 Switch port 47 connects with Leaf 1 port Leaf 2 Management 48 Switch port 48 connects with Leaf 2 port 48 Table 23 - Catalyst Management Switch 2 (Rack 2) Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1-10 Compute Nodes Management CIMC 10 sequential ports - 1 per Compute Node 40

41 Networking Overview Layer 1 Leaf and Spine Topology From Switch Port(s) To Device Network Port(s) Notes 14 Controller 1 Management CIMC 15 Controller 2 Management CIMC Compute Nodes Provisioning Mgmt 10 sequential ports - 1 per Compute Node 35 Controller 1 Provisioning Mgmt 36 Controller 2 Provisioning Mgmt 47 Leaf 3 Management 48 Switch port 47 connects with Leaf 3 port Leaf 4 Management 48 Switch port 48 connects with Leaf 4 port 48 Table 24 - Catalyst Management Switch 3 (Rack 3) Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1-10 Compute Nodes Management CIMC 10 sequential ports - 1 per Compute Node Compute Nodes Provisioning Mgmt 10 sequential ports - 1 per Compute Node 47 Leaf 5 Management 48 Switch port 47 connects with Leaf 5 port Leaf 6 Management 48 Switch port 48 connects with Leaf 6 port 48 Table 25 - Catalyst Management Switch 4 (Rack 4) Port Interconnects From Switch Port(s) To Device Network Port(s) Notes 1-10 Compute Nodes Management CIMC 10 sequential ports - 1 per Compute Node Compute Nodes Provisioning Mgmt 10 sequential ports - 1 per Compute Node 41

42 Networking Overview Layer 1 Leaf and Spine Topology From Switch Port(s) To Device Network Port(s) Notes 47 Leaf 7 Management 48 Switch port 47 connects with Leaf 7 port Leaf 8 Management 48 Switch port 48 connects with Leaf 8 port 48 Table 26 Leaf 1 and 2 (Rack 1) Port Interconnects* From Leaf Port(s) To Device Network Port(s) Notes Leaf 1 1, 2, 11 OSD Compute Nodes Management & Orchestration (active) MLOM P1 3 non-sequential ports - 1 per OSD Compute Node 12 Controller 0 Node Management & Orchestration (active) MLOM P1 17, 18, 27 OSD Compute Nodes Di-internal (active) PCIe01 P1 3 non-sequential ports - 1 per OSD Compute Node 3-10 (inclusive) Compute Nodes Management & Orchestration (active) MLOM P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node (inclusive) Compute Nodes Di-internal (active) PCIe01 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node (inclusive) Compute Nodes/ OSD Compute Nodes Service (active) PCIe04 P1 Sequential ports based on the number of Compute Nodes and/or OSD Compute Nodes - 1 per OSD Compute Node and/or Compute Node NOTE: Though the OSD Compute Nodes do not use the Service Networks, they are provided to ensure compatibility within the OpenStack Overcloud (VIM) deployment. 48 Catalyst Management Switches Management 47 Leaf 1 connects to Switch Spine 1 Downlink 1-2 Leaf 1 port 49 connects to Spine 1 port 1 Leaf 1 port 50 connects to Spine 1 port 2 42

43 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes Spine 2 Downlink 3-4 Leaf 1 port 51 connects to Spine 2 port 3 Leaf 2 Leaf 1 port 52 connects to Spine 2 port 4 1, 2, 11 OSD Compute Nodes Management & Orchestration (redundant) MLOM P2 3 non-sequential ports - 1 per OSD Compute Node 12 Controller 0 Node Management & Orchestration (redundant) MLOM P2 17, 18, 27 OSD Compute Nodes Di-internal (redundant) PCIe04 P2 3 non-sequential ports - 1 per OSD Compute Node 3-10 (inclusive) Compute Nodes Management & Orchestration (redundant) MLOM P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node (inclusive) Compute Nodes Di-internal (redundant) PCIe04 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node (inclusive) Compute Nodes / OSD Compute Nodes Service (redundant) PCIe01 P2 Sequential ports based on the number of Compute Nodes and/or OSD Compute Nodes - 1 per OSD Compute Node and/or Compute Node NOTE: Though the OSD Compute Nodes do not use the Service Networks, they are provided to ensure compatibility within the OpenStack Overcloud (VIM) deployment. 48 Catalyst Management Switches Management 48 Leaf 2 connects to Switch Spine 1 Downlink 1-2 Leaf 2 port 49 connects to Spine 1 port 1 Leaf 2 port 50 connects to Spine 1 port Spine 2 Downlink 3-4, 7-8, 11-12, Leaf 2 port 51 connects to Spine 2 port 3 Leaf 2 port 52 connects to Spine 2 port 4 43

44 Networking Overview Layer 1 Leaf and Spine Topology Table 27 - Leaf 3 and 4 (Rack 2) Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes Leaf (inclusive) Compute Nodes Management & Orchestration (active) MLOM P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1 (Rack 1). These are used to host managementrelated VMs as shown in Figure (inclusive) Controller Nodes Management & Orchestration (active) MLOM P1 Leaf 3 port 13 connects to Controller 1 MLOM P1 port Leaf 3 port 14 connects to Controller 1 MLOM P1 port (inclusive) Compute Nodes Di-internal (active) PCIe01 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (active) PCIe04 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 47 Leaf 3 connects to Switch Spine 1 Downlink 5-6 Leaf 3 port 49 connects to Spine 1 port 5 Leaf 3 port 50 connects to Spine 1 port Spine 2 Downlink 7-8 Leaf 3 port 51 connects to Spine 2 port 7 Leaf 3 port 52 connects to Spine 2 port 8 Leaf (inclusive) Compute Nodes Management & Orchestration (redundant) MLOM P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4. 44

45 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes (inclusive) Controller Nodes Management & Orchestration (redundant) MLOM P2 Leaf 4 port 13 connects to Controller 1 MLOM P2 port Leaf 4 port 14 connects to Controller 1 MLOM P2 port (inclusive) Compute Nodes Di-internal (redundant) PCIe04 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (redundant) PCIe01 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 48 Leaf 4 connects to Switch Spine 1 Downlink 5-6 Leaf 4 port 49 connects to Spine 1 port 5 Leaf 4 port 50 connects to Spine 1 port Spine 2 Downlink 7-8 Leaf 4 port 51 connects to Spine 2 port 7 Leaf 4 port 52 connects to Spine 2 port 8 Table 28 - Leaf 5 and 6 (Rack 3) Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes Leaf (inclusive) Compute Nodes Management & Orchestration (active) MLOM P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure 4. 45

46 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes (inclusive) Compute Nodes Di-internal (active) PCIe01 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (active) PCIe04 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 47 Leaf 5 connects to Switch Spine 1 Downlink 9-10 Leaf 5 port 49 connects to Spine 1 port 9 Leaf 5 port 50 connects to Spine 1 port Spine 2 Downlink 3-4, 7-8, 11-12, Leaf 5 port 51 connects to Spine 2 port 11 Leaf 5 port 52 connects to Spine 2 port 12 Leaf (inclusive) Compute Nodes Management & Orchestration (redundant) MLOM P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Di-internal (redundant) PCIe04 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (redundant) PCIe01 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 48 Leaf 6 connects to Switch Spine 1 Downlink 9-10 Leaf 6 port 49 connects to Spine 1 port 9 46 Leaf 6 port 50 connects to Spine 1 port 10

47 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes Spine 2 Downlink Leaf 6 port 51 connects to Spine 2 port 11 Leaf 6 port 52 connects to Spine 2 port 12 Table 29 - Leaf 7 and 8 (Rack 4) Port Interconnects From Leaf Port(s) To Device Network Port(s) Notes Leaf (inclusive) Compute Nodes Management & Orchestration (active) MLOM P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Di-internal (active) PCIe01 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (active) PCIe04 P1 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 47 Leaf 7 connects to Switch Spine 1 Downlink Leaf 7 port 49 connects to Spine 1 port 13 Leaf 7 port 50 connects to Spine 1 port Spine 2 Downlink Leaf 7 port 51 connects to Spine 2 port 15 Leaf 7 port 52 connects to Spine 2 port 16 Leaf 8 47

48 Networking Overview Layer 1 Leaf and Spine Topology From Leaf Port(s) To Device Network Port(s) Notes 1-10 (inclusive) Compute Nodes Management & Orchestration (redundant) MLOM P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 1 and 2 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Di-internal (redundant) PCIe04 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node NOTE: Leaf Ports 17 and 18 are used for the first two Compute Nodes on VNFs other than VNF1. These are used to host management-related VMs as shown in Figure (inclusive) Compute Nodes Service (redundant) PCIe01 P2 Sequential ports based on the number of Compute Nodes - 1 per Compute Node 48 Catalyst Management Switches Management 48 Leaf 8 connects to Switch Spine 1 Downlink Leaf 8 port 49 connects to Spine 1 port 13 Leaf 8 port 50 connects to Spine 1 port Spine 2 Downlink Leaf 8 port 51 connects to Spine 2 port 15 Leaf 8 port 52 connects to Spine 2 port 16 Table 30 - Spine 1 Port Interconnect Guidelines From Spine Port(s) To Device Network Port(s) Notes 1-2, 5-6, 9-10, Leaf 1, 3, 5, 7 Downlink Spine 1 ports 1 and 2 connect to Leaf 1 ports 49 and 50 Spine 1 ports 5 and 6 connect to Leaf 3 ports 49 and 50 Spine 1 ports 9 and 10 connect to Leaf 5 ports 49 and 50 Spine 1 ports 13 and 14 connect to Leaf 7 ports 49 and 50 48

49 Networking Overview Layer 1 Leaf and Spine Topology From Spine Port(s) To Device Network Port(s) Notes 3-4, 7-8, 11-12, Leaf 2, 4, 6, 8 Downlink Spine 1 ports 3 and 4 connect to Leaf 2 ports 49 and 50 Spine 1 ports 7 and 8 connect to Leaf 4 ports 49 and , 31, 32, , 23-24, Spine 2 Interlink 29-30, 31, 32, Router Uplink - Spine 1 ports 11 and 12 connect to Leaf 6 ports 49 and 50 Spine 1 ports 15 and 16 connect to Leaf 8 ports 49 and 50 Spine 1 ports connect to Spine 2 ports Spine 1 port 31 connects to Spine 2 port 31 Spine 1 port 32 connects to Spine 2 port 32 Spine 1 ports connect to Spine 2 ports Table 31 - Spine 2 Port Interconnect Guidelines From Spine Port(s) To Device Network Port(s) Notes 1-2, 5-6, 9-10, Leaf 1, 3, 5, 7 Downlink Spine 1 ports 1 and 2 connect to Leaf 1 ports 51 and 52 Spine 1 ports 5 and 6 connect to Leaf 3 ports 51 and 52 Spine 1 ports 9 and 10 connect to Leaf 5 ports 51 and 52 Spine 1 ports 13 and 14 connect to Leaf 7 ports 51 and 52 49

- Ports 3-4, 7-8, 11-12, 15-16: Leaf 2, 4, 6, 8, Downlink. Spine 2 ports 3 and 4 connect to Leaf 2 ports 51 and 52; Spine 2 ports 7 and 8 connect to Leaf 4 ports 51 and 52; Spine 2 ports 11 and 12 connect to Leaf 6 ports 51 and 52; Spine 2 ports 15 and 16 connect to Leaf 8 ports 51 and 52.
- Ports 23-24, 29-30, 31, 32: Spine 1, Interconnect. The Spine 2 interconnect ports connect to the corresponding Spine 1 ports; for example, Spine 2 port 31 connects to Spine 1 port 31 and Spine 2 port 32 connects to Spine 1 port 32.
- Router, Uplink.

Deploying the Ultra M Solution

Ultra M is a multi-product solution. Detailed instructions for installing each of these products are beyond the scope of this document. Instead, the sections that follow identify the specific, non-default parameters that must be configured through the installation and deployment of those products in order to deploy the entire solution.

Deployment Workflow

Figure 14 - Ultra M Deployment Workflow

Plan Your Deployment

Before deploying the Ultra M solution, it is important to develop and plan your deployment carefully.

Network Planning

Networking Overview provides a general overview and identifies the basic requirements for networking the Ultra M solution. With this background, use the tables in Appendix: Network Definitions (Layer 2 and 3) to help plan the details of your network configuration.

Install and Cable the Hardware

This section describes the procedure to install all of the components included in the Ultra M solution.

Related Documentation

Use the information and instructions found in the installation documentation for the respective hardware components to properly install the Ultra M solution hardware. Refer to the following installation guides for more information:

- Catalyst 2960-XR Switch
- Catalyst T-S Switch
- Nexus YC 48 Port: s_mode_hardware_install_guide.html
- Nexus 9236C 36 Port: hardware_install_guide.html
- UCS C240 M4SX Server

Rack Layout

Non-Hyper-converged Ultra M Small Deployment Rack Configuration

Table 32 provides details for the recommended rack layout for the non-hyper-converged Ultra M Small deployment model.

Table 32 - Non-Hyper-converged Ultra M Small Deployment

Rack Unit / Rack #1
1      Mgmt Switch: Catalyst C2960XR-48TD-I
2      Empty
3      Leaf TOR Switch A: Nexus 93180YC-EX
4      Leaf TOR Switch A: Nexus 93180YC-EX
5/6    Staging Server: UCS C240 M4 SFF
7/8    Controller Node A: UCS C240 M4 SFF
9/10   Controller Node B: UCS C240 M4 SFF
11/12  Controller Node C: UCS C240 M4 SFF
13/14  UEM VM A: UCS C240 M4 SFF

53 Deploying the Ultra M Solution Install and Cable the Hardware Rack Unit Rack #1 15/16 UEM VM B: UCS C240 M4 SFF 17/18 UEM VM C: UCS C240 M4 SFF 19/20 Demux SF VM: UCS C240 M4 SFF 21/22 Standby SF VM: UCS C240 M4 SFF 23/24 Active SF VM 1: UCS C240 M4 SFF 25/26 Active SF VM 2: UCS C240 M4 SFF 27/28 Active SF VM 3: UCS C240 M4 SFF 29/30 Active SF VM 4: UCS C240 M4 SFF 31/32 Active SF VM 5: UCS C240 M4 SFF 33/34 Active SF VM 6: UCS C240 M4 SFF 35/36 Active SF VM 7: UCS C240 M4 SFF 37/38 Ceph Node A: UCS C240 M4 SFF 39/40 Ceph Node B: UCS C240 M4 SFF 41/42 Ceph Node C: UCS C240 M4 SFF Non-Hyper-converged Ultra M Medium Deployment Table 33 provides details for the recommended rack layout for the non-hyper-converged Ultra M Medium deployment model. Table 33 - Non-Hyper-converged Ultra M Medium Deployment Rack Unit Rack #1 Rack #2 1 Mgmt Switch: Catalyst TD-I Mgmt Switch: Catalyst TD-I 2 Spine EOR Switch A: Nexus 9236C Spine EOR Switch A: Nexus 9236C 3 4 Leaf TOR Switch A: Nexus YC- EX Leaf TOR Switch A: Nexus YC- EX Leaf TOR Switch A: Nexus YC- EX Leaf TOR Switch A: Nexus YC- EX 5/6 Empty UEM VM A: UCS C240 M4 SFF 7/8 Staging Server: UCS C240 M4 SFF UEM VM B: UCS C240 M4 SFF 53

54 Deploying the Ultra M Solution Install and Cable the Hardware Rack Unit Rack #1 Rack #2 9/10 Controller Node A: UCS C240 M4 SFF UEM VM C: UCS C240 M4 SFF 11/12 Controller Node B: UCS C240 M4 SFF Demux SF VM: UCS C240 M4 SFF 13/14 Controller Node C: UCS C240 M4 SFF Standby SF VM: UCS C240 M4 SFF 15/16 Empty Active SF VM 1: UCS C240 M4 SFF 17/18 Empty Active SF VM 2: UCS C240 M4 SFF 19/20 Empty Active SF VM 3: UCS C240 M4 SFF 21/22 Empty Active SF VM 4: UCS C240 M4 SFF 23/24 Empty Active SF VM 5: UCS C240 M4 SFF 25/26 Empty Active SF VM 6: UCS C240 M4 SFF 27/28 Empty Active SF VM 7: UCS C240 M4 SFF 29/30 Empty Active SF VM 8: UCS C240 M4 SFF 31/32 Empty Active SF VM 9: UCS C240 M4 SFF 33/34 Empty Active SF VM 10: UCS C240 M4 SFF 35/36 Empty Empty 37/38 Ceph Node A: UCS C240 M4 SFF Empty 39/40 Ceph Node B: UCS C240 M4 SFF Empty 41/42 Ceph Node C: UCS C240 M4 SFF Empty Non-Hyper-converged Ultra M Large Deployment Table 34 provides details for the recommended rack layout for the non-hyper-converged Ultra M Large deployment model. Table 34 - Non-Hyper-converged Ultra M Large Deployment Rack Unit Rack #1 Rack #2 1 Mgmt Switch: Catalyst TD-I Mgmt Switch: Catalyst TD-I 2 Spine EOR Switch A: Nexus 9236C Spine EOR Switch A: Nexus 9236C 3 Leaf TOR Switch A: Nexus YC- EX Leaf TOR Switch A: Nexus YC-EX 54

55 Deploying the Ultra M Solution Install and Cable the Hardware Rack Unit Rack #1 Rack #2 4 Leaf TOR Switch A: Nexus YC- EX Leaf TOR Switch A: Nexus YC-EX 5/6 Empty UEM VM A: UCS C240 M4 SFF 7/8 Staging Server: UCS C240 M4 SFF UEM VM B: UCS C240 M4 SFF 9/10 Controller Node A: UCS C240 M4 SFF UEM VM C: UCS C240 M4 SFF 11/12 Controller Node B: UCS C240 M4 SFF Demux SF VM: UCS C240 M4 SFF 13/14 Controller Node C: UCS C240 M4 SFF Standby SF VM: UCS C240 M4 SFF 15/16 Empty Active SF VM 1: UCS C240 M4 SFF 17/18 Empty Active SF VM 2: UCS C240 M4 SFF 19/20 Empty Active SF VM 3: UCS C240 M4 SFF 21/22 Empty Active SF VM 4: UCS C240 M4 SFF 23/24 Empty Active SF VM 5: UCS C240 M4 SFF 25/26 Empty Active SF VM 6: UCS C240 M4 SFF 27/28 Empty Active SF VM 7: UCS C240 M4 SFF 29/30 Empty Active SF VM 8: UCS C240 M4 SFF 31/32 Empty Active SF VM 9: UCS C240 M4 SFF 33/34 Empty Active SF VM 10: UCS C240 M4 SFF 35/36 Empty Active SF VM11: UCS C240 M4 SFF 37/38 39/40 41/42 Ceph Node A: UCS C240 M4 SFF Ceph Node B: UCS C240 M4 SFF Ceph Node C: UCS C240 M4 SFF Active SF VM 12: UCS C240 M4 SFF Active SF VM 13: UCS C240 M4 SFF Active SF VM 14: UCS C240 M4 SFF Hyper-converged Ultra M XS Single VNF Deployment Table 35 provides details for the recommended rack layout for the Hyper-converged Ultra M XS Single VNF deployment model. 55

56 Deploying the Ultra M Solution Install and Cable the Hardware Table 35 - Hyper-converged Ultra M XS Single VNF Deployment Rack Layout Rack #1 Rack #2 RU-1 Empty Empty RU-2 Spine EOR Switch A: Nexus 9236C Spine EOR Switch B: Nexus 9236C RU-3 Empty Empty RU-4 RU-5 RU-6 VNF Mgmt Switch: Catalyst C T- S OR C2960XR-48TD VNF Leaf TOR Switch A: Nexus 93180YC-EX VNF Leaf TOR Switch B: Nexus 93180YC-EX Empty Empty Empty RU-7/8 Ultra VNF-EM 1A: UCS C240 M4 SFF Empty RU-9/10 Ultra VNF-EM 1B: UCS C240 M4 SFF Empty RU-11/12 Empty Empty RU-13/14 Demux SF: UCS C240 M4 SFF Empty RU-15/16 Standby SF: UCS C240 M4 SFF Empty RU-17/18 Active SF 1: UCS C240 M4 SFF Empty RU-19/20 Active SF 2: UCS C240 M4 SFF Empty RU-21/22 Active SF 3: UCS C240 M4 SFF Empty RU-23/24 Active SF 4: UCS C240 M4 SFF Empty RU-25/26 Active SF 5: UCS C240 M4 SFF Empty RU-27/28 Active SF 6: UCS C240 M4 SFF Empty RU-29/30 Empty Empty RU-31/32 Empty Empty RU-33/34 Empty Empty RU-35/36 Ultra VNF-EM 1C OpenStack Control C: UCS C240 M4 SFF RU-37/38 Ultra M Manager: UCS C240 M4 SFF Empty RU-39/40 OpenStack Control A: UCS C240 M4 SFF OpenStack Control B: UCS C240 M4 SFF RU-41/42 Empty Empty Cables Controller Rack Cables Controller Rack Cables 56

57 Deploying the Ultra M Solution Install and Cable the Hardware Rack #1 Rack #2 Cables Spine Uplink/Interconnect Cables Spine Uplink/Interconnect Cables Cables Leaf TOR To Spine Uplink Cables Empty Cables VNF Rack Cables Empty Hyper-converged Ultra M XS Multi-VNF Deployment Table 36 provides details for the recommended rack layout for the Hyper-converged Ultra M XS Multi-VNF deployment model. Table 36 - Hyper-converged Ultra M XS Multi-VNF Deployment Rack Layout Rack #1 Rack #2 Rack #3 Rack #4 RU-1 Empty Empty Empty Empty RU-2 Spine EOR Switch A: Nexus 9236C Spine EOR Switch B: Nexus 9236C Empty Empty RU-3 Empty Empty Empty Empty RU-4 VNF Mgmt Switch: Catalyst C T-S OR C2960XR- 48TD VNF Mgmt Switch: Catalyst C T-S OR C2960XR- 48TD VNF Mgmt Switch: Catalyst C T-S OR C2960XR- 48TD VNF Mgmt Switch: Catalyst C T-S OR C2960XR- 48TD RU-5 VNF Leaf TOR Switch A: Nexus 93180YC-EX VNF Leaf TOR Switch A: Nexus 93180YC-EX VNF Leaf TOR Switch A: Nexus 93180YC-EX VNF Leaf TOR Switch A: Nexus 93180YC-EX RU-6 VNF Leaf TOR Switch B: Nexus 93180YC-EX VNF Leaf TOR Switch B: Nexus 93180YC-EX VNF Leaf TOR Switch B: Nexus 93180YC-EX VNF Leaf TOR Switch B: Nexus 93180YC-EX RU-7/8 Ultra VNF-EM 1A: UCS C240 M4 SFF Ultra VNF-EM 2A: UCS C240 M4 SFF Ultra VNF-EM 3A: UCS C240 M4 SFF Ultra VNF-EM 4A: UCS C240 M4 SFF RU-9/10 Ultra VNF-EM 1B: UCS C240 M4 SFF Ultra VNF-EM 2B: UCS C240 M4 SFF Ultra VNF-EM 3B: UCS C240 M4 SFF Ultra VNF-EM 4B: UCS C240 M4 SFF RU-11/12 Empty Empty Empty Empty RU-13/14 Demux SF: UCS C240 M4 SFF Demux SF: UCS C240 M4 SFF Demux SF: UCS C240 M4 SFF Demux SF: UCS C240 M4 SFF RU-15/16 Standby SF: UCS C240 M4 SFF Standby SF: UCS C240 M4 SFF Standby SF: UCS C240 M4 SFF Standby SF: UCS C240 M4 SFF 57

58 Deploying the Ultra M Solution Install and Cable the Hardware RU-17/18 Active SF 1: UCS C240 M4 SFF Active SF 1: UCS C240 M4 SFF Active SF 1: UCS C240 M4 SFF Active SF 1: UCS C240 M4 SFF RU-19/20 Active SF 2: UCS C240 M4 SFF Active SF 2: UCS C240 M4 SFF Active SF 2: UCS C240 M4 SFF Active SF 2: UCS C240 M4 SFF RU-21/22 Active SF 3: UCS C240 M4 SFF Active SF 3: UCS C240 M4 SFF Active SF 3: UCS C240 M4 SFF Active SF 3: UCS C240 M4 SFF RU-23/24 Active SF 4: UCS C240 M4 SFF Active SF 4: UCS C240 M4 SFF Active SF 4: UCS C240 M4 SFF Active SF 4: UCS C240 M4 SFF RU-25/26 Active SF 5: UCS C240 M4 SFF Active SF 5: UCS C240 M4 SFF Active SF 5: UCS C240 M4 SFF Active SF 5: UCS C240 M4 SFF RU-27/28 Active SF 6: UCS C240 M4 SFF Active SF 6: UCS C240 M4 SFF Active SF 6: UCS C240 M4 SFF Active SF 6: UCS C240 M4 SFF RU-29/30 Empty Empty Empty Empty RU-31/32 Empty Empty Empty Empty RU-33/34 Empty Empty Empty Empty RU-35/36 Ultra VNF-EM 1C,2C,3C,4C OpenStack Control C: UCS C240 M4 SFF Empty Empty RU-37/38 Ultra M Manager: UCS C240 M4 SFF Empty Empty Empty RU-39/40 OpenStack Control A: UCS C240 M4 SFF OpenStack Control B: UCS C240 M4 SFF Empty Empty RU-41/42 Empty Empty Empty Empty Cables Controller Rack Cables Controller Rack Cables Controller Rack Cables Empty Cables Spine Uplink/Interconnect Cables Spine Uplink/Interconnect Cables Empty Empty Cables Leaf TOR To Spine Uplink Cables Leaf TOR To Spine Uplink Cables Leaf TOR To Spine Uplink Cables Leaf TOR To Spine Uplink Cables Cables VNF Rack Cables VNF Rack Cables VNF Rack Cables VNF Rack Cables 58

Cable the Hardware

Once all of the hardware has been installed, install all power and network cabling for the hardware using the information and instructions in the documentation for the specific hardware product. Refer to Related Documentation for links to the hardware product documentation. Ensure that you install your network cables according to your network plan.

Configure the Switches

All of the switches must be configured according to your planned network specifications.

NOTE: Refer to Network Planning for information and considerations for planning your network.

Refer to the user documentation for each of the switches for configuration information and instructions:

- Catalyst C2960XR-48TD-I
- Catalyst T-S
- Nexus 93180YC-EX
- Nexus 9236C

Prepare the UCS C-Series Hardware

UCS-C hardware preparation is performed through the Cisco Integrated Management Controller (CIMC). The tables in the following sections list the non-default parameters that must be configured per server type:

- Prepare the Staging Server/Ultra M Manager Node
- Prepare the Controller Nodes
- Prepare the Compute Nodes
- Prepare the OSD Compute Nodes
- Prepare the Ceph Nodes

Refer to the UCS C-Series product documentation for more information:

- UCS C-Series Hardware
- CIMC Software

NOTE: Part of the UCS server preparation is the configuration of virtual drives. If there are existing virtual drives that need to be deleted, select the Virtual Drive Info tab, select the virtual drive you wish to delete, then click Delete Virtual Drive. Refer to the CIMC documentation for more information.

NOTE: The information in this section assumes that the server hardware was properly installed per the information and instructions in Install and Cable the Hardware.
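Each of the server preparation tables below enables IPMI over LAN so that OSP-D can later power-manage the nodes through their CIMC ports. Once that setting is applied, an optional sanity check is to query the CIMC address with ipmitool from any host that can reach the management network; the address and credentials below are placeholders for the values you configure in CIMC.

    # verify IPMI over LAN is reachable on a prepared node (values are examples)
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P <cimc-password> chassis status
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P <cimc-password> power status

If the commands return the chassis power state, the node is ready to be managed by the VIM Orchestrator later in this chapter.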

60 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Prepare the Staging Server/Ultra M Manager Node Table 37 Staging Server/Ultra M Manager Node Parameters Parameters and Settings Description CIMC Utility Setup Enable IPV4 Configures parameters for the dedicated management port. Dedicated No redundancy IP address Subnet mask Gateway address DNS address Admin > User Management Username Password Configures administrative user credentials for accessing the CIMC utility. Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Server > BIOS > Configure BIOS > Advanced Intel(R) Hyper-Threading Technology = Disabled Disable hyper-threading on server CPUs to optimize Ultra M system performance. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info Status = Unconfigured Good Ensures that the hardware is ready for use. Prepare the Controller Nodes Table 38 Controller Node Parameters Parameters and Settings Description CIMC Utility Setup 60

61 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Enable IPV4 Description Configures parameters for the dedicated management port. Dedicated No redundancy IP address Subnet mask Gateway address DNS address Admin > User Management Username Password Configures administrative user credentials for accessing the CIMC utility. Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Server > BIOS > Configure BIOS > Advanced Intel(R) Hyper-Threading Technology = Disabled Intel(R) Hyper-Threading Technology = Disabled Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info Status = Unconfigured Good Ensures that the hardware is ready for use. Storage > Cisco 12G SAS Modular RAID Controller > Controller Info 61

62 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Virtual Drive Name = OS Read Policy = No Read Ahead Description Creates the virtual drives required for use by the operating system (OS). RAID Level = RAID 1 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Prepare the Compute Nodes Table 39 Compute Node Parameters Parameters and Settings Description CIMC Utility Setup Enable IPV4 Configures parameters for the dedicated management port. Dedicated No redundancy IP address Subnet mask Gateway address DNS address Admin > User Management Username Password Configures administrative user credentials for accessing the CIMC utility. 62

63 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Description Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Server > BIOS > Configure BIOS > Advanced Intel(R) Hyper-Threading Technology = Disabled Intel(R) Hyper-Threading Technology = Disabled Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info Status = Unconfigured Good Ensures that the hardware is ready for use. Storage > Cisco 12G SAS Modular RAID Controller > Controller Info Virtual Drive Name = BOOTOS Read Policy = No Read Ahead Creates the virtual drives required for use by the operating system (OS). RAID Level = RAID 1 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS Initialize Type = Fast Initialize Set as Boot Drive Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Sets the BOOTOS virtual drive as the system boot drive. Prepare the OSD Compute Nodes NOTE: OSD Compute Nodes are only used in Hyper-converged Ultra M models as described in UCS C-Series Servers. Table 40 - OSD Compute Node Parameters Parameters and Settings Description CIMC Utility Setup 63

64 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Enable IPV4 Description Configures parameters for the dedicated management port. Dedicated No redundancy IP address Subnet mask Gateway address DNS address Admin > User Management Username Password Configures administrative user credentials for accessing the CIMC utility. Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Server > BIOS > Configure BIOS > Advanced Intel(R) Hyper-Threading Technology = Disabled Intel(R) Hyper-Threading Technology = Disabled Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info Status = Unconfigured Good SLOT-HBA Physical Drive Numbers = 1 Ensures that the hardware is ready for use. Ensure the UCS slot host-bus adapter for the drives are configured accordingly Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 1 64

65 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Virtual Drive Name = BOOTOS Read Policy = No Read Ahead RAID Level = RAID 1 Description Creates a virtual drive leveraging the storage space available to physical drive number 1. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives. Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 1 Initialize Type = Fast Initialize Set as Boot Drive Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Sets the BOOTOS virtual drive as the system boot drive. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 2 Virtual Drive Name = BOOTOS Read Policy = No Read Ahead RAID Level = RAID 1 Creates a virtual drive leveraging the storage space available to physical drive number 2. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives. Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 2 Initialize Type = Fast Initialize Set as Boot Drive Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Sets the BOOTOS virtual drive as the system boot drive. 65

66 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Description Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 3 Virtual Drive Name = JOURNAL Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 3. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, JOURNAL, Physical Drive Number = 3 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 7 Virtual Drive Name = OSD1 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 7. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD1, Physical Drive Number = 7 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 8 66

67 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Virtual Drive Name = OSD2 Read Policy = No Read Ahead Description Creates a virtual drive leveraging the storage space available to physical drive number 8. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD2, Physical Drive Number = 8 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 9 Virtual Drive Name = OSD3 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 9. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD3, Physical Drive Number = 9 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 10 67

68 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Virtual Drive Name = OSD4 Read Policy = No Read Ahead Description Creates a virtual drive leveraging the storage space available to physical drive number 10. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD4, Physical Drive Number = 10 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Prepare the Ceph Nodes NOTE: Ceph Nodes are only used in non-hyper-converged Ultra M models as described in UCS C-Series Servers. Table 41 Ceph Node Parameters Parameters and Settings Description CIMC Utility Setup Enable IPV4 Configures parameters for the dedicated management port. Dedicated No redundancy IP address Subnet mask Gateway address DNS address Admin > User Management 68

69 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Username Password Description Configures administrative user credentials for accessing the CIMC utility. Admin > Communication Services IPMI over LAN Properties = Enabled Enables the use of Intelligent Platform Management Interface capabilities over the management port. Server > BIOS > Configure BIOS > Advanced Intel(R) Hyper-Threading Technology = Disabled Intel(R) Hyper-Threading Technology = Disabled Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Info Status = Unconfigured Good SLOT-HBA Physical Drive Numbers = 1 Ensures that the hardware is ready for use. Ensure the UCS slot host-bus adapter for the drives are configured accordingly Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 1 Virtual Drive Name = BOOTOS Read Policy = No Read Ahead RAID Level = RAID 1 Creates a virtual drive leveraging the storage space available to physical drive number 1. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives. Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 1 69

70 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Initialize Type = Fast Initialize Set as Boot Drive Description Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Sets the BOOTOS virtual drive as the system boot drive. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 2 Virtual Drive Name = BOOTOS Read Policy = No Read Ahead RAID Level = RAID 1 Creates a virtual drive leveraging the storage space available to physical drive number 2. NOTE: Ensure that the size of this virtual drive is less than the size of the designated journal and storage drives. Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, BOOTOS, Physical Drive Number = 2 Initialize Type = Fast Initialize Set as Boot Drive Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Sets the BOOTOS virtual drive as the system boot drive. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 3 Virtual Drive Name = JOURNAL Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 3. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through 70

71 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Description Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, JOURNAL, Physical Drive Number = 3 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 7 Virtual Drive Name = OSD1 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 7. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD1, Physical Drive Number = 7 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 8 Virtual Drive Name = OSD2 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 8. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD2, Physical Drive Number = 8 71

72 Deploying the Ultra M Solution Prepare the UCS C-Series Hardware Parameters and Settings Initialize Type = Fast Initialize Description Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 9 Virtual Drive Name = OSD3 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 9. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD3, Physical Drive Number = 9 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. Storage > Cisco 12G SAS Modular RAID Controller > Physical Drive Number = 10 Virtual Drive Name = OSD4 Read Policy = No Read Ahead Creates a virtual drive leveraging the storage space available to physical drive number 10. RAID Level = RAID 0 Cache Policy: Direct IO Strip Size: 64KB Disk Cache Policy: Unchanged Access Policy: Read Write Size: MB Write Policy: Write Through Storage > Cisco 12G SAS Modular RAID Controller > Virtual Drive Info, OSD4, Physical Drive Number = 10 Initialize Type = Fast Initialize Initializes this virtual drive. A fast initialization quickly writes zeroes to the first and last 10-MB regions of the new virtual drive and completes the initialization in the background. 72
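The JOURNAL and OSD virtual drives defined above become the journal and data devices for the Ceph OSDs once the VIM is deployed later in this chapter. As an optional, hedged sanity check after that deployment completes, the standard Ceph CLI can be used from a storage node to confirm that the OSDs created from these drives are up and in; these are generic Ceph commands, not Ultra M-specific tooling.

    # run on a Ceph (or OSD Compute) node after the VIM is deployed
    sudo ceph -s          # overall cluster health
    sudo ceph osd tree    # expect one OSD per configured OSD virtual drive, state "up"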

Deploy the Virtual Infrastructure Manager

Within the Ultra M solution, OpenStack Platform Director (OSP-D) functions as the virtual infrastructure manager (VIM). The method by which the VIM is deployed depends on the architecture of your Ultra M model. Refer to one of the following sections for information related to your deployment scenario:

- Deploy the VIM for Hyper-Converged Ultra M Models
- Deploy the VIM for Non-Hyper-Converged Ultra M Models

Deploy the VIM for Hyper-Converged Ultra M Models

Deploying the VIM for Hyper-Converged Ultra M Models is performed using an automated workflow enabled through software modules within Ultra Automation Services (UAS). These services leverage user-provided configuration information to automatically deploy the VIM Orchestrator (Undercloud) and the VIM (Overcloud). Information and instructions for using this automated process can be found in the Virtual Infrastructure Manager Installation Automation section of the USP Deployment Automation Guide. Refer to that document for details.

Deploy the VIM for Non-Hyper-Converged Ultra M Models

Deploying the VIM for use with non-hyper-converged Ultra M models involves the following procedures:

- Deploy the VIM Orchestrator
- Deploy the VIM

Deploy the VIM Orchestrator

The VIM Orchestrator is the primary OSP-D node that is responsible for the deployment, configuration, and management of nodes (controller, compute, and Ceph storage) in the VIM. Within non-hyper-converged Ultra M models, the Staging Server is the VIM Orchestrator. Deploying the VIM Orchestrator involves installing and configuring Red Hat Enterprise Linux (RHEL) and OSP-D as shown in the workflow depicted in Figure 15.

Figure 15 - Workflow for Manually Installing and Configuring Red Hat and OSP-D on a UCS-C Server

NOTE: The information in this section assumes that the Staging Server was configured as described in Prepare the Staging Server/Ultra M Manager Node.

Install RHEL

General installation information and procedures are located in the RHEL and OSP-D product documentation. Prior to installing these products on the Staging Server, refer to Table 42 for the settings required by Ultra M. Instructions for configuring these specific settings are available in Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures.

NOTE: Table 42 assumes that you are using the product's graphical user interface (GUI) for Red Hat installation.

Table 42 - Staging Server Red Hat Installation Settings

- Installation Summary > Network & Host Name > Ethernet (enp16s0f0) > Configure > IPv4 Setting: IP Address, Netmask, Gateway, DNS Server, Search Domain. Configure and save settings for the network interface by which the Staging Server can be accessed externally.
  NOTE: The first, or top-most, interface shown in the list in the Network & Host Name screen should be used as the external interface for the Staging Server.

- Installation Summary > Installation Destination > CiscoUCSC-MRAID12G (sda) > I will configure partitioning > Click here to create them automatically: Mount Point = /, Desired Capacity = 500 GB. Allocates capacity in the root partition (/) in which to install Red Hat.
  NOTE: Do not use LVM-based partitioning. Instead, delete /home/ and allocate the freed capacity under root (/).
- Installation Summary > KDUMP: kdump = disabled. It is recommended that kdump be disabled.
- Installation Summary > Begin Installation > Root Password: Root Password. Configure and confirm the root user password.

Configure RHEL

Once RHEL is installed, you must configure it prior to installing the OpenStack Undercloud (VIM Orchestrator). Table 43 identifies the RHEL configuration required by Ultra M. Instructions for configuring these specific settings are available in Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures.

NOTE: The parameters described in Table 43 assume configuration through the Red Hat command line interface.

Table 43 - Staging Server Red Hat Configuration

- Non-root user account: User = stack, Password = stack. The OSP-D installation process requires a non-root user called stack to execute commands.
  NOTE: It is recommended that you disable password requirements for this user when using sudo. This is done using the NOPASSWD:ALL parameter in sudoers.d as described in the OpenStack documentation.
- New directories: /images and /templates. OSP-D uses system images and Heat templates to create the VIM environment. These files must be organized and stored in the stack user's /home directory.
- Subscription manager HTTP proxy. If needed, the subscription manager service can be configured to use an HTTP proxy.

- System hostname. Configures a fully qualified domain name (FQDN) for use on the Staging Server.
  NOTE: Be sure to configure the hostname as both static and transient.
  NOTE: Beyond using the hostnamectl command, make sure the FQDN is also configured in the /etc/hosts file.
- Attach the subscription-manager Pool ID. Attaches the RHEL installation to the subscription-manager entitlement pool ID as displayed in the output of the sudo subscription-manager list --available --all command.
- Disable all default content repositories. Disables the default content repositories associated with the entitlement certificate.
- Enable Red Hat repositories: rhel-7-server-rpms, rhel-7-server-extras-rpms, rhel-7-server-rh-common-rpms, rhel-7-ha-for-rhel7-server-rpms, rhel-7-server-openstack-9-rpms, rhel-7-server-openstack-9-director-rpms, rhel-7-server-openstack-10-rpms. Installs packages required by OSP-D and provides access to the latest server and OSP-D entitlements.
  NOTE: As indicated in Software Specifications, the supported OpenStack version differs based on the Ultra M model being deployed. If OpenStack 9 is required, only enable rhel-7-server-openstack-9-rpms and rhel-7-server-openstack-9-director-rpms. If OpenStack 10 is required, only enable rhel-7-server-openstack-10-rpms.
- Install the python-tripleoclient package. Installs the command-line tools that are required for OSP-D installation and configuration.
- Configure the undercloud.conf file with: undercloud_hostname, local_interface. Configures the Undercloud server's FQDN hostname and local interface.
  NOTE: undercloud.conf is based on a template which is provided as part of the OSP-D installation. By default, this template is located in /usr/share/instack-undercloud/. This template must first be copied to the stack user's home directory before being modified.
  NOTE: The default setting for local_interface should be eno1. This is the recommended local_interface setting when the VIM is deployed manually.

- Install OSP-D images: rhosp-director-images, rhosp-director-images-ipa. Download and install the disk images required by OSP-D for provisioning VIM nodes.
  NOTE: The installed image archives must be copied to the /images folder in the stack user's home directory and extracted.
- Install libguestfs-tools and virt-manager. Install and start Virtual Machine Manager to improve debugging capability and to customize the VIM image.
  NOTE: It is recommended that you enable root password authentication and permit root login in the SSHD configuration. This is done using the following command:
  virt-customize -a overcloud-full.qcow2 --root-password password:<password> --run-command 'sed -i -e "s/.*PasswordAuthentication.*/PasswordAuthentication yes/" /etc/ssh/sshd_config' --run-command 'sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/" /etc/ssh/sshd_config'
- Disable chronyd. This is needed only for Hyper-Converged Ultra M models. Chronyd can be disabled using the following command:
  sudo virt-customize -a overcloud-full.qcow2 --run-command "systemctl disable chronyd"
- Configure nameserver. Specifies the nameserver IP address to be used by the VIM nodes to resolve hostnames through DNS.
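Taken together, the settings in Table 43 correspond to a fairly standard OSP-D Undercloud preparation flow. The following is a condensed, non-authoritative sketch of that flow on the Staging Server; the hostname, pool ID, and repository selection are placeholders and must be adjusted to match the OpenStack version noted above.

    # run as root on the Staging Server
    useradd stack && passwd stack                                  # non-root installation user
    echo "stack ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/stack    # passwordless sudo for stack
    chmod 0440 /etc/sudoers.d/stack
    hostnamectl set-hostname staging.example.com                   # placeholder FQDN; also add it to /etc/hosts
    subscription-manager register
    subscription-manager attach --pool=<pool-id>                   # from "subscription-manager list --available --all"
    subscription-manager repos --disable="*"                       # then enable only the repositories listed above
    subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-openstack-9-rpms
    yum install -y python-tripleoclient

    # run as the stack user
    mkdir ~/images ~/templates
    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
    # edit undercloud_hostname and local_interface in ~/undercloud.conf, then:
    openstack undercloud install

Refer to the appendix referenced above for the authoritative step-by-step procedure; this sketch only shows how the individual Table 43 settings fit together.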

Deploy the VIM

The VIM consists of the Controller nodes, Compute nodes, Ceph storage nodes (for non-hyper-converged Ultra M models), and/or OSD Compute nodes (for Hyper-converged Ultra M models). Deploying the VIM involves the process workflow depicted in Figure 16.

Figure 16 - Workflow for Installing and Configuring Ultra M VIM Nodes

NOTE: This information also assumes that the OSP-D-based VIM Orchestrator server was installed and configured as described in Deploy the VIM Orchestrator. In addition, the information in this section assumes that the VIM node types identified above were configured as described in the relevant sections of this document:

- Prepare the Controller Nodes
- Prepare the Compute Nodes
- Prepare the OSD Compute Nodes
- Prepare the Ceph Nodes

Table 44 identifies the OSP-D VIM configuration required by Ultra M. Instructions for configuring these specific settings are available in Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures.

NOTE: The parameters described in Table 44 assume configuration through the Red Hat command line interface.
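The first entry in Table 44 below concerns instackenv.json, the node manifest that OSP-D uses for introspection and provisioning. Cisco supplies the actual template with pre-populated server names; purely as an illustration of the parameters called out in the table, a single node entry in the standard OSP-D format might look like the following sketch. Every value shown is a placeholder.

    cat > /home/stack/instackenv.json <<'EOF'
    {
      "nodes": [
        {
          "name": "ultram-controller-0",
          "pm_type": "pxe_ipmitool",
          "mac": ["00:25:b5:aa:bb:cc"],
          "pm_user": "cimc-admin",
          "pm_password": "cimc-password",
          "pm_addr": "192.0.2.101"
        }
      ]
    }
    EOF

One such object is needed for every Controller, Compute, and Ceph storage node, with the mac, pm_user, pm_password, and pm_addr values taken from each server's CIMC as described in the table.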

Table 44 - OSP-D VIM Configuration Parameters

- Configure the instackenv.json file with: mac, pm_user, pm_password, pm_addr. instackenv.json defines a manifest of the server nodes that comprise the VIM. Cisco provides a version of this file to use as a template. Place this template in the stack user's home directory and modify the following parameters for all nodes (controllers, compute, and Ceph storage) based on your hardware:
  mac: The server's MAC address as identified in CIMC.
  pm_user: The name of the CIMC administrative user for the server.
  pm_password: The CIMC administrative user's password.
  pm_addr: The address of the server's management (CIMC) port.
  NOTE: Server names are pre-populated in the template provided by Cisco. It is highly recommended that you do not modify these names for your deployment.
- Copy the OpenStack Environment templates to /home/stack. Aspects of the OpenStack VIM environment are specified through custom configuration templates that override settings in the OSP-D default HEAT templates. These files are contained in a custom-templates directory that must be downloaded to the stack user's home directory (e.g. /home/stack/custom-templates).
- Configure the network.yaml template with: ExternalNetCidr, ExternalAllocationPools, ExternalInterfaceDefaultRoute, ExternalNetworkVlanID. Specifies network parameters for use within the Ultra M VIM. Descriptions for these parameters are provided in the network.yaml file.

- Configure the layout.yaml template with: NtpServer, ControllerCount, ComputeCount, CephStorageCount, OsdComputeCount, ControllerIPs, ComputeIPs, OsdComputeIPs. Specifies parameters that dictate how to deploy the VIM nodes, such as the NTP server used by the VIM, the number of each type of node, and the IP addresses used by each of those nodes on the VIM networks.
- Configure the controller-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute. Configures network interface card (NIC) parameters for the controller nodes.
- Configure the only-compute-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute. Configures NIC parameters for the compute nodes.
- Configure the compute-nics.yaml template with: ExternalNetworkVlanID, ExternalInterfaceDefaultRoute. Configures NIC parameters for the compute nodes.

Configure SR-IOV

The non-hyper-converged Ultra M models implement OSP-D version 9. This version of OSP-D requires the manual configuration of SR-IOV after the VIM has been deployed, as described in this section.

NOTE: The need to manually configure SR-IOV is specific to non-hyper-converged Ultra M models. Hyper-Converged Ultra M XS Single VNF and Multi-VNF models are based on a later version of OSP-D in which SR-IOV configuration is performed as part of the VIM deployment.
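The SR-IOV steps below assume that the Overcloud has already been deployed from the custom templates described in Table 44. As a hedged illustration only, that deployment is typically driven by the standard OSP-D overcloud command with the custom templates passed as environment files; the exact file names and options depend on the templates provided by Cisco for your release.

    # run as the stack user on the VIM Orchestrator (sketch; file names are examples)
    source ~/stackrc
    openstack overcloud deploy --templates \
      -e /home/stack/custom-templates/network.yaml \
      -e /home/stack/custom-templates/layout.yaml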

SR-IOV is configured on each of the compute and controller VIM nodes. Table 45 provides information on the parameters that must be configured on the compute nodes, and Table 46 provides information on the parameters that must be configured on the controller nodes. Instructions for configuring these specific settings are available in Appendix: Non-Hyper-Converged Ultra M VIM Deployment Procedures.

NOTE: The parameters described in Table 45 and Table 46 assume configuration through the Red Hat command line interface.

Table 45 - Compute Node SR-IOV Configuration Parameters

- Enable intel_iommu. Edit the /etc/default/grub file and append the intel_iommu=on setting to the GRUB_CMDLINE_LINUX parameter.
  NOTE: These changes must be saved to the boot grub using the following command:
  grub2-mkconfig -o /boot/grub2/grub.cfg
  NOTE: The compute node must be rebooted after executing this command.
- Configure ixgbe.conf to support a maximum of 16 VFs. Specifies that the number of virtual functions (VFs) supported by ixgbe, the base PCIe NIC driver, is 16, using the following command:
  echo "options ixgbe max_vfs=16" >> /etc/modprobe.d/ixgbe.conf
  NOTE: Back up the existing ramdisk after editing the ixgbe.conf file, using the following command:
  cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
  NOTE: The initramfs must then be rebuilt with Dracut so that it includes the updated ixgbe.conf, using the following command:
  dracut -f -v -M --install /etc/modprobe.d/ixgbe.conf

- Set the MTU for all NICs to 9000. Sets the MTU for each NIC to 9000 octets using the following commands:
  sed -i -re 's/MTU.*//g' /etc/sysconfig/network-scripts/ifcfg-<interface_name>
  echo -e 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-<interface_name>
  sed -i -re '/^$/d' /etc/sysconfig/network-scripts/ifcfg-<interface_name>
  ip link set enp10s0f0 mtu 9000
  NOTE: To persist the MTU configuration after a reboot, either disable NetworkManager or add NM_CONTROLLED=no in the interface configuration:
  service NetworkManager stop
  echo NM_CONTROLLED=no >> /etc/sysconfig/network-scripts/ifcfg-<interface_name>
  NOTE: The compute nodes must be rebooted after configuring the interface MTUs.
- Disable the default repositories and enable the following repositories: rhel-7-server-rpms, rhel-7-server-extras-rpms, rhel-7-server-openstack-9-rpms, rhel-7-server-openstack-9-director-rpms, rhel-7-server-rh-common-rpms.
- Install the sriov-nic-agent. Installs the SR-IOV NIC agent using the following command:
  sudo yum install openstack-neutron-sriov-nic-agent
- Enable NoopFirewallDriver in the /etc/neutron/plugin.ini file. The NoopFirewallDriver can be enabled using the following commands:
  openstack-config --set /etc/neutron/plugin.ini securitygroup firewall_driver neutron.agent.firewall.NoopFirewallDriver
  openstack-config --get /etc/neutron/plugin.ini securitygroup firewall_driver

- Configure neutron-server.service to use the ml2_conf_sriov.ini file. Add the --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini setting to the ExecStart parameter. Remove the --config-file /etc/neutron/plugins/ml2/sriov_agent.ini setting.
  NOTE: The OpenStack networking SR-IOV agent must be started after executing the above commands. This is done using the following commands:
  systemctl enable neutron-sriov-nic-agent.service
  systemctl start neutron-sriov-nic-agent.service
- Associate the available VFs with each physical network in the nova.conf file. The VF-to-physical-network associations can be made using the following command:
  openstack-config --set /etc/nova/nova.conf DEFAULT pci_passthrough_whitelist '[{"devname":"enp10s0f0", "physical_network":"phys_pcie1_0"},{"devname":"enp10s0f1", "physical_network":"phys_pcie1_1"},{"devname":"enp133s0f0", "physical_network":"phys_pcie4_0"},{"devname":"enp133s0f1", "physical_network":"phys_pcie4_1"}]'
  NOTE: The nova-compute service must be restarted after modifying the nova.conf file, using the following command:
  systemctl restart openstack-nova-compute

Table 46 - Controller Node SR-IOV Configuration Parameters

- Configure Neutron to support jumbo MTU sizes. Sets dhcp-option-force=26,9000 in dnsmasq-neutron.conf and global_physnet_mtu = 9000 in neutron.conf.
  NOTE: The neutron server must be restarted after making these configuration changes.

- Enable sriovnicswitch in the /etc/neutron/plugin.ini file. sriovnicswitch can be enabled using the following commands:
  openstack-config --set /etc/neutron/plugin.ini ml2 tenant_network_types vlan
  openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vlan
  openstack-config --set /etc/neutron/plugin.ini ml2 mechanism_drivers openvswitch,sriovnicswitch
  openstack-config --set /etc/neutron/plugin.ini ml2_type_vlan network_vlan_ranges datacentre:1001:1500
  NOTE: The VLAN ranges must match the ones you have planned for in Appendix: Network Definitions (Layer 2 and 3). For example, the mappings are as follows:
  datacentre = Other-Virtio
  phys_pcie1(0,1) = SR-IOV (Phys-PCIe1)
  phys_pcie4(0,1) = SR-IOV (Phys-PCIe4)
- Add the associated SR-IOV physnets and physnet network_vlan_range in the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file. SR-IOV physnets and VLAN ranges can be added using the following commands:
  openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov agent_required True
  openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini ml2_sriov supported_pci_vendor_devs 8086:10ed
  openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic physical_device_mappings phys_pcie1_0:enp10s0f0,phys_pcie1_1:enp10s0f1,phys_pcie4_0:enp133s0f0,phys_pcie4_1:enp133s0f1
  openstack-config --set /etc/neutron/plugins/ml2/ml2_conf_sriov.ini sriov_nic exclude_devices ''
- Configure the neutron-server.service to use the ml2_conf_sriov.ini file. Append --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini to the ExecStart entry.
  NOTE: The neutron-server service must be restarted to apply the configuration.

- Enable PciPassthroughFilter and AvailabilityZoneFilter under the scheduler filters in nova.conf. Allows proper scheduling of SR-IOV devices. The compute scheduler on the controller node needs to use the FilterScheduler and PciPassthroughFilter filters, configured using the following commands:
  openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_available_filters nova.scheduler.filters.all_filters
  openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter,PciPassthroughFilter,NUMATopologyFilter,SameHostFilter,DifferentHostFilter,AggregateInstanceExtraSpecsFilter
  NOTE: The nova conductor and scheduler must be restarted after executing the above commands.

Deploy the USP-Based VNF

Once the OpenStack Undercloud (VIM Orchestrator) and Overcloud (VIM) have been successfully deployed on the Ultra M hardware, you must deploy the USP-based VNF. This process is performed through the Ultra Automation Services (UAS). UAS is an automation framework consisting of a set of software modules used to automate the deployment of the USP-based VNF and related components such as the VNFM. These automation workflows are described in detail in the Ultra Service Platform Deployment Automation Guide. Refer to that document for more information.
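Before handing off to UAS for VNF deployment, it can be useful to confirm that the SR-IOV configuration applied in the previous section actually took effect on each compute node. The following optional checks use only standard Linux and systemd tooling; the interface name is one of the SR-IOV ports referenced in the configuration above.

    # on a compute node
    systemctl status neutron-sriov-nic-agent.service       # agent should be active (running)
    ip link show enp10s0f0 | grep -c "vf "                  # should report the configured number of VFs
    lspci | grep -i "virtual function"                      # VFs visible on the PCIe bus

If no VFs are listed, re-check the intel_iommu and ixgbe max_vfs settings and confirm that the node was rebooted after they were applied.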

Event and Syslog Management Within the Ultra M Solution

Hyper-Converged Ultra M solution models support a centralized monitoring and management function. This function provides a central aggregation point for events (faults and alarms) and a proxy point for syslogs generated by the different components within the solution, as identified in Table 48. The monitoring and management function runs on the OSP-D Server, which is also referred to as the Ultra M Manager Node.

Figure 17 - Ultra M Manager Functions

The software that enables this functionality is distributed as a stand-alone RPM as described in Install the Ultra M Manager RPM. (It is not packaged with the Ultra Services Platform (USP) release ISO.) Once installed, additional configuration is required based on the desired functionality, as described in the following sections:

- Syslog Proxy
- Event Aggregation

Syslog Proxy

The Ultra M Manager Node can be configured as a proxy server for syslogs received from UCS servers and/or OpenStack. As a proxy, the Ultra M Manager Node acts as a single logging collection point for syslog messages from these components and relays them to a remote collection server.

NOTES: This functionality is currently supported only with Ultra M deployments that are based on OSP 10 and that leverage the Hyper-Converged architecture.
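The relay behavior is conceptually the same as standard syslog forwarding. The Ultra M Manager has its own configuration procedure (described in this chapter), but purely as an illustration of the proxy concept, a generic rsyslog forwarding rule that accepts messages from the solution components and relays them to a remote collector would look something like the following sketch; the listener port and collector address are placeholders and this is not the Ultra M Manager's actual configuration file.

    # /etc/rsyslog.d/ultram-proxy-example.conf  (illustrative only)
    module(load="imudp")                # accept syslog from UCS servers / OpenStack nodes
    input(type="imudp" port="514")
    *.* @@203.0.113.50:514              # relay everything to the remote collection server over TCP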


More information

Cisco TEO Adapter Guide for SAP ABAP

Cisco TEO Adapter Guide for SAP ABAP Release 2.3 April 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 Text Part

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes Cisco UCS Performance Manager Release Notes First Published: July 2017 Release 2.5.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel:

More information

Cisco UCS C-Series IMC Emulator Quick Start Guide. Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9

Cisco UCS C-Series IMC Emulator Quick Start Guide. Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9 Cisco UCS C-Series IMC Emulator Quick Start Guide Cisco IMC Emulator 2 Overview 2 Setting up Cisco IMC Emulator 3 Using Cisco IMC Emulator 9 Revised: October 6, 2017, Cisco IMC Emulator Overview About

More information

Cisco Nexus 1000V for KVM OpenStack REST API Configuration Guide, Release 5.x

Cisco Nexus 1000V for KVM OpenStack REST API Configuration Guide, Release 5.x Cisco Nexus 1000V for KVM OpenStack REST API Configuration Guide, Release 5.x First Published: August 01, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA

More information

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution First Published: 2016-12-21 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Embedded Packet Capture Configuration Guide

Embedded Packet Capture Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Cisco FindIT Plugin for Kaseya Quick Start Guide

Cisco FindIT Plugin for Kaseya Quick Start Guide First Published: 2017-10-23 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Getting Started Guide for Cisco UCS E-Series Servers, Release 2.x

Getting Started Guide for Cisco UCS E-Series Servers, Release 2.x First Published: August 09, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0 Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.0 First Published: 2017-03-15 Last Modified: 2017-08-03 Summary Steps Setting up your Cisco Cloud Services Platform 2100 (Cisco CSP 2100)

More information

Cisco UCS Director PowerShell Agent Installation and Configuration Guide, Release 5.4

Cisco UCS Director PowerShell Agent Installation and Configuration Guide, Release 5.4 Cisco UCS Director PowerShell Agent Installation and Configuration Guide, Release 5.4 First Published: November 05, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes First Published: October 2014 Release 1.0.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408

More information

Smart Software Manager satellite Installation Guide

Smart Software Manager satellite Installation Guide Smart Software Manager satellite Installation Guide Published: Nov, 2017 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Recovery Guide for Cisco Digital Media Suite 5.4 Appliances

Recovery Guide for Cisco Digital Media Suite 5.4 Appliances Recovery Guide for Cisco Digital Media Suite 5.4 Appliances September 17, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

More information

Installation and Configuration Guide for Visual Voic Release 8.5

Installation and Configuration Guide for Visual Voic Release 8.5 Installation and Configuration Guide for Visual Voicemail Release 8.5 Revised October 08, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Migration and Upgrade: Frequently Asked Questions

Migration and Upgrade: Frequently Asked Questions First Published: May 01, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes Cisco UCS Performance Manager Release Notes First Published: November 2017 Release 2.5.1 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

VCS BSS/OSS Adaptor (BOA) 17.2 Release Notes

VCS BSS/OSS Adaptor (BOA) 17.2 Release Notes Last Updated: August 8th, 2017 Introduction This release includes new features in the REST and web service interfaces, in addition to bug fixes. System Requirements Requirement Minimum Recommend Comments

More information

Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0

Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0 Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0 First Published: 2017-05-30 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.5

Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.5 Cisco Cloud Services Platform 2100 Quick Start Guide, Release 2.2.5 First Published: 2018-03-30 Summary Steps Setting up your Cisco Cloud Services Platform 2100 (Cisco CSP 2100) and creating services consists

More information

Tetration Cluster Cloud Deployment Guide

Tetration Cluster Cloud Deployment Guide First Published: 2017-11-16 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Method of Procedure for HNB Gateway Configuration on Redundant Serving Nodes

Method of Procedure for HNB Gateway Configuration on Redundant Serving Nodes Method of Procedure for HNB Gateway Configuration on Redundant Serving Nodes First Published: December 19, 2014 This method of procedure (MOP) provides the HNBGW configuration on redundant Serving nodes

More information

Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs

Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs May 10, 2017 Version 1.0 Cisco Systems, Inc. Corporate Headquarters 170 West Tasman Drive San Jose, CA 95134-1706

More information

TechNote on Handling TLS Support with UCCX

TechNote on Handling TLS Support with UCCX TechNote on Handling TLS Support with UCCX Contents Introduction UCCX Functions as a Server UCCX Functions as a Client TLS 1.0 Support is being Deprecated Next Steps TLS Support Matrix Current Support

More information

Getting Started Guide for Cisco UCS E-Series Servers, Release 1.0(2) Installed in the Cisco ISR 4451-X

Getting Started Guide for Cisco UCS E-Series Servers, Release 1.0(2) Installed in the Cisco ISR 4451-X Getting Started Guide for Cisco UCS E-Series Servers, Release 1.0(2) Installed in the Cisco ISR 4451-X First Published: June 24, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

More information

Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server

Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server December 17 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA95134-1706 USA http://www.cisco.com

More information

Cisco UCS Integrated Management Controller Faults Reference Guide

Cisco UCS Integrated Management Controller Faults Reference Guide First Published: 2017-05-05 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Cisco Nexus 7000 Series NX-OS Quality of Service Command Reference

Cisco Nexus 7000 Series NX-OS Quality of Service Command Reference Cisco Nexus 7000 Series NX-OS Quality of Service Command Reference August 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

More information

Application Launcher User Guide

Application Launcher User Guide Application Launcher User Guide Version 1.0 Published: 2016-09-30 MURAL User Guide Copyright 2016, Cisco Systems, Inc. Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

CPS UDC MoP for Session Migration, Release

CPS UDC MoP for Session Migration, Release CPS UDC MoP for Session Migration, Release 13.1.0 First Published: 2017-08-18 Last Modified: 2017-08-18 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Installation and Configuration Guide for Cisco Services Ready Engine Virtualization

Installation and Configuration Guide for Cisco Services Ready Engine Virtualization Installation and Configuration Guide for Cisco Services Ready Engine Virtualization Software Release 2.0 December 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Embedded Packet Capture Configuration Guide

Embedded Packet Capture Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes Release Notes First Published: June 2015 Release 1.1.1 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Enterprise Chat and Upgrade Guide, Release 11.6(1)

Enterprise Chat and  Upgrade Guide, Release 11.6(1) Enterprise Chat and Email Upgrade Guide, Release 11.6(1) For Unified Contact Center Enterprise August 2017 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

IP Routing: ODR Configuration Guide, Cisco IOS Release 15M&T

IP Routing: ODR Configuration Guide, Cisco IOS Release 15M&T Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

SUSE OpenStack Cloud Production Deployment Architecture. Guide. Solution Guide Cloud Computing.

SUSE OpenStack Cloud Production Deployment Architecture. Guide. Solution Guide Cloud Computing. SUSE OpenStack Cloud Production Deployment Architecture Guide Solution Guide Cloud Computing Table of Contents page Introduction... 2 High Availability Configuration...6 Network Topography...8 Services

More information

Release Notes for Cisco Virtualization Experience Client 2111/2211 PCoIP Firmware Release 4.0.2

Release Notes for Cisco Virtualization Experience Client 2111/2211 PCoIP Firmware Release 4.0.2 Release Notes for Cisco Virtualization Experience Client 2111/2211 PCoIP Firmware Release 4.0.2 First Published: January 31, 2013 Last Modified: February 06, 2013 Americas Headquarters Cisco Systems, Inc.

More information

SAML SSO Okta Identity Provider 2

SAML SSO Okta Identity Provider 2 SAML SSO Okta Identity Provider SAML SSO Okta Identity Provider 2 Introduction 2 Configure Okta as Identity Provider 2 Enable SAML SSO on Unified Communications Applications 4 Test SSO on Okta 4 Revised:

More information

Cisco Meeting Management

Cisco Meeting Management Cisco Meeting Management Cisco Meeting Management 1.1 User Guide for Administrators September 19, 2018 Cisco Systems, Inc. www.cisco.com Contents 1 Introduction 4 1.1 The software 4 2 Deployment overview

More information

NetFlow Configuration Guide

NetFlow Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Cisco CSPC 2.7x. Configure CSPC Appliance via CLI. Feb 2018

Cisco CSPC 2.7x. Configure CSPC Appliance via CLI. Feb 2018 Cisco CSPC 2.7x Configure CSPC Appliance via CLI Feb 2018 2017 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 5 Contents Table of Contents 1. CONFIGURE CSPC

More information

Cisco UCS Director F5 BIG-IP Management Guide, Release 5.0

Cisco UCS Director F5 BIG-IP Management Guide, Release 5.0 First Published: July 31, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 Text

More information

Cisco Jabber IM for iphone Frequently Asked Questions

Cisco Jabber IM for iphone Frequently Asked Questions Frequently Asked Questions Cisco Jabber IM for iphone Frequently Asked Questions Frequently Asked Questions 2 Basics 2 Connectivity 3 Contacts 4 Calls 4 Instant Messaging 4 Meetings 5 Support and Feedback

More information

Cisco Unified Communications Self Care Portal User Guide, Release

Cisco Unified Communications Self Care Portal User Guide, Release Cisco Unified Communications Self Care Portal User Guide, Release 10.0.0 First Published: December 03, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x

Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x First Published: 2012-12-01 Last Modified: 2013-05-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman

More information

Cisco Connected Grid Design Suite (CGDS) - Substation Workbench Installation and Configuration Guide

Cisco Connected Grid Design Suite (CGDS) - Substation Workbench Installation and Configuration Guide Cisco Connected Grid Design Suite (CGDS) - Substation Workbench Installation and Configuration Guide Release 1.5 October, 2013 Cisco Systems, Inc. www.cisco.com Cisco has more than 200 offices worldwide.

More information

Process Automation Guide for Automation for SAP BOBJ Enterprise

Process Automation Guide for Automation for SAP BOBJ Enterprise Process Automation Guide for Automation for SAP BOBJ Enterprise Release 3.0 December 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

IP Addressing: Fragmentation and Reassembly Configuration Guide

IP Addressing: Fragmentation and Reassembly Configuration Guide First Published: December 05, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco Terminal Services (TS) Agent Guide, Version 1.1

Cisco Terminal Services (TS) Agent Guide, Version 1.1 First Published: 2017-05-03 Last Modified: 2017-10-13 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Cisco Business Edition 6000 Installation Guide, Release 10.0(1)

Cisco Business Edition 6000 Installation Guide, Release 10.0(1) First Published: January 15, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS

More information

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Downloading and Licensing. (for Stealthwatch System v6.9.1)

Downloading and Licensing. (for Stealthwatch System v6.9.1) Downloading and Licensing (for Stealthwatch System v6.9.1) Contents Contents 2 Introduction 5 Purpose 5 Audience 5 Preparation 5 Trial Licenses 5 Download and License Center 6 Contacting Support 6 Registering

More information

Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x)

Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x) Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x) First Published: May 17, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose,

More information

Cisco UCS Director UCS Central Management Guide, Release 6.5

Cisco UCS Director UCS Central Management Guide, Release 6.5 First Published: 2017-07-11 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Cisco TelePresence Management Suite 15.4

Cisco TelePresence Management Suite 15.4 Cisco TelePresence Management Suite 15.4 Software Release Notes First Published: December 2016 Cisco Systems, Inc. 1 www.cisco.com 2 Preface Change History Table 1 Software Release Notes Change History

More information

Cisco Unified Communications Manager Device Package 8.6(2)( ) Release Notes

Cisco Unified Communications Manager Device Package 8.6(2)( ) Release Notes Cisco Unified Communications Manager Device Package 8.6(2)(26169-1) Release Notes First Published: August 31, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Cisco TelePresence Server 4.2(3.72)

Cisco TelePresence Server 4.2(3.72) Cisco TelePresence Server 4.2(3.72) Release Notes October 2016 Product Documentation The following sites contain documents covering installation, initial configuration, and operation of the product: Release

More information

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 15S Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Cisco Meeting App. Cisco Meeting App (OS X) Release Notes. July 21, 2017

Cisco Meeting App. Cisco Meeting App (OS X) Release Notes. July 21, 2017 Cisco Meeting App Cisco Meeting App (OS X) 1.9.19.0 Release Notes July 21, 2017 Cisco Systems, Inc. www.cisco.com Contents 1 Introduction 1 1.1 Installation instructions 1 1.2 Using or troubleshooting

More information

Enterprise Chat and Supervisor s Guide, Release 11.5(1)

Enterprise Chat and  Supervisor s Guide, Release 11.5(1) Enterprise Chat and Email Supervisor s Guide, Release 11.5(1) For Unified Contact Center Enterprise August 2016 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA

More information

Getting Started Guide for Cisco UCS E-Series Servers and the Cisco UCS E-Series Network Compute Engine, Release 3.1.1

Getting Started Guide for Cisco UCS E-Series Servers and the Cisco UCS E-Series Network Compute Engine, Release 3.1.1 Getting Started Guide for Cisco UCS E-Series Servers and the Cisco UCS E-Series Network Compute Engine, First Published: July 06, 2016 Last Modified: July 06, 2016 Americas Headquarters Cisco Systems,

More information

Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide

Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS

More information

Cisco CSPC 2.7.x. Quick Start Guide. Feb CSPC Quick Start Guide

Cisco CSPC 2.7.x. Quick Start Guide. Feb CSPC Quick Start Guide CSPC Quick Start Guide Cisco CSPC 2.7.x Quick Start Guide Feb 2018 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 17 Contents Table of Contents 1. INTRODUCTION

More information

Cisco Business Edition 7000 Installation Guide, Release 11.5

Cisco Business Edition 7000 Installation Guide, Release 11.5 First Published: August 08, 2016 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco Terminal Services (TS) Agent Guide, Version 1.0

Cisco Terminal Services (TS) Agent Guide, Version 1.0 First Published: 2016-08-29 Last Modified: 2018-01-30 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Cisco StadiumVision Management Dashboard Monitored Services Guide

Cisco StadiumVision Management Dashboard Monitored Services Guide Cisco StadiumVision Management Dashboard Monitored Services Guide Release 2.3 May 2011 Corporate Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Wired Network Summary Data Overview

Wired Network Summary Data Overview Wired Network Summary Data Overview Cisco Prime Infrastructure 3.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE.

More information

Cisco Dynamic Fabric Automation Solution Guide

Cisco Dynamic Fabric Automation Solution Guide First Published: January 31, 2014 Last Modified: February 25, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800

More information

Cisco TelePresence TelePresence Server MSE 8710

Cisco TelePresence TelePresence Server MSE 8710 Cisco TelePresence TelePresence Server MSE 8710 Installation Guide 61-0025-05 August 2013 Contents General information 3 About the Cisco TelePresence Server MSE 8710 3 Port and LED locations 3 LED behavior

More information

Deploying VNFs Using AutoVNF

Deploying VNFs Using AutoVNF This chapter describes the following topics: Introduction, page 1 VNF Deployment Automation Overview, page 1 Pre-VNF Installation Verification, page 5 Deploy the USP-based VNF, page 5 Upgrading/Redeploying

More information

Smart Software Manager satellite Installation Guide

Smart Software Manager satellite Installation Guide Smart Software Manager satellite Installation Guide Published: Jul, 2017 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Cisco TelePresence VCS CE1000 Appliance

Cisco TelePresence VCS CE1000 Appliance Cisco TelePresence VCS CE1000 Appliance Installation Guide X8.2 or later D15056.02 June 2014 Contents Introduction 3 About this document 3 About the Cisco VCS appliance 3 Related documents 4 Training 4

More information

Cisco Terminal Services (TS) Agent Guide, Version 1.1

Cisco Terminal Services (TS) Agent Guide, Version 1.1 First Published: 2017-05-03 Last Modified: 2017-12-19 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

Deploying Devices. Cisco Prime Infrastructure 3.1. Job Aid

Deploying Devices. Cisco Prime Infrastructure 3.1. Job Aid Deploying Devices Cisco Prime Infrastructure 3.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION,

More information

Backup and Restore Guide for Cisco Unified Communications Domain Manager 8.1.3

Backup and Restore Guide for Cisco Unified Communications Domain Manager 8.1.3 Communications Domain Manager 8.1.3 First Published: January 29, 2014 Last Modified: January 29, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information