High Availability for Cisco RAN Management Systems


First Published: Last Modified:

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA USA
Tel: NETS (6387)
Fax:

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only.
Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R) Cisco Systems, Inc. All rights reserved.

CONTENTS

CHAPTER 1 Preface 1
    Objectives 1
    Audience 1
    Conventions 2
    Related Documentation 2
    Obtaining Documentation and Submitting a Service Request 2

CHAPTER 2 High Availability for Cisco RAN Management Systems 5
    Information About High Availability 5
    Recommended Requirements for High Availability 6
    Guidelines and Limitations 6
    High Availability Status 7

CHAPTER 3 Configuring High Availability for the Central Node 9
    Prerequisites 9
    Creating a High Availability Cluster 10
    Adding Hosts to the High Availability Cluster 10
    Adding Storage to Hosts 11
    Adding Network Redundancy for Hosts and Configuring vMotion 11
    Creating vSphere Distributed Switch With Two Uplinks 12
    Adding Port Groups for vMotion and VLANs 12
    Adding Hosts to Newly Created vSphere Distributed Switch 13
    Configure vMotion Ports 13
    Creating Redundancy for Uplinks 14
    Installing the OVA 14
    Viewing the Snapshot of Central Node VM 15
    Updating Cluster Configuration 15
    Testing High Availability on the Central Node 16
    Testing High Availability Failover 17
    Testing the vSphere High Availability Reset 17
    Troubleshooting 18
    Configuring Cold Standby for the Central Node 19
    Prerequisites 19
    Installing Cold Standby Central VM For RMS Distributed Setup 20
    Configuring Additional Setup 21
    Starting Backups on Primary Central VM 29
    Transferring Backups to Cold Standby Central VM 30
    Restoring Cold Standby Central VM Using Backups 30
    Restoring the INSEE-SAC Information on New Central Server 34
    Switching Serving VMs to Point to Cold Standby Central VM 34
    Switching Upload VMs to Point to Cold Standby Central VM 37
    Enabling Cold Standby Central VM 38
    Enabling Primary Site After Restoration 39
    Transferring Backups From Cold Standby Central VM to Primary Central VM 40
    Restoring Primary Central VM Using Backups 40
    Switching Serving VMs to Point to Primary Central VM 43

CHAPTER 4 Configuring High Availability for VMware vCenter in RMS Distributed Setup 47
    Prerequisites 47
    Configuring Hot Standby for vCenter VM 48
    Testing Hot Standby for vCenter VM 48
    Configuring Cold Standby for vCenter VM 49
    Testing Cold Standby for vCenter VM 49
    Recovering the Primary of vCenter VM 50
    Backing Up vCenter VM Database 50
    Restoring vCenter VM Database 52

CHAPTER 5 Configuring High Availability for VMware vCenter in RMS All-In-One Setup 55
    Prerequisites 55
    Guidelines and Limitations 56
    Creating a High Availability Cluster 56
    Adding Hosts to the High Availability Cluster 56
    Adding NFS Datastore to the Host 56
    Adding Network Redundancy for Hosts and Configuring vMotion 57
    Installing the OVA 57
    Updating Cluster Configuration 57
    Migrating Central Node Datastore to NFS 57
    Testing High Availability on the Central Node and vCenter VM 58
    Testing High Availability Failover 58
    Testing the vSphere High Availability Reset 58
    Testing Accidental Failure on a Host 59

CHAPTER 6 Configuring High Availability for the PMG DB 61
    Prerequisites 62
    Configuration Workflow 63
    Configuring the Standby Server Setup 64
    Configuring the Primary Server 64
    Logging 64
    Initializing Parameters 65
    Setting Up the Service 67
    Starting the Listener 67
    Backing Up the Primary Database 68
    Creating Standby Control File and PFILE 69
    Configuring the Hot Standby Server 70
    Copying Files 71
    Starting the Listener on Standby Server 72
    Restoring Backup 73
    Creating Redo Logs 74
    Starting the Apply Process 75
    Checking Status 75
    Setting Up the Oracle Data Guard Broker 76
    Verifying Log Synchronization on Standby Server 79
    Enabling Flashback 80
    Configuring Hot Standby for PMG DB 81
    Setting Up the Hot Standby 81
    Enabling Failover 82
    Checking Status 84
    Initializing Parameters for Standby Server 84
    Setting Up the Service for Standby Server 85
    Backing Up the Primary Database 85
    Creating the Standby Control File and PFILE for Standby Server 86
    Configuring the Standby Server 87
    Copying Files to the Standby Server 88
    Starting the Listener on Standby Server 88
    Restoring Backup on the Standby Server 89
    Creating Redo Logs for Standby Server 91
    Starting the Apply Process on Standby Server 92
    Configuring the Data Guard Broker on Standby Server 92
    Verifying Log Synchronization on Standby Server 93
    Enabling Flashback on Standby Server 94
    Configuring Cold Standby 94
    Configuring Primary With Only Cold Standby 94
    Configuring Primary With Hot and Cold Standby 94
    Testing Hot Standby 95
    Testing Failover Process 95
    Testing Failover From Primary Database to Standby Database 95
    Testing Failover Revert From New Primary to Original Primary Database 99
    Testing Switchover Process 102
    Testing Switchover From Primary to Standby Database 102
    Testing Switchover Revert From New Primary to Original Primary Database 104
    Testing Cold Standby 105
    Testing Site Failure 106
    Recovering Original Primary After Site Failure 108
    Converting a Failed Primary into a Standby Database Using RMAN Backups 108
    Converting Failed Primary Into a Standby Database Using RMAN Backups 112
    Rolling Back and Cleaning Up Standby and Primary Configurations 115
    Removing Data Guard Broker Configuration 115
    Removing Configuration Files from Primary Server 116
    Removing Configuration Files From Standby Server 117
    Removing Configuration Files From Additional Standby Server 119
    Removing the Standby Database 120
    Removing the Additional Standby Database 121
    Cleaning Up the Primary Database 121
    Cleaning Up the Redo Log Files 121
    Cleaning Up Initialization Parameters 122
    Using the Backup Pfile 122
    Using SQL Statements 123
    Verifying the Database 124
    Recreating Standby Servers 124
    Deleting the Primary Database 124
    Troubleshooting Data Guard on PMG DB 125
    Reverting Original Primary Database After Site Failure 125
    Verifying the Data Guard Broker Configuration 133
    Reverting From Disk Space Issues 133

CHAPTER 1 Preface

This section describes the objectives, audience, organization, and conventions of the Cisco RAN Management System (RMS) High Availability document.

    Objectives, page 1
    Audience, page 1
    Conventions, page 2
    Related Documentation, page 2
    Obtaining Documentation and Submitting a Service Request, page 2

Objectives

This document provides an overview of the Cisco RMS High Availability feature and describes how to configure and troubleshoot high availability on the Central node, VMware vCenter, and the PMG DB.

Audience

The primary audience for this guide includes network operations personnel and system administrators. This guide assumes that you are familiar with the following products and topics:

    Basic internetworking terminology and concepts
    Network topology and protocols
    Microsoft Windows 2000, Windows XP, Windows Vista, and Windows 7
    Linux administration
    Red Hat Enterprise Linux Edition v6.7
    VMware vSphere Standard Edition v6.0

Conventions

This document uses the following conventions:

    Convention          Description
    bold font           Commands, keywords, and user-entered text appear in bold font.
    Italic font         Document titles, new or emphasized terms, and arguments for which you supply values are in italic font.
    Courier font        Terminal sessions and information the system displays appear in courier font.
    Bold Courier font   Bold Courier font indicates text that the user must enter.
    [x]                 Elements in square brackets are optional.
    string              A nonquoted set of characters. Do not use quotation marks around the string or the string will include the quotation marks.
    < >                 Nonprinting characters such as passwords are in angle brackets.
    [ ]                 Default responses to system prompts are in square brackets.
    !, #                An exclamation point (!) or a pound sign (#) at the beginning of a line of code indicates a comment line.

Related Documentation

For additional information about the Cisco RAN Management Systems, refer to the following documents:

    Cisco RAN Management System Installation Guide
    Cisco RAN Management System Administration Guide
    Cisco RAN Management System API Guide
    Cisco RAN Management System SNMP/MIB Guide
    Cisco RAN Management System Release Notes

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What's New in Cisco Product Documentation.

To receive new and revised Cisco technical content directly to your desktop, you can subscribe to the What's New in Cisco Product Documentation RSS feed. RSS feeds are a free service.


CHAPTER 2 High Availability for Cisco RAN Management Systems

This chapter provides an overview of the Cisco RAN Management Systems (RMS) high availability feature and covers the following sections:

Note: The high availability feature is applicable from Cisco RMS, Release 4.1 onwards.

    Information About High Availability, page 5
    Recommended Requirements for High Availability, page 6
    Guidelines and Limitations, page 6
    High Availability Status, page 7

Information About High Availability

The high availability feature for Cisco RMS is designed to ensure continued operation of Cisco RMS sites, with minimal downtime, in the case of failures such as a network outage or a node reboot. High availability provides a redundant setup that is activated automatically or manually when an active Central node or Provisioning & Management Gateway (PMG) database (DB) fails at an RMS site. This setup ensures that the Central node and PMG DB are connected at all times. For example, if there are 10,000 connected access points (APs) that must continue to function at all times despite network failures, a backup or standby node is created so that if one of the nodes goes down, the backup node takes over the responsibility of the Central node. This redundant setup is implemented for the Central node and PMG DB in two ways: hot standby and cold standby. In a high availability setup, when the host goes down, AP services are seamlessly transitioned; however, provisioning services are impacted until the hot standby host comes up.

High availability also protects the data of the connected devices in an external storage location that can be accessed by redundant hosts during a disaster recovery procedure. To implement this, an external storage system, a SAN, is configured. Instead of saving data on local disks within the hosts, all the data is saved directly on the SAN, which has better capacity and can be accessed across the network; this is why external storage is used. The SAN also provides high availability for the data itself.

High availability ensures minimal downtime for active APs during any unforeseen circumstances. If a Central node goes down, the virtual machine (VM) goes down, so some provisioning service downtime is expected; however, because of the redundant configuration, more than one Serving node exists and the AP service stays up. When a hot standby node is available to take over, the provisioning service downtime is as little as 5 minutes (for up to 10,000 APs), during which the failed node goes down and the hot standby node comes up and takes over. This minimal downtime is achievable because in a hot standby configuration the redundancy process is automated. In a cold standby configuration, the expected downtime is greater, for example, a day, because the redundancy process is manual and requires certain operations to be performed to switch over to the cold standby.

To configure high availability for the Central node, see Configuring High Availability for the Central Node, on page 9. To configure high availability for VMware vCenter, see Configuring High Availability for VMware vCenter in RMS Distributed Setup, on page 47. To configure high availability for the PMG DB, see Configuring High Availability for the PMG DB, on page 61. To configure high availability for VMware vCenter in an All-In-One RMS setup, see Configuring High Availability for VMware vCenter in RMS All-In-One Setup, on page 55.

Recommended Requirements for High Availability

Hardware

Two Cisco UCS 5108/UCS 240 servers containing eight blades, with eight VMs installed for hot and cold standby:
    1 active Serving node and 1 active ULS running on host 1/blade 1/chassis 1
    1 active Central node running on host 2/blade 2/chassis 1
    1 empty host to be used for the hot standby cluster running on host 3/blade 3/chassis 1
    1 cold standby Central node running on host 2/blade 2/chassis 2

If the network setup has a PMG DB, the following hardware is also required:

    1 active PMG DB running on host 4/blade 4/chassis 1 and 1 hot standby PMG DB running on host 5/blade 5/chassis 1
    For PMG DB cold standby: 1 cold standby PMG DB running on host 3/blade 3/chassis 2

Software

    VMware vSphere client software and vCenter server installed.
    SAN storage availability and configuration on all connected RMS hosts.
    All the required nodes and VMs for the high availability setup deployed and powered on.

Guidelines and Limitations

    To support hot standby for an RMS server VM, the standby hardware must be identical to that of the primary server VM.

    A reliable, high-speed wired network must exist between the primary RMS site and the disaster recovery standby site.
    The primary and standby RMS server VMs must be running the same RMS software release.
    All server VMs should be reachable at both sites.
    A highly reliable network must exist between the two RMS sites.
    The primary site and the disaster recovery host site are not required to share the same subnet; they can be geographically separated.
    The ports over which the RMS VMs communicate should be open (not blocked by network firewalls, application firewalls, gateways, and so on).
    When the host containing the Central VM fails and the hot standby host is available, the AP service for all existing APs stays up, but the provisioning service is impacted until the hot standby VM on the standby host in the cluster comes up and is ready to accept provisioning messages. This provisioning downtime varies with the AP database size; for 10,000 APs it is around 5 minutes.
    When the Central VM fails, it is restarted by the hot standby mechanism and has the same provisioning downtime as a host failure.
    If both the primary and hot standby Central hosts fail, the cold standby host on the disaster recovery site can be brought up using backups from the primary site. The AP service stays up as long as a Serving node VM is up on either site. The provisioning service takes longer to recover because the cold standby Central VM needs to be restored from backups, and APs provisioned after the backup was taken need to be re-provisioned from the customer Operations Support Systems (OSS). When the cold standby Central node takes over, the Serving node VMs, Upload node VMs, and customer OSS need to switch to using the new active Central node VM. If a PMG DB is part of the deployment, the Central node VM can be configured to point to the PMG DB VM on the primary site or on the disaster recovery site, based on whichever is available.
    Backups must be transferred regularly to the cold standby disaster recovery site; daily backups, at minimum, are recommended. An additional backup on an external storage server is also recommended.
    If the complete primary site goes down, the redundant Serving and Upload VMs on the disaster recovery site cover the AP service; the provisioning service recovers only after the cold standby Central VM is made operational and the Serving node VMs are synchronized with it.

High Availability Status

The high availability status can be viewed in the VMware vCenter client. To view the status of a cluster, log in to vCenter. In the navigation pane, expand Home > Inventory > Hosts and Clusters, select the cluster, and click the Summary tab. The vSphere HA window should show Admission Control, Host Monitoring, and VM Monitoring as enabled. In the vSphere HA window, the cluster status should show the following:

    Hosts connected to master as 1; this should be the slave host.
    The total number of VMs in the cluster, under Protected status.
    Heartbeat datastores should show at least two datastores.
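The guideline above that the ports over which the RMS VMs communicate must be open can be spot-checked with a small script. This is an illustrative sketch, not part of the RMS product; the host names and port numbers you pass in would come from your own CIQ:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_reachability(targets):
    """targets: iterable of (host, port) pairs, e.g. taken from the CIQ.
    Returns a dict mapping each (host, port) pair to True/False."""
    return {(host, port): port_open(host, port) for host, port in targets}
```

Running the check from each site against the peer site's VMs makes any blocking firewall or gateway visible: every False entry names the pair that cannot communicate.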


CHAPTER 3 Configuring High Availability for the Central Node

This chapter describes the process of configuring high availability for the Central node. It provides the prerequisites and procedures required to add vSphere high availability clusters and associate hosts and VMs with them. It also includes the procedures to add network redundancy for supporting vSphere high availability. Follow these procedures to configure high availability for the Central node.

    Prerequisites, page 9
    Creating a High Availability Cluster, page 10
    Adding Hosts to the High Availability Cluster, page 10
    Adding Storage to Hosts, page 11
    Adding Network Redundancy for Hosts and Configuring vMotion, page 11
    Installing the OVA, page 14
    Viewing the Snapshot of Central Node VM, page 15
    Updating Cluster Configuration, page 15
    Testing High Availability on the Central Node, page 16
    Configuring Cold Standby for the Central Node, page 19

Prerequisites

    The network CIQ should be available, and the VLANs and IPs for hosts, VMs, and vMotion should be known.
    VMware vCenter should be installed and loaded with ESXi 6.0.
    Hosts should be installed with VMware ESXi version 6.0 as mentioned in the install guide (see Cisco RAN Management System Installation Guide), and their login IP, FQDN, and login credentials should be available.
    An appropriate VMware license key should be added to the hosts.
    A VMware vSphere distributed switch with the VLAN port groups to be used by RMS, as per the CIQ, should be created.
    Users of these procedures should have knowledge of basic use of the VMware vSphere client and have login access to it.

Creating a High Availability Cluster

Create a high availability cluster to which a set of hosts and related properties are added, along with the vSphere high availability properties. The hosts that are part of the high availability cluster provide hot standby for the Central node VM.

Step 1  Log in to the vSphere client.
Step 2  Navigate to Home > Inventory > Hosts and Clusters and select a data center.
Step 3  In the Getting Started tab, click Create a cluster to open the New Cluster Wizard.
Step 4  In the Cluster Features screen, enter the name of the cluster in the Name field, for example, HA-cluster-central. Check the Turn On vSphere HA and Turn On vSphere DRS check boxes and click Next.
Step 5  In the vSphere DRS screen, select the Fully automated option in the Automation level area. Click Next.
Step 6  In the Power Management screen, select the Off option, because there are two hosts and both hosts are required to be always powered on for high availability. Click Next.
Step 7  In the vSphere HA screen:
        a) Check the Enable Host Monitoring check box in the Host Monitoring Status area.
        b) Select the Enable: Disallow VM power on operations that violate availability constraints option in the Admission Control area.
        c) Select the Host failures the cluster tolerates option and define a value in the Admission Control Policy area, for example, 1.
        d) Click Next.
Step 8  In the Virtual Machine Options screen, select Medium from the VM restart priority drop-down list. Select Leave powered on from the Host Isolation response drop-down list; this lets the VM stay powered on during network isolation. Click Next.
Step 9  In the VM Monitoring screen, select VM and Application Monitoring from the VM Monitoring drop-down list.
Then slide the Monitoring sensitivity bar to the right to set it to High; this ensures faster failure detection. Click Next.
Step 10 In the VMware EVC screen, select the Disable EVC option. EVC can be disabled because two compatible hosts of the same type will be added for cluster creation. Click Next.
Step 11 In the Virtual Machine Swapfile Location screen, select the Store the swapfile in the same directory as the virtual machine (recommended) option. Click Next.
Step 12 In the Ready to Complete screen, the options selected for the cluster are displayed. Click Finish to complete the cluster creation.

Adding Hosts to the High Availability Cluster

Step 1  Log in to the vSphere client. To add hosts, proceed to Step 2 or Step 3 based on your network setup.

Step 2  Add hosts that are not present in the inventory to the high availability cluster:
        a) In the navigation pane, expand Home > Inventory > Hosts and Clusters and select a cluster.
        b) In the Getting Started tab, click Add a host to open the Add Host Wizard.
        c) In the Specify Connection Settings screen, enter the host name in the Host field, for example, blrms cisco.com. Then enter the host username and password in the Username and Password fields. Click Next.
        d) In the Assign License screen, select the Assign an existing license key to this host option to select the VMware vSphere 5 Enterprise license from the list of available product license keys, for example, 7M4A5-48KEH-08J4M-0JLH0-297M4. Click Next.
        e) In the Configure Lockdown Mode screen, do not select any option for lockdown. Click Next.
        f) In the Choose the Destination Resource Pool screen, select the Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted option. Click Next.
        g) In the Ready to Complete screen, the options selected for the host are displayed. Click Finish to complete adding the host.
        h) Repeat steps 2a to 2g to add all such hosts to the cluster.
Step 3  Add hosts that are already present in the inventory to the high availability cluster:
        a) In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the host to be added to the cluster. Right-click the host and click Enter Maintenance Mode. In the Confirm Maintenance Mode dialog that is displayed, click Yes.
        b) Click the host that has entered maintenance mode and drag it onto the newly created cluster. When you drag and drop the host, the Add a host link is displayed. Click it to open the Add Host Wizard.
        c) In the Choose the Destination Resource Pool screen, select the Put all of this host's virtual machines in the cluster's root resource pool.
Resource pools currently present on the host will be deleted option. Click Next.
        d) In the Ready to Complete screen, the resource destination is displayed. Click Finish to complete adding the host.
        e) In the navigation pane, expand Home > Inventory > Hosts and Clusters, select the host that is in maintenance mode, right-click, and click Exit Maintenance Mode.
        f) Repeat steps 3a to 3e to add all such hosts to the cluster.

Adding Storage to Hosts

To add a shared SAN datastore to individual hosts, see the Configuring SAN for Cisco RMS section in the Cisco RAN Management System Installation Guide.

Adding Network Redundancy for Hosts and Configuring vMotion

To add network redundancy for hosts and configure vMotion, perform the following procedures:

    Creating vSphere Distributed Switch With Two Uplinks, on page 12
    Adding Port Groups for vMotion and VLANs, on page 12
    Adding Hosts to Newly Created vSphere Distributed Switch, on page 13
    Configure vMotion Ports, on page 13
    Creating Redundancy for Uplinks, on page 14

Creating vSphere Distributed Switch With Two Uplinks

Step 1  Log in to the vSphere client. If the vSphere distributed switch is not already created, proceed to Step 2; otherwise, go to Adding Port Groups for vMotion and VLANs, on page 12.
Step 2  In the navigation pane, expand Home > Inventory > Networking and select the datacenter.
Step 3  Right-click the datacenter and click New vSphere Distributed Switch to open the Create vSphere Distributed Switch wizard.
Step 4  In the Select vSphere Distributed Switch Version screen, select the vSphere Distributed Switch Version 6.0.0 option. Click Next.
Step 5  In the General Properties screen, enter the name for the vSphere distributed switch in the Name field, for example, RMS-VDS. Enter the number of uplink ports as 2; this provides the redundancy for the management and vMotion networks, which are defined later. Click Next.
Step 6  In the Add Hosts and Physical Adapters screen, select the Add later option to add the hosts and adapters later. Click Next.
Step 7  In the Ready to Complete screen, uncheck the Automatically create a default port group check box. Click Finish.

Adding Port Groups for vMotion and VLANs

Step 1  Log in to the vSphere client.
Step 2  Add a port group for vMotion and for each VLAN used by RMS. In the navigation pane, expand Home > Inventory > Networking and select the vSphere distributed switch. Right-click the switch and click New Port Group to open the Create Distributed Port Group wizard. Each VLAN required for RMS hosts and VMs should have a separate port group; a separate port group for vMotion should also be added.
Step 3  In the Properties screen, enter the port group name in the Name field to associate it with a VLAN used by RMS, for example, VLAN 11. Enter the number of ports as 128.
Select the VLAN type from the drop-down list, for example, VLAN. Enter the VLAN ID, for example, 11. Click Next.
Step 4  In the Ready to Complete screen, click Finish.
Step 5  Repeat Steps 2 to 4 to create separate port groups for each VLAN as per the network CIQ; all port group names should be unique. For vMotion, choose the same VLAN that is chosen in the port group for the management VLAN.
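The port-group rules in the procedure above (one group per RMS VLAN, 128 ports each, unique names, plus a vMotion group on the management VLAN) can be sketched as a small planning helper. This is illustrative only; the naming scheme and VLAN IDs below are hypothetical examples, not values mandated by this guide:

```python
def plan_port_groups(vlan_ids, mgmt_vlan, num_ports=128):
    """Plan distributed port groups: one per RMS VLAN plus a vMotion group
    on the management VLAN. Returns (name, vlan_id, num_ports) tuples and
    rejects duplicate names, mirroring the uniqueness rule in Step 5."""
    groups = [(f"VLAN {vid}", vid, num_ports) for vid in vlan_ids]
    # vMotion shares the management VLAN, per the procedure above.
    groups.append((f"vMotion-VLAN {mgmt_vlan}", mgmt_vlan, num_ports))
    names = [name for name, _, _ in groups]
    if len(names) != len(set(names)):
        raise ValueError("port group names must be unique")
    return groups
```

Feeding the CIQ's VLAN list through such a helper before clicking through the wizard catches duplicate names and a forgotten vMotion group early.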

Adding Hosts to Newly Created vSphere Distributed Switch

Step 1  Log in to the vSphere client.
Step 2  To add hosts to the newly created vSphere distributed switch, in the navigation pane, expand Home > Inventory > Networking and select the vSphere distributed switch. Right-click the switch and click Add a host to open the Add Host to vSphere Distributed Switch wizard.
Step 3  In the Select Hosts and Physical Adapters screen, select the host from the newly created cluster to be associated with the vSphere distributed switch, and select the available VMNICs to be associated with the vSphere distributed switch. Select two VMNICs such that one is used for the management port group adapter and the other for the vMotion port group adapter. For example, blrms cisco.com and vmnic0. Click Next.
Step 4  In the Network Connectivity screen, click Next.
Step 5  In the Virtual Machine Networking screen, leave the default options as-is and click Next.
Step 6  In the Ready to Complete screen, the settings of the new vSphere distributed switch are displayed. Click Finish.
Step 7  Repeat Steps 2 to 6 to add all the remaining hosts in the cluster to the vSphere distributed switch.

Configure vMotion Ports

Step 1  Log in to the vSphere client.
Step 2  To configure the vMotion ports on the hosts, in the navigation pane, expand Home > Inventory > Hosts and Clusters and select the host where vMotion needs to be enabled.
Step 3  Select the Configuration tab and click the Networking option in the Hardware list provided on the screen.
Step 4  Click the vSphere Distributed Switch option in the View area. The details of the distributed switch are displayed on the screen.
Step 5  Click Manage Virtual Adapters to open the Manage Virtual Adapters dialog box.
Step 6  In the Manage Virtual Adapters dialog box, click Add to open the Add Virtual Adapter wizard.
Step 7  In the Creation Type screen, select the New virtual adapter option. Click Next.
Step 8  In the Virtual Adapter Type screen, select the VMkernel option. Click Next.
Step 9  In the Connection Settings screen:
        a) Select the newly created port group from the Select port groups drop-down list.
        b) Check the Use the virtual adapter for vmotion check box if the virtual adapter is being used for vMotion. If the virtual adapter is being used for management (and not for vMotion), uncheck this check box.
        c) Click Next.
Step 10 In the VMkernel - IP Connection Setting screen, enter the IP address manually in the IP address field; this IP address should be based on the IP assigned in the CIQ. Enter the subnet in the Subnet field. Click Next. In the Ready to Complete screen, the virtual adapter configuration is displayed. Click Finish. This returns you to the Manage Virtual Adapters dialog box. Click Close to exit this window.
Step 11 Repeat Steps 2 to 10 to configure the vMotion ports on the other host.
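The VMkernel IPs entered above are typed manually and must match the CIQ. A quick sanity check that the vMotion IPs are unique and fall inside the planned subnet can be sketched with Python's ipaddress module; the addresses and subnet below are hypothetical placeholders:

```python
import ipaddress

def validate_vmotion_ips(ips, subnet):
    """Check that every vMotion VMkernel IP is unique and lies inside the
    planned subnet from the CIQ. Raises ValueError on any violation."""
    net = ipaddress.ip_network(subnet)
    if len(set(ips)) != len(ips):
        raise ValueError("duplicate vMotion IPs")
    outside = [ip for ip in ips if ipaddress.ip_address(ip) not in net]
    if outside:
        raise ValueError(f"IPs outside {subnet}: {outside}")
    return True
```

Running this against the per-host vMotion IPs before configuring the adapters catches the two typo classes that break vMotion silently: a duplicated address and an address on the wrong subnet.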

Creating Redundancy for Uplinks

Step 1  Log in to the vSphere client.
Step 2  To create redundancy for uplinks, specify one uplink as primary for the port group used for vMotion and a second uplink for the port groups associated with the management and other RMS VLANs. To implement this, in the navigation pane, expand Home > Inventory > Networking. Select the vSphere distributed port group, right-click the port group, and click Edit Settings to open the Vmotion Settings dialog box.
Step 3  Click Teaming and Failover in the pane. Select the uplink port that you want to set as the standby port and click Move Up such that it is under Standby Uplinks. For example, Uplink2.
Step 4  Select the other uplink port that you want to set as the active port and click Move Up such that it is under Active Uplinks.
Step 5  Repeat Steps 3 and 4 for the other port groups such that the uplink that is active for the port group associated with vMotion is standby for the other port groups, including the management VLAN port group, and the uplink that is standby for the port group associated with vMotion is active for the other port groups. This provides redundancy for the management and vMotion networks and high availability for the network. The vSphere distributed switch with this redundancy is displayed in the Configuration tab of the host. Similarly, the association is visible in the Configuration tab of the vSphere Distributed Switch in Networking.
Step 6  Repeat Steps 2 to 5 to create redundancy for uplinks on the other host.

Installing the OVA

If you have completed the OVA installation, proceed to Updating Cluster Configuration, on page 15. If you have not completed the OVA installation, see the Preparing the OVA Descriptor Files section in the Cisco RAN Management System Installation Guide.
Note If you are configuring high availability in an all-in-one RMS setup: Deploy the setup on either of the two hosts and ensure that only the Central node has the NFS data store. Complete this procedure, "Installing the OVA", only after you have completed the "All-In-One Redundant Deployment" procedure provided in the Cisco RAN Management System Installation Guide.
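OVA deployments such as this one are typically driven with the VMware OVF Tool from a command line; a sketch of the invocation is shown below. Every value here is a placeholder (datastore, VM name, descriptor path, and the vi:// target locator), and the exact descriptor preparation is covered in the Installation Guide referenced above.

```shell
# Sketch only: deploy a Central node OVA with ovftool.
# <DATASTORE>, <VM_NAME>, the descriptor path, and the vi:// locator
# are placeholders; substitute values from your prepared descriptor file.
ovftool --acceptAllEulas \
        --datastore=<DATASTORE> \
        --name=<VM_NAME> \
        <central-node-descriptor>.ovf \
        'vi://<user>@<vcenter-or-host>/'
```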

Viewing the Snapshot of Central Node VM

Below is the snapshot of the Central VM inside a cluster and associated with multiple data stores, as covered in the Configuring SAN section in the Cisco RAN Management System Installation Guide.

Updating Cluster Configuration

Step 1  Log in to the vSphere client.
Step 2  In the navigation pane, expand Home > Inventory > Hosts & Clusters and select the cluster.
Step 3  Right-click the cluster and click Edit Settings to open the HA-cluster-central Settings dialog box.
Step 4  In the HA-cluster-central Settings dialog box, select the vSphere HA option from the pane. Set the properties of the vSphere HA:

a) In the Host Monitoring Status area, check the Enable Host Monitoring check box.
b) In the Admission Control area, select the Enable: Disallow VM power on operations that violate availability constraints option.
c) In the Admission Control Policy area, enter 1 in the Host failures the cluster tolerates field.
d) Click Advanced Options to open the Advanced Options (vSphere HA) dialog box.
e) Provide the recommended value for each advanced configuration option to minimize downtime:
   das.config.fdm.policy.unknownstatemonitorperiod: 30
   das.failureinterval: 30
   das.iostatsinterval: 10
   das.vmfailoverenabled: true
f) Click OK.
Step 5  Click the VM Monitoring option in the pane. In the Virtual Machines Settings pane, select exclude from the Application Monitoring list to exclude application monitoring for VMs.
Step 6  Click the Datastore Heartbeating option in the pane. In the Datastores available for Heartbeat. Select those that you prefer area, check the check boxes of a minimum of two data stores for heartbeat monitoring. For example, DATA and backup1.
Step 7  Select the vSphere DRS option in the pane and click Virtual Machine Options. Select all VMs in the Virtual Machines Settings pane, including the vCenter VM, for automatic failover. Check the Enable individual virtual machine automation levels check box.
Step 8  Click OK to save the changes in the HA-cluster Settings dialog box.
Step 9  In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the host. Right-click the host and click Reconfigure for vSphere HA to apply changes made to the cluster settings or host, or both. Repeat this step for all hosts present in the cluster.
Testing High Availability on the Central Node

To test high availability on the Central node, follow these procedures:
- Testing High Availability Failover, on page 17
- Testing the vSphere High Availability Reset, on page 17
- Troubleshooting, on page 18

Testing High Availability Failover

Step 1  Log in to the vSphere client.
Step 2  In the navigation pane, expand Home > Inventory > Hosts and Clusters, select the individual hosts under the new cluster, select the Virtual Machines tab, and verify the individual host settings.
Step 3  Select one of the hosts in the navigation pane, right-click, and click Reboot.
Step 4  After 5 minutes, select the other host and click the Virtual Machines tab. The Virtual Machines tab should display the VM from the rebooted host under the current host. The host that was rebooted should not have any VM under it.

Testing the vSphere High Availability Reset

Before You Begin
Perform basic failover tests to validate that the vSphere high availability cluster is functioning as expected for a VM failure.

Step 1  Log in to the VM, enter the sudo mode, and trigger its failure. Establish an ssh connection to the VM.
ssh
The system responds by connecting the user to the SantaClara RDU server. Use the sudo command to gain access to the root user account.
Step 2  sudo su -
The system responds with a password prompt.
Step 3  Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 4  Create the VM reset.
echo c > /proc/sysrq-trigger
Step 5  Select the VM on which the reset was attempted (after the trigger of the VM reset from the command prompt) and select the Tasks & Events tab. Click Events. This tab should be updated with information about the CPU of the VM being disabled. Later, vSphere should detect the VM being disabled and, on missing the heartbeat from the VM, vSphere high availability should reset the VM.
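The reset test can be summarized as the short session sketch below. The VM address is a placeholder, and writing "c" to /proc/sysrq-trigger panics the kernel immediately, so run this only on a test VM that the HA cluster is expected to recover.

```shell
# Session sketch for the HA reset test (placeholders; never run on production).
ssh admin1@<VM_IP>            # connect to the VM under test
sudo su -                     # gain root access
cat /proc/sys/kernel/sysrq    # optional: a non-zero value means magic SysRq is enabled
echo c > /proc/sysrq-trigger  # force a kernel crash; HA should detect the lost
                              # heartbeat and reset the VM on a surviving host
```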

Troubleshooting

Problem: A host in the high availability cluster is rebooted while a VM is running on it.
Description: When the host is rebooted, the VM running on the host is reset and comes up on the other host in the cluster. To see the high availability event sequence, expand Home > Inventory > Hosts and Clusters and select the VM on which the reset was attempted. Click the Tasks & Events > Events tabs. The sequence of events is displayed in the Events tab.

Problem: Data store connectivity loss with the host.
Description: Loss of the data store is detected in the event logs. After the SAN data store is disconnected for the host in the high availability cluster, expand Home > Inventory > Hosts and Clusters and select the hosts associated with that storage. Click the Tasks & Events > Events tabs. The events corresponding to the hosts associated with those storages are displayed in the Events tab.

Problem: Failure of a VM monitored by the high availability cluster.
Description: The VM failure is detected by high availability and the VM is reset. To see the event, expand Home > Inventory > Hosts and Clusters and select the VM on which the reset was attempted. Click the Tasks & Events > Events tabs. The event is displayed in the Events tab.

Problem: High availability failure.
Description: If both hosts in the high availability cluster fail, high availability fails and the VMs that are part of the cluster do not recover until one of the hosts inside the cluster is restored. This event can be seen in the Tasks & Events > Events tabs for the associated VM.

Problem: Data store heartbeat not functioning.
Description: Data store heartbeating is not selected for any of the available data stores. In this situation, if a host is isolated, the following notification can be seen on the Summary tab of the host: Configuration Issues: The number of vSphere HA heartbeat datastores for this host is 0, which is less than required: 2.
If a high availability reset is attempted on this isolated host, it succeeds with the following errors, which can be seen in the Tasks & Events tab:
vSphere HA restarted this virtual machine.
Warning message from blrms cisco.com: Virtual device ide0:0will

Configuring Cold Standby for the Central Node

This section describes the process of configuring high availability for Cisco RMS using a cold standby Central VM. These procedures apply to the Distributed deployment. The following sections provide the prerequisites and procedures required to configure cold standby:
- Prerequisites, on page 19
- Installing Cold Standby Central VM For RMS Distributed Setup, on page 20
- Starting Backups on Primary Central VM, on page 29
- Transferring Backups to Cold Standby Central VM, on page 30
- Restoring Cold Standby Central VM Using Backups, on page 30
- Switching Serving VMs to Point to Primary Central VM, on page 43
- Enabling Cold Standby Central VM, on page 38
- Enabling Primary Site After Restoration, on page 39

Prerequisites

The operator or user of this procedure should be familiar with the usage of SSH and vi or an equivalent editor. Installation of the primary Central VM on the primary site, and of the Serving VM and Upload VM on both sites, should be complete, with redundancy, hot standby, and SAN configured as described in the Cisco RAN Management System Installation Guide and the HA_for_RMS_41 Tech Note. Note the following information on your setup before proceeding to configure cold standby for the Central VM:

Node Location       Node IP Address   Node Description
Central Node VM 1   <CENTRAL_IP_1>    Primary Central Node VM
Central Node VM 2   <CENTRAL_IP_2>    Cold standby Central Node VM
Serving Node VM 1   <SERVING_IP_1>    Serving Node VM on primary site
Serving Node VM 2   <SERVING_IP_2>    Redundant Serving Node VM
Upload Node VM 1    <UPLOAD_IP_1>     Upload Node VM on primary site
Upload Node VM 2    <UPLOAD_IP_2>     Redundant Upload Node VM
PMG DB VM           <PMG_DB_IP_1>     Primary PMG DB VM

PMG DB VM 2         <PMG_DB_IP_2>     Hot Standby PMG DB VM
PMG DB VM           <PMG_DB_IP_3>     Cold Standby PMG DB VM

Ensure that the following ssh commands are working, where admin1 is a sample user ID; use the user ID provided to you:
ssh admin1@<central_ip_1>
ssh admin1@<central_ip_2>
In a different shell, from <CENTRAL_IP_1> use the following commands:
ssh admin1@<serving_ip_2>, ssh admin1@<serving_ip_1>
ssh admin1@<upload_ip_2>, ssh admin1@<upload_ip_1>
If PMG DB VMs are part of the deployment, then check access to them:
ssh admin1@<pmgdb_ip_1>, ssh admin1@<pmgdb_ip_2>

Note The operator or user of this procedure should have root privileges on the above VMs.

Installing Cold Standby Central VM For RMS Distributed Setup

Prepare a descriptor file for the cold standby Central node with the new IPs given specifically for the cold standby central server. Install the OVA for the Central node VM on the cold standby site using the OVF tool, as described in the procedure in the Cisco RAN Management System Installation Guide. Manually configure any additional routes that may be needed for network reachability. Execute the utility shell script (central-multi-nodes-config.sh) to configure the network and application properties on the Central node. To prepare and execute the utility script, see the RMS Redundant Deployment section of the Cisco RAN Management System Installation Guide. Manually append the cold standby central server "hostname" and "eth0 IP" to the existing /etc/hosts file of the Serving nodes and Upload nodes. Navigate to the location /rms/ova/scripts/post_install/. In RMS 4.1 and RMS 5.0 deployments, execute the configure_func1.sh or configure_func2.sh script based on what was executed earlier on the primary Central server. For more details on these scripts and their usage, see the Cisco RAN Management System Administration Guide.
In an RMS 5.1 deployment, execute the configure_reservedmode.sh or configure_insee_rf_alarmsprofile.sh script based on what was executed earlier on the primary Central server. For more details on these scripts and their usage, see the Cisco RAN Management System Administration Guide.
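The prerequisite ssh reachability checks listed earlier can be batched in a small loop before you begin; this is a convenience sketch, not part of the documented procedure. The addresses are the placeholders from the node table, admin1 is the sample user ID, and the 5-second timeout is an arbitrary choice.

```shell
# Sketch: verify ssh reachability to every node before configuring cold standby.
# Replace the placeholder addresses with the values recorded in your node table.
for host in <CENTRAL_IP_1> <CENTRAL_IP_2> <SERVING_IP_1> <SERVING_IP_2> \
            <UPLOAD_IP_1> <UPLOAD_IP_2>; do
  ssh -o ConnectTimeout=5 admin1@"$host" hostname \
    || echo "WARNING: cannot reach $host"
done
```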

Configuring Additional Setup

Step 1  Establish an ssh connection and log in as admin user to the cold standby Central VM.
ssh admin1@central_ip_2
The system responds by connecting the user to the cold standby Central VM.
Step 2  Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 3  Enter superuser mode.
su
The system responds with a prompt to enter the password.
Step 4  Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
Step 5  Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the primary Central node VM Eth0 interface IP. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet CENTRAL_IP_1> netmask <netmask> gw <default gateway for CENTRAL_IP_2>
route add -net netmask gw
Step 6  Run the script to enable the cold standby Central node to communicate with the primary Central node VM.
cd /rms/ova/scripts/redundancy/; ./allow_cold_standby_central.sh
[blr-rms-ha-central03] /rms/ova/scripts/redundancy # ./allow_cold_standby_central.sh
Eth0 Address for current central node:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:A4:F0
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (5.7 GiB) TX bytes: (31.6 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:22:82
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (97.9 MiB) TX bytes: (27.4 MiB)
lo    Link encap:Local Loopback

      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (329.9 MiB) TX bytes: (329.9 MiB)
Enter Eth0 address of current central node:
Enter Eth0 address of remote central node:
Adding IPtables rules to allow communication over port 22 with remote central node server
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
IPTables rules added successfully
[blr-rms-ha-central03] /rms/ova/scripts/redundancy #
Step 7  Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the primary Serving node VM Eth0 interface IP, and the route is not already present. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet SERVING_IP_1> netmask <netmask> gw <default gateway for CENTRAL_IP_2>
route add -net netmask gw
Step 8  Run the script to enable the cold standby Central node to communicate with the primary Serving node VM.
cd /rms/ova/scripts/redundancy/; ./allow_serving_from_central.sh
[blr-rms-ha-central03] /rms/ova/scripts/redundancy # ./allow_serving_from_central.sh
Address of current Central node server VM:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:A4:F0
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (5.7 GiB) TX bytes: (31.6 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:22:82
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (97.9 MiB) TX bytes: (27.4 MiB)
lo    Link encap:Local Loopback
      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (329.9 MiB) TX bytes: (329.9 MiB)
Enter Eth0 address of current Central node VM:
Enter Eth1 address of current Central node VM:
Enter Eth0 address of serving node VM:
Adding IPtables rules to allow communication over port 22 with remote central node server
configuring bac firmware sync
configuring bac db sync

configuring bac db control
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
IPTables rules added successfully.
[blr-rms-ha-central03] /rms/ova/scripts/redundancy #
Step 9  Repeat steps 7 and 8 for the redundant Serving node VMs.
Step 10 Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the Upload node VM Eth0 interface IP, and the route is not already added. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet UPLOAD_IP_1> netmask <netmask> gw <default gateway for CENTRAL_IP_2>
route add -net netmask gw
Step 11 Run the script to enable the cold standby Central node VM to communicate with the Upload node VM.
cd /rms/ova/scripts/redundancy/; ./allow_upload_from_central.sh
[blr-rms-ha-central03] /rms/ova/scripts/redundancy # ./allow_upload_from_central.sh
Address of Central Node Server:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:A4:F0
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (5.7 GiB) TX bytes: (31.7 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:22:82
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (100.5 MiB) TX bytes: (29.1 MiB)
lo    Link encap:Local Loopback
      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (332.7 MiB) TX bytes: (332.7 MiB)
Enter Eth0 address of Central node VM:
Enter Eth1 address of Central node VM:
Enter Eth0 address of upload node VM:
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Completed configure of iptables
[blr-rms-ha-central03] /rms/ova/scripts/redundancy #
Step 12 Repeat steps 10 and 11 for all Upload node VMs.
Note On the cold standby Central node VM, do not configure the PMG DB VM connection even if PMG DB VMs are part of the deployment at the primary site. The PMG DB VM is configured only when the cold standby Central node VM is made operational.
Step 13 Set the cron job to run at a specified hour of the day to clean up backups that were copied from the primary Central node VM and are older than the specified number of days; the default is 3 days.
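The retention clean-up that this cron job performs can be sketched with find. The logic below is an assumption about what central_vm_cleanup.cron.hourly does (the shipped script may differ), and it is demonstrated against a temporary directory with a hypothetical backup filename so it is safe to run anywhere.

```shell
# Assumed clean-up logic (sketch): delete copied backups older than N days.
# Demonstrated in a temporary directory instead of /rms/backups/restore;
# the filenames are hypothetical examples.
BACKUP_DIR=$(mktemp -d)
RETENTION_DAYS=3
touch -d '10 days ago' "$BACKUP_DIR/centralvmbackup_old.tar.gz"   # stale backup
touch "$BACKUP_DIR/centralvmbackup_new.tar.gz"                    # fresh backup
# Remove backups whose modification time exceeds the retention window.
find "$BACKUP_DIR" -name 'centralvmbackup_*.tar.gz' \
     -mtime +"$RETENTION_DAYS" -delete
ls "$BACKUP_DIR"    # only centralvmbackup_new.tar.gz remains
```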

Note Existing cron jobs are not backed up automatically. The cron jobs must be backed up manually to be used later to start the desired cron on the cold standby Central node VM and also to restart the primary Central VM when the Central VM is restored after a disaster. The cron jobs for all users must be backed up to be restored later across all the Central nodes during disaster recovery.
export EDITOR=vi; crontab -e
i
0 6 * * * /rms/ova/scripts/redundancy/central_vm_cleanup.cron.hourly <number of days of backup retention>
:wq
no crontab for root - using an empty one
0 6 * * * /rms/ova/scripts/redundancy/central_vm_cleanup.cron.hourly 3
"/tmp/crontab.brsh9h" 1L, 85C written
crontab: installing new crontab
Step 14 Establish an ssh connection and log in as admin user to the primary Central VM.
ssh admin1@central_ip_1
The system responds by connecting the user to the primary Central VM.
Step 15 Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 16 Enter superuser mode.
su
The system responds with a prompt to enter the password.
Step 17 Enter the superuser password.
[enter super user password]
The system responds with a command prompt.

Step 18 Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the primary Central node VM Eth0 interface IP. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet CENTRAL_IP_2> netmask <netmask> gw <default gateway for CENTRAL_IP_1>
route add -net netmask gw
Step 19 Run the script on the primary Central node VM to allow communication with the cold standby Central node VM.
cd /rms/ova/scripts/redundancy/; ./allow_cold_standby_central.sh
[blr-rms-ha-central03] /rms/ova/scripts/redundancy # ./allow_cold_standby_central.sh
Eth0 Address for current central node:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:A4:F0
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (5.7 GiB) TX bytes: (31.6 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:22:82
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (97.9 MiB) TX bytes: (27.4 MiB)
lo    Link encap:Local Loopback
      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (329.9 MiB) TX bytes: (329.9 MiB)
Enter Eth0 address of current central node:
Enter Eth0 address of remote central node:
Adding IPtables rules to allow communication over port 22 with remote central node server
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
IPTables rules added successfully
[blr-rms-ha-central03] /rms/ova/scripts/redundancy #
Step 20 Establish an ssh connection and log in as admin user to the primary Serving node VM.
ssh admin1@serving_ip_1
The system responds by connecting the user to the primary Serving node VM.
Step 21 Enter your individual user password to gain access.

[enter your password]
The system responds with a command prompt.
Step 22 Enter superuser mode.
su
The system responds with a prompt to enter the password.
Step 23 Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
Step 24 Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the primary Serving node VM Eth0 interface IP. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet CENTRAL_IP_2> netmask <netmask> gw <default gateway for SERVING_IP_1>
route add -net netmask gw
Step 25 Run the script to allow communication of the primary Serving node with the cold standby Central node VM.
cd /rms/ova/scripts/redundancy/; ./allow_central_from_serving.sh
[root@blr-rms-ha-serving03 tmp]# ./allow_central_from_serving.sh
Address of current Serving node server VM:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:B4:45
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (14.3 GiB) TX bytes: (207.9 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:9F:8D
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (37.8 MiB) TX bytes:252 (252.0 b)
lo    Link encap:Local Loopback
      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets: errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (294.1 MiB) TX bytes: (294.1 MiB)
Enter Eth0 address of current serving node VM:
Enter Eth0 address of cold standby central node VM:
Enter Eth1 address of cold standby central node VM:
Adding IPtables rules to allow communication over port 22 with remote central node server
configuring bac

firmware sync
configuring bac db sync
configuring bac db control
configuring bac gui
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
IPTables rules added successfully.
Step 26 Exit from the primary Serving node VM using the exit command.
Step 27 Repeat steps 20 to 26 for the redundant Serving node VM.
Step 28 Establish an ssh connection and log in as admin user to the primary Upload node VM.
ssh admin1@upload_ip_1
The system responds by connecting the user to the primary Upload node VM.
Step 29 Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 30 Enter superuser mode.
su
The system responds with a prompt to enter the password.
Step 31 Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
Step 32 Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the Upload node VM Eth0 interface IP. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet CENTRAL_IP_2> netmask <netmask> gw <default gateway for UPLOAD_IP_1>
route add -net netmask gw
Step 33 Run the script to allow communication of the primary Upload node VM with the cold standby Central node VM.
cd /rms/ova/scripts/redundancy/; ./allow_cold_standby_upload.sh
Address of current Upload node server:
eth0  Link encap:Ethernet HWaddr 00:50:56:B6:80:55
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets:49271 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (50.1 MiB) TX bytes: (6.8 MiB)
eth1  Link encap:Ethernet HWaddr 00:50:56:B6:6E:B0
      inet addr: Bcast: Mask:
      UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      RX packets: errors:0 dropped:0 overruns:0 frame:0
      TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes: (38.0 MiB) TX bytes:168 (168.0 b)
lo    Link encap:Local Loopback

      inet addr: Mask:
      UP LOOPBACK RUNNING MTU:16436 Metric:1
      RX packets:60971 errors:0 dropped:0 overruns:0 frame:0
      TX packets:60971 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes: (12.3 MiB) TX bytes: (12.3 MiB)
Enter Eth0 address of Upload node server VM:
Enter Eth0 address of Cold standby Central node VM:
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Completed configure of iptables
Step 34 Exit from the primary Upload server node VM using the exit command.
Step 35 Repeat steps 28 to 34 for the redundant Upload server VM.
Step 36 Perform these steps only if the PMG DB VM is part of the deployment. Else, proceed to the next step.
a) Establish an ssh connection and log in as admin user to the primary PMG DB VM.
ssh admin1@pmg_db_ip_1
The system responds by connecting the user to the PMG DB VM.
b) Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
c) Enter superuser mode.
su
The system responds with a prompt to enter the password.
d) Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
e) Add routes for physical connectivity if the cold standby Central node VM has its Eth0 interface IP in a different subnet than the PMG DB VM Eth0 interface IP. Perform this step only if you have to add routes for physical connectivity. Else, proceed to the next step.
route add -net <subnet CENTRAL_IP_2> netmask <netmask> gw <default gateway for PMG_DB_IP_1>
route add -net netmask gw
f) Repeat the previous step (Step 36) for the hot standby PMG DB VM, if available.
g) Repeat Step 36 for the cold standby PMG DB VM, if available.

Starting Backups on Primary Central VM

Step 1  Establish an ssh connection and log in as admin user to the primary Central VM.
ssh admin1@central_ip_1
The system responds by connecting the user to the primary Central VM.
Step 2  Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 3  Enter superuser mode.
su
The system responds with a prompt to enter the password.
Step 4  Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
Step 5  Set the cron job to take periodic backups of the configuration and databases at a specified hour of the day, with inputs for the number of days of retention of backups and the postgres DB password.
Note Existing cron jobs are not backed up automatically. The cron jobs must be backed up manually to be used later to start the desired cron on the cold standby Central node VM and also to restart the primary Central VM when the Central VM is restored after a disaster. The cron jobs for all users must be backed up to be restored later across all the Central nodes during disaster recovery.
export EDITOR=vi; crontab -e
i
0 6 * * * /rms/ova/scripts/redundancy/backup_central_vm.cron.hourly <number of days of backup retention> <Postgres DB password>
:wq
no crontab for root - using an empty one
0 6 * * * /rms/ova/scripts/redundancy/backup_central_vm.cron.hourly 3 dccapp

"/tmp/crontab.brsh9h" 1L, 85C written
crontab: installing new crontab
Step 6  Check for the backup file at the location /rms/backups/ after the hour of day specified in the cron job. A sample filename is centralvmbackup_ tar.gz.
Step 7  Manually back up the INSEE-SAC Mapping CSV file along with the above created tar.gz file: /rms/app/cscobac/rdu/mappingfiles/insee_sac_femto.csv. This file can be backed up whenever there is any update done to it.

Transferring Backups to Cold Standby Central VM

The location for backups to be copied on the cold standby Central node VM is /rms/backups/restore/. If the directory is not present, create one using the following command:
mkdir -p /rms/backups/restore
Transfer of backups from the primary Central node VM can be performed using scp to an intermediate FTP server, or using direct scp of the backup files from the primary Central node VM to the cold standby Central node VM.

Restoring Cold Standby Central VM Using Backups

If both the primary and hot standby hosts for the Central VM are down, the cold standby can be restored from backups taken from the primary site and brought up.

Step 1  Establish an ssh connection and log in as admin user to the cold standby Central VM.
ssh admin1@central_ip_2
The system responds by connecting the user to the cold standby Central VM.
Step 2  Enter your individual user password to gain access.
[enter your password]
The system responds with a command prompt.
Step 3  Enter superuser mode.

su
The system responds with a prompt to enter the password.
Step 4  Enter the superuser password.
[enter super user password]
The system responds with a command prompt.
Step 5  Run the script to restore the database and configuration on the cold standby using backups from the primary Central node VM.
cd /rms/ova/scripts/redundancy/; ./restore_central_vm_from_bkup.sh
The script lists all available backups copied from the primary Central node for restore. It asks for the postgres password used on the cold standby Central VM.
Existing backup files:
central-config.tar.gz
central-config tar
centralvmbackup_ tar.gz
centralvmbackup_ tar.gz
forcoldtest2.tar
forcoldvm.tar.gz
...
Completed restore of Central VM configuration files.
BAC Process Watchdog has started.
Restore done.
Step 6  (Applicable only from RMS, Release 5.0 onwards) Update the fm-server URL IP address with the new Central server eth0 IP for all client applications on the new Central server:
a) Log in to the Central server as root and enter the respective password.
b) Navigate to the directory /rms/app/rms/conf/.
c) Open the FMCommon.properties file.
d) Change the value of the property "fm_common.server.notification.url" to the new Central server eth0 IP. For example:
[rms-aio-central] # vi /rms/app/rms/conf/fmcommon.properties
############################# FM Common Properties ##############################
# Server interface settings
fm_common.server.notification.port=8084
fm_common.server.notification.url=
fm_common.http.request.uri=/fmserver
fm_common.http.request.timeout=1000
fm_common.http.request.retries=2
# HTTP Digest
fm_common.http.digest.username=fmsuser
fm_common.http.digest.password=duwqa9trkza=
# Fault Definitions File
fm_common.faultdefinitionfile=faultdefinitions.csv

fm_common.app.pmgserver.dnprefix=region-01
fm_common.app.uploadserver.dnprefix=region-02
fm_common.app.dccui.dnprefix=region-01
Step 7  Restart PMG and other processes on the cold standby Central node server:
service god restart
Output:
# service god restart
Sending 'stop' command...
The following watches were affected: PMGServer
Sending 'stop' command
The following watches were affected: AlarmHandler
..
Stopped all watches
Stopped god
Sending 'load' command
The following tasks were affected: PMGServer
Sending 'load' command
The following tasks were affected: AlarmHandler
Step 8  Configure the cold standby Central VM to point to the available PMG DB VM. Perform this step only if PMG DB is part of the deployment at the primary site; otherwise, proceed to the next step.
cd /rms/app/rms/install/
The system responds with a command prompt. The script parameters are:
Pmgdb_Enabled -> Set to "true" to enable pmgdb.
Pmgdb_Primary_Dbserver_Address -> PMG DB primary server IP address.
Pmgdb_Primary_Dbserver_Port -> PMG DB primary server port, for example, 1521.
Pmgdb_Standby1_Dbserver_Address -> PMG DB standby 1 server (hot standby) IP address. Optional; if not specified, connection failover to the hot standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby1_Dbserver_Port -> PMG DB standby 1 server (hot standby) port. Optional; if not specified, connection failover to the hot standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby2_Dbserver_Address -> PMG DB standby 2 server (cold standby) IP address. Optional; if not specified, connection failover to the cold standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby2_Dbserver_Port -> PMG DB standby 2 server (cold standby) port. Optional; if not specified, connection failover to the cold standby database will not be available. To enable the failover feature later, the script has to be executed again.
Usage: pmgdb_configure.sh <Pmgdb_enabled> <Pmgdb_Dbserver_Address> <Pmgdb_Dbserver_Port> [<Pmgdb_Stby1_Dbserver_Address>] [<Pmgdb_Stby1_Dbserver_Port>] [<Pmgdb_Stby2_Dbserver_Address>] [<Pmgdb_Stby2_Dbserver_Port>]
Enter DbUser PMGUSER Password -> Prompted for; the password of the database user "PMGUSER".
# ./pmgdb_configure.sh true
Executing as root user
Enter DbUser PMGUSER Password:

41 Configuring High Availability for the Central Node Restoring Cold Standby Central VM Using Backups Step 9 Confirm Password: Pmgdb_Dbuser_Password - [rms-distr-central] /rms/app/rms/install #./pmgdb_configure.sh true Executing as root user Enter DbUser PMGUSER Password: Confirm Password: Central_Node_Eth0_Address Central_Node_Eth1_Address Script input: Pmgdb_Enabled=true Pmgdb_Prim_Dbserver_Address= Pmgdb_Prim_Dbserver_Port=1521 Pmgdb_Stby1_Dbserver_Address= Pmgdb_Stby1_Dbserver_Port=1521 Pmgdb_Stby2_Dbserver_Address= Pmgdb_Stby2_Dbserver_Port=1521 Executing in 10 sec, enter <cntrl-c> to exit Start configure dcc props dcc.properties already exists in conf dir END configure dcc props Start configure pmgdb props pmgdb.properties already exists in conf dir Changed jdbc url to jdbc:oracle:thin:@(description=(address_list=(address=(protocol=tcp)(host= )(port=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=1521)) (FAILOVER=on) (LOAD_BALANCE=off))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PMGDB_PRIMARY))) End configure pmgdb props Configuring iptables for Primary server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Configuring iptables for Standby server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Configuring iptables for Standby server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Done PmgDb configuration [rms-distr-central] /rms/app/rms/install # Restart PMG cold standby Central node VM. 
su ciscorms; /rms/app/pmg/bin/pmgserver.sh stop; /rms/app/pmg/bin/pmgserver.sh start su ciscorms $ /rms/app/pmg/bin/pmgserver.sh stop; /usr/bin/java PMGServer[26097]: PMGServer has stopped by request (watchdog may restart it) [blr-rms-ha-central03] /rms/ova/scripts/redundancy $ /rms/app/pmg/bin/pmgserver.sh start /usr/bin/java /usr/sbin/daemonize 33
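The truncated "Changed jdbc url" line in the script output above is a standard Oracle failover connect descriptor. The sketch below rebuilds it from placeholder addresses (the 10.1.1.x IPs and the variable names are assumptions; the guide's real addresses are not shown) so the FAILOVER=on / LOAD_BALANCE=off structure is visible in one piece:

```shell
# Placeholder addresses for the primary, hot standby, and cold standby PMG DB.
P=10.1.1.10; S1=10.1.1.11; S2=10.1.1.12; PORT=1521
URL="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=\
(ADDRESS=(PROTOCOL=TCP)(HOST=$P)(PORT=$PORT))\
(ADDRESS=(PROTOCOL=TCP)(HOST=$S1)(PORT=$PORT))\
(ADDRESS=(PROTOCOL=TCP)(HOST=$S2)(PORT=$PORT))\
(FAILOVER=on)(LOAD_BALANCE=off))\
(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PMGDB_PRIMARY)))"
echo "$URL"
```

With FAILOVER=on and LOAD_BALANCE=off, the JDBC client tries the addresses strictly in order, so the primary is always preferred while it is reachable.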
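The backup transfer described in "Transferring Backups to Cold Standby Central VM" above is a single scp of the backup artifacts into /rms/backups/restore/ on the standby. This sketch stages stand-in files locally to show the expected layout; the /tmp paths, demo filenames, and the commented scp target are illustrative assumptions, not values from this guide:

```shell
# Illustrative only: local stand-ins for the backup tarball and INSEE CSV.
SRC=/tmp/rms_primary_demo
DST=/tmp/rms_standby_demo/rms/backups/restore
mkdir -p "$SRC" "$DST"
touch "$SRC/centralvmbackup_demo.tar.gz" "$SRC/insee_sac_femto.csv"
# On a real deployment the cp below would be a direct scp to the standby, e.g.:
#   scp "$SRC"/centralvmbackup_*.tar.gz admin1@CENTRAL_IP_2:/rms/backups/restore/
cp "$SRC/centralvmbackup_demo.tar.gz" "$SRC/insee_sac_femto.csv" "$DST"/
ls "$DST"
```

Copying the INSEE-SAC CSV alongside the tarball keeps both artifacts in one place for the restore and INSEE-SAC steps that follow.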

Restoring the INSEE-SAC Information on New Central Server

For setups that need the INSEE-SAC Mapping feature, follow this procedure:
Step 1  Log in as root user to the Central server.
Step 2  Navigate to the location /rms/app/cscobac/rdu/mappingfiles/. Manually copy the previously backed-up INSEE-SAC CSV file "insee_sac_femto.csv" from the primary Central server to this location. If this file needs to be updated, see the Cisco RAN Management System Administration Guide.

Switching Serving VMs to Point to Cold Standby Central VM

Step 1  Establish an ssh connection and log in as admin user to the primary Serving node VM.
Step 2  ssh admin1@serving_ip_1
The system responds by connecting the user to the primary Serving node VM.
Enter your individual user password to gain access.
Step 3  [enter your password]
The system responds with a command prompt.
Enter superuser mode.
Step 4  su
The system responds with a prompt to enter the password.
Enter the superuser password.
Step 5  [enter super user password]
The system responds with a command prompt.
Enter CLI mode on the Distributed Provisioning Engine (DPE).
Step 6  telnet localhost 2323
The system responds with a password prompt.
Enter the DPE CLI administration password. The password is the same as the administration password for the Serving node VM specified in the descriptor file at the time of installation.

Step 7  <password for DPE administration>
The system responds with a command prompt.
Enable administration commands.
Step 8  en
The system responds with a password prompt.
Enter the administration enable password. The password is the same as the administration password for the Serving node VM specified in the descriptor file at the time of installation.
Step 9  <enable password for DPE administration>
The system responds with a command prompt.
Change the Regional Distribution Unit (RDU) server setting to the Eth0 IP of the cold standby Central node VM.
Step 10  dpe rdu-server CENTRAL_IP_2
% OK (Requires DPE restart '# dpe reload')
Reload the DPE process on the Serving node VM to apply the change.
Step 11  dpe reload
Process [dpe] has been restarted.
Exit from CLI mode using the exit command.
Step 12  Change the RDU FQDN on the Serving node VM for Cisco Access Registrar (CAR) EP properties to the Eth0 IP of the cold standby Central node VM.
cp -p /rms/app/cscobac/car_ep/conf/car_ep.properties /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
sed -i {s/<central_ip_1>/central_ip_2/g} /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
mv /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp /rms/app/cscobac/car_ep/conf/car_ep.properties
Validate the FQDN after the change:
grep "/rdu/fqdn" /rms/app/cscobac/car_ep/conf/car_ep.properties
/rdu/fqdn=central_ip_2
service arserver stop
service arserver start
# cp -p /rms/app/cscobac/car_ep/conf/car_ep.properties /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
# sed -i {s/ / /g} /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
# mv /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp /rms/app/cscobac/car_ep/conf/car_ep.properties
mv: overwrite `/rms/app/cscobac/car_ep/conf/car_ep.properties'?
yes #grep "/rdu/fqdn" /rms/app/cscobac/car_ep/conf/car_ep.properties /rdu/fqdn= # service arserver stop Waiting for these processes to die (this may take some time): Cisco Prime AR RADIUS server running (pid: 1680) Cisco Prime AR Server Agent running (pid: 1667) Cisco Prime AR MCD lock manager running (pid: 1670) Cisco Prime AR MCD server running (pid: 1678) 35

Cisco Prime AR GUI running (pid: 1681)
4 processes left. 3 processes left.. 2 processes left... 0 processes left
Cisco Prime Access Registrar Server Agent shutdown complete.
# service arserver start
Starting Cisco Prime Access Registrar Server Agent...completed.
Step 13  Change the Cisco Network Registrar (CNR) configuration to point to the cold standby Central VM Eth0 address.
cp -p /rms/app/nwreg2/local/conf/cnr.conf /rms/app/nwreg2/local/conf/cnr.conf.tmp
cat /rms/app/nwreg2/local/conf/cnr.conf.tmp | sed "s?cnr.regional-ip=.*?cnr.regional-ip=central_ip_2?" > /rms/app/nwreg2/local/conf/cnr.conf
grep "cnr.regional-ip" /rms/app/nwreg2/local/conf/cnr.conf
# cp -p /rms/app/nwreg2/local/conf/cnr.conf /rms/app/nwreg2/local/conf/cnr.conf.tmp
# cat /rms/app/nwreg2/local/conf/cnr.conf.tmp | sed "s?cnr.regional-ip=.*?cnr.regional-ip= ?" > /rms/app/nwreg2/local/conf/cnr.conf
# grep "cnr.regional-ip" /rms/app/nwreg2/local/conf/cnr.conf
cnr.regional-ip=
Step 14  Change the CNR EP to point to the cold standby Central node Eth0 address.
cd /rms/app/cscobac/cnr_ep/bin/; ./changeNRProperties.sh -f CENTRAL_IP_2; ./runCopyFile.sh
service nwreglocal stop
service nwreglocal start
# ./changeNRProperties.sh -f
Current NR Properties:
RDU Port:
RDU FQDN:
Provisioning Group: rtp-red
Shared Secret: fggtalg0xwkrs
You must restart your NR DHCP server for the changes to take effect
# ./runCopyFile.sh
# service nwreglocal stop
# Stopping Network Registrar Local Server Agent
INFO: waiting for Network Registrar Local Server Agent to exit...
INFO: waiting for Network Registrar Local Server Agent to exit...
INFO: waiting for Network Registrar Local Server Agent to exit...
# service nwreglocal start
# Starting Network Registrar Local Server Agent
Step 15  Restart bpragent.
service bpragent restart
BAC Process Watchdog has restarted.
Step 16  Perform steps 1 to 15 for the redundant Serving node VM.
Wait for the Serving node VM to complete synchronizing with the cold standby Central VM. Synchronization will be shown as complete in the BAC admin UI, where the DPE corresponding to the Serving VM is shown in Ready state. 36
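Both property rewrites in this procedure (the RDU FQDN in car_ep.properties and cnr.regional-ip in cnr.conf) are plain sed substitutions. This throwaway demo applies the same two patterns to temp files instead of the live configs; the 10.0.0.x IPs and /tmp paths are placeholders, not deployment values:

```shell
# Throwaway copies standing in for car_ep.properties and cnr.conf.
CAR=/tmp/car_ep.properties.demo
CNR=/tmp/cnr.conf.demo
echo '/rdu/fqdn=10.0.0.1' > "$CAR"
echo 'cnr.regional-ip=10.0.0.1' > "$CNR"
# In-place substitution of old Central IP -> cold standby Central IP:
sed -i 's/10\.0\.0\.1/10.0.0.2/g' "$CAR"
# The cnr.conf variant rewrites the whole line, using ? as the sed delimiter:
sed 's?cnr.regional-ip=.*?cnr.regional-ip=10.0.0.2?' "$CNR" > "$CNR.new" && mv "$CNR.new" "$CNR"
grep '/rdu/fqdn' "$CAR"
grep 'cnr.regional-ip' "$CNR"
```

The `?` delimiter in the second pattern avoids escaping and matches the style the procedure itself uses; either form works as long as the delimiter does not occur in the replacement text.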

Switching Upload VMs to Point to Cold Standby Central VM

Step 1  Establish an ssh connection and log in as admin user to the primary Upload node VM.
Step 2  ssh admin1@upload_ip_1
The system responds by connecting the user to the primary Upload node VM.
Enter your individual user password to gain access.
Step 3  [enter your password]
The system responds with a command prompt.
Enter superuser mode.
Step 4  su
The system responds with a prompt to enter the password.
Enter the superuser password.
Step 5  [enter super user password]
The system responds with a command prompt.
Navigate to the directory /rms/app/rms/conf/.
Step 6  cd /rms/app/rms/conf
The system responds with a command prompt.
Open the FMCommon.properties file and change the value of the property "fm_common.server.notification.url" to the new Central server eth0 IP.
Output:
[root@rms-distr-upload ]# cat /rms/app/rms/conf/FMCommon.properties
############################# FM Common Properties ##############################
# Server interface settings
fm_common.server.notification.port=8084
fm_common.server.notification.url=
fm_common.http.request.uri=/fmserver
fm_common.http.request.timeout=1000
fm_common.http.request.retries=2
#HTTP Digest
fm_common.http.digest.username=fmsuser
fm_common.http.digest.password=duwqa9trkza=
# Fault Definitions File
fm_common.faultdefinitionfile=faultdefinitions.csv
fm_common.app.pmgserver.dnprefix=region-01
fm_common.app.uploadserver.dnprefix=region-02
fm_common.app.dccui.dnprefix=region-01
[root@rms-distr-upload ]#
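The manual edit in Step 6 can also be scripted. This sketch rewrites the notification URL in a throwaway copy of the file; the URL format shown (http://IP:8084/fmserver) and the 10.0.0.x IPs are assumptions, since the guide elides the real value:

```shell
# Throwaway copy standing in for /rms/app/rms/conf/FMCommon.properties.
F=/tmp/FMCommon.properties.demo
printf '%s\n' \
  'fm_common.server.notification.port=8084' \
  'fm_common.server.notification.url=http://10.0.0.1:8084/fmserver' > "$F"
NEW_IP=10.0.0.2   # new Central server eth0 IP (placeholder)
sed -i "s|^fm_common\.server\.notification\.url=.*|fm_common.server.notification.url=http://${NEW_IP}:8084/fmserver|" "$F"
grep 'notification.url' "$F"
```

Anchoring the pattern to the full key name (rather than the bare IP) makes the substitution safe to re-run even after the value has already changed.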

Enabling Cold Standby Central VM

Ensure the following on the cold standby setup:
- Additional routes that are required for cross-server communications are configured.
- Additional routes that are required for communication with external network entities are configured; for example, ASR5K communication and SysLog server communication.
- Additional IPTables rules that were manually created on the primary Central node are recreated on the new cold standby Central node.
- The end-to-end system is operational.
- APs previously connected to the Site 1 Central node are in the same state after switchover to the cold standby Central node in Site 2.
- The customer OSS points to the cold standby Central node in Site 2 and resends provisioning requests for APs from the time that backups were triggered on the primary Central node on Site 1.

After the cold standby Central VM at Site 2 is fully operational, the backup cron can be started on the cold standby Central node:
export EDITOR=vi; crontab -e
Add an entry as below, with the specific hour of day at which the backup script is to be executed:
0 6 * * * /rms/ova/scripts/redundancy/backup_central_vm.cron.hourly <number of days of backup retention> <Postgres DB password>
no crontab for root - using an empty one
0 6 * * * /rms/ova/scripts/redundancy/backup_central_vm.cron.hourly 3 dccapp
"/tmp/crontab.brsh9h" 1L, 85C written
crontab: installing new crontab
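The interactive crontab edit above can also be done non-interactively, which is convenient when scripting the switchover. The entry below uses the guide's own example values (06:00 daily, 3 days retention, password dccapp); the demo writes to a temp file instead of actually installing the crontab:

```shell
# The backup cron entry from the procedure (example values from the guide).
ENTRY='0 6 * * * /rms/ova/scripts/redundancy/backup_central_vm.cron.hourly 3 dccapp'
echo "$ENTRY" > /tmp/crontab.demo
# On the Central VM this would be installed without an editor via:
#   (crontab -l 2>/dev/null; echo "$ENTRY") | crontab -
cat /tmp/crontab.demo
```

The five cron fields read minute, hour, day of month, month, day of week; changing only the second field moves the backup to a different hour of day.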

Enabling Primary Site After Restoration

After the primary Central host and VM are restored, use the cold standby Central node backup file (present on the cold standby Central node itself or on an external storage server) to restore the primary Central VM. After disabling NB traffic to the OSS, take the backup to ensure that it has the latest provisioning details. If the backup from cron is not the latest, take the latest backup manually.

Step 1  Establish an ssh connection and log in as admin user to the cold standby Central VM.
Step 2  ssh admin1@central_ip_2
The system responds by connecting the user to the cold standby Central VM.
Enter your individual user password to gain access.
Step 3  [enter your password]
The system responds with a command prompt.
Enter superuser mode.
Step 4  su
The system responds with a prompt to enter the password.
Enter the superuser password.
Step 5  [enter super user password]
The system responds with a command prompt.
Run the manual script to take the backup:
cd /rms/ova/scripts/redundancy; ./backup_central_vm.sh
./backup_central_vm.sh
Existing backup directories: restore
Enter name of new backup directory: coldstandby_bkup_july22_2014
Enter password for postgresdb: Ch@ngeme1
Doing backup of Central VM configuration files.
tar: Removing leading `/' from member names
-rw root root Jul 22 21:05 /rms/backups/coldstandby_bkup_july22_2014//central-config.tar.gz
Completed backup of Central VM configuration files.
Doing backup of Central VM Postgres DB.
-rw root root 4182 Jul 22 21:05 /rms/backups/coldstandby_bkup_july22_2014//postgres_db_bkup
Completed backup of Central VM Postgres DB.
Doing backup of Central VM RDU Berkeley DB.
Database backup started
Back up to: /rms/backups/coldstandby_bkup_july22_2014/rdu-db/rdu-backup
Copying DB_VERSION. DB_VERSION: 100% completed.

Copied DB_VERSION. Size: 396 bytes.
...
Database recovery started
Recovering in: /rms/backups/coldstandby_bkup_july22_2014/rdu-db/rdu-backup
This process may take a few minutes.
Database recovery completed
rdu-db/
rdu-db/rdu-backup /
rdu-db/rdu-backup /rdu.db
rdu-db/rdu-backup /history.log
rdu-db/rdu-backup /db_version
rdu-db/rdu-backup /log
rw root root Jul 22 21:05 /rms/backups/coldstandby_bkup_july22_2014//rdu-db.tar.gz
Completed backup of Central VM RDU Berkeley DB.
coldstandby_bkup_july22_2014/
coldstandby_bkup_july22_2014/central-config.tar.gz
coldstandby_bkup_july22_2014/postgres_db_bkup
coldstandby_bkup_july22_2014/rdu-db.tar.gz
coldstandby_bkup_july22_2014/.rdufiles_backup
-rwxrwxrwx. 1 root root Jul 22 21:05 /rms/backups/coldstandby_bkup_july22_2014.tar.gz
backup done.
Step 6  Check for the backup file created in the /rms/backups/ directory:
ls -l /rms/backups
-rwxrwxrwx. 1 root root Jul 22 21:05 coldstandby_bkup_july22_2014.tar.gz

Transferring Backups From Cold Standby Central VM to Primary Central VM

Backups must be copied to /rms/backups/restore/ on the primary Central node VM. If the directory is not present, create it using the following command:
mkdir -p /rms/backups/restore
Backups can be transferred from the cold standby Central node VM either by scp to an intermediate FTP server, or by direct scp of the backup files from the cold standby Central node VM to the primary Central node VM.

Restoring Primary Central VM Using Backups

Use the backup taken from the cold standby Central node VM to restore the primary Central VM.

Step 1  Establish an ssh connection and log in as admin user to the primary Central VM.
Step 2  ssh admin1@central_ip_1
The system responds by connecting the user to the primary Central VM.
Enter your individual user password to gain access.

Step 3  [enter your password]
The system responds with a command prompt.
Enter superuser mode.
Step 4  su
The system responds with a prompt to enter the password.
Enter the superuser password.
Step 5  [enter super user password]
The system responds with a command prompt.
Run the script to restore the database and configuration on the primary Central VM using backups from the cold standby Central node VM:
cd /rms/ova/scripts/redundancy/; ./restore_central_vm_from_bkup.sh
The script lists all available backups copied from the cold standby Central node for restore. It prompts for the postgres password to be used on the primary Central VM.
Existing backup files:
central-config.tar.gz central-config tar centralvmbackup_ tar.gz centralvmbackup_ tar.gz forcoldtest2.tar forcoldvm.tar.gz postgres-db-17jun.tar postgres-db-19jun.tar postgres-db.tar.gz postgres-dcc-db.tar rdu-db.tar.gz rdu-db_jun13_50plusk.tar.gz rdu-db_jun13_56plusk.tar.gz rdu-db_jun14_59k.tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz rdubackup_ tar.gz test3.tar.gz test4.tar.gz test5.tar.gz test6.tar.gz
Enter name of backup file to restore from: centralvmbackup_ tar.gz
Enter password for postgresdb: Ch@ngeme1
centralvmbackup_ /
centralvmbackup_ /.rdufiles_backup
centralvmbackup_ /postgres_db_bkup
centralvmbackup_ /rdu-db.tar.gz
centralvmbackup_ /central-config.tar.gz
..

.
Completed restore of Central VM configuration files.
BAC Process Watchdog has started.
Restore done.
Step 6  Configure the primary Central VM to point to the available PMG DB VM. Perform this step only if the PMG DB VM is part of the deployment at the primary site; otherwise, proceed to the next step.
cd /rms/app/rms/install/
The system responds with a command prompt. The script parameters are:
Pmgdb_Enabled -> Set to "true" to enable pmgdb.
Pmgdb_Primary_Dbserver_Address -> PMG DB primary server IP address.
Pmgdb_Primary_Dbserver_Port -> PMG DB primary server port, for example, 1521.
Pmgdb_Standby1_Dbserver_Address -> PMG DB standby 1 server (hot standby) IP address. Optional; if not specified, connection failover to the hot standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby1_Dbserver_Port -> PMG DB standby 1 server (hot standby) port. Optional; if not specified, connection failover to the hot standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby2_Dbserver_Address -> PMG DB standby 2 server (cold standby) IP address. Optional; if not specified, connection failover to the cold standby database will not be available. To enable the failover feature later, the script has to be executed again.
Pmgdb_Standby2_Dbserver_Port -> PMG DB standby 2 server (cold standby) port. Optional; if not specified, connection failover to the cold standby database will not be available. To enable the failover feature later, the script has to be executed again.
Usage: pmgdb_configure.sh <Pmgdb_enabled> <Pmgdb_Dbserver_Address> <Pmgdb_Dbserver_Port> [<Pmgdb_Stby1_Dbserver_Address>] [<Pmgdb_Stby1_Dbserver_Port>] [<Pmgdb_Stby2_Dbserver_Address>] [<Pmgdb_Stby2_Dbserver_Port>]
Enter DbUser PMGUSER Password -> Prompted for; the password of the database user "PMGUSER".
# ./pmgdb_configure.sh true
Executing as root user
Enter DbUser PMGUSER Password:
Confirm Password:
Pmgdb_Dbuser_Password -
[rms-distr-central] /rms/app/rms/install # ./pmgdb_configure.sh true
Executing as root user
Enter DbUser PMGUSER Password:
Confirm Password:
Central_Node_Eth0_Address Central_Node_Eth1_Address
Script input:
Pmgdb_Enabled=true
Pmgdb_Prim_Dbserver_Address=
Pmgdb_Prim_Dbserver_Port=1521
Pmgdb_Stby1_Dbserver_Address=
Pmgdb_Stby1_Dbserver_Port=1521
Pmgdb_Stby2_Dbserver_Address=
Pmgdb_Stby2_Dbserver_Port=1521
Executing in 10 sec, enter <cntrl-c> to exit
Start configure dcc props
dcc.properties already exists in conf dir

51 Configuring High Availability for the Central Node Enabling Primary Site After Restoration Step 7 Step 8 END configure dcc props Start configure pmgdb props pmgdb.properties already exists in conf dir Changed jdbc url to jdbc:oracle:thin:@(description=(address_list=(address=(protocol=tcp)(host= )(port=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST= )(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST= ) (PORT=1521))(FAILOVER=on) (LOAD_BALANCE=off))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PMGDB_PRIMARY))) End configure pmgdb props Configuring iptables for Primary server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Configuring iptables for Standby server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Configuring iptables for Standby server Start configure_iptables Removing old entries first, may show error if rule does not exist Removing done, add rules iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ] end configure_iptables Done PmgDb configuration [rms-distr-central] /rms/app/rms/install # Restore PMG DB VM redundancy using the step in the install guide (see Cisco RAN Management System Installation Guide), if PMG DB VM redundancy was originally present. Restart PMG on the primary Central node VM. 
su ciscorms; /rms/app/pmg/bin/pmgserver.sh stop; /rms/app/pmg/bin/pmgserver.sh start su ciscorms $ /rms/app/pmg/bin/pmgserver.sh stop; /usr/bin/java PMGServer[26097]: PMGServer has stopped by request (watchdog may restart it) [blr-rms-ha-central03] /rms/ova/scripts/redundancy $ /rms/app/pmg/bin/pmgserver.sh start /usr/bin/java /usr/sbin/daemonize Switching Serving VMs to Point to Primary Central VM Step 1 Establish an ssh connection and log in as admin user to the primary Serving node VM. Step 2 ssh admin1@serving_ip_1 The system responds by connecting the user to the Primary serving node VM. Enter your individual user password to gain access. 43

Step 3  [enter your password]
The system responds with a command prompt.
Enter superuser mode.
Step 4  su
The system responds with a prompt to enter the password.
Enter the superuser password.
Step 5  [enter super user password]
The system responds with a command prompt.
Enter CLI mode on the DPE.
Step 6  telnet localhost 2323
The system responds with a password prompt.
Enter the DPE CLI administration password. The password is the same as the administration password for the Serving node VM specified in the descriptor file at the time of installation.
Step 7  <password for DPE administration>
The system responds with a command prompt.
Enable administration commands.
Step 8  en
The system responds with a password prompt.
Enter the administration enable password. The password is the same as the administration password for the Serving node VM specified in the descriptor file at the time of installation.
Step 9  <enable password for DPE administration>
The system responds with a command prompt.
Change the RDU server setting to the Eth0 IP of the primary Central node VM.
Step 10  dpe rdu-server CENTRAL_IP_1
% OK (Requires DPE restart '# dpe reload')
Reload the DPE process on the Serving node VM to apply the RDU change.
Step 11  dpe reload
Process [dpe] has been restarted.
Exit from CLI mode using the exit command.
Step 12  Change the RDU FQDN on the Serving node VM for CAR EP properties to the Eth0 IP of the primary Central node VM.

cp -p /rms/app/cscobac/car_ep/conf/car_ep.properties /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
sed -i {s/<central_ip_2>/central_ip_1/g} /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
mv /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp /rms/app/cscobac/car_ep/conf/car_ep.properties
Validate the FQDN after the change:
grep "/rdu/fqdn" /rms/app/cscobac/car_ep/conf/car_ep.properties
/rdu/fqdn=central_ip_1
service arserver stop
service arserver start
# cp -p /rms/app/cscobac/car_ep/conf/car_ep.properties /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
# sed -i {s/ / /g} /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp
# mv /rms/app/cscobac/car_ep/conf/car_ep.properties.tmp /rms/app/cscobac/car_ep/conf/car_ep.properties
mv: overwrite `/rms/app/cscobac/car_ep/conf/car_ep.properties'? yes
# grep "/rdu/fqdn" /rms/app/cscobac/car_ep/conf/car_ep.properties
/rdu/fqdn=
# service arserver stop
Waiting for these processes to die (this may take some time):
Cisco Prime AR RADIUS server running (pid: 1680)
Cisco Prime AR Server Agent running (pid: 1667)
Cisco Prime AR MCD lock manager running (pid: 1670)
Cisco Prime AR MCD server running (pid: 1678)
Cisco Prime AR GUI running (pid: 1681)
4 processes left. 3 processes left.. 2 processes left... 0 processes left
Cisco Prime Access Registrar Server Agent shutdown complete.
# service arserver start
Starting Cisco Prime Access Registrar Server Agent...completed.
Step 13  Change the CNR configuration to point to the primary Central VM Eth0 address.
cp -p /rms/app/nwreg2/local/conf/cnr.conf /rms/app/nwreg2/local/conf/cnr.conf.tmp
cat /rms/app/nwreg2/local/conf/cnr.conf.tmp | sed "s?cnr.regional-ip=.*?cnr.regional-ip=central_ip_1?"
> /rms/app/nwreg2/local/conf/cnr.conf grep "cnr.regional-ip" /rms/app/nwreg2/local/conf/cnr.conf # cp -p /rms/app/nwreg2/local/conf/cnr.conf /rms/app/nwreg2/local/conf/cnr.conf.tmp Step 14 #cat /rms/app/nwreg2/local/conf/cnr.conf.tmp sed "s?cnr.regional-ip=.*?cnr.regional-ip= ?" > /rms/app/nwreg2/local/conf/cnr.conf # grep "cnr.regional-ip" /rms/app/nwreg2/local/conf/cnr.conf cnr.regional-ip= Change the CNR EP to point to the primary Central node Eth0 address. cd /rms/app/cscobac/cnr_ep/bin/;./changenrproperties.sh -f CENTRAL_IP_1;./runCopyFile.sh service nwreglocal stop service nwreglocal start #./changenrproperties.sh -f

54 Enabling Primary Site After Restoration Configuring High Availability for the Central Node Current NR Properties: RDU Port: RDU FQDN: Provisioning Group: rtp-red Shared Secret: fggtalg0xwkrs You must restart your NR DHCP server for the changes to take effect #./runcopyfile.sh # service nwreglocal stop # Stopping Network Registrar Local Server Agent INFO: waiting for Network Registrar Local Server Agent to exit... INFO: waiting for Network Registrar Local Server Agent to exit... Step 15 INFO: waiting for Network Registrar Local Server Agent to exit... # service nwreglocal start # Starting Network Registrar Local Server Agent Restart bpragent. Step 16 Step 17 service bpragent restart BAC Process Watchdog has restarted. Perform steps 1 to 15 for the redundant Serving node VM. Wait for the Serving node VM to complete synchronizing with the primary Central VM. Synchronization will be shown as complete in the BAC admin UI, where the DPE corresponding to the Serving VM is shown in Ready state. Restore hot standby Central VM, if any, as per the hot standby procedure. 46
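When several backup tarballs have accumulated under /rms/backups/, the restore earlier in this section needs the most recent one. This throwaway sketch (stand-in files under /tmp) shows one way to pick it by modification time:

```shell
# Illustrative only: two stand-in backup tarballs with different timestamps.
D=/tmp/rms_backups_demo
mkdir -p "$D"
touch -d '2 days ago' "$D/centralvmbackup_old.tar.gz"
touch "$D/centralvmbackup_new.tar.gz"
# ls -t sorts newest first; head -1 selects the latest backup.
NEWEST=$(ls -t "$D"/*.tar.gz | head -1)
echo "$NEWEST"
```

On the Central VM the same `ls -t /rms/backups/*.tar.gz | head -1` pattern identifies the file to name at the "Enter name of backup file to restore from:" prompt.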

CHAPTER 4

Configuring High Availability for VMware vCenter in RMS Distributed Setup

This chapter describes the process of configuring high availability for VMware vCenter. It provides the prerequisites and procedures required to configure and test the hot standby and cold standby for Cisco RMS. It also includes the procedures required to back up and restore VMware vCenter. These procedures apply to the Distributed RMS deployment.

Prerequisites, page 47
Configuring Hot Standby for vCenter VM, page 48
Testing Hot Standby for vCenter VM, page 48
Configuring Cold Standby for vCenter VM, page 49
Testing Cold Standby for vCenter VM, page 49
Recovering the Primary of vCenter VM, page 50
Backing Up vCenter VM Database, page 50
Restoring vCenter VM Database, page 52

Prerequisites

The operator or user of this procedure should have the following experience:
- Basic database/DBA knowledge
- Knowledge of basic Linux/shell commands
- Ability to edit files with vi or vim
- Ability to view files (cat, tail, more, less)
- Ability to use SSH and basic VMware commands

Complete the installation of the vCenter VM and add SAN support as recommended in the Cisco RAN Management System Installation Guide, and ensure that the RMS VMs are added as described in Configuring Hot Standby for vCenter VM, on page 48.

For the primary vCenter VM, install the VM as part of the host that is planned to be part of the high availability cluster for the Central VM. Install the cold standby vCenter VM as part of the host for the cold standby Central VM.

Note the following information about your setup before proceeding to configure high availability for the vCenter VM:

Node Location    Node IP Address    Node Description
vCenter VM 1     <VCENTER_IP_1>     Primary vCenter VM
vCenter VM 2     <VCENTER_IP_2>     Cold standby vCenter VM

Ensure that the following ssh commands are working; use the root login:
ssh root@<VCENTER_IP_1>
ssh root@<VCENTER_IP_2>

Note: This section covers only the additional steps required to achieve high availability for the vCenter VM.

Configuring Hot Standby for vCenter VM

Step 1  Install VMware vCenter as described in the "Installing the VMware vCenter" section of the Cisco RAN Management System Installation Guide to install the primary vCenter VM on the same host where the primary Central VM is going to be installed or is already installed.
Step 2  Add a vSphere high availability cluster as described in Creating a High Availability Cluster, on page 10, for the host containing the Central VM and the vCenter VM.
Step 3  After the primary vCenter is completely configured and all primary and hot standby VMs and hosts for Cisco RMS are being managed by the primary vCenter VM, take a database backup as described in Backing Up vCenter VM Database, on page 50, and take a snapshot of the primary vCenter for use in recovery in case of a disaster.
Note: The primary vCenter is configured to manage the primary and hot standby Central VM, the active-active pairs for the Serving VM and Upload VM, and the primary and hot standby PMG DB VM (if applicable).

Testing Hot Standby for vCenter VM

To test hot standby for the vCenter VM, see Testing High Availability Failover, on page 17.
The testing of high availability for the vCenter VM is the same as testing high availability on the Central node. For more information, see Testing High Availability on the Central Node, on page

Configuring Cold Standby for vCenter VM

Step 1 Install VMware vCenter as described in the Cisco RAN Management System Installation Guide to install the vCenter VM with VCENTER_IP_2 on the host to be used for the cold standby Central VM.
Step 2 After the primary vCenter VM is completely configured and is managing all primary and hot standby hosts and other VMs for Cisco RMS, take a backup of the vCenter database as described in Backing Up vCenter VM Database, on page 50.
Step 3 Use the backed-up database to restore it on the cold standby vCenter VM as described in Restoring vCenter VM Database, on page 52.
Step 4 After the DB is restored on the cold standby vCenter VM, proceed with the steps (see Testing Cold Standby for vCenter VM, on page 49) to connect the hosts running the cold standby Central VM and the cold standby PMG DB VM (if applicable).

Note: The cold standby vCenter VM continues to manage the hosts running the cold standby Central VM and the cold standby PMG DB VM (if applicable). Take a VM snapshot and database backup of the cold standby vCenter VM to be used in case the cold standby vCenter also fails and needs reinstallation.

Testing Cold Standby for vCenter VM

Step 1 Check if one of the hosts can be recovered to reinstall vCenter from the backup snapshot. If available, recover the primary vCenter. Perform this step only when both hosts fail in the high availability cluster at the primary site containing the primary vCenter VM. Else, proceed to the next step.
Step 2 Perform these steps if there is no host at the primary site to restore the primary vCenter and there is an urgent need to manage the remaining Serving, Upload, and PMG DB hosts and VMs at the primary and secondary sites.
a) Connect hosts on the new vCenter. To do this, log in to vCenter.
b) Go to Home > Inventory > Hosts and Clusters, right-click on the desired host, and select Connect.
c) In the Reconnect host window that is displayed, click Yes.
d) In the Reconnect host error window that is displayed, which indicates that you need to enter the correct login credentials for the respective host, click Close. This automatically opens the Add Host Wizard.
e) In the Specify Connection Settings screen of the wizard, enter your username and password for the host.
f) Click Next.
g) In the Security Alert pop-up window that is displayed, indicating that you need to add the SHA1 thumbprint certificate, click Yes.
h) In the Host Summary screen, which displays details about the host and the VMs running under this host, click Next.
i) In the Virtual Machines Location screen, which indicates the location where the host is present, expand the datacenter and select the respective folder under which the host is running.
j) Click Next.

k) In the Ready to Complete screen, which displays the summary of the hosts that will be connected, click Finish.
l) Repeat steps a to k on all hosts that need to be connected to the new vCenter. This will ensure successful connection of the hosts on the new vCenter.
m) Connect back the hosts mapping to the active-active Serving node and Upload node after the primary site VMs and hosts are restored on the cold standby vCenter.
n) Connect back the primary PMG DB and hot standby PMG DB (if applicable).
o) Proceed to connect the host on the cold standby site for the redundant Serving and Upload VM to the cold standby vCenter VM after the working hosts from the primary site, that is, the host for the Serving node and Upload node and the primary and hot standby PMG DB (if applicable), are connected to the cold standby vCenter VM.
p) Proceed to configure cold standby for the Central node as described in Configuring Cold Standby for vCenter VM, on page 49, which deals with testing cold standby for the Central node.

Recovering the Primary of vCenter VM

Using the saved VM snapshots of the vCenter VM or the saved database for vCenter, restore the primary vCenter after the replacement hosts for the primary vCenter VM and Central VM are installed or the original failed hosts are recovered. For information on performing primary recovery of the Central node, see Enabling Primary Site After Restoration, on page 39.

Backing Up vCenter VM Database

Step 1 Establish an SSH connection using the root user to connect to the vCenter VM.
ssh root@<VCENTER_IP_1>
The system responds by connecting the user to the vCenter VM.
Step 2 Enter the root user password to gain access.
[enter root password]
The system responds with a command prompt.
Step 3 Stop the vCenter Server service.
service vmware-vpxd stop
Stopping VMware vSphere Profile-Driven Storage Service...
Stopped VMware vSphere Profile-Driven Storage Service.
Stopping tomcat: success

Stopping vmware-vpxd: success
Shutting down ldap-server..done
Step 4 Display the vPostgres database configuration file and make a note of the values for EMB_DB_INSTANCE, EMB_DB_USER, and EMB_DB_PASSWORD.
cat /etc/vmware-vpx/embedded_db.cfg
EMB_DB_INSTALL_DIR='/opt/vmware/vpostgres/9.0'
EMB_DB_TYPE='PostgreSQL'
EMB_DB_SERVER=' '
EMB_DB_PORT='5432'
EMB_DB_INSTANCE='VCDB'
EMB_DB_USER='vc'
EMB_DB_PASSWORD='I&8rx)A=rLs6u}22'
EMB_DB_STORAGE='/storage/db/vpostgres'
Step 5 On the vCenter Server Appliance virtual machine, navigate to the vPostgres utility directory.
cd /opt/vmware/vpostgres/1.0/bin
The system responds with a command prompt.
Step 6 Take a backup of the vCenter Server database.
./pg_dump <EMB_DB_INSTANCE> -U <EMB_DB_USER> -Fp -c > <path/VCDBBackUpFile>
Fill in the EMB_DB_INSTANCE and EMB_DB_USER from the embedded_db.cfg configuration information listed in Step 4. Fill in the path/VCDBBackUpFile with the location and file name to be generated.
./pg_dump VCDB -U vc -Fp -c > /tmp/VCDBackUpfile1
Step 7 Verify that the backup file was created.
ls -l </path/VCDBBackUpFile>
ls -l /tmp/VCDBackUpfile1
-rw root root Aug 20 05:23 /tmp/VCDBackUpfile1
Step 8 Start the vCenter Server service.
service vmware-vpxd start
Waiting for the embedded database to start up: success
Verifying EULA acceptance: success
Executing pre-startup scripts...
Updating the vCenter endpoint in the Lookup Service.
Intializing registration provider...
Getting SSL certificates for
Service with name 'vpxd-blrrms-vcenter-ha-278ddba1-f0bd-4da5-8f5c-7e52daca9685' and ID 'local:f7bf9c8d-ea7c-459d-be49-b6b6281abdb4' was updated. Return code is: Success
Starting ldap-server..done
Starting vmware-vpxd: success
Waiting for vpxd to initialize:.success
Starting tomcat: success
Executing startup scripts...
Autodeploy service is disabled, skipping registration.
Starting VMware vSphere Profile-Driven Storage Service...Waiting for VMware vSphere Profile-Driven Storage Service...
VMware vSphere Profile-Driven Storage Service started.
Step 9 Using WinSCP or SCP, connect to the vCenter VM and download the VCDBackUpFile from /tmp/.
scp </path/VCDBBackUpFile> root@ :/tmp/
scp /tmp/VCDBackUpfile1 root@ :/tmp/
VMware vCenter Server Appliance
root@ 's password:
VCDBackUpfile1
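Steps 4 and 6 of the backup procedure read the instance name and user out of embedded_db.cfg by hand and substitute them into the pg_dump command. Because that file uses shell VAR='value' syntax, a small helper can perform the substitution automatically. The sketch below is illustrative only and is not part of the Cisco procedure; the sample values mirror the ones shown in Step 4.

```shell
# Sketch (not part of the Cisco procedure): build the pg_dump command of
# Step 6 from the settings captured in Step 4 of the backup procedure.
vcdb_dump_cmd() {
    cfg=$1; out=$2
    # embedded_db.cfg is a series of VAR='value' lines, so it can be sourced.
    . "$cfg"
    printf './pg_dump %s -U %s -Fp -c > %s\n' "$EMB_DB_INSTANCE" "$EMB_DB_USER" "$out"
}

# Demo against a throwaway config that mirrors the values shown in Step 4.
sample=$(mktemp)
printf "EMB_DB_INSTANCE='VCDB'\nEMB_DB_USER='vc'\n" > "$sample"
vcdb_dump_cmd "$sample" /tmp/VCDBackUpfile1
```

Running the printed command from the vPostgres utility directory of Step 5, with the service stopped as in Step 3, gives the same result as typing Step 6 manually.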

Restoring vCenter VM Database

Before You Begin

Step 1 Establish an SSH connection using the root user to connect to the vCenter VM.
ssh root@<VCENTER_IP_1>
The system responds by connecting the user to the vCenter VM.
Step 2 Enter the root user password to gain access.
[enter root password]
The system responds with a command prompt.
Step 3 Using WinSCP or SCP, connect to the vCenter Server Appliance and upload the backup copy of the VCDBackUp file into the /tmp/ directory.
scp /tmp/VCDBackUpfile1 root@ :/tmp/
VMware vCenter Server Appliance
root@ 's password:
VCDBackUpfile1 198MB 65.9MB/s 00:03 100%
Step 4 Stop the vCenter Server service.
service vmware-vpxd stop
Stopping VMware vSphere Profile-Driven Storage Service...
Stopped VMware vSphere Profile-Driven Storage Service.
Stopping tomcat: success
Stopping vmware-vpxd: success
Shutting down ldap-server..done
Step 5 Display the vPostgres database configuration file and make a note of the values for EMB_DB_INSTANCE, EMB_DB_USER, and EMB_DB_PASSWORD.
cat /etc/vmware-vpx/embedded_db.cfg
EMB_DB_INSTALL_DIR='/opt/vmware/vpostgres/9.0'
EMB_DB_TYPE='PostgreSQL'
EMB_DB_SERVER=' '
EMB_DB_PORT='5432'
EMB_DB_INSTANCE='VCDB'

EMB_DB_USER='vc'
EMB_DB_PASSWORD='$vq1oOh_CTmgG5E6'
EMB_DB_STORAGE='/storage/db/vpostgres'
Step 6 On the vCenter Server Appliance virtual machine, navigate to the vPostgres utility directory.
cd /opt/vmware/vpostgres/1.0/bin
The system responds with a command prompt.
Step 7 Restore the vCenter Server vPostgres database from the backup.
PGPASSWORD='<EMB_DB_PASSWORD>' ./psql -d <EMB_DB_INSTANCE> -Upostgres -f </path/VCDBBackUpFile>
# PGPASSWORD='$vq1oOh_CTmgG5E6' ./psql -d VCDB -Upostgres -f /tmp/VCDBackUpfile1
SET
SET
SET
SET
SET
SET
SET
ALTER TABLE
ALTER TABLE
...
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
Step 8 Restart the VMware vCenter Server service for the database restore to take effect.
service vmware-vpxd start
Waiting for the embedded database to start up: success
Verifying EULA acceptance: success
Executing pre-startup scripts...
Updating the vCenter endpoint in the Lookup Service.
Intializing registration provider...
Getting SSL certificates for
Service with name 'vpxd-blrrms-vcenter-ha-278ddba1-f0bd-4da5-8f5c-7e52daca9685' and ID 'local:f7bf9c8d-ea7c-459d-be49-b6b6281abdb4' was updated. Return code is: Success
Starting ldap-server..done
Starting vmware-vpxd: success
Waiting for vpxd to initialize:.success
Starting tomcat: success
Executing startup scripts...
Autodeploy service is disabled, skipping registration.
Starting VMware vSphere Profile-Driven Storage Service...Waiting for VMware vSphere Profile-Driven Storage Service...
VMware vSphere Profile-Driven Storage Service started.
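Before running the psql restore in Step 7, it is worth confirming that the transferred file really is a complete plain-format pg_dump; restoring a truncated copy can leave the database inconsistent. The check below is an assumption-level sketch, not a Cisco-documented step; it relies only on the "-- PostgreSQL database dump" banner that pg_dump writes at the top of plain-format output.

```shell
# Sketch (assumption, not from the Cisco procedure): sanity-check a backup
# file before feeding it to psql -f in Step 7.
looks_like_pg_dump() {
    f=$1
    # Non-empty, and carries the banner pg_dump puts at the top of -Fp output.
    [ -s "$f" ] && head -n 5 "$f" | grep -q 'PostgreSQL database dump'
}

# Demo with a stand-in file shaped like real pg_dump output.
sample=$(mktemp)
printf -- '--\n-- PostgreSQL database dump\n--\nSET statement_timeout = 0;\n' > "$sample"
looks_like_pg_dump "$sample" && echo "backup file OK"
```

If the check fails, re-copy the file in Step 3 rather than proceeding to the restore.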


CHAPTER 5 Configuring High Availability for VMware vCenter in RMS All-In-One Setup

This chapter describes the process of configuring high availability for the VMware vCenter in an RMS All-In-One setup. It provides the prerequisites and procedures required to configure RMS VMs, customize clusters, and test high availability on the Central node and vCenter VM.

Prerequisites, page 55
Guidelines and Limitations, page 56
Creating a High Availability Cluster, page 56
Adding Hosts to the High Availability Cluster, page 56
Adding NFS Datastore to the Host, page 56
Adding Network Redundancy for Hosts and Configuring vMotion, page 57
Installing the OVA, page 57
Updating Cluster Configuration, page 57
Migrating Central Node Datastore to NFS, page 57
Testing High Availability on the Central Node and vCenter VM, page 58

Prerequisites

Hardware
Two Cisco UCS 5108 or UCS 240 servers

Software
VMware vCenter should be installed, and the hosts should be loaded with ESXi 6.0.
A network file system (NFS) datastore with a minimum of 500 GB space should be available to both physical servers.

Guidelines and Limitations

To support redundancy, both servers must be identical.
A reliable high-speed wired network must exist.
Redundancy is available only for the Central node and vCenter VM.
There will be a downtime of around 10 minutes if one host fails.
Common NFS datastore availability must be ensured for the high availability feature to work successfully.
A common network port group should exist on the DVS.

Creating a High Availability Cluster

To create a high availability cluster, see Creating a High Availability Cluster, on page 10.

Adding Hosts to the High Availability Cluster

To add hosts to the high availability cluster, see Adding Hosts to the High Availability Cluster, on page 10.

Adding NFS Datastore to the Host

Before You Begin
NFS server details should already be available.

Step 1 Log in to the vSphere client.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters, select the host, and click the Configuration tab.
Step 3 In the Configuration tab, click the Storage option in the Hardware list to view the storage options.
Step 4 Click the Add Storage option to open the Add Storage wizard.
Step 5 In the Select Storage Type screen, select the Network File System option and click Next.
Step 6 In the Locate Network File System screen, enter the network file system server details, path, and datastore name. Uncheck the Mount NFS read only checkbox and then click Next.
Step 7 In the Ready to Complete screen, review the summary and click Finish. The newly-added NFS datastore information is displayed in the Configuration tab window.
Step 8 Repeat steps 1 to 7 to add NFS datastores to other hosts.
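The same NFS mount that the Add Storage wizard performs can also be scripted per host with the ESXi esxcli command set, which can be quicker when several hosts need the identical datastore. The sketch below only prints the command (a dry run) so it can be reviewed first; the NFS server, export path, and datastore name are illustrative placeholders, not values from this guide.

```shell
# Sketch: print the esxcli command that mounts an NFS datastore on an ESXi
# host, equivalent to steps 4 to 7 of the wizard above. Dry run only; the
# server, share, and volume name arguments are placeholders.
nfs_mount_cmd() {
    server=$1; share=$2; volname=$3
    printf 'esxcli storage nfs add --host=%s --share=%s --volume-name=%s\n' \
        "$server" "$share" "$volname"
}

nfs_mount_cmd nfs-server.example.com /export/rms-ha rms-nfs-ds
```

The printed command would be run in the ESXi shell of each host (mirroring Step 8); mounting read-write matches the unchecked Mount NFS read only option in Step 6.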

Adding Network Redundancy for Hosts and Configuring vMotion

To add network redundancy for hosts and configure vMotion, see Adding Network Redundancy for Hosts and Configuring vMotion, on page 11.

Installing the OVA

If you have completed the OVA installation, proceed to Updating Cluster Configuration, on page 15. If you have not completed the OVA installation, see the Preparing the OVA Descriptor Files section in the Cisco RAN Management System Installation Guide.

Note: If you are configuring high availability in an All-In-One RMS setup:
Deploy the setup on either of the two hosts and ensure that only the Central node has the NFS datastore.
Complete this procedure, "Installing the OVA", only after you have completed the "All-In-One Redundant Deployment" procedure provided in the Cisco RAN Management System Installation Guide.

Updating Cluster Configuration

To update the cluster configuration, see Updating Cluster Configuration, on page 15.

Migrating Central Node Datastore to NFS

Step 1 Log in to vCenter.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters and select the Central node.
Step 3 Right-click on the Central node and click Power > Shut Down Guest to shut down the Central node.
Step 4 After the shutdown, right-click on the Central node and click Migrate to open the Migrate Virtual Machine wizard.
Step 5 In the Select Migration Type screen, select the Change Datastore option and click Next.
Step 6 In the Storage screen, select the NFS datastore from the list that is displayed and click Next.
Step 7 In the Ready to Complete screen, check the summary and click Finish. The migration takes some time to complete; verify progress by viewing the progress bar.

Testing High Availability on the Central Node and vCenter VM

To test high availability on the Central node and vCenter VM, follow these procedures:
Testing High Availability Failover, on page 17
Testing the vSphere High Availability Reset, on page 17
Testing Accidental Failure on a Host, on page 59

Testing High Availability Failover

Step 1 Log in to the vSphere client.
Step 2 In the navigation pane, expand Home > Inventory > Hosts and Clusters, select the individual hosts under the new cluster, select the Virtual Machines tab, and verify the individual host settings.
Step 3 Select one of the hosts in the navigation pane, right-click, and click Reboot.
Step 4 After 5 minutes, select the other host and click the Virtual Machines tab. The Virtual Machines tab should display the VM from the rebooted host under the current host. The host that was rebooted should not have any VM under it.

Testing the vSphere High Availability Reset

Before You Begin
Perform basic failover tests to validate whether the vSphere high availability cluster is functioning as expected for a VM failure.

Step 1 Log in to the VM, enter sudo mode, and trigger its failure. Establish an SSH connection to the VM.
ssh
The system responds by connecting the user to the SantaClara RDU server.
Step 2 Use the sudo command to gain access to the root user account.
sudo su -
The system responds with a password prompt.
Step 3 Enter your individual user password to gain access.

[enter your password]
The system responds with a command prompt.
Step 4 Create the VM reset.
echo c > /proc/sysrq-trigger
The system responds with a command prompt.
Step 5 After triggering the VM reset from the command prompt, select the VM on which the reset was attempted and select the Tasks & Events tab. Click Events. This tab should be updated with information about the CPU of the VM being disabled. Later, vSphere should detect the VM being disabled and, on missing the heartbeat from the VM, the vSphere high availability should reset the VM.

Testing Accidental Failure on a Host

This procedure describes how to test an accidental failure or crash of one of the hosts.

Step 1 Log in to vCenter.
Step 2 In the navigation pane, expand Home > Inventory > Hosts & Clusters and select the HA cluster.
Step 3 Identify the host (for example, primary) on which the Central node and vCenter VM are present.
Step 4 Log in to the Cisco Integrated Management Controller (CIMC) console of the UCS server where the high availability VMs (Central node and vCenter VM) are present.
Step 5 In the Server Summary screen, click Power Cycle Server to simulate an accidental failure.
Step 6 Click OK in the dialog box that is displayed. Return to vCenter.
Step 7 Select the other host (for example, secondary) in the cluster after five to eight minutes and click the Virtual Machines tab. The Virtual Machines tab should display the VMs (Central and vCenter VM) from the rebooted host under the current host.


CHAPTER 6 Configuring High Availability for the PMG DB

This chapter describes the process of configuring high availability for the PMG DB. It provides the prerequisites and procedures required to configure and test the hot standby and cold standby for Cisco RMS. It also includes the procedures required to back up and restore standby and primary configurations, troubleshoot Data Guard on the PMG DB, and upgrade Cisco RMS.

The Oracle Data Guard tool is used to ensure high availability and disaster recovery for the PMG DB in Cisco RMS. This tool is used to configure one primary database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. The primary and standby databases can be managed using the SQL CLIs or the Data Guard Broker interfaces, including a CLI and a GUI that is integrated in the Oracle Enterprise Manager.

The Oracle Enterprise Manager GUI or the Data Guard (DGMGRL) CLI is used to enable fast-start failover so that the configuration fails over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the Data Guard Broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, without the need for DBA intervention.

A failover occurs when the primary database is unavailable. Failover is performed only in the event of a failure of the primary database, and the failover results in a transition of a standby database to the primary role.

A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role.

The following sections describe the prerequisites, the process for configuring high availability for the PMG DB, and Cisco RMS upgrade information.

Note: The output of some of the commands may differ slightly based on the installation setup and the time of execution.

Prerequisites, page 62
Configuration Workflow, page 63
Configuring the Standby Server Setup, page 64
Setting Up the Oracle Data Guard Broker, page 76
Enabling Flashback, page 80
Configuring Hot Standby for PMG DB, page 81
Configuring Cold Standby, page 94
Testing Hot Standby, page 95
Testing Cold Standby, page 105
Rolling Back and Cleaning Up Standby and Primary Configurations, page 115
Troubleshooting Data Guard on PMG DB, page 125

Prerequisites

The primary server should have a running instance.
The standby server should have a software-only installation.
The operator or user of this procedure should have the following experience:
Knowledge of basic database/DBA concepts
Knowledge of basic Linux/shell commands
Ability to edit files with vi or vim
Ability to view files (cat, tail, more, less)

The ORACLE_HOME directory path should be /u01/app/oracle/product/11.2.0/dbhome_1. If this path is different, the respective file paths (for example, data files) will differ.
To enable the failover feature for Cisco RMS applications connecting to the PMG DB, the Cisco RMS Central node should be configured for the PMG DB as per the install guide (see Cisco RAN Management System Installation Guide).

Note: The installation can be performed on either the UCS 5108 Blade Server or the UCS 240 Server. There is no specific dependency based on the type of hardware.

Configuration Workflow

The following table provides the general flow that can be followed to perform tasks for different types of redundancy setup for the PMG DB using the Oracle Data Guard tool.

1. Configure the Standby Server: Configuring the Standby Server Setup, on page 64
2. Add Only Hot Standby to Primary Server: Configuring the Hot Standby Server, on page 70
3. Add Only Cold Standby to Primary Server: Configuring Primary With Only Cold Standby, on page 94
4. Add Both Hot and Cold Standby to Primary Server: Configuring Primary With Hot and Cold Standby, on page 94
5. Test Hot Standby Setup: Testing Hot Standby, on page 95
6. Test Cold Standby Setup: Testing Site Failure, on page 106; Recovering Original Primary After Site Failure, on page
7. Troubleshooting PMG DB: Troubleshooting Data Guard on PMG DB, on page 125
8. Rollback and Clean-up: Rolling Back and Cleaning Up Standby and Primary Configurations, on page
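Once the broker setup later in this chapter (Setting Up the Oracle Data Guard Broker, page 76) is complete, the switchover and fast-start failover operations described above are driven from the DGMGRL CLI. As orientation only (the authoritative commands are in the referenced procedures), a typical broker session using this chapter's PMGDB and PMGDB_STBY names looks like the following:

```
DGMGRL> CONNECT sys@PMGDB
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO 'PMGDB_STBY';
DGMGRL> ENABLE FAST_START FAILOVER;
```

SHOW CONFIGURATION verifies that both databases are enabled before a SWITCHOVER (a planned role reversal); ENABLE FAST_START FAILOVER activates automatic failover and additionally requires a running observer process.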

Configuring the Standby Server Setup

The following procedure describes how to configure a standby server for the primary server. If the PMG DB redundancy setup involves a primary with only a cold standby, this standby server can be configured as the cold standby server. If the redundancy setup involves a primary with only a hot standby, or a primary with both hot and cold standby, this standby can be configured as a hot standby.

Configuring the Primary Server, on page 64
Configuring the Hot Standby Server, on page 70

Configuring the Primary Server

Step 1 Log in to the primary server with the username oracle.
Step 2 Log in to the SQL prompt.
$ export ORACLE_SID=PMGDB
$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Jun 16 17:51:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
Step 3 Create the pfile as a backup. This pfile can be used later when a rollback is needed. A specific path for the initPMGDB_pre_dgsetup.ora file can be specified if the file is to be stored in some backup directory (for example, /backup/initPMGDB_pre_dgsetup.ora). If the path is not specified, the file gets created under the default directory, that is, $ORACLE_HOME/dbs.
SQL> CREATE PFILE='initPMGDB_pre_dgsetup.ora' FROM SPFILE;
File created.

Logging

Step 1 Ensure that the primary database is in archivelog mode.
SQL> SELECT log_mode FROM v$database;
LOG_MODE
------------
NOARCHIVELOG

Step 2 If it is in noarchivelog mode, switch to archivelog mode:
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP MOUNT;
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
SQL> ALTER DATABASE ARCHIVELOG;
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> SELECT log_mode FROM v$database;
LOG_MODE
------------
ARCHIVELOG
Step 3 Enable forced logging by using the following command.
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.

Initializing Parameters

Step 1 Check the settings for the DB_NAME and DB_UNIQUE_NAME parameters. In this case, they are both set to "PMGDB" on the primary database.
SQL> show parameter db_name
NAME TYPE VALUE
db_name string PMGDB
SQL> show parameter db_unique_name
NAME TYPE VALUE
db_unique_name string PMGDB
Step 2 Enter the DB_NAME of the standby database. It should be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. The DB_UNIQUE_NAME values of the primary and standby databases should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter. For this example, the standby database has the value "PMGDB_STBY".

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PMGDB,PMGDB_STBY)';
System altered.
Step 3 Set the suitable remote archive log destinations. In this case, the flash recovery area is used for the local location; however, another location can be specified. Note that the SERVICE and the DB_UNIQUE_NAME for the remote location reference the standby location.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=pmgdb_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PMGDB_STBY';
System altered.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
System altered.
Step 4 Set the LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters to appropriate values, and set REMOTE_LOGIN_PASSWORDFILE to exclusive.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30;
System altered.
SQL> ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
System altered.
Step 5 Set the following parameters (in addition to the previous settings) to ensure that the primary server is ready to switch to the standby role. (This setting is recommended.) Adjust the *_CONVERT parameters to account for filename and path differences between the servers.
SQL> ALTER SYSTEM SET FAL_SERVER=PMGDB_STBY;
System altered.
SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='PMGDB_STBY','PMGDB' SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='PMGDB_STBY','PMGDB' SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
System altered.
Step 6 Restart the database to implement the modifications made to the parameters.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area bytes

Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
Database opened.
Step 7 Exit from the SQL prompt.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release bit Production

Setting Up the Service

Step 1 Provide the values for the primary and standby databases in the "$ORACLE_HOME/network/admin/tnsnames.ora" files on both servers. The following values are required. The values of the database server hostname or IP address, ports, and Oracle home should be specified as per the installation setup.
PMGDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primary-server-host-address>)(PORT = <port number>))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PMGDB)
    )
  )
PMGDB_STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <standby-server-host-address>)(PORT = <port number>))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PMGDB)
    )
  )
Step 2 Edit "$ORACLE_HOME/network/admin/listener.ora" to add the entry that the Data Guard Broker will refer to. If SID_LIST is already present, add SID_DESC to the list.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_NAME = PMGDB_DGMGRL)
      (ORACLE_HOME = <oracle-home-directory-path>)
      (SID_NAME = PMGDB)
    )
  )

Starting the Listener

Ensure that the listener is started on the primary server.

Note: Not all of the output is displayed.

$ lsnrctl status

If not running:
LSNRCTL for Linux: Version Production on 02-JUL :35:37
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host address>)(PORT=<port number>)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 111: Connection refused

If running:
LSNRCTL for Linux: Version Production on 02-JUL :43:23
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host address>)(PORT=<port number>)))
STATUS of the LISTENER
Services Summary...
The command completed successfully

1 If the listener is not running (that is, TNS-12541: TNS:no listener), start it.
$ lsnrctl start
LSNRCTL for Linux: Version Production on 17-JUN :10:31
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Starting /u01/app/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait...
The command completed successfully
2 If the listener is running, reload it.
$ lsnrctl reload
LSNRCTL for Linux: Version Production on 17-JUN :12:23
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to
The command completed successfully

Backing Up the Primary Database

Step 1 Back up the primary database, which will be restored on the standby database. To do this, open another console for the primary database server and log in as an oracle user.
Step 2 Set the ORACLE_SID to PMGDB and take a backup.
Note: The complete output is not displayed.
$ export ORACLE_SID=PMGDB
$ rman target=/
Recovery Manager: Release Production on Thu Jul 3 18:53:
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PMGDB (DBID= )
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
Starting backup at 03-JUL-14
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=97 device type=disk
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=15 RECID=93 STAMP=
input archived log thread=1 sequence=16 RECID=95 STAMP=
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/pmgdb/mapinfo_ts.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/pmgdb/pmgdb_ts.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/pmgdb/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/pmgdb/sysaux01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/pmgdb/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/pmgdb/users01.dbf
piece handle=/u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_03/o1_mf_annnn_tag t185830_9vbpmykl_.bkp tag=tag t comment=none
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-JUL-14
RMAN> exit
Recovery Manager complete.

Note: To check whether the backup completed successfully, check the last statement in the output. For example, Finished backup at 03-JUL-14.

Creating Standby Control File and PFILE

Step 1 Log in to the SQL prompt.
$ export ORACLE_SID=PMGDB
$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Jun 16 17:51:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
Step 2 Create a control file for the standby database using the following command on the primary database.

78 Configuring the Hot Standby Server Configuring High Availability for the PMG DB Step 3 SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/pmgdb_stby.ctl'; Database altered. Create a parameter file for the standby database. Step 4 SQL> CREATE PFILE='/tmp/initPMGDB_stby.ora' FROM SPFILE; File created. Exit from the sql prompt. Step 5 SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production Edit the PFILE generated, that is, /tmp/initpmgdb_stby.ora, making the entries relevant for the standby database. Because this is a replica of the original server, only the following parameters should be modified. Modify the parameter if it exists, otherwise add the parameter with the specified value. *.db_unique_name='pmgdb_stby' *.fal_server='pmgdb' *.log_archive_dest_2='service=pmgdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PMGDB' Configuring the Hot Standby Server Step 1 Step 2 Log in to the secondary or standby server with the username oracle. Add the tnsnames entries to "$ORACLE_HOME/network/admin/tnsnames.ora". PMGDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <primary-server-host-address>)(port = <port number>)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PMGDB) ) ) PMGDB_STBY = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <standby-server-host-address>)(port = <port number>)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PMGDB) ) ) Step 3 Edit "$ORACLE_HOME/network/admin/listener.ora" to add the entry that the Data Guard Broker will refer to. 70

79 Configuring High Availability for the PMG DB Configuring the Hot Standby Server SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (GLOBAL_NAME = PMGDB_STBY_DGMGRL) (ORACLE_HOME = <oracle-home-directory-path>) (SID_NAME = PMGDB) ) ) Copying Files Step 1 Create the necessary directories on the standby server. $ mkdir -p /u01/app/oracle/oradata/pmgdb The system responds with a command prompt. $ mkdir -p /u01/app/oracle/flash_recovery_area/pmgdb The system responds with a command prompt. $ mkdir -p /u01/app/oracle/admin/pmgdb/adump The system responds with a command prompt. Step 2 Copy the files from the primary to the standby server. Copy the standby control file to all locations. $ scp oracle@<primary-server-host-address>:/tmp/pmgdb_stby.ctl /u01/app/oracle/oradata/pmgdb/control01.ctl <scp-output> $ cp /u01/app/oracle/oradata/pmgdb/control01.ctl /u01/app/oracle/flash_recovery_area/pmgdb/control02.ctl The system responds with a command prompt. Step 3 Copy archivelogs and backups. $ scp -r oracle@<primary-server-host-address>:/u01/app/oracle/flash_recovery_area/pmgdb/archivelog /u01/app/oracle/flash_recovery_area/pmgdb <scp-output> $ scp -r oracle@<primary-server-host-address>:/u01/app/oracle/flash_recovery_area/pmgdb/backupset /u01/app/oracle/flash_recovery_area/pmgdb <scp-output> Step 4 Copy the parameter file. $ scp oracle@<primary-server-host-address>:/tmp/initPMGDB_stby.ora /tmp/initPMGDB_stby.ora <scp-output> Step 5 Copy the remote login password file. 71

80 Configuring the Hot Standby Server Configuring High Availability for the PMG DB $ scp oracle@<primary-server-host-address>:$oracle_home/dbs/orapwpmgdb $ORACLE_HOME/dbs <scp-output> Note The backups are copied across to the standby server as part of the flash recovery area (FRA) copy. If backups are not held within the FRA, ensure that those are copied to the standby server and make them available from the same path as used on the primary server. Starting the Listener on Standby Server Ensure that the listener is started on the standby server. Note All of the output is not displayed. $ lsnrctl status If not running: LSNRCTL for Linux: Version Production on 02-JUL :35:37 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<primary-server-host-address>)(PORT=<port number>)) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 111: Connection refused If running: LSNRCTL for Linux: Version Production on 02-JUL :43:23 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<primary-server-host-address>)(PORT=<port number>))) STATUS of the LISTENER Services Summary... The command completed successfully 1 If listener is not running (that is, TNS-12541: TNS:no listener), start it. $ lsnrctl start LSNRCTL for Linux: Version Production on 17-JUN :10:31 Copyright (c) 1991, 2009, Oracle. All rights reserved. Starting /u01/app/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait... The command completed successfully 72

81 Configuring High Availability for the PMG DB Configuring the Hot Standby Server 2 If listener is running, reload it. $ lsnrctl reload LSNRCTL for Linux: Version Production on 17-JUN :12:23 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to The command completed successfully Restoring Backup Step 1 Create the SPFILE from the modified PFILE. $ export ORACLE_SID=PMGDB The system responds with a command prompt. $ sqlplus / as sysdba SQL*Plus: Release Production on Mon Jun 16 12:24: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to an idle instance. Step 2 SQL> CREATE SPFILE FROM PFILE='/tmp/initPMGDB_stby.ora'; File created. Exit from the sql prompt. Step 3 SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production Restore the backup files. Depending on database size, restore time will vary. Note All of the output is not displayed. $ export ORACLE_SID=PMGDB $ rman target=/ Recovery Manager: Release Production on Thu Jul 3 19:23: Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. connected to target database (not started) RMAN> STARTUP MOUNT; Oracle instance started database mounted Total System Global Area Fixed Size Variable Size Database Buffers Redo Buffers bytes bytes bytes bytes bytes RMAN> RESTORE DATABASE; Starting restore at 03-JUL-14 73

82 Configuring the Hot Standby Server Configuring High Availability for the PMG DB Starting implicit crosscheck backup at 03-JUL-14 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_03/ o1_mf_nnndf_tag t185812_9vbpmf2x_.bkp tag=tag t channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:26 Finished restore at 03-JUL-14 RMAN> exit Recovery Manager complete. Creating Redo Logs Step 1 Exit from the sqlplus prompt if already logged in and log in again. SQL> exit Disconnected $ sqlplus / as sysdba SQL*Plus: Release Production on Mon Jun 16 12:24: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to an idle instance. Step 2 Create online redo logs for the standby. SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL; System altered. SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo01.log') SIZE 50M; Database altered. SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo02.log') SIZE 50M; Database altered. SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo03.log') SIZE 50M; Database altered. SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO; System altered. Step 3 Create standby redo logs on both the standby and the primary database (in case of switchovers). The standby redo logs should be at least as big as the largest online redo log, and there should be one extra group per thread compared to the online redo logs. In this case, the following standby redo logs must be created on both servers. SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo01.log') SIZE 50M; Database altered. SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo02.log') SIZE 50M; Database altered. SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo03.log') SIZE 50M; Database altered. SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo04.log') SIZE 50M; Database altered. Step 4 Repeat steps 1 to 3 to create standby redo logs on the primary server. After this is complete, the apply process can be started. Starting the Apply Process Start the apply process on the standby server. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION; Database altered. Checking Status Step 1 Check the status on the primary database. Note The status LOG SWITCH GAP or RESOLVABLE GAP will change after the Data Guard Broker is set up. SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS PMGDB READ WRITE PMGDB PRIMARY LOG SWITCH GAP Step 2 Check the status on the secondary database. Note The switchover status could be TO PRIMARY or SESSIONS ACTIVATE. 75

84 Setting Up the Oracle Data Guard Broker Configuring High Availability for the PMG DB SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS PMGDB MOUNTED PMGDB_STBY PHYSICAL STANDBY TO PRIMARY Setting Up the Oracle Data Guard Broker The Oracle Data Guard Broker is used to create a broker configuration that allows the broker to manage and monitor primary and standby databases together as an integrated unit. Step 1 Step 2 Log in to the primary server with the username oracle. Check the Data Guard Broker process on the primary database. SQL> sho parameter dg_broker NAME TYPE VALUE dg_broker_config_file1 string /u01/app/oracle/product/ /dbhome_1/dbs/dr1pmgdb.dat dg_broker_config_file2 string /u01/app/oracle/product/ /dbhome_1/dbs/dr2pmgdb.dat Step 3 dg_broker_start boolean FALSE Start the Data Guard Broker process on the primary database. Step 4 SQL> alter system set dg_broker_start=true scope=both; System altered. Check the DG_BROKER on the standby database and start it. SQL> sho parameter dg_broker NAME TYPE VALUE dg_broker_start boolean FALSE Step 5 SQL> alter system set dg_broker_start=true scope=both ; System altered. Verify the "$ORACLE_HOME/network/admin/listener.ora" file. If required, edit the file that includes the db_unique_name_dgmgrl.db_domain values for the GLOBAL_DBNAME in both primary and standby database. To set the value, check the db_domain value on the primary and standby database. 76

85 Configuring High Availability for the PMG DB Setting Up the Oracle Data Guard Broker SQL> show parameter db_domain NAME TYPE VALUE db_domain string Because the value of db_domain is null, the value of GLOBAL_DBNAME is PMGDB_DGMGRL for the primary database and PMGDB_STBY_DGMGRL for the standby. The primary "$ORACLE_HOME/network/admin/listener.ora" file is as follows. SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (GLOBAL_NAME = PMGDB_DGMGRL) (ORACLE_HOME = <oracle-home-directory-path>) (SID_NAME = PMGDB) ) ) Step 6 Verify the "$ORACLE_HOME/network/admin/listener.ora" file on the standby database. Step 7 Set up the Data Guard Broker configuration. Open another console on the primary or standby server to open the DGMGRL CLI. Note The DGMGRL CLI can be accessed from any server where Oracle Client Administrator is installed. $ dgmgrl DGMGRL for Linux: Version bit Production Copyright (c) 2000, 2009, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. DGMGRL> connect sys@pmgdb Password: Connected. Step 8 DGMGRL> create configuration 'dgpmgdb' as primary database is 'PMGDB' connect identifier is PMGDB; Configuration "dgpmgdb" created with primary database "PMGDB". Check the status of the configuration after the configuration is created. DGMGRL> show configuration Configuration - dgpmgdb Protection Mode : MaxPerformance Databases : PMGDB - Primary database Fast-Start Failover : DISABLED Configuration Status : DISABLED Step 9 Add the standby database to the Data Guard Broker configuration. DGMGRL> add database 'PMGDB_STBY' as connect identifier is PMGDB_STBY maintained as physical; Database "PMGDB_STBY" added DGMGRL> show configuration Configuration - dgpmgdb Protection Mode : MaxPerformance Databases : PMGDB - Primary database : PMGDB_STBY - Physical standby database 77

86 Setting Up the Oracle Data Guard Broker Configuring High Availability for the PMG DB Step 10 Fast-Start Failover : DISABLED Configuration Status : DISABLED Enable the configuration. DGMGRL> enable configuration Enabled. Step 11 DGMGRL> show configuration Configuration - dgpmgdb Protection Mode : MaxPerformance Databases : PMGDB - Primary database : PMGDB_STBY - Physical standby database Fast-Start Failover : DISABLED Configuration Status : SUCCESS View the primary and standby database properties. Note While copying the command from the document, the single quotes (for example, as in 'PMGDB') may not be copied correctly, the command may have to be entered at the prompt. DGMGRL> show database verbose 'PMGDB' Database - PMGDB Role: PRIMARY Intended State: TRANSPORT-ON Instance(s): PMGDB Properties: DGConnectIdentifier = 'pmgdb' ObserverConnectIdentifier = '' LogXptMode = 'ASYNC' DelayMins = '0' Binding = 'optional' MaxFailure = '0' MaxConnections = '1' ReopenSecs = '300' NetTimeout = '30' RedoCompression = 'DISABLE' LogShipping = 'ON' PreferredApplyInstance = '' ApplyInstanceTimeout = '0' ApplyParallel = 'AUTO' StandbyFileManagement = 'AUTO' ArchiveLagTarget = '0' LogArchiveMaxProcesses = '30' LogArchiveMinSucceedDest = '1' DbFileNameConvert = 'PMGDB_STBY, PMGDB' LogFileNameConvert = 'PMGDB_STBY, PMGDB' FastStartFailoverTarget = '' StatusReport = '(monitor)' InconsistentProperties = '(monitor)' InconsistentLogXptProps = '(monitor)' SendQEntries = '(monitor)' LogXptStatus = '(monitor)' RecvQEntries = '(monitor)' HostName = 'oracle-vm-primary' SidName = 'PMGDB' StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oracle-vm-primary)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PMGDB_DGMGRL)(INSTANCE_NAME=PMGDB)(SERVER=DEDICATED)))' StandbyArchiveLocation AlternateLocation LogArchiveTrace LogArchiveFormat TopWaitEvents = 'USE_DB_RECOVERY_FILE_DEST' = '' = '0' = '%t_%s_%r.arc' = '(monitor)' 78

87 Configuring High Availability for the PMG DB Verifying Log Synchronization on Standby Server Database Status: SUCCESS DGMGRL> show database verbose 'PMGDB_STBY' Database - PMGDB_STBY Role: PHYSICAL STANDBY Intended State: APPLY-ON Transport Lag: 0 seconds Apply Lag: 0 seconds Real Time Query: OFF Instance(s): PMGDB Properties: DGConnectIdentifier = 'pmgdb_stby' ObserverConnectIdentifier = '' LogXptMode = 'ASYNC' DelayMins = '0' Binding = 'OPTIONAL' MaxFailure = '0' MaxConnections = '1' ReopenSecs = '300' NetTimeout = '30' RedoCompression = 'DISABLE' LogShipping = 'ON' PreferredApplyInstance = '' ApplyInstanceTimeout = '0' ApplyParallel = 'AUTO' StandbyFileManagement = 'AUTO' ArchiveLagTarget = '0' LogArchiveMaxProcesses = '30' LogArchiveMinSucceedDest = '1' DbFileNameConvert = 'PMGDB_STBY, PMGDB' LogFileNameConvert = 'PMGDB_STBY, PMGDB' FastStartFailoverTarget = '' StatusReport = '(monitor)' InconsistentProperties = '(monitor)' InconsistentLogXptProps = '(monitor)' SendQEntries = '(monitor)' LogXptStatus = '(monitor)' RecvQEntries = '(monitor)' HostName = 'blr-oracle2-standby' SidName = 'PMGDB' StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=blr-oracle2-standby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PMGDB_STBY_DGMGRL)(INSTANCE_NAME=PMGDB)(SERVER=DEDICATED)))' StandbyArchiveLocation AlternateLocation LogArchiveTrace LogArchiveFormat TopWaitEvents Database Status: SUCCESS = 'USE_DB_RECOVERY_FILE_DEST' = '' = '0' = '%t_%s_%r.arc' = '(monitor)' Verifying Log Synchronization on Standby Server Step 1 Identify the existing archived redo log files on the standby server. Note the latest sequence number. Note Sequence number and timestamp in the output will vary depending on the installation setup. 79

88 Enabling Flashback Configuring High Availability for the PMG DB SQL> ALTER SESSION SET nls_date_format='dd-mon-yyyy HH24:MI:SS'; Session altered. SQL> SELECT sequence#, first_time, next_time, applied FROM v$archived_log ORDER BY sequence#; SEQUENCE# FIRST_TIME NEXT_TIME APPLIED JUL :06:27 10-JUL :06:53 YES JUL :06:53 10-JUL :36:09 YES JUL :36:09 10-JUL :51:38 IN-MEMORY Step 2 Force a log switch to archive the current online redo log file on the primary server. SQL> ALTER SYSTEM SWITCH LOGFILE; System altered. Step 3 Verify on the standby server that the new redo data was archived and applied; that is, a new sequence number with the latest redo apply timestamp is displayed. Note The value of the APPLIED column for the most recently received log file will be either IN-MEMORY or YES if that log file has been applied. SQL> SELECT sequence#, first_time, next_time, applied FROM v$archived_log ORDER BY sequence#; SEQUENCE# FIRST_TIME NEXT_TIME APPLIED JUL :06:27 10-JUL :06:53 YES JUL :06:53 10-JUL :36:09 YES JUL :36:09 10-JUL :51:38 YES JUL :51:38 10-JUL :03:48 IN-MEMORY Enabling Flashback Flashback Database is needed to enable fast-start failover for hot standby. It is also advisable to enable Flashback Database for cold standby because it helps in recovering the database. Step 1 Check if flashback is on from the primary database sql prompt. SQL> select flashback_on from v$database; FLASHBACK_ON NO Step 2 Set flashback on if it is not already on. 80

89 Configuring High Availability for the PMG DB Configuring Hot Standby for PMG DB SQL> alter database flashback on; Database altered. Step 3 Check if flashback is on from the standby database sql prompt. SQL> select flashback_on from v$database; FLASHBACK_ON NO Step 4 Cancel the standby apply process if flashback mode has to be turned on. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; Database altered. Step 5 Set flashback on if it is not already on. SQL> alter database flashback on; Database altered. Step 6 Restart the apply process. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION; Database altered. Configuring Hot Standby for PMG DB Hot standby refers to a standby server that becomes active and acts as the primary when the original primary server fails. The failover is automatic, without manual intervention and with little or no downtime. Typically, hot standby servers are placed on different hosts in the same site as the primary server. The following procedure describes how to add a hot standby. Skip this section if there is no need to configure a hot standby. Setting Up the Hot Standby Before You Begin Ensure that you have completed the Configuring the Standby Server Setup, on page 64 to create a standby server. 81
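Before enabling failover, it can help to confirm that the broker configuration created earlier reports a healthy status. The sketch below is illustrative rather than part of the documented procedure: check_config is a hypothetical helper, and the dgmgrl invocation shown in the comment assumes the sys credentials and the PMGDB TNS alias used elsewhere in this guide.

```shell
# check_config: hypothetical helper that inspects "show configuration" output
# and reports whether it is safe to proceed with failover configuration.
check_config() {
  case "$1" in
    *"Configuration Status"*SUCCESS*) echo "proceed" ;;
    *) echo "investigate before enabling failover" ;;
  esac
}

# Typical (assumed) use on a host with the Oracle client installed:
#   check_config "$(dgmgrl -silent sys/<password>@PMGDB 'show configuration')"
check_config "Configuration Status : SUCCESS"
```

If the helper reports anything other than "proceed", resolve the broker warnings before moving on to the failover steps.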

90 Setting Up the Hot Standby Configuring High Availability for the PMG DB Enabling Failover DETAILED STEPS Step 1 To enable failover, return to the console where the DGMGRL tool is running and change the LogXptMode property for PMGDB. DGMGRL> edit database "PMGDB" set property LogXptMode='SYNC'; Property "logxptmode" updated Step 2 Change the LogXptMode property for PMGDB_STBY. DGMGRL> edit database "PMGDB_STBY" set property LogXptMode='SYNC'; Property "logxptmode" updated Step 3 Check the LogXptStatus. DGMGRL> show database 'PMGDB' 'LogXptStatus'; LOG TRANSPORT STATUS PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME STATUS PMGDB PMGDB_STBY Step 4 Set the protection mode to MaxAvailability. DGMGRL> edit configuration set protection mode as MaxAvailability; Succeeded. Step 5 Return to the DGMGRL console and enable failover. If not connected, connect as sys@pmgdb. DGMGRL> enable fast_start failover; Enabled Step 6 Open another console for the standby server and log in as an oracle user. Start the Observer as a background process. The Observer monitors both the primary and standby databases and detects failures, if any. $ nohup dgmgrl -silent sys/<password>@pmgdb "start observer" & $ cat nohup.out Observer started Note It is recommended to run the Observer on a server host that is separate from the primary and standby servers. This host needs Oracle Client Administrator installed to run the DGMGRL tool and the Observer. Otherwise, run the Observer on the standby server. In case of failure, if the primary and standby roles are switched, ensure that the Observer is running on the new standby server. 82

91 Configuring High Availability for the PMG DB Setting Up the Hot Standby Command or Action Purpose Step 7 Return to the previous DGMGRL console and verify the configuration. DGMGRL> show configuration verbose Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB - Primary database PMGDB_STBY - (*) Physical standby database Warning: ORA-16826: apply service state is inconsistent with the DelayMins property (*) Fast-Start Failover target Step 8 Fast-Start Failover: ENABLED Threshold: 30 seconds Target: PMGDB_STBY Observer: oracle-vm-primary Lag Limit: 30 seconds (not in use) Shutdown Primary: TRUE Auto-reinstate: TRUE Configuration Status: WARNING Start recovery from the current log file on the standby server if the above error (mentioned in the previous step) is observed. SQL> alter database recover managed standby database cancel; Database altered. Step 9 SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION; Database altered. Return to the DGMGRL console to check the configuration again. DGMGRL> show configuration verbose Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB - Primary database PMGDB_STBY - (*) Physical standby database (*) Fast-Start Failover target Fast-Start Failover: ENABLED Threshold: 30 seconds Target: PMGDB_STBY Observer: oracle-vm-primary Lag Limit: 30 seconds (not in use) Shutdown Primary: TRUE Auto-reinstate: TRUE Configuration Status: SUCCESS 83
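After the Observer has been started in the background with nohup, a quick way to confirm it is still alive on that host is sketched below. This is a convenience check assumed by the editor, not a documented step; pgrep -f matches against full process command lines.

```shell
# Hedged sketch: report whether the background Observer started with
#   nohup dgmgrl -silent sys/<password>@pmgdb "start observer" &
# is still running on this host.
if pgrep -f "dgmgrl.*start observer" >/dev/null 2>&1; then
  echo "observer running"
else
  echo "observer not running"
fi
```

If the Observer is not running, restart it as shown in the Enabling Failover steps, since fast-start failover depends on it.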

92 Setting Up the Hot Standby Configuring High Availability for the PMG DB Checking Status Step 1 Check status on primary database. Note The status LOG SWITCH GAP or RESOLVABLE GAP will change after the Data Guard Broker is set up. SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database Step 2 NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS PMGDB READ WRITE PMGDB PRIMARY LOG SWITCH GAP Check status on secondary database. Note The switchover status could be TO PRIMARY or SESSIONS ACTIVATE. SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS PMGDB MOUNTED PMGDB_STBY PHYSICAL STANDBY TO PRIMARY Initializing Parameters for Standby Server Step 1 Add db_unique_name to log_archive_config Step 2 SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PMGDB,PMGDB_STBY,PMGDB_STBY2)'; System altered. Set and enable LOG_ARCHIVE_DEST_3. Step 3 SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=PMGDB_stby2 NOAFFIRM ASYNC VALID_FOR=( ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PMGDB_STBY2'; System altered. Set the parameters to ensure that the primary is ready to switch roles to become a standby. SQL> ALTER SYSTEM SET FAL_SERVER=PMGDB_STBY,PMGDB_STBY2; System altered. SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='PMGDB','PMGDB_STBY', 'PMGDB', 'PMGDB_STBY2' SCOPE=SPFILE; System altered. 84

93 Configuring High Availability for the PMG DB Setting Up the Hot Standby Step 4 SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT ='PMGDB','PMGDB_STBY', 'PMGDB', 'PMGDB_STBY2' SCOPE=SPFILE; System altered. Restart the database to implement the modifications made to the parameters. SQL> shutdown immediate Database closed. Database dismounted. ORACLE instance shut down. SQL> startup ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Database opened. Setting Up the Service for Standby Server Provide the values for the new standby databases in the "$ORACLE_HOME/network/admin/tnsnames.ora" files on both the servers. The following values are used during the setup. PMGDB_STBY2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <standby2-server-host-address>)(port = <port number>)) ) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PMGDB) ) Backing Up the Primary Database Step 1 Step 2 Back up the primary database, which will be restored on the standby database. To do this, open another console for the primary database server and log in as an oracle user. Set the ORACLE_SID to PMGDB and take a backup. Note The complete output is not displayed. $ export ORACLE_SID=PMGDB $ rman target=/ Recovery Manager: Release Production on Thu Jul 3 18:53: Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. connected to target database: PMGDB (DBID= ) 85

94 Setting Up the Hot Standby Configuring High Availability for the PMG DB RMAN> BACKUP DATABASE PLUS ARCHIVELOG; Starting backup at 03-JUL-14 current log archived using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=97 device type=disk channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=15 RECID=93 STAMP= input archived log thread=1 sequence=16 RECID=95 STAMP= channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00005 name=/u01/app/oracle/oradata/pmgdb/mapinfo_ts.dbf input datafile file number=00006 name=/u01/app/oracle/oradata/pmgdb/pmgdb_ts.dbf input datafile file number=00001 name=/u01/app/oracle/oradata/pmgdb/system01.dbf input datafile file number=00002 name=/u01/app/oracle/oradata/pmgdb/sysaux01.dbf input datafile file number=00003 name=/u01/app/oracle/oradata/pmgdb/undotbs01.dbf input datafile file number=00004 name=/u01/app/oracle/oradata/pmgdb/users01.dbf piece handle=/u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_03/o1_mf_annnn_tag t185830_9vbpmykl_.bkp tag=tag t comment=none channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 03-JUL-14 RMAN> exit Recovery Manager complete. Note To check if the backup is completed successfully, check the last statement in the output. For example, Finished backup at 03-JUL-14. Creating the Standby Control File and PFILE for Standby Server Step 1 Return to the sqlplus prompt of the primary database and create a control file for the standby database using the following command on the primary database. Step 2 SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/pmgdb_stby2.ctl'; Database altered. Create a parameter file for the standby database. Step 3 SQL> CREATE PFILE='/tmp/initPMGDB_stby2.ora' FROM SPFILE; File created. 
Modify the PFILE /tmp/initPMGDB_stby2.ora, making the entries relevant for the standby database. Because this is a replica of the original server, only the following parameters need to be modified, added, or updated. Modify the parameter if it exists; otherwise, add the parameter with the specified value. 86

95 Configuring High Availability for the PMG DB Setting Up the Hot Standby *.db_unique_name='pmgdb_stby2' *.fal_server='pmgdb' *.log_archive_dest_2='service=pmgdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PMGDB' *.log_archive_config='dg_config=(pmgdb,pmgdb_stby2)' *.db_file_name_convert='pmgdb_stby2','pmgdb' *.log_file_name_convert='pmgdb_stby2','pmgdb' Also, remove the following parameters: *.log_archive_dest_3='enable' *.log_archive_dest_state_3='enable' Configuring the Standby Server Step 1 Log in to the standby 2 server (to be used as cold standby) with the username oracle. Step 2 Add tnsnames entries to "$ORACLE_HOME/network/admin/tnsnames.ora". PMGDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <primary-server-host-address>)(port = <port number>)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PMGDB) ) ) PMGDB_STBY2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <standby2-server-host-address>)(port = <port number>)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = PMGDB) ) ) Step 3 Edit "$ORACLE_HOME/network/admin/listener.ora" to add an entry that the Data Guard Broker will refer to. SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (GLOBAL_NAME = PMGDB_STBY2_DGMGRL) (ORACLE_HOME = <oracle-home-directory-path>) (SID_NAME = PMGDB) ) ) 87
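After editing tnsnames.ora, a quick sanity check on the new aliases can save debugging later. This sketch is an editorial assumption rather than a documented step: check_alias is a hypothetical helper, and tnsping is the Oracle client utility for resolving a TNS alias.

```shell
# check_alias: hypothetical helper; prints whether a TNS alias resolves.
# Falls back to a hint when tnsping is not on the PATH.
check_alias() {
  if command -v tnsping >/dev/null 2>&1 && tnsping "$1" >/dev/null 2>&1; then
    echo "$1: resolves"
  else
    echo "$1: not resolvable (check PATH and tnsnames.ora)"
  fi
}

check_alias PMGDB
check_alias PMGDB_STBY2
```

Run this as the oracle user on the standby 2 host so that $ORACLE_HOME and the edited tnsnames.ora are the ones being tested.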

96 Setting Up the Hot Standby Configuring High Availability for the PMG DB Copying Files to the Standby Server Step 1 Create the necessary directories on the standby server. $ mkdir -p /u01/app/oracle/oradata/pmgdb The system responds with a command prompt. $ mkdir -p /u01/app/oracle/flash_recovery_area/pmgdb The system responds with a command prompt. $ mkdir -p /u01/app/oracle/admin/pmgdb/adump The system responds with a command prompt. Step 2 Copy the files from the primary to the standby server. Copy the standby control file to all locations. $ scp oracle@<primary-server-host-address>:/tmp/pmgdb_stby2.ctl /u01/app/oracle/oradata/pmgdb/control01.ctl <scp-output> $ cp /u01/app/oracle/oradata/pmgdb/control01.ctl /u01/app/oracle/flash_recovery_area/pmgdb/control02.ctl The system responds with a command prompt. Step 3 Copy archivelogs and backups. $ scp -r oracle@<primary-server-host-address>:/u01/app/oracle/flash_recovery_area/pmgdb/archivelog /u01/app/oracle/flash_recovery_area/pmgdb <scp-output> $ scp -r oracle@<primary-server-host-address>:/u01/app/oracle/flash_recovery_area/pmgdb/backupset /u01/app/oracle/flash_recovery_area/pmgdb <scp-output> Step 4 Copy the parameter file. $ scp oracle@<primary-server-host-address>:/tmp/initPMGDB_stby2.ora /tmp/initPMGDB_stby2.ora <scp-output> Step 5 Copy the remote login password file. $ scp oracle@<primary-server-host-address>:$ORACLE_HOME/dbs/orapwPMGDB $ORACLE_HOME/dbs <scp-output> Starting the Listener on Standby Server Ensure that the listener is started on the standby server. 88

97 Configuring High Availability for the PMG DB Setting Up the Hot Standby Note All of the output is not displayed. $ lsnrctl status If not running: LSNRCTL for Linux: Version Production on 02-JUL :35:37 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<primary-server-host-address>)(PORT=<port number>)) TNS-12541: TNS:no listener TNS-12560: TNS:protocol adapter error TNS-00511: No listener Linux Error: 111: Connection refused If running: LSNRCTL for Linux: Version Production on 02-JUL :43:23 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<primary-server-host-address>)(PORT=<port number>))) STATUS of the LISTENER Services Summary... The command completed successfully 1 If listener is not running (that is, TNS-12541: TNS:no listener), start it. $ lsnrctl start LSNRCTL for Linux: Version Production on 17-JUN :10:31 Copyright (c) 1991, 2009, Oracle. All rights reserved. Starting /u01/app/oracle/product/11.2.0/dbhome_1/bin/tnslsnr: please wait... The command completed successfully 2 If listener is running, reload it. $ lsnrctl reload LSNRCTL for Linux: Version Production on 17-JUN :12:23 Copyright (c) 1991, 2009, Oracle. All rights reserved. Connecting to The command completed successfully Restoring Backup on the Standby Server Step 1 Create the SPFILE from the modified PFILE. $ export ORACLE_SID=PMGDB The system responds with a command prompt. $ sqlplus / as sysdba SQL*Plus: Release Production on Mon Jun 16 12:24:

Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> CREATE SPFILE FROM PFILE='/tmp/initPMGDB_stby.ora';
File created.

Step 2 Exit from the sql prompt.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release bit Production

Step 3 Restore the backup files. Depending on database size, the restore time will vary.
Note Not all of the output is displayed.
$ export ORACLE_SID=PMGDB
$ rman target=/
Recovery Manager: Release Production on Thu Jul 3 19:23:
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)
RMAN> STARTUP MOUNT;
Oracle instance started
database mounted
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
RMAN> RESTORE DATABASE;
Starting restore at 03-JUL-14
Starting implicit crosscheck backup at 03-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_03/o1_mf_nnndf_tag t185812_9vbpmf2x_.bkp tag=tag t
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:26
Finished restore at 03-JUL-14
RMAN> exit
Recovery Manager complete.
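The restore steps above (STARTUP MOUNT, then RESTORE DATABASE) can also be run non-interactively. A sketch, assuming RMAN's standard cmdfile option is available on the standby; the file path is an arbitrary choice:

```shell
#!/bin/sh
# Sketch: the interactive RMAN> dialog above as a command file.
cat > /tmp/restore_pmgdb.rman <<'EOF'
STARTUP MOUNT;
RESTORE DATABASE;
EXIT;
EOF

# On the standby this would be driven non-interactively:
#   export ORACLE_SID=PMGDB
#   rman target=/ cmdfile=/tmp/restore_pmgdb.rman
cat /tmp/restore_pmgdb.rman
```

Keeping the commands in a file makes the restore repeatable and lets it run under nohup for large databases.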

Creating Redo Logs for Standby Server

Step 1 Exit from the sqlplus prompt if already logged in, and log in again.
SQL> exit
Disconnected
$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Jun 16 12:24:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.

Step 2 Create online redo logs for the standby.
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
System altered.
SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo01.log') SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo02.log') SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/pmgdb/online_redo03.log') SIZE 50M;
Database altered.
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
System altered.

Step 3 Create standby redo logs on both the standby and the primary database (in case of switchovers). The standby redo logs should be at least as large as the largest online redo log, and there should be one extra group per thread compared to the online redo logs. In this case, the following standby redo logs must be created on both servers.
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo01.log') SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo02.log') SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo03.log') SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo04.log') SIZE 50M;
Database altered.

Step 4 Repeat steps 1 to 3 to create the standby redo logs on the primary server. After this is complete, the apply process can be started.

Starting the Apply Process on Standby Server

Start the apply process on the standby server.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database altered.

Configuring the Data Guard Broker on Standby Server

Step 1 Start the Data Guard broker process on the standby server.
SQL> alter system set dg_broker_start=true scope=both;
System altered.

Step 2 Configure the Data Guard broker by opening another console on the primary or standby server to invoke the DGMGRL CLI.
Note The DGMGRL CLI can be accessed from any server where the Oracle Client Administrator is installed.
$ dgmgrl
DGMGRL for Linux: Version bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@pmgdb
Password:
Connected.

Step 3 Add the standby database to the Data Guard Broker configuration.
DGMGRL> add database 'PMGDB_STBY2' as connect identifier is PMGDB_STBY2 maintained as physical;
Database "PMGDB_STBY2" added

Step 4 Update the primary database property and enable the database.
DGMGRL> edit database 'PMGDB' set property DbFileNameConvert='PMGDB,PMGDB_STBY,PMGDB,PMGDB_STBY2';
Property "dbfilenameconvert" updated
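Because the Step 3 standby redo logs must be created identically on both the standby and the primary, a small generator keeps the two sides consistent. This is an illustrative sketch (the function name and loop are ours, not the guide's); it only emits the SQL text:

```shell
#!/bin/sh
# Illustrative generator: emit the ADD STANDBY LOGFILE statements from Step 3
# so the identical set can be replayed on both servers.
gen_standby_logs() {  # $1 = number of groups, $2 = log size
  i=1
  while [ "$i" -le "$1" ]; do
    printf "ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo%02d.log') SIZE %s;\n" "$i" "$2"
    i=$((i + 1))
  done
}

gen_standby_logs 4 50M   # four groups of 50M, as in the procedure above
```

The output can be redirected to a .sql file and run through sqlplus on each server, which avoids typos when the log count or size changes.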

101 Configuring High Availability for the PMG DB Setting Up the Hot Standby DGMGRL> edit database 'PMGDB' set property LogFileNameConvert='PMGDB,PMGDB_STBY,PMGDB, PMGDB_STBY2'; Property "logfilenameconvert" updated Step 5 DGMGRL> enable database 'PMGDB'; Enabled. Enable the standby database. Step 6 DGMGRL> enable database 'PMGDB_STBY2'; Enabled. Verify the configuration. DGMGRL> show configuration Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB - Primary database PMGDB_STBY - (*) Physical standby database PMGDB_STBY2 - Physical standby database Fast-Start Failover: ENABLED Configuration Status: SUCCESS Verifying Log Synchronization on Standby Server Step 1 Identify the existing archived redo log files on the standby server. Note the latest sequence number. Note Sequence number and timestamp in the output will vary depending on the installation setup. SQL> ALTER SESSION SET nls_date_format='dd-mon-yyyy HH24:MI:SS'; Session altered. Step 2 SQL> SELECT sequence#, first_time, next_time, applied FROM v$archived_log ORDER BY sequence#; SEQUENCE# FIRST_TIME NEXT_TIME APPLIED JUL :06:27 10-JUL :06:53 YES JUL :06:53 10-JUL :36:09 YES JUL :36:09 10-JUL :51:38 IN-MEMORY Force a log switch to archive the current online redo log file on the primary server. Step 3 SQL> ALTER SYSTEM SWITCH LOGFILE; System altered. Verify on the standby, the new redo data that was archived on the standby database. That is, new sequence number with latest timestamp of redo apply is displayed. 93

Note The value of the APPLIED column for the most recently received log file will be either IN-MEMORY or YES if that log file has been applied.
SQL> SELECT sequence#, first_time, next_time, applied FROM v$archived_log ORDER BY sequence#;
SEQUENCE# FIRST_TIME NEXT_TIME APPLIED
JUL :06:27 10-JUL :06:53 YES
JUL :06:53 10-JUL :36:09 YES
JUL :36:09 10-JUL :51:38 YES
JUL :51:38 10-JUL :03:48 IN-MEMORY

Enabling Flashback on Standby Server

Follow the steps in Enabling Flashback, on page 80, to enable flashback. Read the standby server as the cold standby server.

Configuring Cold Standby

Cold standby refers to a standby server that is made active and switched to primary when the original primary server or site fails due to an unforeseen event. The switchover needs manual intervention, and some downtime is involved. Typically, cold standby servers are placed at a different site from the primary server. There are two options available to configure cold standby:
Configuring Primary With Only Cold Standby, on page 94
Configuring Primary With Hot and Cold Standby, on page 94

Configuring Primary With Only Cold Standby

Ensure that you complete Configuring the Standby Server Setup, on page 64, to create a standby server. No additional steps are required to configure a primary with only a cold standby setup.

Configuring Primary With Hot and Cold Standby

Setting up Hot Standby
Complete Configuring Hot Standby for PMG DB, on page 81, to set up the hot standby server.

Setting Up Additional Standby Server as Cold Standby
This is an additional standby database added to the primary, to be used as a cold standby server for disaster recovery such as site failure recovery. The following configuration steps assume that the hot standby is already created.

103 Configuring High Availability for the PMG DB Testing Hot Standby Testing Hot Standby Testing Failover Process You can test the following two hot standby processes for the database: Testing Failover Process, on page 95 Testing Switchover Process, on page 102 To test the following failover processes, follow these procedures. Testing Failover From Primary Database to Standby Database, on page 95 Testing Failover Revert From New Primary to Original Primary Database, on page 99 Testing Failover From Primary Database to Standby Database Step 1 Log in to the primary server with the username oracle. If you are already logged in, proceed to Step 2. Step 2 Shutdown the primary database and check if standby is implemented. SQL> select db_unique_name from v$database; DB_UNIQUE_NAME PMGDB Step 3 SQL> shut abort ORACLE instance shut down. Check alert logs on the standby server. Note The Oracle base directory path may vary based on your installation. Step 4 $ cd /u01/app/oracle/diag/rdbms/pmgdb_stby/pmgdb/trace $ tail -f alert_pmgdb.log -n100 Failover succeeded. Primary database is now PMGDB_STBY. Check the log status. SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET

104 Testing Failover Process Configuring High Availability for the PMG DB Step 5 PMGDB_STBY PRIMARY READ WRITE REINSTATE REQUIRED PMGDB Synchronize failure status by starting up and mounting the database on the original primary server. SQL> startup mount ORACLE instance started. Step 6 Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Open a new console on the original primary server, and keep tailing the log file drcpmgdb.log. Wait until the tail output stops at "DMON: >> DMON Process Shutdown <<". Step 7 $cd /u01/app/oracle/diag/rdbms/pmgdb/pmgdb/trace $tail -f drcpmgdb.log <tail output> :41: DMON: status from posting instances for Database QUIESCE = ORA :41: INSV: Received message for inter-instance publication :41: req ID , opcode CTL_QUIESCE, phase TEARDOWN, flags :41: DMON: Releasing (convert to NULL) Health Check Master lock :41: DMON: Releasing FSFP HOME lock :41: INSV: Reply received for message with :41: req ID , opcode CTL_QUIESCE, phase TEARDOWN :41: DMON: Entered rfm_release_chief_lock() for CTL_QUIESCE :41: Fore: FSFO shutting down :41: DMON: Data Guard Broker shutting down :41: DMON: Terminating RSM processes :41: RSM0: delete state object for RSM :41: DMON: RSM0 successfully terminated :41: DMON: Terminating NetSlave processes :41: DMON: Freeing all task elements :41: DMON: Terminating Instance Slave process :41: INSV: Shutting down :41: DMON: INSV successfully terminated :41: DMON: Zeroing metadata root pointer :41: DMON: Clearing Primary State :41: DMON: Freeing Broker SGA heap :41: DMON: Freeing PGA heap :41: DMON: Removing DMON's state object :41: DMON: Resetting DMON context structure :41: DMON: >> DMON Process Shutdown << Return to the sql prompt of the original primary server, exit the sql prompt, relogin to the sql prompt, and redo the startup mount. 
SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options $ sqlplus / as sysdba SQL*Plus: Release Production on Thu Jul 3 21:57: Copyright (c) 1982, 2009, Oracle. All rights reserved. 96

105 Configuring High Availability for the PMG DB Testing Failover Process Connected to an idle instance. SQL> startup mount ORACLE instance started. Step 8 Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Connect the DGMGRL to PMGDB_STBY, which is the primary now and reinstate the failover status. Step 9 DGMGRL> connect sys@pmgdb_stby Password: Connected View the configuration. Note that the database roles are changed, however, a warning to reinstate standby is displayed. DGMGRL> show configuration In case of primary with only hot standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB_STBY - Primary database Warning: ORA-16817: unsynchronized fast-start failover configuration PMGDB - (*) Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Configuration Status: WARNING In case of primary with hot and cold standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB_STBY - Primary database Warning: ORA-16817: unsynchronized fast-start failover configuration PMGDB - (*) Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated PMGDB_STBY2 - Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Step 10 Configuration Status: WARNING Reinstate the standby database. Note If Error: ORA-16653: failed to reinstate database is observed in the output, go to the original primary database, exit the sql prompt, re-login to the sql prompt, startup, and mount the database again. Step 11 DGMGRL> REINSTATE DATABASE 'PMGDB' Reinstating database "PMGDB", please wait... Reinstatement of database "PMGDB" succeeded View the configuration again to check if the configuration status is a "SUCCESS" and with no errors. 97

106 Testing Failover Process Configuring High Availability for the PMG DB Note In case of primary with hot and cold standby, the error or warning of cold standby (PMGDB_STBY2) can be ignored. DGMGRL> show configuration In case of primary with only hot standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB_STBY - Primary database PMGDB - (*) Physical standby database Fast-Start Failover: ENABLED Configuration Status: SUCCESS In case of primary with hot and cold standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB_STBY - Primary database PMGDB - (*) Physical standby database PMGDB_STBY2 - Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Step 12 Configuration Status: SUCCESS Check the status on the original primary server. Step 13 SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB PHYSICAL STANDBY MOUNTED SYNCHRONIZED PMGDB Check the status on the new primary server. SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB_STBY PRIMARY READ WRITE SYNCHRONIZED PMGDB 98
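The role checks in Steps 12 and 13 can be automated with a small comparison helper. The sketch below is hypothetical (the function is ours); it only does string matching on one line of the v$database query output:

```shell
#!/bin/sh
# Hypothetical check helper: assert that a v$database query line reports the
# DATABASE_ROLE expected after failover.
expect_role() {  # $1 = query output line, $2 = expected DATABASE_ROLE
  case "$1" in
    *"$2"*) echo "OK: $2" ;;
    *)      echo "MISMATCH: wanted $2 in: $1"; return 1 ;;
  esac
}

# Canned lines shaped like the outputs above:
expect_role "PMGDB PHYSICAL STANDBY MOUNTED SYNCHRONIZED PMGDB" "PHYSICAL STANDBY"
expect_role "PMGDB_STBY PRIMARY READ WRITE SYNCHRONIZED PMGDB" "PRIMARY"
```

On a live system the input line would come from running the v$database query through sqlplus -s on each server; a nonzero exit from the helper flags a server still in the wrong role.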

107 Configuring High Availability for the PMG DB Testing Failover Process Testing Failover Revert From New Primary to Original Primary Database Step 1 Log in to the primary server with the username oracle. If you are already logged in, proceed to Step 2 Step 2 Shutdown the new primary (original hot standby) database and check if failover reverts to the original primary database. SQL> select db_unique_name from v$database; DB_UNIQUE_NAME PMGDB_STBY Step 3 SQL> shut abort ORACLE instance shut down Check the alert logs on the original primary server. Note The Oracle base directory path may vary based on your installation. Step 4 $ cd /u01/app/oracle/diag/rdbms/pmgdb/pmgdb/trace $ tail -f alert_pmgdb.log -n100 Failover succeeded. Primary database is now PMGDB. Check the log status on the original primary server. Step 5 SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB PRIMARY READ WRITE REINSTATE REQUIRED PMGDB_STBY Synchronize the failover status on the original hot standby server, start up, and mount the database. SQL> startup mount ORACLE instance started. Step 6 Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Open a new console on the original hot standby server and keep tailing the log file drcpmgdb.log. Wait until the tail output stops at "DMON: >> DMON Process Shutdown <<". Note The Oracle base directory path may vary based on your installation. $cd /u01/app/oracle/diag/rdbms/pmgdb_stby/pmgdb/trace $tail -f drcpmgdb.log 99

108 Testing Failover Process Configuring High Availability for the PMG DB Step 7 <tail output> :27: DMON: status from posting instances for Database QUIESCE = ORA :27: DMON: Releasing (convert to NULL) Health Check Master lock :27: DMON: Releasing FSFP HOME lock :27: INSV: Reply received for message with :27: req ID , opcode CTL_QUIESCE, phase TEARDOWN :27: DMON: Entered rfm_release_chief_lock() for CTL_QUIESCE :27: Fore: FSFO shutting down :27: DMON: Data Guard Broker shutting down :27: DMON: Terminating RSM processes :27: RSM0: delete state object for RSM :28: DMON: RSM0 successfully terminated :28: DMON: Terminating NetSlave processes :28: DMON: Freeing all task elements :28: DMON: Terminating Instance Slave process :28: INSV: Shutting down :28: DMON: INSV successfully terminated :28: DMON: Zeroing metadata root pointer :28: DMON: Clearing Primary State :28: DMON: Freeing Broker SGA heap :28: DMON: Freeing PGA heap :28: DMON: Removing DMON's state object :28: DMON: Resetting DMON context structure :28: DMON: >> DMON Process Shutdown << Return to the sql prompt of the original hot standby server, exit the sql prompt, relogin to the sql prompt, and redo the startup mount. SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options $ sqlplus / as sysdba SQL*Plus: Release Production on Thu Jul 3 21:57: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to an idle instance. SQL> startup mount ORACLE instance started. Step 8 Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Connect the DGMGRL to the PMGDB, which is the primary now and reinstate the failover status. Step 9 DGMGRL> connect sys@pmgdb Password: Connected. View the configuration. Note that the database roles are changed, however, a warning to reinstate standby is displayed. 
DGMGRL> show configuration In case of primary with only hot standby: Configuration - dgpmgdb 100

109 Configuring High Availability for the PMG DB Testing Failover Process Protection Mode: MaxAvailability Databases: PMGDB - Primary database Warning: ORA-16817: unsynchronized fast-start failover configuration PMGDB_STBY - (*) Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Configuration Status: WARNING In case of primary with hot and cold standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB - Primary database Warning: ORA-16817: unsynchronized fast-start failover configuration PMGDB_STBY - (*) Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated PMGDB_STBY2 - Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Step 10 Configuration Status: WARNING Reinstate the standby database. Note If Error: ORA-16653: failed to reinstate database is observed in the output, go to the original hot standby database, exit the sql prompt, relogin to the sql prompt, start up, and mount the database again. Step 11 DGMGRL> REINSTATE DATABASE 'PMGDB_STBY' Reinstating database "PMGDB_STBY", please wait... Reinstatement of database "PMGDB_STBY" succeeded View the configuration again to check if the configuration status is a "SUCCESS" and with no errors. Note Sometimes processes from the previous command may not be complete, therefore the show configuration may show a different output. Allow a couple of minutes and execute the command again. In case of primary with hot and cold standby, an error or warning of cold standby (PMGDB_STBY2) can be ignored. 
DGMGRL> show configuration In case of primary with only hot standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: PMGDB - Primary database PMGDB_STBY - (*) Physical standby database Fast-Start Failover: ENABLED Configuration Status: SUCCESS In case of primary with hot and cold standby: Configuration - dgpmgdb Protection Mode: MaxAvailability Databases: 101

110 Testing Switchover Process Configuring High Availability for the PMG DB PMGDB - Primary database PMGDB_STBY - (*) Physical standby database PMGDB_STBY2 - Physical standby database (disabled) ORA-16661: the standby database needs to be reinstated Fast-Start Failover: ENABLED Step 12 Configuration Status: SUCCESS Check the status on the original hot standby server. Step 13 SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB_STBY PHYSICAL STANDBY MOUNTED SYNCHRONIZED PMGDB_STBY Check the status on the original primary server. SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB PRIMARY READ WRITE SYNCHRONIZED PMGDB_STBY Testing Switchover Process Switchover from the primary to secondary or standby database and vice versa is a manual process typically carried out during planned maintenance or disaster recovery. The following procedures describe the switchover from PMGDB (primary) to PMGDB_STBY (hot standby). Testing Switchover From Primary to Standby Database, on page 102 Testing Switchover Revert From New Primary to Original Primary Database, on page 104 Testing Switchover From Primary to Standby Database Step 1 Start the DGMGRL utility from the primary or standby server (whichever is up) or from another host where the Oracle client is installed. $ dgmgrl DGMGRL for Linux: Version bit Production 102

111 Configuring High Availability for the PMG DB Testing Switchover Process Step 2 Copyright (c) 2000, 2009, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connect to the PMG DB. Step 3 DGMGRL> connect sys@pmgdb Password: Connected. Switchover to PMGDB_STBY. DGMGRL> SWITCHOVER TO 'PMGDB_STBY' Performing switchover NOW, please wait... New primary database "PMGDB_STBY" is opening... Operation requires shutdown of instance "PMGDB" on database "PMGDB" Shutting down instance "PMGDB"... ORA-01109: database not open Database dismounted. ORACLE instance shut down. Operation requires startup of instance "PMGDB" on database "PMGDB" Starting instance "PMGDB"... Unable to connect to database ORA-12514: TNS:listener does not currently know of service requested in connect descriptor Failed. Warning: You are no longer connected to ORACLE. Step 4 Please complete the following steps to finish switchover: start up and mount instance "PMGDB" of database "PMGDB" Exit from the previous sql prompt to start up and mount the PMGDB. Step 5 SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Connect to sqlplus. Step 6 $ sqlplus / as sysdba SQL*Plus: Release Production on Mon Jun 16 16:50: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to an idle instance. Start up and mount the database. Step 7 SQL> startup mount ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Verify that the roles are switched. a) Verify on the original primary (PMGDB). SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; 103

112 Testing Switchover Process Configuring High Availability for the PMG DB DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB PHYSICAL STANDBY MOUNTED SYNCHRONIZED PMGDB b) Verify on the new primary (PMGDB_STBY). SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB_STBY PRIMARY READ WRITE SYNCHRONIZED PMGDB Testing Switchover Revert From New Primary to Original Primary Database Before You Begin Step 1 Start the DGMGRL utility from the primary or standby server or from another host where the Oracle client is installed. Step 2 $ dgmgrl DGMGRL for Linux: Version bit Production Copyright (c) 2000, 2009, Oracle. All rights reserved. Welcome to DGMGRL, type "help" for information. Connect to PMGDB_STBY. Step 3 DGMGRL> connect sys@pmgdb_stby Password: Connected. Switchover to PMGDB. DGMGRL> SWITCHOVER TO 'PMGDB' Performing switchover NOW, please wait... New primary database "PMGDB" is opening... Operation requires shutdown of instance "PMGDB" on database "PMGDB_STBY" Shutting down instance "PMGDB"... ORA-01109: database not open Database dismounted. ORACLE instance shut down. Operation requires startup of instance "PMGDB" on database "PMGDB_STBY" Starting instance "PMGDB"... Unable to connect to database ORA-12514: TNS:listener does not currently know of service requested in connect descriptor Failed. Warning: You are no longer connected to ORACLE. 104

113 Configuring High Availability for the PMG DB Testing Cold Standby Step 4 Please complete the following steps to finish switchover: start up and mount instance "PMGDB" of database "PMGDB_STBY" Exit from previous sql prompt to start up and mount the PMGDB_STBY. Step 5 SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Connect to sqlplus. Step 6 $ sqlplus / as sysdba SQL*Plus: Release Production on Mon Jun 16 16:50: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to an idle instance. Start up and mount the database. Step 7 SQL> startup mount ORACLE instance started. Total System Global Area bytes Fixed Size bytes Variable Size bytes Database Buffers bytes Redo Buffers bytes Database mounted. Verify that the roles are switched. a) Verify on the hot standby (PMGDB_STBY). SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB_STBY PHYSICAL STANDBY MOUNTED SYNCHRONIZED PMGDB_STBY b) Verify on the original primary (PMGDB). SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,CURRENT_SCN,OPEN_MODE,FS_FAILOVER_STATUS, FS_FAILOVER_CURRENT_TARGET FSFO_CURR_TARGET from v$database; DB_UNIQUE_NAME DATABASE_ROLE CURRENT_SCN OPEN_MODE FS_FAILOVER_STATUS FSFO_CURR_TARGET PMGDB PRIMARY READ WRITE SYNCHRONIZED PMGDB_STBY Testing Cold Standby Use these procedures to test cold standby on PMG DB. 105

114 Testing Site Failure Configuring High Availability for the PMG DB Testing Site Failure, on page 106 Recovering Original Primary After Site Failure, on page 108 Testing Site Failure Use the following steps to test unscheduled failure of primary and first hot standby (if available) servers. To prevent applications (PMG, OpsTools) connection failures without any manual intervention, ensure that the PMGDB configuration script (/rms/app/rms/install/pmgdb_configure.sh) for the Central node is executed with the standby server configuration as per the instructions in the installation guide. Usage: pmgdb_configure.sh <Pmgdb_Enabled> <Pmgdb_Primary_Dbserver_Address> <Pmgdb_Primary_Dbserver_Port> [<Pmgdb_Stby1_Dbserver_Address>] [<Pmgdb_Stby1_Dbserver_Port>] [<Pmgdb_Stby2_Dbserver_Address>] [<Pmgdb_Stby2_Dbserver_Port>] If not configured earlier, execute the script again with the appropriate values for database servers and ports. For more details, see the Cisco RAN Management System Installation Guide. Step 1 To test site failure, verify if the applications connecting to the PMGDB database are able to connect. For example, the OpsTools script getareas.sh can be executed. Log in to Central node as admin1 user and execute the script. Step 2 $ getareas.sh -key 1001 Config files script-props/private/getareas.properties or script-props/public/getareas.properties not found. Continuing with default settings. Execution parameters: key=1001 GetAreas processing can take some time please do not terminate. Received areas, total areas 1 Writing to file: /home/admin1/getareas.csv The report captured in csv file: /home/admin1/getareas.csv **** GetAreas End Script *** Log in to the hot standby server as an oracle user and shut down the database server. Note Perform this step only if the hot standby server is configured. Else, proceed to the next step. Step 3 $ export ORACLE_SID=PMGDB $ sqlplus / as sysdba SQL> shutdown immediate Database closed. Database dismounted. 
ORACLE instance shut down. Log in to the primary server as an oracle user and shut down the database server. $ export ORACLE_SID=PMGDB $ sqlplus / as sysdba SQL> shutdown immediate Database closed.

115 Configuring High Availability for the PMG DB Testing Site Failure Step 4 Database dismounted. ORACLE instance shut down. Verify that the applications connecting to the PMGDB database are failing considering that the primary and hot standby are down. Log in to the Central node as admin1 user and execute the getareas.sh script. $ getareas.sh -key 1001 Config files script-props/private/getareas.properties or script-props/public/getareas.properties not found. Continuing with default settings. Execution parameters: key=1001 GetAreas processing can take some time please do not terminate. Jul 01, :45:14 PM org.apache.tomcat.jdbc.pool.connectionpool init SEVERE: Unable to create initial connections of pool..... Failed to Get Areas : Error while connecting to PmgDb, java.sql.sqlexception: Listener refused the connection with the following error: ORA-12514, TNS:listener does not currently know of service requested in connect descriptor.... Alternate Output: Failed to Get Areas : Error while connecting to PmgDb, java.sql.sqlrecoverableexception: Io exception: Step 5 The Network Adapter could not establish the connection Log in to the cold standby server as an oracle user and log in to sql prompt. Step 6 $ export ORACLE_SID=PMGDB $ sqlplus / as sysdba Connected to database Stop the redo apply process. Step 7 SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; Database altered. Finish applying all received redo data. SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH; Database altered. Step 8 SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE; Database altered. Switch the physical standby database to the primary role. Step 9 SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN; ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN * ERROR at line 1: ORA-01109: database not open Open the new primary database. This may take some time to complete. 107

Step 10: Open the new primary database.
SQL> ALTER DATABASE OPEN;
Database altered.
Step 11: Check the status of the database. The switchover status could be FAILED DESTINATION or NOT ALLOWED.
SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;
NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS
PMGDB READ WRITE PMGDB_STBY2 PRIMARY FAILED DESTINATION
Verify that applications connecting to the PMGDB database are getting connected now that the cold standby is up as the new primary. Log in to the Central node as the admin1 user and execute the getareas.sh script.
$ getareas.sh -key 1001
Config files script-props/private/getareas.properties or script-props/public/getareas.properties not found. Continuing with default settings.
Execution parameters: key=1001
GetAreas processing can take some time please do not terminate.
Received areas, total areas 1
Writing to file: /home/admin1/getareas.csv
The report captured in csv file: /home/admin1/getareas.csv
**** GetAreas End Script ***

Recovering Original Primary After Site Failure
The database can be recovered using either the RMAN backup or Flashback database. The following steps describe recovery using an RMAN backup.

Converting a Failed Primary into a Standby Database Using RMAN Backups

DETAILED STEPS
Step 1: Determine the SCN at which the old/cold standby database became the primary database.
Step 2: Run the following query on the new primary database to determine the SCN at which the original/cold standby database became the new primary database:
SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
TO_CHAR(STANDBY_BECAME_PRIMARY_SCN)
Step 3: Restore and recover the entire database on the original primary.

117 Configuring High Availability for the PMG DB Recovering Original Primary After Site Failure Command or Action Step 4 Run the following RMAN commands on the original primary. $ export ORACLE_SID=PMGDB $ rman target=/ Recovery Manager: Release Production on Tue Jul 8 19:30: Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. connected to target database (not started) RMAN> startup mount Oracle instance started database mounted Total System Global Area Fixed Size Variable Size Database Buffers Redo Buffers bytes bytes bytes bytes bytes RMAN> RUN { SET UNTIL SCN <recovery_scn> ; RESTORE DATABASE; RECOVER DATABASE; } executing command: SET until clause Starting restore at 08-JUL-14 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=11 device type=disk channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/system01.dbf channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/sysaux01.dbf channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/undotbs01.dbf channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/users01.dbf channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/mapinfo_ts.dbf channel ORA_DISK_1: restoring datafile to /u01/app/oracle/oradata/pmgdb/pmgdb_ts.dbf channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_08/o1_mf_nnndf_tag t162646_9vqmmgsv_.bkp channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/pmgdb/backupset/2014_07_08/o1_mf_nnndf_tag t162646_9vqmmgsv_. tag=tag t channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:25 109

Finished restore at 08-JUL-14
Starting recover at 08-JUL-14
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 24 is already on disk as file /u01/app/oracle/flash_recovery_area/pmgdb/archivelog/2014_07_08/o1_mf_1_24_9vqmn043_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u01/app/oracle/flash_recovery_area/pmgdb/archivelog/2014_07_08/o1_mf_1_25_9vqv5j39_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u01/app/oracle/flash_recovery_area/pmgdb/archivelog/2014_07_08/o1_mf_1_26_9vqw8v1v_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/pmgdb/archivelog/2014_07_08/o1_mf_1_24_9vqmn043_.arc thread=1 sequence=24
media recovery complete, elapsed time: 00:00:00
Finished recover at 08-JUL-14
RMAN> exit
Recovery Manager complete.
Step 5: Convert the original primary database to a physical standby database. Perform the following steps on the original primary database.
$ export ORACLE_SID=PMGDB
$ sqlplus / as sysdba
SQL*Plus: Release Production on Mon Jun 16 17:51:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
Database altered.
Step 6: Shut down and start up the original primary.
SQL> SHUTDOWN IMMEDIATE;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> STARTUP MOUNT;
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
Step 7: Open the database as read-only.
SQL> ALTER DATABASE OPEN READ ONLY;
Database altered.

Step 8: Mount the new standby (original primary) again.
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP MOUNT;
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
Step 9: Log in to the sql prompt on the new primary (old cold standby) and make sure the archive destination is enabled.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
System altered.
Step 10: Start Redo Apply on the new standby (original primary) database.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
Step 11: Verify the status; it could be SWITCHOVER PENDING, SWITCHOVER LATENT, or TO PRIMARY.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
SWITCHOVER PENDING
Step 12: Verify the status on the new primary as well. A value of TO STANDBY or SESSIONS ACTIVE indicates that the primary database can be switched to the standby role.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
TO STANDBY
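For quick reference, the conversion in Steps 5 through 12 above can be gathered into one SQL*Plus sequence. This is a sketch only, assuming the same PMGDB instance names used in this guide and that the RMAN restore/recover of Steps 1 through 4 has already completed; it does not replace the per-step output checks.

```sql
-- Sketch: convert the recovered original primary back into a physical standby.
-- Run on the original primary unless marked otherwise.
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;   -- sanity check of the restored datafiles
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- On the new primary (old cold standby), re-enable the archive destination:
-- ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

-- Back on the new standby (original primary), start real-time redo apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
SELECT SWITCHOVER_STATUS FROM V$DATABASE;  -- expect SWITCHOVER PENDING/LATENT or TO PRIMARY
```

The intermediate read-only open mirrors Step 7 and appears to serve as a consistency check before redo apply starts; the real verification is the SWITCHOVER_STATUS queries in Steps 11 and 12.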

Converting Failed Primary Into a Standby Database Using RMAN Backups

DETAILED STEPS
Step 1: Issue the following SQL statement on the current primary database to switch it to the standby role:
SQL> select DATABASE_ROLE from v$database;
DATABASE_ROLE
PRIMARY
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
Database altered.
Step 2: Shut down and then mount this current primary database.
SQL> SHUTDOWN IMMEDIATE;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> STARTUP MOUNT;
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
Step 3: Check the switchover status on the current primary, which has now become the cold standby again. A value of TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready. If neither of these values is returned, continue to query this column until one of them is returned.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
TO PRIMARY
Step 4: Return to the original primary to revert its database role to the primary role.
SQL> RECOVER MANAGED STANDBY DATABASE FINISH;
Media recovery complete.
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
Database altered.
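Steps 1 through 4 above are a coordinated pair of sessions on two hosts; a condensed sketch (same PMGDB names as this guide, commands grouped by host) may help keep the two sessions straight. It is a summary, not a substitute for the step-by-step status checks.

```sql
-- Sketch: role reversal between the current primary (old cold standby)
-- and the original primary.

-- Session A, on the current primary (old cold standby):
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
SELECT SWITCHOVER_STATUS FROM V$DATABASE;  -- repeat until TO PRIMARY or SESSIONS ACTIVE

-- Session B, on the original primary:
RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;  -- the remaining steps then restart redo apply on the standby
```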

Step 5: Now open the original primary database.
SQL> ALTER DATABASE OPEN;
Database opened.
Step 6: Return to the new physical standby (cold standby) database and start redo apply.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
Database altered.
Note: Ignore and continue if the error "ORA-01153: an incompatible media recovery is active" is displayed.
Step 7: Check the status on the original primary and the cold standby to make sure that the database role is correct.
On the original primary:
SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;
NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS
PMGDB READ WRITE PMGDB PRIMARY FAILED DESTINATION
On the cold standby:
SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;
NAME OPEN_MODE DB_UNIQUE_NAME DATABASE_ROLE SWITCHOVER_STATUS
PMGDB MOUNTED PMGDB_STBY2 PHYSICAL STANDBY SESSIONS ACTIVE
Step 8: Start up and mount the hot standby if a hot standby is present in the setup.
$ sqlplus / as sysdba
SQL*Plus: Release Production on Thu Jul 10 11:39:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes

Redo Buffers bytes
Database mounted.
Step 9: Verify the dgbroker status; log in to DGMGRL from either the primary or a standby. For a cold-standby-only setup, the status should show SUCCESS, which means reverting to the original roles is complete. In a hot-and-cold-standby setup, a WARNING is displayed because the hot standby server is not yet enabled.
$ dgmgrl
DGMGRL for Linux: Version bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@pmgdb
Password:
Connected.
DGMGRL> show configuration
In case of a primary and cold standby setup:
Configuration - dgpmgdb
Protection Mode: MaxPerformance
Databases:
PMGDB - Primary database
PMGDB_STBY - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status: SUCCESS
In case of a primary, hot, and cold standby setup:
Configuration - dgpmgdb
Protection Mode: MaxAvailability
Databases:
PMGDB - Primary database
PMGDB_STBY - (*) Physical standby database
PMGDB_STBY2 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status: SUCCESS
Step 10: Enable the cold standby if the cold standby (PMGDB_STBY2) is showing disabled or with a warning.
DGMGRL> enable database 'PMGDB_STBY2'
Enabled.
DGMGRL> show configuration
Note: If the output of the command shows an ERROR or WARNING status, see Troubleshooting Data Guard on PMG DB, on page 125, to resolve the issues. Sometimes processes from the previous command may not be complete, so the show configuration command may show different output. Allow a couple of minutes and execute the command again.
In case of a primary, hot, and cold standby setup:
Configuration - dgpmgdb
Protection Mode: MaxAvailability
Databases:
PMGDB - Primary database
PMGDB_STBY - (*) Physical standby database
PMGDB_STBY2 - Physical standby database
Fast-Start Failover: ENABLED

Configuration Status: SUCCESS

Rolling Back and Cleaning Up Standby and Primary Configurations
This section covers common scenarios for rollback. For specific scenarios, such as database corruption, it is recommended that you refer to standard DBA practices or the Oracle documentation.
Note: Before starting the rollback, it is recommended that the primary database is backed up.
The following procedures describe how to roll back the standby and primary configurations:
Removing Data Guard Broker Configuration, on page 115
Removing the Standby Database, on page 120
Removing the Additional Standby Database, on page 121

Removing Data Guard Broker Configuration
Step 1: Connect to the primary database from DGMGRL.
DGMGRL> connect sys@pmgdb
Password:
Connected.
Step 2: Disable FAST_START FAILOVER.
Note: Perform this step only if the setup has hot standby configured.
DGMGRL> DISABLE FAST_START FAILOVER FORCE
Disabled.
Step 3: Change the protection mode.
Note: Perform this step only if the setup has hot standby configured.

DGMGRL> edit configuration set protection mode as MaxPerformance;
Succeeded.
Step 4: Remove the configuration.
DGMGRL> remove configuration
Removed configuration
Step 5: Check if the configuration is removed.
DGMGRL> show configuration
ORA-16532: Data Guard broker configuration does not exist
Configuration details cannot be determined by DGMGRL
Alternate output:
ORA-16596: database not part of the Data Guard broker configuration
Configuration details cannot be determined by DGMGRL

Removing Configuration Files from Primary Server
Step 1: Log in to the primary server as an oracle user and connect to sqlplus.
$ export ORACLE_SID=PMGDB
$ sqlplus / as sysdba
SQL*Plus: Release Production on Wed Jul 2 20:17:
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
Step 2: Stop the Data Guard Broker process.
SQL> alter system set dg_broker_start=false;
System altered.
Step 3: Check the path of the Data Guard Broker configuration files.
SQL> select name, value from v$parameter where name like '%dg_broker%';
NAME VALUE
dg_broker_start FALSE
dg_broker_config_file1

/u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb.dat
dg_broker_config_file2
/u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb.dat
Step 4: Exit from the sql prompt.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release bit Production
Step 5: Remove dg_broker_config_file1 and dg_broker_config_file2.
Note: The Oracle base directory path may vary based on your installation.
$ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb.dat
$ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb.dat
The system responds with a command prompt.
Step 6: Remove the standby control files if they still exist in the /tmp directory. Ignore the error if the standby control files do not exist.
$ rm /tmp/pmgdb_stby.ctl
$ rm /tmp/pmgdb_stby2.ctl
The system responds with a command prompt.
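The rm commands above can be wrapped with existence checks so the cleanup is safe to re-run. The sketch below operates on a throwaway demo directory with stand-in files; on a real server, point DBS_DIR at $ORACLE_HOME/dbs (in this guide, /u01/app/oracle/product/11.2.0/dbhome_1/dbs) after confirming the file names from the v$parameter query.

```shell
#!/bin/sh
# Sketch: defensively remove the broker configuration files; re-runnable.
# DBS_DIR is a demo directory here; substitute the real $ORACLE_HOME/dbs path.
DBS_DIR=$(mktemp -d)
touch "$DBS_DIR/dr1pmgdb.dat" "$DBS_DIR/dr2pmgdb.dat"   # stand-ins for the real files

for f in "$DBS_DIR/dr1pmgdb.dat" "$DBS_DIR/dr2pmgdb.dat"; do
  if [ -f "$f" ]; then
    rm "$f" && echo "removed: $f"
  else
    echo "not present (already removed?): $f"
  fi
done
```

Running the script a second time simply reports the files as not present instead of failing, which matches the "ignore the error if the files do not exist" guidance used elsewhere in this procedure.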

126 Removing Data Guard Broker Configuration Configuring High Availability for the PMG DB Command or Action Step 2 Stop the redo apply process. SQL> alter database recover managed standby database cancel; Database altered. Step 3 Stop Data Guard Broker process. SQL> alter system set dg_broker_start=false; System altered. Step 4 Check the path of the Data Guard Broker configuration files. SQL> select name, value from v$parameter where name like '%dg_broker%'; NAME VALUE dg_broker_start FALSE dg_broker_config_file1 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb_stby.dat dg_broker_config_file2 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb_stby.dat Step 5 Open the database in read-only mode. SQL> alter database open read only; alter database open read only; Step 6 Exit from the sql prompt. SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production Step 7 Remove the dg_broker_config_file1 and dg_broker_config_file2. $ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb_stby.dat The system responds with a command prompt. $ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb_stby.dat The system responds with a command prompt. 118
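After the removal steps on a standby server, a quick query (a sketch; run from the sql prompt as the oracle user) confirms the broker is stopped before you proceed to the next server:

```sql
-- Sketch: confirm broker shutdown on this standby.
-- dg_broker_start should be FALSE; the two listed config files
-- should no longer exist on disk after the rm commands above.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('dg_broker_start',
                'dg_broker_config_file1',
                'dg_broker_config_file2');
```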

127 Configuring High Availability for the PMG DB Removing Data Guard Broker Configuration Removing Configuration Files From Additional Standby Server Note This section is applicable only to the setup with both hot and cold standby added to primary. This additional standby is the cold standby in the setup. DETAILED STEPS Command or Action Step 1 Log in to the standby server as an oracle user and connect to sqlplus. $ export ORACLE_SID=PMGDB $ sqlplus / as sysdba SQL*Plus: Release Production on Wed Jul 2 20:17: Copyright (c) 1982, 2009, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> Step 2 Stop the redo apply process. SQL> alter database recover managed standby database cancel; Database altered. Step 3 Stop the Data Guard Broker process. SQL> alter system set dg_broker_start=false; System altered. Step 4 Check the path of the Data Guard Broker configuration files. SQL> select name, value from v$parameter where name like '%dg_broker%'; NAME VALUE dg_broker_start FALSE dg_broker_config_file1 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb_stby2.dat dg_broker_config_file2 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb_stby2.dat Step 5 Open the database in read-only mode. SQL> alter database open read only; alter database open read only; Step 6 Exit from the sql prompt 119

128 Removing the Standby Database Configuring High Availability for the PMG DB Command or Action SQL> exit Disconnected from Oracle Database 11g Enterprise Edition Release bit Production Step 7 Remove the dg_broker_config_file1 and dg_broker_config_file2. $ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr1pmgdb_stby2.dat The system responds with a command prompt. $ rm /u01/app/oracle/product/11.2.0/dbhome_1/dbs/dr2pmgdb_stby2.dat The system responds with a command prompt. Removing the Standby Database Note If the setup is only cold standby added to the primary, then this standby is cold standby. If the setup is only hot standby added to the primary, then this standby is hot standby. If the setup is both hot and cold standby added to the primary, this standby is hot standby. Step 1 From first standby server console, execute the dbca command. Step 2 $ dbca -silent -deletedatabase -sourcedb PMGDB Connecting to database 100% complete Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/pmgdb.log" for further details. Move the respective directories as backup directories. Note The Oracle base directory path may vary based on your installation. $ mv /u01/app/oracle/oradata/pmgdb /u01/app/oracle/oradata/pmgdb-bak The system responds with a command prompt. $ mv /u01/app/oracle/flash_recovery_area/pmgdb /u01/app/oracle/flash_recovery_area/pmgdb-bak The system responds with a command prompt. $ mv /u01/app/oracle/flash_recovery_area/pmgdb_stby /u01/app/oracle/flash_recovery_area/pmgdb-stby-bak The system responds with a command prompt. 120

129 Configuring High Availability for the PMG DB Cleaning Up the Primary Database $ mv /u01/app/oracle/admin/pmgdb /u01/app/oracle/admin/pmgdb-bak The system responds with a command prompt. Removing the Additional Standby Database DETAILED STEPS This section is applicable only to the setup with both hot and cold standby added to primary. This additional standby is the cold standby in this setup. Step 1 Command or Action From the standby server console, execute the dbca command. Purpose Step 2 $ dbca -silent -deletedatabase -sourcedb PMGDB Connecting to database 100% complete Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/pmgdb.log" for further details. Move the respective directories as backup directories. $ mv /u01/app/oracle/oradata/pmgdb /u01/app/oracle/oradata/pmgdb-bak The system responds with a command prompt. $ mv /u01/app/oracle/flash_recovery_area/pmgdb /u01/app/oracle/flash_recovery_area/pmgdb-bak The system responds with a command prompt. $ mv /u01/app/oracle/flash_recovery_area/pmgdb_stby2 /u01/app/oracle/flash_recovery_area/pmgdb-stby2-bak The system responds with a command prompt. $ mv /u01/app/oracle/admin/pmgdb /u01/app/oracle/admin/pmgdb-bak The system responds with a command prompt. Cleaning Up the Primary Database Cleaning Up the Redo Log Files Step 1 Drop the standby redo log files from the primary database. SQL> ALTER DATABASE DROP STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo01.log'); Database altered. 121

SQL> ALTER DATABASE DROP STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo02.log');
Database altered.
SQL> ALTER DATABASE DROP STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo03.log');
Database altered.
SQL> ALTER DATABASE DROP STANDBY LOGFILE ('/u01/app/oracle/oradata/pmgdb/standby_redo04.log');
Database altered.
Step 2: Remove the standby redo log files from the directory.
$ rm /u01/app/oracle/oradata/pmgdb/standby_redo01.log
$ rm /u01/app/oracle/oradata/pmgdb/standby_redo02.log
$ rm /u01/app/oracle/oradata/pmgdb/standby_redo03.log
$ rm /u01/app/oracle/oradata/pmgdb/standby_redo04.log
The system responds with a command prompt.

Cleaning Up Initialization Parameters
The initialization parameters can be reset using either one of these methods:
(Recommended) Create the spfile from the backup pfile that was generated during the initial steps. For more information, see Using the Backup Pfile, on page 122.
If the backup pfile is not available, the parameters can be reset using sql statements. For more information, see Using SQL Statements, on page 123.

Using the Backup Pfile
Use this option to restore the spfile from the backup pfile.
Step 1: Shut down the database.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
Step 2: From the sql prompt, create the spfile from the backup pfile.
Note: If a specific path was specified for the initpmgdb_pre_dgsetup.ora file when it was backed up during the initial steps, specify the full path (for example, /backup/initpmgdb_pre_dgsetup.ora). If the path is not specified, the file is picked up from the default directory, that is, $ORACLE_HOME/dbs.

SQL> CREATE SPFILE FROM PFILE='initPMGDB_pre_dgsetup.ora';
File created.
Step 3: Start up the database.
SQL> startup
ORACLE instance started.
Total System Global Area bytes
Fixed Size bytes
Variable Size bytes
Database Buffers bytes
Redo Buffers bytes
Database mounted.
Database opened.

Using SQL Statements
If the backup pfile is not available, the parameters can be reset through sql statements. This section can be skipped if the spfile was already restored from the backup pfile in the previous section.
From the sql prompt, reset the initialization parameters.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_CONFIG;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_DEST_2;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_DEST_STATE_2;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_FORMAT SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_MAX_PROCESSES;
System altered.
SQL> ALTER SYSTEM RESET FAL_SERVER;
System altered.
SQL> ALTER SYSTEM RESET DB_FILE_NAME_CONVERT SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM RESET LOG_FILE_NAME_CONVERT SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_DEST_3;
System altered.
SQL> ALTER SYSTEM RESET LOG_ARCHIVE_DEST_STATE_3;
System altered.
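Whichever method is used, the reset can be spot-checked from the sql prompt. A sketch, covering the parameters reset above; ISDEFAULT is TRUE for a parameter that is back to its default value:

```sql
-- Sketch: verify the Data Guard-related parameters were reset.
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('log_archive_config', 'log_archive_dest_2',
                'log_archive_dest_state_2', 'log_archive_max_processes',
                'fal_server', 'log_archive_dest_3', 'log_archive_dest_state_3');
```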

Verifying the Database
Step 1: From the sql prompt, check that the database role is PRIMARY.
SQL> select name, open_mode, DB_UNIQUE_NAME, DATABASE_ROLE, SWITCHOVER_STATUS from v$database;
Step 2: Ensure that the database service PMGDB_PRIMARY is present.
SQL> show parameter service
NAME TYPE VALUE
service_names string PMGDB_PRIMARY
Step 3: Verify that applications connecting to the PMGDB database are getting connected. For example, the OpsTools script getareas.sh can be executed. Log in to the Central node as the admin1 user and execute the script.
$ getareas.sh -key 1001
Config files script-props/private/getareas.properties or script-props/public/getareas.properties not found. Continuing with default settings.
Execution parameters: key=1001
GetAreas processing can take some time please do not terminate.
Received areas, total areas 1
Writing to file: /home/admin1/getareas.csv
The report captured in csv file: /home/admin1/getareas.csv
**** GetAreas End Script ***

Recreating Standby Servers
Now that the standby server and primary server configurations are cleaned up, reconfigure the Data Guard Broker. For more information, see the Setting Up the Oracle Data Guard Broker procedure on page 58.

Deleting the Primary Database
The primary database is now rolled back to its previous state using the steps in the previous sections. However, if there is a need to clean up all configurations, including the primary database, ensure that you do the following:
Delete the primary database and create it again using the standard DBA process (for example, using the Oracle DBCA utility).
Install the PMG DB schema and populate data as per the steps mentioned in the install guide. For more information, see the Cisco RAN Management System Installation Guide.

133 Configuring High Availability for the PMG DB Troubleshooting Data Guard on PMG DB To create the standby servers again, follow the tasks listed in the Configuration Workflow, on page 63. Troubleshooting Data Guard on PMG DB This section provides solutions to some of the errors that may be encountered while configuring Data Guard on the PMG DB. Reverting Original Primary Database After Site Failure, on page 125 Verifying the Data Guard Broker Configuration, on page 133 Reverting From Disk Space Issues, on page 133 Reverting Original Primary Database After Site Failure If the Data Guard Broker "show configuration" status is not a SUCCESS, check the error code and try to resolve using steps under respective errors. If not connected to DGMGRL tool, connect as sys@pmgdb. Note The outputs of the steps are from the setup with primary, hot standby, and cold standby. ORA ORA-16825: multiple errors or warnings, including fast-start failover-related errors or warnings, detected for the database If DGMGRL> show configuration is showing this error for primary database, check verbose view to see exact errors related to the database. DGMGRL> show database verbose 'PMGDB' Database - PMGDB Role: Intended State: Instance(s): PMGDB PRIMARY TRANSPORT-ON Database Error(s): ORA-16628: broker protection mode inconsistent with the database setting Database Warning(s): ORA-16817: unsynchronized fast-start failover configuration ORA ORA-16628: broker protection mode inconsistent with the database setting If DGMGRL> show configuration is showing this error for the primary database, change to MaxAvailability mode. Disable Fast-Start Failover, if configuration shows it as ENABLED. DGMGRL> disable fast_start failover Disabled. DGMGRL> edit configuration set protection mode as MaxAvailability; Succeeded. 125

Check the configuration; if it is a SUCCESS, enable fast_start failover, otherwise troubleshoot the error displayed.
DGMGRL> show configuration
Fast-Start Failover: DISABLED
Configuration Status: SUCCESS
DGMGRL> enable fast_start failover
Enabled.

ORA-16810: multiple errors or warnings detected for the database
If DGMGRL> show configuration shows this error for a standby database, check the verbose view to see the exact errors related to the database.
DGMGRL> show database verbose 'PMGDB_STBY'
Database - PMGDB_STBY
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
PMGDB
Database Error(s):
ORA-16700: the standby database has diverged from the primary database
ORA-16766: Redo Apply is stopped

ORA-16700: the standby database has diverged from the primary database
If DGMGRL> show database verbose shows this error for the standby database, it indicates that the primary and standby server files have a mismatch. View alert_pmgdb.log of the standby database to see whether it is due to the ORA-19909 error. If ORA-19909 is in the log, follow the troubleshooting steps for ORA-19909. Following is sample output of the logs.
$ cd /u01/app/oracle/diag/rdbms/pmgdb_stby/pmgdb/trace
$ tail -f alert_pmgdb.log -n100
Or
$ vi alert_pmgdb.log
MRP0 started with pid=66, OS id=24771
MRP0: Background Managed Standby Recovery process started (PMGDB)
Serial Media Recovery started
Managed Standby Recovery starting Real Time Apply
Warning: Recovery target destination is in a sibling branch of the controlfile checkpoint.
Recovery will only recover changes to datafiles.
MRP0: Detected orphaned datafiles!
Recovery will possibly be retried after flashback...
Errors in file /u01/app/oracle/diag/rdbms/pmgdb_stby/pmgdb/trace/pmgdb_mrp0_24771.trc: ORA-19909: datafile 1 belongs to an orphan incarnation ORA-01110: data file 1: '/u01/app/oracle/oradata/pmgdb/system01.dbf' 126

135 Configuring High Availability for the PMG DB Reverting Original Primary Database After Site Failure Managed Standby Recovery not using Real Time Apply Note The log directory may vary based on the installation setup. ORA ORA-19909: datafile 1 belongs to an orphan incarnation If alert_pmgdb.log shows ORA error, it indicates that the primary database was opened with RESETLOGS after the database recovery that causes the database incarnation to be different between the primary and standby database. The following steps describe how to avoid re-creating a physical standby database to resolve this issue. Note These steps require FLASHBACK enabled. FLASHBACK is enabled as part of configuring the standby server setup and is enabled for standby databases. However, for any reason if it is not enabled, the standby database needs to be recreated. To do this, see ORA Before proceeding ensure that the fast_start failover in DGMRL configuration is DISABLED. If not disabled, disable it. DGMGRL> disable fast_start failover Disabled. 2 Determine the SCN before the RESETLOGS operation occurred. On the primary database, use the following query to obtain the value of the system change number (SCN) that is two SCNs before the RESETLOGS operation occurred on the primary database. SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE; TO_CHAR(RESETLOGS_CHANGE#-2) Obtain the current SCN on the standby database. SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE; TO_CHAR(CURRENT_SCN) If the value of CURRENT_SCN is larger than the value of resetlogs_change# - 2, issue the following statement on the standby database to flash back the standby database. Note If the value of CURRENT_SCN is less than the value of the resetlogs_change# - 2, skip to next step. SQL> FLASHBACK STANDBY DATABASE TO SCN ; Flashback complete. 5 Restart Redo Apply on the standby server. 127

Note: This step may have to be repeated if the following step shows a configuration status of Error or Warning.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Database altered.

6 Connect to DGMGRL as sys@pmgdb, if not already connected, and view the configuration with show configuration. If the status is ERROR with error ORA-16766, repeat the previous step to restart Redo Apply and continue with this step until no error is displayed.

DGMGRL> show configuration
Configuration - dgpmgdb
Protection Mode: MaxAvailability
Databases:
PMGDB - Primary database
PMGDB_STBY - Physical standby database
Error: ORA-16766: Redo Apply is stopped
PMGDB_STBY2 - Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated
Fast-Start Failover: DISABLED
Configuration Status:
ERROR

7 If the previous step shows Warning: ORA-16826: apply service state is inconsistent with the DelayMins property, run the following at the SQL prompt on the standby server. Otherwise, continue with the next step.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
Database altered.

DGMGRL> show configuration
Configuration - dgpmgdb
Protection Mode: MaxAvailability
Databases:
PMGDB - Primary database
PMGDB_STBY - Physical standby database
PMGDB_STBY2 - Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS

8 Now enable fast_start failover and, if a cold standby is present in the setup, enable the cold standby database.

DGMGRL> enable fast_start failover
Enabled.
DGMGRL> enable database 'PMGDB_STBY2'
Enabled.

9 Verify that the errors are resolved and that the setup is reverted to the original state successfully.
DGMGRL> show configuration
Configuration - dgpmgdb
Protection Mode: MaxAvailability
Databases:
PMGDB - Primary database
PMGDB_STBY - (*) Physical standby database
PMGDB_STBY2 - Physical standby database
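The restart-and-recheck loop in step 6 above can be sketched as a small retry helper. The two callables stand in for the SQL*Plus restart command and the DGMGRL show configuration output; they are caller-supplied hypothetical wrappers, and this sketch never talks to a real database:

```python
def restart_redo_apply_until_clean(restart_apply, show_configuration,
                                   max_attempts=5):
    """Restart Redo Apply, then re-check the broker status, repeating while
    ORA-16766 is still reported. Returns the attempt number that succeeded."""
    for attempt in range(1, max_attempts + 1):
        restart_apply()
        if "ORA-16766" not in show_configuration():
            return attempt
    raise RuntimeError(
        "Redo Apply still stopped after {} attempts".format(max_attempts))
```

A cap on attempts is a deliberate choice: if ORA-16766 persists well beyond a few restarts, something other than a transient apply stall is wrong.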

Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS

RMAN-20208: UNTIL CHANGE is before RESETLOGS change

The following command may throw an error as shown.

RMAN> RUN { SET UNTIL SCN <recovery_scn> ; RESTORE DATABASE; RECOVER DATABASE; }
Starting restore at 17-JUL-14
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 07/17/2014
RMAN-20208: UNTIL CHANGE is before RESETLOGS change

Solution: The RMAN-20208 error occurred because the command is trying to go to an SCN (<recovery_scn>) that is before the current incarnation of the database. Follow these steps to resolve it.

1 List the incarnations of the original primary database in RMAN.

$ export ORACLE_SID=PMGDB
$ rman target=/
RMAN> list incarnation of database;
using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key  DB Name  DB ID  STATUS   Reset SCN  Reset Time
                 PMGDB           PARENT   1          15-AUG
                 PMGDB           PARENT              JUL
                 PMGDB           PARENT              JUL
                 PMGDB           CURRENT             JUL-14

2 Now identify the incarnation whose Reset SCN is before <recovery_scn>. In this example, that is incarnation 2. Reset the database to this incarnation.

RMAN> reset database to incarnation 2;
database reset to incarnation 2

3 Now execute the RUN command that failed earlier; it should show the expected output.

RMAN> RUN { SET UNTIL SCN <recovery_scn> ; RESTORE DATABASE; RECOVER DATABASE; }
executing command: SET until clause
media recovery complete, elapsed time: 00:00:00
Finished recover at 08-JUL-14

ORA-38729: Not enough flashback database log data to do FLASHBACK

This error may occur while resolving error RMAN-20208. If FLASHBACK DATABASE is run when the target SCN is outside the flashback window, FLASHBACK DATABASE fails with the ORA-38729 error. In this case the database is not changed. A possible cause of this error is that some flashback database logs were removed or moved, intentionally or unintentionally. To overcome this error, re-create the standby: clean up as described in Rolling Back and Cleaning Up Standby and Primary Configurations, on page 115, then create the standby as described in Configuring the Primary Server, on page 64, and Configuring the Hot Standby Server, on page 70. Though most of the steps are the same, some additional changes are required; hence those steps are repeated here.

1 Log in to the primary SQL prompt and defer LOG_ARCHIVE_DEST_STATE_2.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
System altered.

2 Log in to DGMGRL from the primary or the hot standby, and disable FAST_START FAILOVER if it is not disabled already.

DGMGRL> DISABLE FAST_START FAILOVER FORCE
Disabled.

3 Change the protection mode in the Data Guard Broker configuration.

DGMGRL> edit configuration set protection mode as MaxPerformance;
Succeeded.

4 Remove the hot standby from the Data Guard Broker configuration.

DGMGRL> REMOVE DATABASE 'PMGDB_STBY'
Removed database "PMGDB_STBY" from the configuration

5 To remove the configuration files from the hot standby, see #unique_135.

Note: The redo apply process may have already stopped for the hot standby. Ignore the error if the "Stop redo apply process" step throws any error.

6 To remove the standby database, see Removing the Standby Database.

7 Return and log in to the primary SQL prompt, and enable LOG_ARCHIVE_DEST_STATE_2.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
System altered.
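The flashback-window precondition described above can be sketched as a single predicate: FLASHBACK DATABASE can reach a target SCN only if that SCN is not older than the oldest SCN still covered by the flashback logs (exposed by Oracle as V$FLASHBACK_DATABASE_LOG.OLDEST_FLASHBACK_SCN). This helper is a hypothetical illustration, not part of the procedure:

```python
def flashback_or_recreate(target_scn, oldest_flashback_scn):
    """Return the recovery path implied by ORA-38729: 'flashback' when the
    target SCN is inside the flashback window, 'recreate' when it is not
    and the standby has to be rebuilt as in the steps above."""
    return "flashback" if target_scn >= oldest_flashback_scn else "recreate"
```

Checking this before issuing FLASHBACK STANDBY DATABASE avoids discovering ORA-38729 mid-procedure.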
8 Back up the primary database; see Backing Up the Primary Database.

9 Move the existing files aside as backups if the pfile and control file already exist in /tmp.

$ mv /tmp/pmgdb_stby.ctl /tmp/pmgdb_stby.ctl_back
$ mv /tmp/initpmgdb_stby.ora /tmp/initpmgdb_stby.ora_back

The system responds with the command prompt.

10 Create the standby control file and PFILE; see Creating Standby Control File and PFILE.

11 If the observer is running, stop it by logging in to DGMGRL and connecting as sys@pmgdb.

DGMGRL> stop observer
Done.

12 Create a hot standby; see Configuring the Hot Standby Server, on page 70.

ORA-16820: fast-start failover observer is no longer observing this database

This error may be displayed in the DGMGRL show configuration output if the observer is not running, or if the drcpmgdb.log (under /u01/app/oracle/diag/rdbms/pmgdb/pmgdb/trace) on the primary shows a warning such as "ObserverHB indicates lack of Observer pings at Standby".
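Checking the broker log for the ObserverHB warning quoted above can be sketched as a simple scan; the function name is hypothetical and the match string is the one the manual quotes:

```python
def observer_ping_warning(drc_log_text):
    """Return True if the broker log text contains the ObserverHB warning
    that accompanies ORA-16820 (observer no longer pinging the standby)."""
    return any("lack of Observer pings" in line
               for line in drc_log_text.splitlines())

sample = "2014-07-17  ObserverHB indicates lack of Observer pings at Standby"
print(observer_ping_warning(sample))
```

In practice you would feed it the contents of drcpmgdb.log from the path given above.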


More information

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

More information

Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs

Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs Prime Service Catalog: UCS Director Integration Best Practices Importing Advanced Catalogs May 10, 2017 Version 1.0 Cisco Systems, Inc. Corporate Headquarters 170 West Tasman Drive San Jose, CA 95134-1706

More information

Enterprise Chat and Administrator s Guide to System Console, Release 11.6(1)

Enterprise Chat and  Administrator s Guide to System Console, Release 11.6(1) Enterprise Chat and Email Administrator s Guide to System Console, Release 11.6(1) For Unified Contact Center First Published: August 2016 Last Modified: August 2017 Americas Headquarters Cisco Systems,

More information

Cisco Business Edition 6000 Installation Guide, Release 10.6

Cisco Business Edition 6000 Installation Guide, Release 10.6 First Published: February 19, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco TelePresence Supervisor MSE 8050

Cisco TelePresence Supervisor MSE 8050 Cisco TelePresence Supervisor MSE 8050 Installation Guide 61-0012-09 July 2014 Contents General information 3 About the Cisco TelePresence Supervisor MSE 8050 3 Port and LED location 3 LED behavior 3 Installing

More information

Cisco CSPC 2.7.x. Quick Start Guide. Feb CSPC Quick Start Guide

Cisco CSPC 2.7.x. Quick Start Guide. Feb CSPC Quick Start Guide CSPC Quick Start Guide Cisco CSPC 2.7.x Quick Start Guide Feb 2018 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 17 Contents Table of Contents 1. INTRODUCTION

More information

Flow Sensor and Load Balancer Integration Guide. (for Stealthwatch System v6.9.2)

Flow Sensor and Load Balancer Integration Guide. (for Stealthwatch System v6.9.2) Flow Sensor and Load Balancer Integration Guide (for Stealthwatch System v6.9.2) THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Dell Storage Compellent Integration Tools for VMware

Dell Storage Compellent Integration Tools for VMware Dell Storage Compellent Integration Tools for VMware Administrator s Guide Version 3.1 Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your

More information

Cisco IOS XR Carrier Grade NAT Command Reference for the Cisco CRS Router, Release 5.2.x

Cisco IOS XR Carrier Grade NAT Command Reference for the Cisco CRS Router, Release 5.2.x Cisco IOS XR Carrier Grade NAT Command Reference for the Cisco CRS Router, 5.2.x First Published: 2016-07-01 Last Modified: 2014-10-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

More information

Embedded Packet Capture Configuration Guide

Embedded Packet Capture Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

IP Application Services Configuration Guide, Cisco IOS Release 15SY

IP Application Services Configuration Guide, Cisco IOS Release 15SY Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Tetration Cluster Cloud Deployment Guide

Tetration Cluster Cloud Deployment Guide First Published: 2017-11-16 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x

Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x First Published: August 01, 2014 Last Modified: November 09, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

More information

First Hop Redundancy Protocols Configuration Guide, Cisco IOS XE Release 3SE (Catalyst 3850 Switches)

First Hop Redundancy Protocols Configuration Guide, Cisco IOS XE Release 3SE (Catalyst 3850 Switches) First Hop Redundancy Protocols Configuration Guide, Cisco IOS XE Release 3SE (Catalyst 3850 Switches) Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Dell Storage Integration Tools for VMware

Dell Storage Integration Tools for VMware Dell Storage Integration Tools for VMware Version 4.1 Administrator s Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION:

More information

Validating Service Provisioning

Validating Service Provisioning Validating Service Provisioning Cisco EPN Manager 2.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS

More information

Getting Started Guide for Cisco UCS E-Series Servers, Release 2.x

Getting Started Guide for Cisco UCS E-Series Servers, Release 2.x First Published: August 09, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Managing Device Software Images

Managing Device Software Images Managing Device Software Images Cisco DNA Center 1.1.2 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,

More information

Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2

Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2 Cisco TelePresence Management Suite Extension for Microsoft Exchange 5.2 Software Release Notes First Published: April 2016 Software Version 5.2 Cisco Systems, Inc. 1 www.cisco.com 2 Preface Change History

More information

NetFlow Configuration Guide

NetFlow Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

TechNote on Handling TLS Support with UCCX

TechNote on Handling TLS Support with UCCX TechNote on Handling TLS Support with UCCX Contents Introduction UCCX Functions as a Server UCCX Functions as a Client TLS 1.0 Support is being Deprecated Next Steps TLS Support Matrix Current Support

More information