Configuring Control Center 1.0.x for HA


The Zenoss Enablement Series: Configuring Control Center 1.0.x for HA

Document Version 500-P4

Zenoss, Inc.

Copyright Zenoss, Inc. 11305 Four Points Drive, Bldg. 1 - Suite 300, Austin, Texas 78726, U.S.A. All rights reserved.

Zenoss and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc. in the United States and other countries. All other trademarks, logos, and service marks are the property of Zenoss or other third parties. Use of these marks is prohibited without the express written consent of Zenoss, Inc. or the third-party owner.

Cisco, Cisco UCS, Cisco Unified Computing System, Cisco Catalyst, and Cisco Nexus are trademarks or registered trademarks of Cisco and/or its affiliates in the United States and certain other countries. Flash is a registered trademark of Adobe Systems Incorporated. Oracle, the Oracle logo, Java, and MySQL are registered trademarks of the Oracle Corporation and/or its affiliates. Linux is a registered trademark of Linus Torvalds. SNMP Informant is a trademark of Garth K. Williams (Informant Systems, Inc.). Sybase is a registered trademark of Sybase, Inc. Tomcat is a trademark of the Apache Software Foundation. vSphere is a trademark of VMware, Inc. in the United States and/or other jurisdictions. Windows is a registered trademark of Microsoft Corporation in the United States and other countries. All other companies and products mentioned are trademarks and property of their respective owners.


Table of Contents

Introduction
Applies To
Summary
Naming Conventions
Prerequisites
Planning Your Deployment
Prepare both Control Center Master Nodes (Active & Passive)
Prepare the Master Host
Modifications to Standard Preparations
Additional Preparations for Clustering
Install Control Center on the Master Host
Install and Configure DRBD
Add ELRepo to RPM
Install DRBD
Configure DRBD
Perform First-Time DRBD Initialization
Install and Configure Cluster Management Software
Install the Cluster Management Software
Install the Pacemaker Resource Agent for Control Center
Configure the Cluster Management Software
Create the Cluster in Standby Mode
Set Property and Resource Defaults
Define Resources
Verification
Verify DRBD Configuration
Verify Pacemaker Configuration
Verify Control Center Configuration
Verify Cluster Startup
Verify Cluster Failover
Configure Control Center
Populate the Default Pool with HA Nodes
Create Pool(s) for RM
Install Control Center Resource Pool Agent(s)
Deploy Resource Manager
Gracefully Shutdown/Startup Cluster Services
Enable Fencing
Upgrading Control Center and Resource Manager
Downloading Images
Upgrading serviced Resource Agent
Upgrading Control Center
Upgrading Resource Manager
Converting Docker Storage Driver

Appendix A: Fencing
Using a Private Network Interface
Using a Public Network Interface
Appendix B: Handling DRBD Errors
Split Brain
Recognizing Split Brain
How to Resynchronize the Disks
Appendix C: Command Summary
DRBD
Pacemaker
Appendix D: References


Introduction

Applies To

The procedure outlined in this document applies to the following Linux and Zenoss versions:

- Control Center 1.0.x
- RHEL/CentOS Linux 7.0
- Pacemaker
- Corosync
- DRBD 8.4

Summary

This document provides an overview and step-by-step directions to deploy highly available Control Center master agents on a fast local area network with RHEL or CentOS 7.

The Control Center (CC) architecture supports the notion of a resource pool, which is a collection of compute, network, and storage resources necessary to run a service, such as Zenoss Resource Manager. Assigning two or more CC agent hosts to a given resource pool is the primary means of ensuring availability of the services in that pool in the event one of the hosts in that pool goes down. However, every CC deployment requires a single host to run as a master agent for CC, which creates a potential single point of failure for CC itself.

The objective of setting up the Control Center master agent in a 'high availability' cluster is to minimize, to the greatest degree possible, the downtime associated with a hardware or (non-Zenoss) software failure of the system hosting the Control Center master agent. High availability clusters can have various configurations, including, but not limited to:

- Active-Passive, non geo-diverse
- Active-Active, non geo-diverse
- Active-Passive, geo-diverse

This document describes an Active-Passive high availability cluster without geo diversity that uses Pacemaker, Corosync, and Distributed Replicated Block Device (DRBD). For our scenario, at least two identical servers per cluster are deployed. At any given time, one node serves as the 'primary' active server and a second identical server stands by, ready to take over provision of the Control Center master agent in the event the first server fails or otherwise becomes unavailable. This solution is termed lacking geo diversity because the two servers are colocated in the same facility. As such, this solution provides no protection against a scenario that destroys or renders unreachable the facility hosting the Control Center master agent.

The following diagram shows the relationship between the HA hardware cluster for master agents and pools of resource agents.

Naming Conventions

The following naming conventions are used in this document. Replace these example names with the names or values for your environment.

CLUSTERVIP - the floating Virtual IP (VIP) address on the public network for the cluster. This VIP is automatically assigned to the active node by the cluster management software. All CC applications running outside of the cluster use this IP (or its associated host name) to communicate with the CC master agent running in the cluster; this includes the CC CLI, the web UI, and the CC resource pool agents.

NODE{1,2}-PUBLIC - the hostname of the CC master node that resolves to its public IP address. The node should have this name as its hostname as returned by uname -n.

NODE{1,2}-PUBLIC-IP - the public IP address of the CC master node. This IP does not have to be public to the entire enterprise, but it should be public to the machines used by CC administrators.

NODE{1,2}-PRIVATE - the hostname of the CC master node that resolves to its private IP address.

NODE{1,2}-PRIVATE-IP - the private IP address of the CC master node.

The sample commands in this document are presented in a fixed-width font on a gray background. Text highlighted in red identifies values that you can customize for your environment. Values enclosed in less than (<) and greater than (>) symbols are ones you must replace with values that are appropriate for your environment. The other text in red represents values that you can customize if you want to. Consider the following example:

# pcs cluster setup --name serviced-ha \
  <NODE1-PUBLIC> <NODE2-PUBLIC>

The value serviced-ha is the logical name of the cluster in this command, but that value is just a suggestion - you can use another name if you want to. However, you must specify your own environment-specific values for <NODE1-PUBLIC> and <NODE2-PUBLIC>.

The following acronyms are used in this guide:

CC - Control Center
VIP - Virtual IP Address
RM - Zenoss Resource Manager
RM Install Guide - Zenoss Resource Manager Installation Guide

Prerequisites

The following hardware requirements must be satisfied to successfully install and configure Control Center in a high availability environment:

- Two identical RHEL 7.0/7.1 or CentOS 7.0/7.1 machines to run as the master Control Center agents. Consult the RM Install Guide for the required system specifications for CC master hosts.
- Two or more identical machines to run as resource pool hosts for Control Center. To provide HA for CC resource pools, each pool must have at least N+1 hosts, where N is the number of hosts needed to satisfy the performance and scalability requirements for that pool. The additional host is necessary so that if one host fails, the remaining hosts in the pool have sufficient capacity to handle the workload for that pool. Consult the RM Install Guide for the required system specifications for CC resource pool hosts.
- The same hardware architecture for all node systems. The cluster manager node and cluster nodes should have the same processor architecture (x86_64) and OS version (RHEL 7.0, 7.1 or CentOS 7.0, 7.1). Because the cluster manager node configures and manages the clusters and creates DRBD RPM packages, it should share the same architecture as the node systems.
- At least two filesystems per node for the CC data that must be replicated.
- Zenoss recommends using a supported fencing device. When fencing is employed, two network interface cards per machine are recommended, with IP addresses configured for both public and private networks. When fencing cannot be employed, all cluster traffic should go through a single network interface. See Appendix A: Fencing for more information.

Consider the following prior to implementing Control Center in an HA cluster:

- The host clocks must be synchronized to a time server via Network Time Protocol (NTP).
- SELinux must be disabled on all hosts because it is not supported by Zenoss.
- Nodes should be located within a single LAN with multicast support.

The commands listed in this document are run as the root user unless otherwise specified. If you do not have direct access to the root account, then all of the commands in this guide should be run with sudo.

We also assume that the Linux servers hosting the Control Center master agent have access to the standard Yum repositories (that is, they have internet access).
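The guide does not prescribe commands for verifying these preconditions. As a minimal sketch, assuming the default RHEL/CentOS 7 tooling, you can check time synchronization and the SELinux state on each node, and disable SELinux in its configuration file (a reboot is required for the change to take full effect):

# timedatectl | grep "NTP synchronized"
# sestatus
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config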

Planning Your Deployment

The main characteristics of the recommended deployment architecture are:

- Each host in the HA cluster should have two NICs, such that one NIC on each host is dedicated to disk synchronization via DRBD and the other is used by RM and CC application traffic.
- RM services are deployed on dedicated CC resource pool agents outside of the HA cluster.
- Fencing is employed on production HA clusters.

Having two NICs on each HA cluster node allows network traffic required for disk synchronization (DRBD) to be separated from network traffic required by the application itself. This results in more up-to-date disk synchronization and better network responsiveness from the CC and RM applications. However, if you do not have two identical, dual-NIC machines available, it is still possible to deploy an HA cluster with single-NIC servers.

Use two hosts for the CC master agent, where the two are configured in an active/passive configuration as described in this guide. These two hosts will only be used for running Control Center itself - they should NOT be used for running Zenoss Resource Manager. To run Zenoss Resource Manager, plan on deploying at least an additional two resource pool hosts. Only the Control Center master agents will be configured in an active/passive cluster using the instructions in this document. The two or more resource pool hosts will run as regular Control Center agents.

Fencing is a critical consideration. Fencing is a technique to ensure that a failed node in the cluster is completely stopped. This avoids situations where runaway processes on the failed node continue trying to use shared resources, resulting in application conflicts or conflicts with the cluster management software. The enormous number of fencing solutions makes it impractical for Zenoss to document solutions for every possible scenario. Control Center administrators must work with their IT departments to implement a fencing solution that works for their particular infrastructure. While fencing is recommended for production environments, CC administrators who want to stand up an initial test deployment of a CC HA cluster can do so without fencing, and then add fencing later. See Appendix A: Fencing for more information.

The general guidelines for deployment planning of the master agents are the same as those in Master host requirements in the Planning your deployment section of the RM Install Guide, except the following two file systems will be mirrored within the cluster:

i. /opt/serviced/var/isvcs (ext4)
ii. /opt/serviced/var/volumes (btrfs)

Notes:

1. The volume for /var/lib/docker does not need to be mirrored because the docker images seldom change, and docker manages downloading those images from public repos as necessary.
2. Because the volume /opt/serviced/var/backups does not contain frequently changing runtime data that must be mirrored on a real-time basis, it should not be mirrored.
3. Create partitions, but do not make file systems, for both of the volumes /opt/serviced/var/isvcs and /opt/serviced/var/volumes on both nodes. The file systems for both of these volumes will be created as part of configuring DRBD.
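The guide leaves the partitioning tool and disk layout to the administrator. The following is a minimal sketch only, assuming a dedicated data disk shown here as /dev/sdc (the DRBD resource example later in this guide uses /dev/sdb1 and /dev/sdb2, so substitute whatever devices apply to your hosts), and arbitrary partition sizes. Do not create file systems on these partitions:

# parted -s /dev/sdc mklabel gpt
# parted -s /dev/sdc mkpart primary 0% 30%
# parted -s /dev/sdc mkpart primary 30% 100%
# lsblk /dev/sdc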

Prepare both Control Center Master Nodes (Active & Passive)

Perform the steps in this section on both machines.

Prepare the Master Host

Modifications to Standard Preparations

Follow the steps outlined in Preparing the master host operating system in the RM Install Guide with the following exception - for the Create an XFS file system step, only create the XFS file system for /var/lib/docker. The following is a short summary of the steps necessary to create just that one volume:

# mkdir -p /var/lib/docker
# DOCKER_PART=/dev/sdb1
# mkfs -t xfs $DOCKER_PART
# echo "$DOCKER_PART /var/lib/docker xfs defaults 0 0" >> /etc/fstab
# mount -a; mount | egrep docker

Additional Preparations for Clustering

Each master host must be assigned two unique static IP addresses; for example, NODE{1,2}-PUBLIC-IP and NODE{1,2}-PRIVATE-IP. Using DHCP-leased IP addresses with machines in a cluster will interfere with the cluster management software.

Reserve a third static IP address to be used as the floating VIP (Virtual IP Address) for the cluster. Each cluster must have a floating VIP on the public network. The VIP is assigned to the active node automatically by the cluster management software. In case of interruption, the VIP is re-assigned to the standby node. The VIP is used by all processes outside of the cluster to contact the master. This includes end-users trying to log in to CC or RM, as well as resource pool agents.

Modify the /etc/hosts file on both hosts to specify the static IP addresses and host names for both hosts (see the example entries at the end of this section).

Make sure autostart for NFS is disabled as a precaution to avoid unnecessary conflicts managing NFS-mounted volumes. For example:

# systemctl stop nfs && systemctl disable nfs

The CC master agent uses NFS to share files with CC agents in the various resource pools. Normally, having NFS running on a CC master agent is not a problem. However, special care must be taken in an HA cluster when NFS-mounted volumes are one of the shared cluster resources accessed via the floating VIP. In our case, the cluster management software will handle starting/stopping NFS in the cluster such that NFS is only running on the active node.
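The guide does not show the /etc/hosts entries themselves. The following is a minimal sketch using documentation addresses; replace the IP addresses with your NODE{1,2}-PUBLIC-IP and NODE{1,2}-PRIVATE-IP values and the placeholders with your actual host names:

192.0.2.10    <NODE1-PUBLIC>
192.0.2.11    <NODE2-PUBLIC>
10.0.0.10     <NODE1-PRIVATE>
10.0.0.11     <NODE2-PRIVATE>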

Install Control Center on the Master Host

Follow the steps outlined in the subsection Installing a master host for Installing on RHEL or CentOS hosts in the RM Install Guide with the following exceptions:

1. After you Stop and Restart Docker, perform two additional tasks:

- pre-pull the zenoss/serviced-isvcs docker image
- disable automatic startup of docker

Note that the Control Center resource agent configuration for Pacemaker has a startup timeout. If Control Center fails to start within this timeframe, Pacemaker initiates a failover. The first time a new version of serviced starts, it must first pull the zenoss/serviced-isvcs docker image. Typically, the time required to pull that image is greater than Pacemaker's startup timeout. Because of this time requirement, the docker image must be pulled locally before starting anything. Use the following command to pull the zenoss/serviced-isvcs docker image:

# docker pull zenoss/serviced-isvcs:v27.1

NOTES: In the example above, the image label, v27.1, is specific to a particular Control Center release; other Control Center releases use a different label (for example, v27.2). The serviced-isvcs version may vary from one Control Center release to another. Consult the documentation for your specific release to verify which version of zenoss/serviced-isvcs is required for your installation.

2. When the isvcs image has been prestaged, docker must be stopped and disabled because the cluster manager will be responsible for starting/stopping docker as necessary:

# systemctl stop docker && systemctl disable docker

3. Skip the Change the volume type for application data step.

4. For the Configure the master host for a multi-host deployment section, you must modify the default configuration to ensure that Control Center runs as a master agent using the VIP for the cluster. NOTE: This step completely replaces the Configure the Master host for a multi-host deployment section. Perform the following:

a) Create a variable named CLUSTERVIP with a value equal to the floating VIP of your cluster:

# CLUSTERVIP=x.x.x.x

b) Run the following commands to modify /etc/default/serviced:

# EXT=$(date +"%j-%H%M%S")
# test ! -z "${CLUSTERVIP}" && \
  sudo sed -i.${EXT} -e 's|^#[^H]*\(HOME=/root\)|\1|' \
    -e 's|^#[^S]*\(SERVICED_AGENT=\).*$|\11|' \
    -e 's|^#[^S]*\(SERVICED_ENDPOINT=\).*$|\1'${CLUSTERVIP}':4979|' \
    -e 's|^#[^S]*\(SERVICED_FS_TYPE=\).*$|\1btrfs|' \
    -e 's|^#[^S]*\(SERVICED_MASTER=\).*$|\11|' \
    -e 's|^#[^S]*\(SERVICED_OUTBOUND_IP=\).*$|\1'${CLUSTERVIP}'|' \
    -e 's|^#[^S]*\(SERVICED_REGISTRY=\).*$|\11|' \
    -e 's|^#[^S]*\(SERVICED_VARPATH=\).*$|\1/opt/serviced/var|' \
    /etc/default/serviced
# test 1 -eq `grep SERVICED_OUTBOUND_IP= /etc/default/serviced \
  >/dev/null 2>&1; echo $?` && \
  sudo echo "SERVICED_OUTBOUND_IP=${CLUSTERVIP}" >> \
  /etc/default/serviced

c) Verify that /etc/default/serviced contains the proper contents. Run the following:

# grep ^SERVICED /etc/default/serviced | sort

The following is example output:

SERVICED_AGENT=1
SERVICED_ENDPOINT=<CLUSTERVIP>:4979
SERVICED_FS_TYPE=btrfs
SERVICED_MASTER=1
SERVICED_OUTBOUND_IP=<CLUSTERVIP>
SERVICED_REGISTRY=1
SERVICED_VARPATH=/opt/serviced/var

5. SKIP the Start the Control Center step; do NOT start serviced. Instead, disable autostart for serviced. The HA cluster management software will handle starting and stopping CC. For example:

# systemctl disable serviced

6. Follow the steps outlined in the subsection Enabling access to the Control Center web interface in the RM Install Guide.

Install and Configure DRBD

Add ELRepo to RPM

DRBD packages are available on ELRepo. Issue the following commands to include ELRepo for both master hosts:

# rpm --import <ELRepo GPG key URL>
# rpm -Uvh <ELRepo release RPM URL>
# yum update -y

Install DRBD

Use the following command to install DRBD on BOTH nodes of the cluster:

# yum install -y drbd84-utils kmod-drbd84
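As a sketch of what the Add ELRepo to RPM commands above typically look like on RHEL/CentOS 7 (the exact key and release-package URLs shown here are assumptions; verify the current ones on elrepo.org before running):

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum update -y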

Configure DRBD

Perform the following steps on BOTH nodes of the cluster:

1. Edit /etc/drbd.d/global_common.conf

Using a text editor, modify the file /etc/drbd.d/global_common.conf on both nodes to add usage-count yes; and protocol C; in the global and common sections as illustrated below:

global {
    usage-count yes;
}
common {
    net {
        protocol C;
    }
}

2. Create /etc/drbd.d/serviced-dfs.res

Using a text editor, create the file /etc/drbd.d/serviced-dfs.res with the following content on both nodes. The example below assumes /dev/sdb is the block device used for both of the mirrored volumes, /opt/serviced/var/isvcs and /opt/serviced/var/volumes. Use the device(s) appropriate for your installation.

resource serviced-dfs {
    volume 0 {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
    }
    volume 1 {
        device /dev/drbd1;
        disk /dev/sdb2;
        meta-disk internal;
    }
    syncer {
        rate 30M;
    }
    net {
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
    }
    on <NODE1-PUBLIC> {
        address <NODE1-PRIVATE-IP>:7789;
    }
    on <NODE2-PUBLIC> {
        address <NODE2-PRIVATE-IP>:7789;
    }
}

NOTES:

1. This example assumes that each node in the cluster has two NICs such that the private IP/hostnames are reserved for use by DRBD. This is recommended so that real-time writes for disk synchronization between the DRBD master and slave are not in contention with application-level network traffic. However, it is possible to use DRBD with a single NIC.
2. 7789 is the default port number used by DRBD.
3. By putting both volumes in the same DRBD resource definition, we are telling DRBD to synchronize and fail over both volumes together.
4. DRBD will store its metadata on each volume (meta-disk internal;), so the total amount of space reported on the logical device /dev/drbd<n> will always be less than the amount of physical space available on the underlying physical disk partition.
5. The value of syncer { rate <value>; } controls the rate, in bytes per second, at which DRBD synchronizes disks after the master/slave nodes have become out of sync. This rate should be set to about 30% of the available replication bandwidth, which is the slower of the I/O subsystem and the network interface. The example above assumes 90MB/s of total replication bandwidth (0.30 x 90MB/s = 30MB/s).

Initial Enablement of the DRBD Device

The steps to enable the DRBD device depend on the status of your disks. If the disks are new, skip to step 2. If they were previously used, start with step 1.

1. If you are creating the DRBD device on disks that have already been used, you must use the dd command to zero out the existing filesystem. For example, if sdb1 and sdb2 are the two partitions to be synchronized:

# dd if=/dev/zero of=/dev/sdb1 bs=1M count=128
# dd if=/dev/zero of=/dev/sdb2 bs=1M count=128

2. Execute the following commands to create the device metadata and enable the new DRBD resource on both nodes of the cluster:

# drbdadm create-md all
# drbdadm up all

Create Mount Points

Create the mount points for both volumes on both nodes of the cluster:

# mkdir -p /opt/serviced/var/isvcs /opt/serviced/var/volumes
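As an optional check that is not part of the original procedure, you can confirm that both volumes came up after drbdadm up all by inspecting the DRBD status on either node; before the first synchronization, the volumes typically report a Connected connection state with Inconsistent data:

# cat /proc/drbd
# drbd-overview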

Perform First-Time DRBD Initialization

Perform these steps on the primary cluster node only.

1. Perform Initial Device Synchronization

On the primary cluster node, enter the following command to synchronize the devices on both nodes:

# drbdadm primary --force serviced-dfs

Although the command may return right away, the synchronization process is running in the background. Depending on the size of the partitions, this process can require several minutes. Use the following command to monitor the progress of the synchronization process:

# drbd-overview

Do not proceed until the synchronization is complete. The process is complete when the status of both devices on both nodes is UpToDate/UpToDate, as illustrated in the example output below:

# drbd-overview
 0:serviced-dfs/0  Connected Primary/Secondary UpToDate/UpToDate /opt/serviced/var/isvcs    ext4  4.8G 196M 4.4G 5%
 1:serviced-dfs/1  Connected Primary/Secondary UpToDate/UpToDate /opt/serviced/var/volumes  btrfs 45G  544K 43G  1%

Note that in the output above, the field Primary/Secondary shows that this command was run on the primary cluster node. If you run the command on the secondary or slave node, the field will have the value Secondary/Primary. Likewise, in the next field, UpToDate/UpToDate, the first value is the status on the current node (the node where you executed the command), and the second value is the status on the remote node.

2. Format the Partitions

On the primary node, format the two partitions with the following commands:

# mkfs.ext4 /dev/drbd0
# mkfs.btrfs --nodiscard /dev/drbd1

Note that DRBD takes care of mirroring the file systems to the slave machine.

Install and Configure Cluster Management Software

Pacemaker is an open source cluster resource manager. It can use more than one type of cluster infrastructure application for communication and membership services within the cluster. For the purposes of this document, we are using Corosync as the cluster infrastructure application for communication and membership services. The Pacemaker/Corosync configuration daemon, pcsd, communicates across nodes in the cluster. When pcsd is installed, started, and configured, the majority of pcs commands can be run on any available node in the cluster.

Install the Cluster Management Software

Run the following on both machines to install the cluster management software. Note: For RHEL systems, you might need to add the CentOS Yum repositories to retrieve these packages.

# yum install corosync pacemaker pcs
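As an additional sanity check that is not part of the original procedure, you can confirm on the primary node that the DRBD devices now carry the expected file system types before Pacemaker takes over mounting them; the device paths match the resource definition above:

# blkid /dev/drbd0 /dev/drbd1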

Install the Pacemaker Resource Agent for Control Center

Pacemaker uses resource agents, which are scripts that implement a standardized interface to manage arbitrary resources in a cluster. Zenoss has developed a Pacemaker Resource Agent to manage the Control Center master in a cluster. Run the following on both machines to install the Pacemaker Resource Agent for Control Center:

# yum --enablerepo=zenoss-stable install -y serviced-resource-agents

Configure the Cluster Management Software

Start and enable the PCS daemon on both nodes of the cluster:

# systemctl start pcsd.service
# systemctl enable pcsd.service

The Pacemaker installation creates a user account called hacluster. This account must have the same password on both nodes. Set the account password before proceeding:

# passwd hacluster

NOTE: All preceding steps must be completed on BOTH nodes in the cluster BEFORE continuing to the next step.

Create the Cluster in Standby Mode

To create a new cluster, you must first authenticate the nodes to each other. On any one of the nodes, run the following command; when prompted, enter the password for the hacluster user:

# pcs cluster auth <NODE1-PUBLIC> <NODE2-PUBLIC>

After the cluster nodes authenticate with each other, use the following command on the same node to generate and synchronize an initial (empty) cluster definition:

# pcs cluster setup --name serviced-ha \
  <NODE1-PUBLIC> <NODE2-PUBLIC>

NOTE: Unless noted otherwise, all of the remaining commands can be executed on either node.

To configure the cluster, start the pcs management agents on all nodes in the cluster. At this stage, the cluster definition is empty, so starting the cluster management agents has no side effects. Use the following command on any one of the nodes to start the cluster management agents on all of the nodes:

# pcs cluster start --all

At this point the cluster should be running with both nodes, but because there are no resources defined yet, Pacemaker has nothing to actively manage. To check the status of the cluster at this point, execute the following command:

# pcs cluster status

Both nodes should be Online.

The next sections describe cluster configuration using a series of different pcs commands. By default, Pacemaker starts monitoring and managing the different resources as they are defined, even though all of the resources for our cluster will not be defined until the last one is added. To prevent inadvertent problems caused by incomplete resource definitions, use the following command to put both nodes of the cluster in standby mode:

# pcs cluster standby --all

NOTE: You have choices for starting the cluster services at machine boot: enable autostart, or start them manually. To start manually, use the command pcs cluster start --all after the machine restarts. Zenoss recommends the manual method, which allows post-mortem investigation of potential problems after a reboot. To autostart the cluster services after reboot, use the following commands:

# systemctl enable corosync
# systemctl enable pacemaker

For additional information, see the Pacemaker documentation.

For the purposes of Pacemaker, nodes in standby mode are not used for cluster operations such as failover. Although the rest of the processes on both nodes will continue running normally, the cluster-managed resources will not be active on any nodes in standby mode. When the configuration process is complete, we can take the nodes out of standby one by one, in a controlled fashion, to verify that the configuration and a manual failover work.

Set Property and Resource Defaults

Pacemaker can implement a wide variety of large and complex cluster configurations. For the purposes of Control Center HA, this document describes a relatively simple two-node, active/passive configuration. Before creating any resources, you must set some defaults that apply to the entire cluster for a basic two-node, active/passive configuration. Set the following defaults:

- resource-stickiness=100 - This keeps all resources bound to the same host.
- no-quorum-policy=ignore - Pacemaker supports the notion of a voting quorum for clusters of three or more nodes. However, in our case, because we have only two nodes, if one of the nodes fails, trying to use the remaining node as a quorum of one does not make sense; therefore, we disable quorums.
- stonith-enabled=false - stonith is an acronym for "shoot the other node in the head"; it is used for fencing or isolating a failed node. Stonith should only be disabled for the initial setup and testing period. This example uses a value of false for simplicity and to aid in verifying an initial configuration. Production deployments should enable and configure a machine/environment-specific stonith (that is, set stonith-enabled=true). See Appendix A: Fencing for more details.

Use the following commands to set the defaults:

# pcs resource defaults resource-stickiness=100
# pcs property set no-quorum-policy=ignore
# pcs property set stonith-enabled=false

After setting the defaults, you can run the following commands and consult the output to verify they were applied correctly. For example:

# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: serviced-ha
 dc-version: a14efad
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

# pcs resource defaults
resource-stickiness: 100

Define Resources

Seven logical resources need to be defined for the cluster:

- The DRBD Master/Slave DFS set
- The two mirrored file systems running on top of DRBD: /opt/serviced/var/isvcs and /opt/serviced/var/volumes
- The floating IP address that the cluster will assign to the active primary cluster node
- Docker
- NFS
- Control Center

Define the DRBD Master/Slave set

Define a cluster resource for the DRBD device, and an additional clone of that resource to act as the master. For example:

# pcs resource create DFS ocf:linbit:drbd \
  drbd_resource=serviced-dfs \
  op monitor interval=30s role=Master \
  op monitor interval=60s role=Slave

# pcs resource master DFSMaster DFS \
  master-max=1 master-node-max=1 \
  clone-max=2 clone-node-max=1 notify=true

Note that for a master/slave resource, Pacemaker requires separate monitoring intervals for the different roles. In this case, the configuration tells Pacemaker to check on the master every 30 seconds and the slave every 60 seconds.

Define File System Resources

Define the file systems that are mounted on the DRBD devices. For example:

# pcs resource create serviced-isvcs Filesystem \
  device=/dev/drbd/by-res/serviced-dfs/0 \
  directory=/opt/serviced/var/isvcs fstype=ext4

# pcs resource create serviced-volumes Filesystem \
  device=/dev/drbd/by-res/serviced-dfs/1 \
  directory=/opt/serviced/var/volumes fstype=btrfs

Note that in the commands above, serviced-dfs is the name of the DRBD resource defined previously in /etc/drbd.d/serviced-dfs.res.

Define the Floating IP Resource

Define the resource that represents the floating virtual IP address of the cluster using the following command:

# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
  ip=<CLUSTERVIP> nic=<vip-nic> \
  cidr_netmask=32 op monitor interval=30s

Note: In the command above, use the name of the network interface that is bound to <CLUSTERVIP> instead of the placeholder <vip-nic>.

Define the Docker Resource

Issue the following command to define your Docker resource:

# pcs resource create docker systemd:docker

NOTE: Because Pacemaker will manage Docker startup and shutdown, the normal auto-start of the Docker systemd unit should be disabled. Although this should have been accomplished in the preceding steps that installed Control Center, in case it was overlooked, use the following command to stop the service and disable Docker startup at boot time:

# systemctl stop docker && systemctl disable docker

Define the NFS Resource

Control Center uses NFS to share configuration in a multi-host deployment. For failover to work properly, we need to ensure that NFS is stopped on a failed node to completely disconnect any client hosts before restarting on another node. For example:

# pcs resource create nfs systemd:nfs

NOTE: Because Pacemaker will manage NFS startup and shutdown, the normal auto-start of the NFS systemd unit should be disabled. Although this should have been accomplished in the preceding steps that installed Control Center, in case it was overlooked, use the following command to stop the service and disable NFS startup at boot time:

# systemctl stop nfs && systemctl disable nfs

Define the Control Center Resource

Issue the following command to define your Control Center resource:

# pcs resource create serviced ocf:zenoss:serviced

NOTE: Because Pacemaker manages the Control Center startup and shutdown, the normal auto-start of the Control Center systemd unit should be disabled. Although this should have been accomplished in the preceding steps that installed Control Center, in case it was overlooked, use the following command to stop the service and disable Control Center startup at boot time:

# systemctl stop serviced && systemctl disable serviced

Pacemaker (the cluster management software) uses the default timeouts defined by the Pacemaker Resource Agent for Control Center to decide if serviced is unable to start or shut down correctly. Starting with version 0.0.3 of the Pacemaker Resource Agent for Control Center, the default values for the start and stop timeouts are 360 and 90 seconds, respectively.

If you are installing a new HA cluster for Control Center 1.0.7 or higher with the Pacemaker Resource Agent for Control Center version 0.0.3 or higher, you do not need to adjust the start or stop timeouts. If you are upgrading an existing HA cluster to Control Center 1.0.7, you must also upgrade the Pacemaker Resource Agent for Control Center to version 0.0.3, and you must manually increase the start timeout. See the section titled Upgrading serviced Resource Agent.

Note that the default startup and shutdown timeouts are based on the worst-case scenario. In practice, Control Center will typically start and stop in much less time. This does not mean, however, that you should decrease these timeouts. There are potential edge cases, especially for startup, where Control Center may take longer than usual to start or stop. If the start/stop timeouts for Pacemaker are set too low, and Control Center encounters one of those edge cases, then Pacemaker will take unnecessary or incorrect actions. For example, if the startup timeout is artificially set too low (2.5 minutes, for example) and Control Center startup encounters an unusual case where it requires at least 3 minutes to start, then Pacemaker will fail over prematurely.

Define the Control Center Resource Group

The resources in a resource group are started in the order they appear in the group and stopped in the reverse order they appear in the group. In this case, we want to:

1. Mount the file systems
2. Enable the Virtual IP
3. Start Docker
4. Start NFS
5. Start Control Center

In the event of a failover, Pacemaker stops the resources on the failed node in the reverse order they are listed here before starting the resource group on the standby node. Use the following command to create a resource group that implements those ordering rules:

# pcs resource group add serviced-group \
  serviced-isvcs serviced-volumes \
  VirtualIP docker nfs \
  serviced

Define Resource Constraints

Pacemaker resource constraints control when and where resources are deployed in a cluster. Enter the following commands to keep the serviced-group running on the same node as the DFSMaster and to ensure that the group is only started after the DFSMaster is started:

# pcs constraint colocation add serviced-group with DFSMaster \
  INFINITY with-rsc-role=Master
# pcs constraint order promote DFSMaster then \
  start serviced-group
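If you do need to change the resource agent's timeouts (for example, when running an older version of the resource agent, as described in the Upgrading serviced Resource Agent section later in this guide), they can be set explicitly with pcs. The start command below is the one quoted in that section; the stop variant is shown by analogy, and the values simply restate the 360-second and 90-second defaults discussed above:

# pcs resource update serviced op start timeout=360s
# pcs resource update serviced op stop timeout=90s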

Verification

In the Create the Cluster in Standby Mode section above, all nodes in the cluster were placed in a standby state while various configuration changes were applied to the cluster. Before enabling the cluster for the first time, review the configuration and make adjustments as necessary. The following sections describe the review process.

Verify DRBD Configuration

Use the following command to dump the full DRBD configuration. The output should be consistent with the changes made in Steps 1 and 2 of the section titled Configure DRBD, above.

# drbdadm dump

Use the following command to review the synchronization status of both volumes on both servers:

# drbd-overview

NOTE: Do NOT proceed until the synchronization is complete. The process is complete when the status of both devices on both nodes is UpToDate/UpToDate.

Verify Pacemaker Configuration

Review the property and resource defaults for Pacemaker. Issue the following commands and consult the output. For example:

# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: serviced-ha
 dc-version: a14efad
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

# pcs resource defaults
resource-stickiness: 100

NOTE: The value of the stonith-enabled property should be true for production environments with fencing. Although setting the value to false is not recommended for production environments, it can be used in controlled test environments. See Appendix A: Fencing for more information.

Review the resource constraints to verify that the serviced-group resource is started after the DFSMaster (the DRBD master), and that both the serviced-group and DFSMaster are colocated on the same active cluster node. Issue the following command and consult the output. For example:

# pcs constraint
Location Constraints:
Ordering Constraints:
  promote DFSMaster then start serviced-group (kind:Mandatory)
Colocation Constraints:
  serviced-group with DFSMaster (score:INFINITY) (with-rsc-role:Master)

Use the following command to review the resource definitions:

# pcs resource show --full

The order of resources in the resource group serviced-group (from top to bottom) is important because this is the order in which the resources will be started by Pacemaker. The order should be:

1. serviced-isvcs
2. serviced-volumes
3. VirtualIP
4. docker
5. nfs
6. serviced

Verify Control Center Configuration

Verify that the Control Center configuration files are identical by running the following command on both nodes and comparing the values:

# cksum /etc/default/serviced

If the files are identical, their checksums should be identical. If they differ, inspect their contents using the following command:

# grep ^SERVICED /etc/default/serviced | sort
SERVICED_AGENT=1
SERVICED_ENDPOINT=<CLUSTERVIP>:4979
SERVICED_FS_TYPE=btrfs
SERVICED_MASTER=1
SERVICED_OUTBOUND_IP=<CLUSTERVIP>
SERVICED_REGISTRY=1
SERVICED_VARPATH=/opt/serviced/var
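As a convenience that is not part of the original procedure, the comparison can also be run from one node over ssh, using the NODE2-PUBLIC placeholder from the naming conventions; any output from diff indicates that the files differ:

# ssh <NODE2-PUBLIC> cksum /etc/default/serviced
# diff <(grep ^SERVICED /etc/default/serviced | sort) \
       <(ssh <NODE2-PUBLIC> "grep ^SERVICED /etc/default/serviced | sort")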

Verify Cluster Startup

To verify the initial configuration, attempt startup of the resources on a single node with the other node in standby mode. If there is a problem, you have an opportunity to pause and diagnose it without Pacemaker automatically stopping the entire node and failing over to the other one.

Assuming both nodes are currently in a standby state, determine which node is currently the DRBD master. Issue the following command:

# drbd-overview
 0:serviced-dfs/0  Connected Primary/Secondary UpToDate/UpToDate /opt/serviced/var/isvcs    ext4  4.8G 196M 4.4G 5%
 1:serviced-dfs/1  Connected Primary/Secondary UpToDate/UpToDate /opt/serviced/var/volumes  btrfs 45G  544K 43G  1%

In the example above, the current machine is the DRBD master, as indicated by the string Primary/Secondary in the output. If the current machine is instead the DRBD slave, the string will be Secondary/Primary.

To start the cluster resources, run the following command:

# pcs cluster unstandby <node1>

In the command, <node1> is the hostname of the DRBD master. Note that you can run pcs commands on any node in the cluster.

You can check the status of cluster resources with the following command:

# pcs status

Because it can require some time for all of the resources to start, repeat the pcs status command until all resources report a status of Started. If one or more resources fail to start, resolve the issue before continuing.

Verify Cluster Failover

When all resources have started successfully, you can enable the other node and simulate a failover. To enable the other node, run the following command:

# pcs cluster unstandby <node2>

In the command, <node2> is the hostname of the DRBD slave node. Use the pcs status command periodically to verify that the slave node is Online before continuing.

When the secondary cluster node is Online, you can force a manual failover by putting the currently primary cluster node into standby. For example:

# pcs cluster standby <node1>

Monitor the cluster status periodically with the pcs status command. Because it can require some time for all of the services to fail over, repeat the pcs status command until all resources report a status of Started on <node2>. If the failover does not succeed, resolve the problem before continuing.

Configure Control Center

A dedicated pool (or pools) of resource pool agents is used to run RM, separate from the Control Center master agents in the HA cluster. In this example, we will assign the Control Center master agents to the default pool, and create an additional pool (or pools) to run the RM services.

Populate the Default Pool with HA Nodes

From a browser:

1. Log into Control Center using the hostname for the floating VIP.
2. Navigate to the Hosts page.
3. Add a host using the public hostname of the active cluster host. Use the public hostname assigned to that node of the cluster (for example, NODE{1,2}-PUBLIC). For Resource Pool, use default.
4. From the command line of one of the cluster machines, use the pcs cluster standby <activenode> command to put the active node in standby mode.
5. Trigger a manual failover to the other node.
6. From a browser, log into Control Center using the hostname for the floating VIP.
7. Navigate to the Hosts page.
8. Add a host using the public hostname of the new, active cluster host. Use the public hostname assigned to that node of the cluster (for example, NODE{1,2}-PUBLIC). For Resource Pool, use default.

At this point the default pool should have two entries, one for each of the public hostnames for the two nodes in the HA cluster (for example, NODE1-PUBLIC and NODE2-PUBLIC).

NOTE: Using the two public hostnames for the master agents in the default pool enables you to tell which node is currently active in the HA cluster when you log in to the Control Center UI.

Create Pool(s) for RM

The HA configuration for Control Center requires a dedicated pool of resource agents to run RM. None of the services of RM itself should be run on the CC master agents in an HA cluster.

From a browser:

1. Log into Control Center using the hostname for the floating VIP.
2. Navigate to the Resource Pools page.
3. Add one or more pools.

Note: The hosts for those pools will be added in the sections below.
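As an alternative to the browser steps in Populate the Default Pool with HA Nodes above, the active node can also be registered from the command line with the serviced CLI. This is a sketch only; the exact syntax can vary between Control Center releases, so check serviced host add --help on your system. The example assumes the default RPC port, 4979, which matches the SERVICED_ENDPOINT setting shown earlier:

# serviced host add <NODE1-PUBLIC>:4979 default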

Install Control Center Resource Pool Agent(s)

To provide HA for CC resource pools, each pool must have at least N+1 hosts, where N is the number of hosts needed to satisfy the performance and scalability requirements for that pool. For example, if a single resource pool agent is sufficient to run all of the services for that pool, you require two agents in the pool to provide HA for that pool. Alternatively, if another resource pool requires two resource pool agents to handle its load, at least three agents are required in that resource pool to provide HA for the pool.

Follow the standard procedure for preparing and installing resource pool hosts for Control Center with the following difference - in the step titled Configure the Resource Pool Host for a Multi-Host Deployment of the Installing on a Resource Pool Host section of the RM Install Guide, where it discusses setting MHOST, use the value of the floating VIP for the cluster as the value for MHOST.

Verify that /etc/default/serviced contains the appropriate contents. Run the following command and consult the output:

# grep ^SERVICED /etc/default/serviced | sort
SERVICED_AGENT=1
SERVICED_DOCKER_REGISTRY=<CLUSTERVIP>:5000
SERVICED_FS_TYPE=btrfs
SERVICED_LOG_ADDRESS=<CLUSTERVIP>:5042
SERVICED_MASTER=0
SERVICED_REGISTRY=1
SERVICED_STATS_PORT=<CLUSTERVIP>:8443
SERVICED_VARPATH=/opt/serviced/var
SERVICED_ZK=<CLUSTERVIP>:2181

After the resource pool agents start, add them to the resource pool(s) created in the section Create Pool(s) for RM, above. Do NOT add them to the default resource pool containing the CC master agents in the HA cluster.

Deploy Resource Manager

Follow the instructions in the Deploying the Resource Manager section of the RM Install Guide to load and start RM with the following difference - when selecting a resource pool, use the pool created in the section titled Create Pool(s) for RM, above, NOT the default resource pool.

Gracefully Shutdown/Startup Cluster Services

Should it become necessary to shut down your cluster services, perform the following:

1. Log into Control Center.
2. Stop the RM application.
3. Issue the command: pcs resource disable serviced-group
4. Verify the shutdown status. Use the pcs status command and consult the output until all services are shut down.

To bring your cluster services back online, perform the following:

1. Issue the following command on each node: drbdadm up all
2. Issue the following command on the node you want to be primary: drbdadm primary serviced-dfs
3. Issue the following command on one node (your choice): pcs cluster start --all
4. Issue the following command on the primary node: pcs resource enable serviced-group
5. Verify the cluster status. Use the pcs status command and consult the output until all services are up.
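The startup sequence above can be collected into a small helper script. This is a sketch only, not part of the original procedure; it assumes step 1 (drbdadm up all) has already been run on both nodes, and it is intended to be run on the node that should become primary:

#!/bin/bash
# ha-start.sh - bring the cluster services back online (run on the intended primary node)
drbdadm primary serviced-dfs        # step 2: promote the local copy of the DRBD resource
pcs cluster start --all             # step 3: start the cluster stack on all nodes
pcs resource enable serviced-group  # step 4: let Pacemaker start the serviced resource group
pcs status                          # step 5: review; repeat until all resources report Started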

Enable Fencing

When all components are deployed, and you have verified operation of RM and basic cluster failover in a controlled scenario, configure and enable fencing. See Appendix A: Fencing for more information.

Upgrading Control Center and Resource Manager

To upgrade Control Center and Resource Manager, follow the instructions in the Zenoss Resource Manager Upgrade Guide with the following notes and exceptions.

Downloading Images

Complete the steps for "Downloading Upgrade Images" on the active node. Control Center and Resource Manager can both be running for this step. This step pre-pulls the docker images for the upgraded software and saves them to /var/lib/docker. Because /var/lib/docker is not replicated by DRBD, these steps must be completed on both nodes. On the backup node, docker must be started first:

# systemctl start docker

When the download is complete, stop docker on the backup node:

# systemctl stop docker

Upgrading serviced Resource Agent

If you have an existing HA cluster and you are upgrading to Control Center 1.0.7, you must upgrade the serviced-resource-agents RPM package to version 0.0.3. Additionally, you must manually adjust the start timeout for the serviced resource.

To verify the installed version of the resource agent, run the following command:

# rpm -qa | grep serviced

To upgrade your serviced resource agent, run the following command:

# yum --enablerepo=zenoss-stable install -y serviced-resource-agents

To manually adjust the start timeout for serviced to the recommended value of 6 minutes, run the following command:

# pcs resource update serviced op start timeout=360s

Upgrading Control Center

Note that if you are upgrading Control Center to version 1.0.7, you must also upgrade the serviced-resource-agents RPM package to version 0.0.3. Additionally, if you previously set the timeout values for serviced manually, you must update the start timeout to the recommended value of 6 minutes. For information about how to perform these operations, see the section titled Upgrading serviced Resource Agent.

In the Upgrading Control Center section, disable the cluster services in place of the step that reads "systemctl stop serviced":

# pcs resource disable serviced-group

Complete the step for "Upgrading Control Center on the master host" on both nodes.

In place of the step that reads "Start Control Center," enable the cluster resource group:

# pcs resource enable serviced-group

Note that when the command 'serviced host list' is run, the standby node in the cluster will still show the prior version of serviced. To complete the upgrade, fail over from the primary node to the standby node:

# pcs cluster standby <primary node>

Note that after the failover, the standby node will show the updated serviced version when the command 'serviced host list' is run.

Upgrading Resource Manager

On the active node, follow the instructions in the Zenoss Resource Manager Upgrade Guide.

Converting Docker Storage Driver

If you need to convert the Docker storage driver, follow the directions in the Zenoss Resource Manager Upgrade Guide with the following notes and exceptions:

On the active node of the cluster, run the command:

# serviced service stop Zenoss.resmgr

Substitute the command "systemctl stop serviced && systemctl stop docker" with the command to disable the resources on the active node:

# pcs resource disable serviced-group

Complete the subsequent steps on both master cluster nodes. Start docker on each node (independent of enabling the cluster resources) only to test a successful startup. Be sure to stop docker after a successful test on each node:

# systemctl stop docker

In place of the "Starting Control Center and Resource Manager" step, re-enable the cluster resources on the primary node:

# pcs resource enable serviced-group
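After completing the upgrade and the failover described above, you can confirm that both nodes are at the expected versions. serviced host list is the command quoted in the upgrade notes above; the rpm and serviced version checks are added here as a convenience and should be run on each node:

# rpm -q serviced serviced-resource-agents
# serviced version
# serviced host list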


More information

EXPRESSCLUSTER X 4.0 for Linux

EXPRESSCLUSTER X 4.0 for Linux EXPRESSCLUSTER X 4.0 for Linux Installation and Configuration Guide April 17, 2018 1st Edition Revision History Edition Revised Date Description 1st Apr 17, 2018 New manual. Copyright NEC Corporation 2018.

More information

Control Center Upgrade Guide

Control Center Upgrade Guide Control Center Upgrade Guide Release 1.3.2 Zenoss, Inc. www.zenoss.com Control Center Upgrade Guide Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss and the Zenoss logo are trademarks or registered

More information

Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 7 High Availability Add-On Administration Configuring and Managing the High Availability Add-On Last Updated: 2018-02-08 Red Hat Enterprise Linux 7 High Availability Add-On Administration

More information

ExpressCluster X 3.1 for Linux

ExpressCluster X 3.1 for Linux ExpressCluster X 3.1 for Linux Installation and Configuration Guide 10/11/2011 First Edition Revision History Edition Revised Date Description First 10/11/2011 New manual Copyright NEC Corporation 2011.

More information

Linux-HA Clustering, no SAN. Prof Joe R. Doupnik Ingotec, Univ of Oxford, MindworksUK

Linux-HA Clustering, no SAN. Prof Joe R. Doupnik Ingotec, Univ of Oxford, MindworksUK Linux-HA Clustering, no SAN Prof Joe R. Doupnik jrd@netlab1.oucs.ox.ac.uk Ingotec, Univ of Oxford, MindworksUK Ingotec offers IT to charities A charity is receiving a new server, having GroupWise, POSIX

More information

Release Zenoss, Inc.

Release Zenoss, Inc. Zenoss Resource Manager Release Notes Release 5.0.4 Zenoss, Inc. www.zenoss.com Zenoss Resource Manager Release Notes Copyright 2015 Zenoss, Inc. All rights reserved. Zenoss and the Zenoss logo are trademarks

More information

Configuring High Availability (HA)

Configuring High Availability (HA) 4 CHAPTER This chapter covers the following topics: Adding High Availability Cisco NAC Appliance To Your Network, page 4-1 Installing a Clean Access Manager High Availability Pair, page 4-3 Installing

More information

Control Center Release Notes

Control Center Release Notes Release 1.4.1 Zenoss, Inc. www.zenoss.com Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc., in the United States

More information

System Requirements ENTERPRISE

System Requirements ENTERPRISE System Requirements ENTERPRISE Hardware Prerequisites You must have a single bootstrap node, Mesos master nodes, and Mesos agent nodes. Bootstrap node 1 node with 2 cores, 16 GB RAM, 60 GB HDD. This is

More information

Red Hat Enterprise Linux Atomic Host 7 Getting Started with Cockpit

Red Hat Enterprise Linux Atomic Host 7 Getting Started with Cockpit Red Hat Enterprise Linux Atomic Host 7 Getting Started with Cockpit Getting Started with Cockpit Red Hat Atomic Host Documentation Team Red Hat Enterprise Linux Atomic Host 7 Getting Started with Cockpit

More information

ClusteringQuickInstallationGuide. forpacketfenceversion6.2.1

ClusteringQuickInstallationGuide. forpacketfenceversion6.2.1 ClusteringQuickInstallationGuide forpacketfenceversion6.2.1 ClusteringQuickInstallationGuide byinverseinc. Version6.2.1-Jul2016 Copyright 2015Inverseinc. Permissionisgrantedtocopy,distributeand/ormodifythisdocumentunderthetermsoftheGNUFreeDocumentationLicense,Version

More information

Cisco UCS Performance Manager Installation Guide

Cisco UCS Performance Manager Installation Guide Cisco UCS Performance Manager Installation Guide First Published: February 2017 Release 2.0.3 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Parallels Containers for Windows 6.0

Parallels Containers for Windows 6.0 Parallels Containers for Windows 6.0 Deploying Microsoft Clusters June 10, 2014 Copyright 1999-2014 Parallels IP Holdings GmbH and its affiliates. All rights reserved. Parallels IP Holdings GmbH Vordergasse

More information

SpycerBox High Availability Administration Supplement

SpycerBox High Availability Administration Supplement Supplement: High Availability (Version 1.0) SpycerBox High Availability Administration Supplement Supplement for the SpycerBox Ultra/Flex hardware guide: High Availability Document Version 1.0 Copyright

More information

Red Hat JBoss Middleware for OpenShift 3

Red Hat JBoss Middleware for OpenShift 3 Red Hat JBoss Middleware for OpenShift 3 OpenShift Primer Get started with OpenShift Last Updated: 2018-01-09 Red Hat JBoss Middleware for OpenShift 3 OpenShift Primer Get started with OpenShift Legal

More information

Highly available iscsi storage with DRBD and Pacemaker. Brian Hellman & Florian Haas Version 1.2

Highly available iscsi storage with DRBD and Pacemaker. Brian Hellman & Florian Haas Version 1.2 Highly available iscsi storage with DRBD and Pacemaker Brian Hellman & Florian Haas Version 1.2 Table of Contents 1. Introduction...........................................................................................

More information

Red Hat Enterprise Linux 7 Getting Started with Cockpit

Red Hat Enterprise Linux 7 Getting Started with Cockpit Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started with Cockpit Red Hat Enterprise Linux Documentation Team Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started

More information

Upgrading a HA System from to

Upgrading a HA System from to Upgrading a HA System from 6.12.65 to 10.13.66 Due to various kernel changes, this upgrade process may result in an unexpected restart of Asterisk. There will also be a short outage as you move the services

More information

Zenoss Resource Manager Release Notes

Zenoss Resource Manager Release Notes Zenoss Resource Manager Release Notes Release 6.0.0 Zenoss, Inc. www.zenoss.com Zenoss Resource Manager Release Notes Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo

More information

Red Hat Quay 2.9 Deploy Red Hat Quay - Basic

Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Deploy Red Hat Quay Last Updated: 2018-09-14 Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Deploy Red Hat Quay Legal Notice Copyright 2018 Red Hat, Inc. The

More information

Troubleshooting Cisco APIC-EM Single and Multi-Host

Troubleshooting Cisco APIC-EM Single and Multi-Host Troubleshooting Cisco APIC-EM Single and Multi-Host The following information may be used to troubleshoot Cisco APIC-EM single and multi-host: Recovery Procedures for Cisco APIC-EM Node Failures, page

More information

Parallels Virtuozzo Containers 4.6 for Windows

Parallels Virtuozzo Containers 4.6 for Windows Parallels Parallels Virtuozzo Containers 4.6 for Windows Deploying Microsoft Clusters Copyright 1999-2010 Parallels Holdings, Ltd. and its affiliates. All rights reserved. Parallels Holdings, Ltd. c/o

More information

ExpressCluster X LAN V1 for Linux

ExpressCluster X LAN V1 for Linux ExpressCluster X LAN V1 for Linux Installation and Configuration Guide Revision 1NA Copyright NEC Corporation of America 2006-2007. All rights reserved. Copyright NEC Corporation 2006-2007. All rights

More information

Zenoss Core Release Notes

Zenoss Core Release Notes Release 5.3.3 Zenoss, Inc. www.zenoss.com Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc., in the United States

More information

ExpressCluster X 1.0 for Linux

ExpressCluster X 1.0 for Linux ExpressCluster X 1.0 for Linux Installation and Configuration Guide 10/12/2007 Fifth Edition Revision History Edition Revised Date Description First 2006/09/08 New manual Second 2006/12/12 EXPRESSCLUSTER

More information

High Availability and Disaster Recovery

High Availability and Disaster Recovery High Availability and Disaster Recovery ScienceLogic version 8.4.0 rev 2 Table of Contents High Availability & Disaster Recovery Overview 4 Overview 4 Disaster Recovery 4 High Availability 4 Differences

More information

Clustering Quick Installation Guide. for PacketFence version 6.5.0

Clustering Quick Installation Guide. for PacketFence version 6.5.0 Clustering Quick Installation Guide for PacketFence version 6.5.0 Clustering Quick Installation Guide by Inverse Inc. Version 6.5.0 - Jan 2017 Copyright 2015 Inverse inc. Permission is granted to copy,

More information

Redhat OpenStack 5.0 and PLUMgrid OpenStack Networking Suite 2.0 Installation Hands-on lab guide

Redhat OpenStack 5.0 and PLUMgrid OpenStack Networking Suite 2.0 Installation Hands-on lab guide Redhat OpenStack 5.0 and PLUMgrid OpenStack Networking Suite 2.0 Installation Hands-on lab guide Oded Nahum Principal Systems Engineer PLUMgrid EMEA November 2014 Page 1 Page 2 Table of Contents Table

More information

3.6. How to Use the Reports and Data Warehouse Capabilities of Red Hat Enterprise Virtualization. Last Updated:

3.6. How to Use the Reports and Data Warehouse Capabilities of Red Hat Enterprise Virtualization. Last Updated: Red Hat Enterprise Virtualization 3.6 Reports and Data Warehouse Guide How to Use the Reports and Data Warehouse Capabilities of Red Hat Enterprise Virtualization Last Updated: 2017-09-27 Red Hat Enterprise

More information

Virtual Appliance User s Guide

Virtual Appliance User s Guide Cast Iron Integration Appliance Virtual Appliance User s Guide Version 4.5 July 2009 Cast Iron Virtual Appliance User s Guide Version 4.5 July 2009 Copyright 2009 Cast Iron Systems. All rights reserved.

More information

Red Hat Gluster Storage 3.3

Red Hat Gluster Storage 3.3 Red Hat Gluster Storage 3.3 Quick Start Guide Getting Started with Web Administration Last Updated: 2017-12-15 Red Hat Gluster Storage 3.3 Quick Start Guide Getting Started with Web Administration Rakesh

More information

ExpressCluster X 1.0 for Linux

ExpressCluster X 1.0 for Linux ExpressCluster X 1.0 for Linux Installation and Configuration Guide 12/12/2006 Second Edition Revision History Edition Revised Date Description First 2006/09/08 New manual Second 2006/12/12 EXPRESSCLUSTER

More information

Control Center Reference Guide

Control Center Reference Guide Release 1.3.0 Zenoss, Inc. www.zenoss.com Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc., in the United States and other

More information

Highly Available NFS Storage with DRBD and Pacemaker

Highly Available NFS Storage with DRBD and Pacemaker Florian Haas, Tanja Roth Highly Available NFS Storage with DRBD and Pacemaker SUSE Linux Enterprise High Availability Extension 11 SP4 October 04, 2017 www.suse.com This document describes how to set up

More information

ExpressCluster for Linux Version 3 Web Manager Reference. Revision 6us

ExpressCluster for Linux Version 3 Web Manager Reference. Revision 6us ExpressCluster for Linux Version 3 Web Manager Reference Revision 6us EXPRESSCLUSTER is a registered trademark of NEC Corporation. Linux is a trademark or registered trademark of Linus Torvalds in the

More information

CIT 668: System Architecture

CIT 668: System Architecture CIT 668: System Architecture Availability Topics 1. What is availability? 2. Measuring Availability 3. Failover 4. Failover Configurations 5. Linux HA Availability Availability is the ratio of the time

More information

Using ANM With Virtual Data Centers

Using ANM With Virtual Data Centers APPENDIXB Date: 3/8/10 This appendix describes how to integrate ANM with VMware vcenter Server, which is a third-party product for creating and managing virtual data centers. Using VMware vsphere Client,

More information

Building Elastix-1.3 High Availability Clusters with Redfone fonebridge2, DRBD and Heartbeat

Building Elastix-1.3 High Availability Clusters with Redfone fonebridge2, DRBD and Heartbeat Building Elastix-1.3 High Availability Clusters with Redfone fonebridge2, DRBD and Heartbeat Disclaimer DRBD and Heartbeat are not programs maintained or supported by Redfone Communications LLC. Do not

More information

Red Hat Developer Studio 12.9

Red Hat Developer Studio 12.9 Red Hat Developer Studio 12.9 Installation Guide Installing Red Hat Developer Studio Last Updated: 2018-10-08 Red Hat Developer Studio 12.9 Installation Guide Installing Red Hat Developer Studio Supriya

More information

Configuring High Availability for VMware vcenter in RMS All-In-One Setup

Configuring High Availability for VMware vcenter in RMS All-In-One Setup Configuring High Availability for VMware vcenter in RMS All-In-One Setup This chapter describes the process of configuring high availability for the VMware vcenter in an RMS All-In-One setup. It provides

More information

DOWNLOAD PDF SQL SERVER 2012 STEP BY STEP

DOWNLOAD PDF SQL SERVER 2012 STEP BY STEP Chapter 1 : Microsoft SQL Server Step by Step - PDF Free Download - Fox ebook Your hands-on, step-by-step guide to building applications with Microsoft SQL Server Teach yourself the programming fundamentals

More information

LINBIT DRBD Proxy Configuration Guide on CentOS 6. Matt Kereczman 1.2,

LINBIT DRBD Proxy Configuration Guide on CentOS 6. Matt Kereczman 1.2, LINBIT DRBD Proxy 3.1.1 Configuration Guide on CentOS 6 Matt Kereczman 1.2, 2018-03-20 Table of Contents 1. About this guide.......................................................................................

More information

Replicated Database Operator's Guide

Replicated Database Operator's Guide Replicated Database Operator's Guide Introduction Replicated Database (RepDB) is a database synchronization software application for Explorer Controllers (ECs) and Digital Transport Adaptor Control Systems

More information

BEAWebLogic. Server. Automatic and Manual Service-level Migration

BEAWebLogic. Server. Automatic and Manual Service-level Migration BEAWebLogic Server Automatic and Manual Service-level Migration Version 10.3 Technical Preview Revised: March 2007 Service-Level Migration New in WebLogic Server 10.3: Automatic Migration of Messaging/JMS-Related

More information

Red Hat CloudForms 4.6

Red Hat CloudForms 4.6 Red Hat CloudForms 4.6 Policies and Profiles Guide Policy-based enforcement, compliance, events, and policy profiles for Red Hat CloudForms Last Updated: 2018-03-02 Red Hat CloudForms 4.6 Policies and

More information

ExpressCluster for Linux Ver3.0

ExpressCluster for Linux Ver3.0 ExpressCluster for Linux Ver3.0 Cluster Installation and Configuration Guide (Shared Disk) 2004.06.30 1st Revision Revision History Revision Revision date Descriptions 1 2004/06/30 New manual 2 EXPRESSCLUSTER

More information

Red Hat Network Satellite 5.0.0: Virtualization Step by Step

Red Hat Network Satellite 5.0.0: Virtualization Step by Step Red Hat Network Satellite 5.0.0: Virtualization Step by Step By Máirín Duffy, Red Hat Network Engineering Abstract Red Hat Network Satellite 5.0 is the first Satellite release to include virtual platform

More information

ExpressCluster X R3 WAN Edition for Windows

ExpressCluster X R3 WAN Edition for Windows ExpressCluster X R3 WAN Edition for Windows Installation and Configuration Guide v2.1.0na Copyright NEC Corporation 2014. All rights reserved. Copyright NEC Corporation of America 2011-2014. All rights

More information

Zenoss Service Impact Release Notes

Zenoss Service Impact Release Notes Zenoss Service Impact Release Notes Release 5.3.1 Zenoss, Inc. www.zenoss.com Zenoss Service Impact Release Notes Copyright 2018 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo are

More information

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7

vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 vcenter Server Installation and Setup Modified on 11 MAY 2018 VMware vsphere 6.7 vcenter Server 6.7 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Installation. Installation Overview. Installation and Configuration Taskflows CHAPTER

Installation. Installation Overview. Installation and Configuration Taskflows CHAPTER CHAPTER 4 Overview, page 4-1 Navigate the Wizard, page 4-7 Install and Configure Cisco Unified Presence, page 4-7 Perform a Fresh Multi-Node, page 4-15 Overview Cisco Unified Presence supports the following

More information

============================================== ==============================================

============================================== ============================================== Elastix High Availability ( DRBD + Heartbeat ) ============================================== this is How to configure Documentation :) ============================================== Before we Start :

More information

Release notes for Flash Recovery Tool Version 10.0(2)

Release notes for Flash Recovery Tool Version 10.0(2) Release notes for Flash Recovery Tool Version 10.0(2) Problem Description After several months or years in continuous operation, underlying boot flash devices on NEXUS 7000 SUP2/2E supervisor boards may

More information

Linux Administration

Linux Administration Linux Administration This course will cover all aspects of Linux Certification. At the end of the course delegates will have the skills required to administer a Linux System. It is designed for professionals

More information

Red Hat Ceph Storage Release Notes

Red Hat Ceph Storage Release Notes Red Hat Ceph Storage 1.3.2 Release Notes Release notes for Red Hat Ceph Storage 1.3.2 Red Hat Ceph Storage Documentation Team Red Hat Ceph Storage 1.3.2 Release Notes Release notes for Red Hat Ceph Storage

More information

Oracle VM 3: IMPLEMENTING ORACLE VM DR USING SITE GUARD O R A C L E W H I T E P A P E R S E P T E M B E R S N

Oracle VM 3: IMPLEMENTING ORACLE VM DR USING SITE GUARD O R A C L E W H I T E P A P E R S E P T E M B E R S N Oracle VM 3: IMPLEMENTING ORACLE VM DR USING SITE GUARD O R A C L E W H I T E P A P E R S E P T E M B E R 2 0 1 8 S N 2 1 3 0 5 Table of Contents Introduction 1 Overview 2 Understanding the Solution 2

More information

Cisco Expressway Cluster Creation and Maintenance

Cisco Expressway Cluster Creation and Maintenance Cisco Expressway Cluster Creation and Maintenance Deployment Guide Cisco Expressway X8.6 July 2015 Contents Introduction 4 Prerequisites 5 Upgrading an X8.n cluster to X8.6 6 Prerequisites 6 Upgrade Expressway

More information

Data Sheet: High Availability Veritas Cluster Server from Symantec Reduce Application Downtime

Data Sheet: High Availability Veritas Cluster Server from Symantec Reduce Application Downtime Reduce Application Downtime Overview is an industry-leading high availability solution for reducing both planned and unplanned downtime. By monitoring the status of applications and automatically moving

More information

WLM1200-RMTS User s Guide

WLM1200-RMTS User s Guide WLM1200-RMTS User s Guide Copyright 2011, Juniper Networks, Inc. 1 WLM1200-RMTS User Guide Contents WLM1200-RMTS Publication Suite........................................ 2 WLM1200-RMTS Hardware Description....................................

More information

IBM Single Sign On for Bluemix Version December Identity Bridge Configuration topics

IBM Single Sign On for Bluemix Version December Identity Bridge Configuration topics IBM Single Sign On for Bluemix Version 2.0 28 December 2014 Identity Bridge Configuration topics IBM Single Sign On for Bluemix Version 2.0 28 December 2014 Identity Bridge Configuration topics ii IBM

More information

Red Hat OpenStack Platform 10

Red Hat OpenStack Platform 10 Red Hat OpenStack Platform 10 High Availability for Compute Instances Configure High Availability for Compute Instances Last Updated: 2018-10-01 Red Hat OpenStack Platform 10 High Availability for Compute

More information

Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 7 Using Containerized Identity Management Services Overview and Installation of Containerized Identity Management Services Last Updated: 2018-04-12 Red Hat Enterprise Linux 7

More information

Cisco Prime Performance 1.3 Installation Requirements

Cisco Prime Performance 1.3 Installation Requirements 1 CHAPTER Cisco Prime Performance 1.3 Installation Requirements The following topics provide the hardware and software requirements for installing Cisco Prime Performance Manager 1.3: Gateway and Unit

More information

Red Hat OpenStack Platform 14

Red Hat OpenStack Platform 14 Red Hat OpenStack Platform 14 High Availability for Compute Instances Configure High Availability for Compute Instances Last Updated: 2019-02-11 Red Hat OpenStack Platform 14 High Availability for Compute

More information

Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018

Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018 Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018 This guide will help you download and install IBM Db2 software,

More information

What can you do with SQL Server on Linux?

What can you do with SQL Server on Linux? What can you do with SQL Server on Linux? S P O N S O R S P R E S E N T E R I N F O Rudi Bruchez rudi@babaluga.com www.babaluga.com SUPPORTED DISTRIBUTIONS Platform Supported version(s) Red Hat Enterprise

More information

VMware AirWatch Content Gateway Guide for Linux For Linux

VMware AirWatch Content Gateway Guide for Linux For Linux VMware AirWatch Content Gateway Guide for Linux For Linux Workspace ONE UEM v9.7 Have documentation feedback? Submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com.

More information

Installing Cisco APIC-EM on a Virtual Machine

Installing Cisco APIC-EM on a Virtual Machine About the Virtual Machine Installation, page 1 System Requirements Virtual Machine, page 2 Pre-Install Checklists, page 4 Cisco APIC-EM Ports Reference, page 7 Verifying the Cisco ISO Image, page 8 Installing

More information