Control Center Installation Guide for High-Availability Deployments

Control Center Installation Guide for High-Availability Deployments
Release 1.4.1
Zenoss, Inc.
www.zenoss.com

Control Center Installation Guide for High-Availability Deployments
Copyright 2017 Zenoss, Inc. All rights reserved.
Zenoss, Own IT, and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc., in the United States and other countries. All other trademarks, logos, and service marks are the property of Zenoss or other third parties. Use of these marks is prohibited without the express written consent of Zenoss, Inc., or the third-party owner.
Linux is a registered trademark of Linus Torvalds.
All other companies and products mentioned are trademarks and property of their respective owners.
Part Number: 1322.17.268
Zenoss, Inc.
11305 Four Points Drive
Bldg 1 - Suite 300
Austin, Texas 78726

Contents
About this guide
  Supported operating systems and browsers
  Documentation feedback
  Change history
Chapter 1: Planning a high-availability deployment
  About high-availability deployments
  HA cluster configuration options
  Fencing recommendations
  Master host storage requirements
  Master host resource requirements
  Key variables used in this guide
Chapter 2: Installing a master host
  Verifying candidate host resources
  Preparing the master host operating system
  Installing required software and images
  Configuring a private master NTP server
  Installing Docker
  Installing Control Center
  Configuring Docker and loading images
Chapter 3: Configuring DRBD
  Installing cluster management
  Configuring Logical Volume Manager
  Configuring DRBD
  Initializing DRBD
Chapter 4: Configuring Control Center on master host nodes
  Control Center maintenance scripts on the master host
  User access control
  Setting the host role to master
  Configuring the local Docker registry
  Configuring endpoints
  Configuring the cluster virtual IP address
  Master host configuration variables
  Universal configuration variables
Chapter 5: Cluster management software
  Creating the cluster in standby mode
  Property and resource options
  Defining resources

Chapter 6: Verification procedures
  Verifying the DRBD configuration
  Verifying the Pacemaker configuration
  Verifying the Control Center configuration
  Verifying cluster startup
  Verifying cluster failover
Chapter 7: Configuring authentication on master host nodes
  Creating new resource pools
  Adding master nodes to their resource pool
Chapter 8: Installing delegate hosts
  Verifying candidate host resources
  Delegate host storage requirements
  Installing required software and images
  Configuring NTP clients
  Preparing a delegate host
  Installing Docker
  Installing Control Center
  Configuring Docker
  Importing the ZooKeeper image for Docker
Chapter 9: Configuring and starting delegate hosts
  Control Center maintenance scripts on delegate hosts
  Enabling use of the command-line interface
  Setting the host role to delegate
  Configuring the local Docker registry
  Configuring endpoints
  Configuring the cluster virtual IP address
  Delegate host configuration variables
  Universal configuration variables
  Starting Control Center
  Delegate host authentication
Chapter 10: Configuring a ZooKeeper ensemble
  ZooKeeper and Control Center
  Understanding the configuration process
  Configuring master host nodes as a ZooKeeper node
  Configuring delegate host A as a ZooKeeper node
  Configuring delegate host B as a ZooKeeper node
  Importing the ZooKeeper image for Docker
  Starting a ZooKeeper ensemble
  Updating delegate hosts
Appendix A: Starting and stopping Control Center deployments
  Stopping Control Center
  Starting Control Center
Appendix B: Storage management utility
  serviced-storage

Appendix C: Resolving package dependency conflicts
  Resolving device mapper dependency conflicts
  Resolving other dependency conflicts
Appendix D: Control Center configuration variables
  Best practices for configuration files
  Control Center configuration file

About this guide
Control Center Installation Guide for High-Availability Deployments provides detailed procedures for installing and configuring a Control Center cluster in a high-availability deployment.

Supported operating systems and browsers
The following table identifies the supported combinations of client operating systems and web browsers.

  Client OS                                    Supported browsers
  Windows 7 and 8.1                            Internet Explorer 11*
  Windows 10                                   Internet Explorer 11*, Firefox 50 and later, Chrome 54 and later, Microsoft Edge
  Windows Server 2012 R2                       Firefox 30, Chrome 36
  Macintosh OS/X 10.9                          Firefox 30 and above, Chrome 36 and above
  Ubuntu 14.04 LTS                             Firefox 30 and above, Chrome 37 and above
  Red Hat Enterprise Linux 6.5, CentOS 6.5     Firefox 30 and above, Chrome 37 and above

  * Enterprise mode only; compatibility mode is not supported.

Documentation feedback
To provide feedback about this document, or to report an error or omission, please send an email to docs@controlcenter.io. In the email, please include the document title (Control Center Installation Guide for High-Availability Deployments) and part number (1322.17.268) and as much information as possible about the context of your feedback.

Change history
The following list associates document part numbers and the important changes to this guide since the previous release. Some of the changes involve features or content, but others do not. For information about new or changed features, refer to the Control Center Release Notes.
1322.17.268
  Update release number (1.4.1).
1322.17.242
  Add a new storage requirement, for audit logs. For more information, see Master host storage requirements on page 10.

  Change the on stanza in the DRBD configuration file (/etc/drbd.d/global_common.conf) to use hostnames instead of IP addresses.
  Remove the -y parameter from all yum command invocations.
1322.17.206
  Update the SERVICED_DOCKER_REGISTRY configuration steps to ensure the correct value is set.
1322.17.171
  Update release number (1.3.3).
  Replace Docker 1.12.1 with Docker CE 17.03.1.
  Remove the step for disabling SELinux.
1322.17.122
  Update release number (1.3.2).
  Move the step for loading Docker images to the next procedure.
1322.17.100
  Initial release (1.3.1).

Chapter 1: Planning a high-availability deployment
This chapter provides information about planning a high-availability deployment of Control Center, and about preparing to create a high-availability deployment. Before proceeding, please read the Control Center Planning Guide. For optimal results, review the contents of this guide thoroughly before performing an installation.
Note: The procedures in this guide describe how to configure a deployment that does not have internet access. You may create a deployment that does have internet access; the procedures assume that the deployment does not have access.

About high-availability deployments
Control Center can be deployed in a high-availability (HA) configuration to minimize downtime caused by the failure of hardware components or operating system services in a Control Center master host.
An HA deployment can be configured in a variety of ways, including the following:
- Active-Active, non geo-diverse
- Active-Passive, geo-diverse
- Active-Passive, non geo-diverse (works with Control Center)
The Active-Passive configuration that works with Control Center:
- uses Pacemaker, Corosync, and Distributed Replicated Block Device (DRBD) to manage the HA cluster
- includes two or more identical master hosts in the HA cluster: one primary node and one or more secondary nodes, which take over as the Control Center master host if the primary node becomes unavailable
- requires a minimum of two master hosts and two delegate hosts
- provides no protection against a facility catastrophe because the HA cluster nodes are located in the same facility
Note: Zenoss supports DRBD 8.4, not DRBD 9.0. The procedures in this guide install the latest version of release 8.4.

HA cluster configuration options
The recommended deployment architecture requires two identical dual-NIC machines for optimal disk synchronization and network responsiveness. However, you can deploy an HA cluster with single-NIC servers if two identical dual-NIC machines are not available.

Requirements for two identical dual-NIC servers
Master hosts: In a separate resource pool, you need two identical hosts in the role of Control Center master host; one host serves as the primary node and the other as the secondary node.
- Provide two NICs on the HA cluster primary node and two on the secondary node. On each node, dedicate one NIC to the network traffic that is required for disk synchronization via DRBD, and dedicate the other NIC to Control Center and application traffic. Route traffic for each NIC through separate subnets.
Delegate hosts: You need N+1 identical hosts to serve as delegate hosts, where N is the number of hosts needed to satisfy the performance and scalability requirements of the tenant. They do not need to be members of the HA cluster because, if a host becomes unavailable, Control Center restarts their services on other hosts.
- Deploy application services on dedicated Control Center delegate hosts outside the HA cluster.

Requirements for two single-NIC servers
Master host: Configure two hosts for the Control Center master host in an active/passive cluster.
- Use the two hosts only to run Control Center. Do not use them to run application services.
- For primary and secondary nodes that contain only one network-interface card (NIC), the network they use to communicate must support multicast.
Delegate hosts: You need N+1 identical hosts to serve as delegate hosts, where N is the number of hosts needed to satisfy the performance and scalability requirements of the pool. (Only the Control Center master hosts must be configured in an active/passive cluster.)

Fencing recommendations
Fencing is an automated means of isolating a node that appears to be malfunctioning, used to protect the integrity of the DRBD volumes. In a test deployment of a Control Center HA cluster, fencing is not necessary. However, on production HA clusters, fencing is a critical consideration.
Work with your IT department to implement the best fencing solution for your infrastructure. Employ a technique that ensures that a failed node in the cluster is completely stopped to avoid application conflicts or conflicts with the cluster management software.
When fencing is employed in the HA cluster, use two NICs per node.
Before you configure and enable fencing in your production environment:
- Ensure that all components are deployed.
- Verify operation of the application that Control Center is managing.
- In a controlled scenario, confirm basic cluster failover.

If a fencing method is not defined, when the cluster attempts to fail over to the backup node, the following error results:
    no method defined
Place the fencing device on the public network. (Passing heartbeat communication through a private network interface is not recommended. Doing so requires a complex fencing system that is prone to issues. For more information, see Quorum Disk documentation on the Red Hat website.)
Using a public network interface enables a healthy node to fence the unhealthy node, and prevents the unhealthy node from fencing the healthy node. If heartbeat communications pass through the public network and the link for a node goes down, the node with the down public network link cannot communicate with the fencing device.

Master host storage requirements
The following table identifies the minimum local storage requirements of Control Center master host nodes in high-availability deployments.

     Purpose                                     Minimum size           Description
  1  Root (/)                                    2-10GB (required)      Local, high-performance storage (XFS)
  2  Swap                                        12-16GB (required)     Local, high-performance storage
  3  Docker temporary                            10GB (required)        Local, high-performance storage (XFS)
  4  Control Center audit logging                10GB (configurable)    Remote, XFS-compatible storage
  5  Docker data                                 50GB (required)        Local, high-performance storage
  6  Control Center internal services data       50GB (required)        Local, high-performance storage (XFS)
  7  Application data                            200GB (suggested)      Local, high-performance storage
  8  Application metadata                        1GB (required)         Local, high-performance storage (XFS)
  9  Application data backups                    150GB (suggested)      Remote, XFS-compatible storage

Areas 1-3 can be combined in a single filesystem when the operating system is installed. Areas 5-8 must be separate real or virtual devices. This guide includes procedures for preparing areas 5-8.
The suggested minimum sizes for application data (7) and application data backups (9) should be replaced with sizes that meet your application requirements. To calculate the appropriate sizes for these storage areas, use the following guidelines:
- Application data storage includes space for both data and snapshots. The default base size for data is 100GB, and the recommended space for snapshots is 100% of the base size. Adding the two yields the suggested minimum size of 200GB.
- For application data backups, the recommended space is 150% of the base size for data. The suggested minimum size for application data backups is 150GB.
For improved reliability and performance, Zenoss strongly recommends using shared remote storage or network-attached storage for application data backups. This guide does not include instructions for mounting remote storage, but does include a step for creating mount points.
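For orientation only, a hypothetical block device layout that satisfies these areas with separate devices might be summarized as follows; the device names and sizes are illustrative assumptions, not values prescribed by this guide:

    # Hypothetical example layout (not from this guide)
    /dev/sda     80G    Root, swap, and Docker temporary (areas 1-3)
    /dev/sdb     50G    Docker data (area 5)
    /dev/sdc     50G    Control Center internal services data (area 6)
    /dev/sdd    200G    Application data (area 7)
    /dev/sde      1G    Application metadata (area 8)
    /dev/sdf    150G    Application data backups (area 9; ideally remote storage)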

Master host resource requirements
The default recommendation for multi-host deployments is to use a master host for Control Center services only. In high-availability deployments, some application services perform more reliably when they run on master host nodes. In these cases, master host nodes require additional RAM and CPU resources. Specifically, Zenoss applications include a database service that performs best on master host nodes. For more information, please contact your Zenoss representative.

Key variables used in this guide
The following tables associate key features of a high-availability deployment with variables used in this guide.

  Feature                                                        Primary node variable    Secondary node variable
  Public IP address of master node (static; known to all
  machines in the Control Center cluster)                        Primary-Public-IP        Secondary-Public-IP
  Public hostname of master node (returned by uname;
  resolves to the public IP address)                             Primary-Public-Name      Secondary-Public-Name
  Private IP address of master node (static; dual-NIC
  systems only)                                                  Primary-Private-IP       Secondary-Private-IP
  Private hostname of master node (resolves to the private
  IP address; dual-NIC systems only)                             Primary-Private-Name     Secondary-Private-Name

  Feature                                                                       Variable name
  Virtual IP address of the high-availability cluster (static; known
  enterprise-wide)                                                              HA-Virtual-IP
  Virtual hostname of the high-availability cluster (known enterprise-wide)    HA-Virtual-Name
  Mirrored storage for Control Center internal services data                   Isvcs-Storage
  Mirrored storage for application metadata                                     Metadata-Storage
  Mirrored storage for application data                                         App-Data-Storage
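To make the later procedures easier to follow, the following sketch assigns hypothetical values to these variables. The hostnames, IP addresses, and device paths are invented for illustration (and follow the hypothetical device layout shown earlier); substitute the values for your own environment.

    # Hypothetical example values (not from this guide)
    Primary-Public-IP     203.0.113.10       Secondary-Public-IP     203.0.113.11
    Primary-Public-Name   cc-master-1        Secondary-Public-Name   cc-master-2
    Primary-Private-IP    192.168.100.10     Secondary-Private-IP    192.168.100.11
    Primary-Private-Name  cc-master-1-priv   Secondary-Private-Name  cc-master-2-priv
    HA-Virtual-IP         203.0.113.20       HA-Virtual-Name         cc-master
    Isvcs-Storage         /dev/sdc           Metadata-Storage        /dev/sde
    App-Data-Storage      /dev/sdd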

Chapter 2: Installing a master host
This chapter describes how to install Control Center on a Red Hat Enterprise Linux (RHEL) or CentOS host. The candidate host must have the CPU, RAM, and storage resources required to serve as a master host node in a Control Center cluster.
Perform the procedures in this chapter on both of the candidate master host nodes of a high-availability cluster.

Verifying candidate host resources
Use this procedure to determine whether the hardware resources and installed operating system of a host are sufficient to serve as a Control Center master host.
1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.
    uname -m
  If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
  If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Determine whether the installed operating system release is supported.
    cat /etc/redhat-release
  If the result includes 7.1, 7.2, or 7.3, proceed to the next step.
  If the result does not include 7.1, 7.2, or 7.3, select a different host, and then start this procedure again.
4 Determine whether the CPU resources are sufficient.
  a Display the total number of CPU cores.
      cat /proc/cpuinfo | grep -Ec '^core id'
  b Compare the available resources with the requirements for a Control Center master host.
    For more information, refer to the Control Center Planning Guide.
5 Determine whether the CPU resources support the AES instruction set.
    cat /proc/cpuinfo | grep -Ec '^flags.*aes'

  For optimal performance, the result of the preceding commands must match the total number of CPU resources available on the host. If the result is 0, performance is severely degraded.
  If the result is 0 and the candidate host is a virtual machine, the managing hypervisor may be configured in Hyper-V compatibility mode. Check the setting and disable it, if possible, or select a different host.
6 Determine whether the available memory and swap is sufficient.
  a Display the available memory.
      free -h
  b Compare the available memory and swap space with the amount required for a master host in your deployment. For more information, see Master host storage requirements on page 10.
  If the result does not meet minimum requirements, stop this procedure and select a different host.
7 Ensure the host has a persistent numeric ID.
  Skip this step if you are installing a single-host deployment.
  Each host in a Control Center cluster must have a unique host ID, and the ID must be persistent (not change when the host reboots).
    test -f /etc/hostid || genhostid ; hostid
  Record the ID for comparison with other hosts in the cluster.
8 Verify that name resolution works on this host.
    hostname -i
  If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.
9 Add an entry to /etc/hosts for localhost, if necessary.
  a Determine whether 127.0.0.1 is mapped to localhost.
      grep 127.0.0.1 /etc/hosts | grep localhost
  b If the preceding commands return no result, perform the following substep.
    Add an entry to /etc/hosts for localhost.
      echo "127.0.0.1 localhost" >> /etc/hosts
10 Update the Linux kernel, if necessary.
  a Determine which kernel version is installed.
      uname -r
  b If the result is lower than 3.10.0-327.22.2.el7.x86_64, perform the following substep.
    Update the kernel, and then restart the host.
    The following commands require internet access or a local mirror of operating system packages.
      yum makecache fast && yum update kernel && reboot
11 Display the available block storage on the candidate host.
    lsblk -p --output=name,size,type,fstype,mountpoint

  Compare the results with the storage requirements described in Master host storage requirements on page 10.

Preparing the master host operating system
Use this procedure to prepare a RHEL/CentOS host as a Control Center master host.
1 Log in to the candidate master host as root, or as a user with superuser privileges.
2 Disable the firewall, if necessary.
  This step is required for installation but not for deployment. For more information, refer to the Control Center Planning Guide.
  a Determine whether the firewalld service is enabled.
      systemctl status firewalld.service
    If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
    If the result includes Active: active (running), the service is enabled. Perform the following substep.
  b Disable the firewalld service.
      systemctl stop firewalld && systemctl disable firewalld
    On success, the preceding commands display messages similar to the following example:
      rm '/etc/systemd/system/dbus-org.fedoraproject.firewalld1.service'
      rm '/etc/systemd/system/basic.target.wants/firewalld.service'
3 Optional: Enable persistent storage for log files, if desired.
  By default, RHEL/CentOS systems store log data only in memory or in a ring buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.
  Note: The following commands are safe when performed during an installation, before Docker or Control Center are installed or running. To enable persistent log files after installation, stop Control Center, stop Docker, and then enter the following commands.
    mkdir -p /var/log/journal && systemctl restart systemd-journald
4 Enable and start the Dnsmasq package.
  The package facilitates networking among Docker containers.
    systemctl enable dnsmasq && systemctl start dnsmasq
  If name resolution in your environment relies solely on entries in /etc/hosts, configure dnsmasq so that containers can use the file:
  a Open /etc/dnsmasq.conf with a text editor.
  b Locate the line that starts with #domain-needed, and then make a copy of the line, immediately below the original.
  c Remove the number sign character (#) from the beginning of the line.
  d Locate the line that starts with #bogus-priv, and then make a copy of the line, immediately below the original.

  e Remove the number sign character (#) from the beginning of the line.
  f Locate the line that starts with #local=/localnet/, and then make a copy of the line, immediately below the original.
  g Remove net, and then remove the number sign character (#) from the beginning of the line.
  h Locate the line that starts with #domain=example.com, and then make a copy of the line, immediately below the original.
  i Replace example.com with local, and then remove the number sign character (#) from the beginning of the line.
  j Save the file, and then close the editor.
  k Restart the dnsmasq service.
      systemctl restart dnsmasq
5 Add the required hostnames and IP addresses of both the primary and the secondary node to the /etc/hosts file.
  For a dual-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:
    echo "Primary-Public-IP Primary-Public-Name.example.com \
      Primary-Public-Name" >> /etc/hosts
    echo "Primary-Private-IP Primary-Private-Name.example.com \
      Primary-Private-Name" >> /etc/hosts
    echo "Secondary-Public-IP Secondary-Public-Name.example.com \
      Secondary-Public-Name" >> /etc/hosts
    echo "Secondary-Private-IP Secondary-Private-Name.example.com \
      Secondary-Private-Name" >> /etc/hosts
  For a single-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:
    echo "Primary-Public-IP Primary-Public-Name.example.com \
      Primary-Public-Name" >> /etc/hosts
    echo "Secondary-Public-IP Secondary-Public-Name.example.com \
      Secondary-Public-Name" >> /etc/hosts
6 Create a mount point for application data backups.
  The default mount point is /opt/serviced/var/backups. You can change the default by editing the SERVICED_BACKUPS_PATH variable in the Control Center configuration file.
    mkdir -p /opt/serviced/var/backups
7 Create a mount point for Control Center internal services data.
  The default mount point is /opt/serviced/var/isvcs. You can change the default by editing the SERVICED_ISVCS_PATH variable in the Control Center configuration file.
    mkdir -p /opt/serviced/var/isvcs
8 Create a mount point for Control Center audit logs.
  The default mount point is /var/log/serviced. You can change the default by editing the SERVICED_LOG_PATH variable in the Control Center configuration file.
    mkdir -p /var/log/serviced

9 Remove file system signatures from the required storage areas.
  Replace each variable name with the path of each storage area:
    wipefs -a Isvcs-Storage
    wipefs -a Metadata-Storage
    wipefs -a App-Data-Storage
10 Reboot the host.
    reboot

Installing required software and images
Master host nodes need cluster-management packages and Control Center software to perform their roles in a high-availability deployment. Use the procedures in the following sections to download and stage or install required software and Docker images.
Perform the procedures for each master host node.

Downloading repository and image files
To perform this procedure, you need:
- A workstation with internet access.
- Permission to download files from the File Portal - Download Zenoss Enterprise Software site. Zenoss customers can request permission by filing a ticket at the Zenoss Support site.
- A secure network copy program.
Use this procedure to
- download the required files to a workstation
- copy the files to the hosts that need them
Perform these steps:
1 In a web browser, navigate to the File Portal - Download Zenoss Enterprise Software site.
2 Log in with the account provided by Zenoss Support.
3 Download the self-installing Docker image files.
    install-zenoss-serviced-isvcs-v60.run
    install-zenoss-isvcs-zookeeper-v10.run
4 Download the Control Center RPM file.
    serviced-1.4.1-1.x86_64.rpm
5 Download a RHEL/CentOS repository mirror file.
  The download site provides a repository mirror file for each supported release of RHEL/CentOS. Each file contains the Control Center package and its dependencies.
  To download the correct repository mirror file, match the operating system release number in the file name with the version of RHEL/CentOS installed on all of the hosts in your Control Center cluster.
    yum-mirror-centos7.centos7.1-version.x86_64.rpm
    yum-mirror-centos7.centos7.2-version.x86_64.rpm
    yum-mirror-centos7.centos7.3-version.x86_64.rpm

6 Optional: Download the Pacemaker resource agents for Control Center.
  The resource agents are required only for high-availability deployments.
    serviced-resource-agents-1.1.0-1.x86_64.rpm
7 Use a secure copy program to copy the files to Control Center cluster hosts.
  - Copy all files to the master host or both master nodes (high-availability deployments).
  - Copy the RHEL/CentOS RPM file and the Control Center RPM file to all delegate hosts.
  - Copy the Docker image file for ZooKeeper to delegate hosts that are ZooKeeper ensemble nodes.

Installing the repository mirror
Use this procedure to install the Zenoss repository mirror on a Control Center host. The mirror contains packages that are required on all Control Center cluster hosts.
1 Log in to the target host as root, or as a user with superuser privileges.
2 Move the RPM files to /tmp.
3 Optional: Remove the existing repository mirror, if necessary.
  This step is not necessary during installations, only upgrades.
  a Search for the existing repository mirror.
      yum list --disablerepo=* | awk '/^yum-mirror/ { print $1}'
  b Remove the mirror. Replace Old-Mirror with the name of the Zenoss repository mirror returned in the previous substep:
      yum remove Old-Mirror
4 Install the repository mirror.
    yum install /tmp/yum-mirror-*.rpm
  The yum command copies the contents of the RPM file to /opt/zenoss-repo-mirror.
5 Copy the Control Center RPM file to the mirror directory.
    cp /tmp/serviced-1.4.1-1.x86_64.rpm \
      /opt/zenoss-repo-mirror
6 Copy the Pacemaker resource agents for Control Center to the mirror directory.
    cp /tmp/serviced-resource-agents-1.1.0-1.x86_64.rpm \
      /opt/zenoss-repo-mirror
7 Optional: Delete the RPM files.
    rm /tmp/yum-mirror-*.rpm /tmp/serviced-*.rpm

Staging Docker image files on the master host
Before performing this procedure, verify that approximately 640MB of temporary space is available on the file system where /root is located.
Use this procedure to add Docker image files to the Control Center master host. The files are used when Docker is fully configured.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Copy or move the archive files to /root.
3 Add execute permission to the files.
    chmod +x /root/*.run

Downloading and staging cluster software
To perform this procedure, you need:
- An RHEL/CentOS system with internet access and the same operating system release and kernel as the master host nodes.
- A secure network copy program.
Use this procedure to download packages for Distributed Replicated Block Device (DRBD) and Pacemaker/Corosync, and to bundle them for installation on master host nodes.
1 Log in to a compatible host that is connected to the internet as root, or as a user with superuser privileges.
  The host must have the same operating system (RHEL or CentOS) and release installed, and the same version of the Linux kernel, as the master host nodes.
2 Install yum utilities, if necessary.
  a Determine whether the yum utilities package is installed.
      rpm -qa | grep yum-utils
    If the command returns a result, the package is installed. Proceed to the next step.
    If the command does not return a result, the package is not installed. Perform the following substep.
  b Install the yum-utils package.
      yum install yum-utils
3 Add the Enterprise Linux packages repository (ELRepo), if necessary.
  a Determine whether the ELRepo repository is available.
      yum repolist | grep elrepo
    If the command returns a result, the repository is available. Proceed to the next step.
    If the command does not return a result, the repository is not available. Perform the following substeps.
  b Import the public key for the repository.
      rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
  c Add the repository to the download host.
      rpm -Uvh \
        http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
  d Clean and update the yum caches.
      yum clean all && yum makecache fast
4 Download the required packages and their dependencies, and then create a tar archive of the package files.

  a Create a temporary directory for the packages.
      mkdir /tmp/downloads
  b Download the DRBD packages to the temporary directory.
      repotrack -a x86_64 -r elrepo -p /tmp/downloads kmod-drbd84
  c Download the Corosync/Pacemaker packages to the temporary directory.
      repotrack -a x86_64 -p /tmp/downloads pcs
  d Create a tar archive of the temporary directory.
      cd /tmp && tar czf ./downloads.tgz ./downloads
5 Use a secure copy program to copy the packages archive to the /tmp directory of each master host node.

Configuring a private master NTP server
Control Center requires a common time source. The following procedure configures a private master NTP server to synchronize the system clocks of all hosts in a Control Center cluster.
Note: VMware vSphere guest systems can synchronize their system clocks with the host system. If that feature is enabled, it must be disabled to configure a private master NTP server. For more information, refer to the VMware documentation for your version of vSphere.

Installing and configuring an NTP master server
Use this procedure to configure an NTP master server on a master host node.
Note: On VMware vSphere guests, disable time synchronization between guest and host operating systems before performing this procedure.
1 Log in to the master host node as root, or as a user with superuser privileges.
2 Install the NTP package.
    yum --enablerepo=zenoss-mirror install ntp
3 Create a backup of the NTP configuration file.
    cp -p /etc/ntp.conf /etc/ntp.conf.orig
4 Edit the NTP configuration file.
  a Open /etc/ntp.conf with a text editor.
  b Replace all of the lines in the file with the following lines:
      # Use the local clock
      server 127.127.1.0 prefer
      fudge 127.127.1.0 stratum 10
      driftfile /var/lib/ntp/drift
      broadcastdelay 0.008
      # Give localhost full access rights

      restrict 127.0.0.1
      # Grant access to client hosts
      restrict ADDRESS_RANGE mask NETMASK nomodify notrap
  c Replace ADDRESS_RANGE with the range of IPv4 network addresses that are allowed to query this NTP server.
    For example, the following IP addresses are assigned to the hosts in a Control Center cluster:
      203.0.113.10
      203.0.113.11
      203.0.113.12
      203.0.113.13
    For the preceding addresses, the value for ADDRESS_RANGE is 203.0.113.0.
  d Replace NETMASK with the IPv4 network mask that corresponds with the address range.
    For example, a valid network mask for 203.0.113.0 is 255.255.255.0.
  e Save the file and exit the editor.
5 Enable and start the NTP daemon.
  a Enable the ntpd daemon.
      systemctl enable ntpd
  b Configure ntpd to start when the system starts.
    Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot, and the following commands provide a workaround to ensure that it does.
      echo "systemctl start ntpd" >> /etc/rc.d/rc.local
      chmod +x /etc/rc.d/rc.local
  c Start ntpd.
      systemctl start ntpd

Installing Docker
Use this procedure to install Docker.
1 Log in to the host as root, or as a user with superuser privileges.
2 Install Docker CE 17.03.1 from the local repository mirror.
  a Install Docker CE.
      yum install --enablerepo=zenoss-mirror docker-ce-17.03.1.ce
    If yum returns an error due to dependency issues, see Resolving package dependency conflicts on page 95 for potential resolutions.
  b Enable automatic startup.
      systemctl enable docker
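As an optional check that is not part of the original procedure, you can confirm the installed Docker release before continuing; the command below only queries the client binary and does not require the Docker daemon to be running:

    # Optional verification (not in the original procedure)
    docker --version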

Installing Control Center
Use this procedure to install Control Center.
1 Log in to the master host as root, or as a user with superuser privileges.
2 Install Control Center 1.4.1 from the local repository mirror.
  a Clean the yum cache and update repository metadata.
      yum clean all && yum makecache fast
  b Install Control Center.
      yum install --enablerepo=zenoss-mirror \
        /opt/zenoss-repo-mirror/serviced-1.4.1-1.x86_64.rpm
3 Disable automatic startup of Control Center.
  The cluster management software controls the serviced service.
    systemctl disable serviced
4 Make a backup copy of the Control Center configuration file.
  a Make a copy of /etc/default/serviced.
      cp /etc/default/serviced /etc/default/serviced-1.4.1-orig
  b Set the backup file permissions to read-only.
      chmod 0440 /etc/default/serviced-1.4.1-orig
5 Add a drop-in file for the NFS service.
  This step is a workaround for an unresolved issue.
  a Create a directory for the drop-in file.
      mkdir -p /etc/systemd/system/nfs-server.service.d
  b Create the drop-in file.
      cat <<EOF > /etc/systemd/system/nfs-server.service.d/nfs-server.conf
      [Unit]
      Requires=
      Requires= network.target proc-fs-nfsd.mount rpcbind.service
      Requires= nfs-mountd.service
      EOF
  c Reload the systemd manager configuration.
      systemctl daemon-reload

Configuring Docker and loading images
Use this procedure to configure Docker and load images into the local repository.
1 Log in to the master host as root, or as a user with superuser privileges.
2 Create a symbolic link for the Docker temporary directory.

  Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
  a Create the docker directory in /var/lib.
      mkdir /var/lib/docker
  b Create the link to /tmp.
      ln -s /tmp /var/lib/docker/tmp
3 Create a systemd drop-in file for Docker.
  a Create the override directory.
      mkdir -p /etc/systemd/system/docker.service.d
  b Create the unit drop-in file.
      cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
      [Service]
      TimeoutSec=300
      EnvironmentFile=-/etc/sysconfig/docker
      ExecStart=
      ExecStart=/usr/bin/dockerd \$OPTIONS
      TasksMax=infinity
      EOF
  c Reload the systemd manager configuration.
      systemctl daemon-reload
4 Create an LVM thin pool for Docker data.
  For more information about the serviced-storage command, see serviced-storage on page 91.
  To use an entire block device or partition for the thin pool, replace Device-Path with the device path:
    serviced-storage create-thin-pool docker Device-Path
  To use 50GB of an LVM volume group for the thin pool, replace Volume-Group with the name of an LVM volume group:
    serviced-storage create-thin-pool --size=50g docker Volume-Group
  On success, the result is the device mapper name of the thin pool, which always starts with /dev/mapper.
5 Configure and start the Docker service.
  a Create a variable for the name of the Docker thin pool.
    Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:
      myPool="Thin-Pool-Device"
  b Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems.
      myDriver="--storage-driver devicemapper"
      myLog="--log-level=error"

      myFix="--exec-opt native.cgroupdriver=cgroupfs"
      myMount="--storage-opt dm.mountopt=discard"
      myFlag="--storage-opt dm.thinpooldev=$myPool"
  c Add the arguments to the Docker configuration file.
      echo 'OPTIONS="'$myLog $myDriver $myFix $myMount $myFlag'"' \
        >> /etc/sysconfig/docker
  d Start or restart Docker.
      systemctl restart docker
    The startup may take up to a minute, and may fail. If startup fails, repeat the restart command.
6 Configure name resolution in containers.
  Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
  a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.
      ip addr show docker0 | grep inet
  b Open /etc/sysconfig/docker in a text editor.
  c Add the following flags to the end of the OPTIONS declaration.
    Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge:
      --dns=Bridge-Subnet --bip=Bridge-Subnet/16
    For example, if the bridge subnet is 172.17.0.1, add the following flags:
      --dns=172.17.0.1 --bip=172.17.0.1/16
    Note: Use a space character ( ) to separate flags, and make sure the double quote character (") delimits the declaration of OPTIONS.
  d Save the file, and then close the editor.
  e Restart the Docker service.
      systemctl restart docker
7 Import the Control Center images into the local Docker repository.
  The images are contained in the self-extracting archive files that are staged in /root.
  a Change directory to /root.
      cd /root
  b Extract the images.
      for image in install-zenoss-*.run
      do
        /bin/echo -n "$image: "
        ./$image
      done

    Image extraction begins when you press y. If you press y and then Enter, the current image is extracted, but the next one is not.
  c Optional: Delete the archive files, if desired.
      rm -i ./install-zenoss-*.run
8 Stop and disable the Docker service.
  The cluster management software controls the Docker service.
    systemctl stop docker && systemctl disable docker
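For reference, after steps 5 and 6 of the preceding procedure the OPTIONS declaration in /etc/sysconfig/docker ends up as a single line. The following is a hypothetical result that assumes a thin pool device named /dev/mapper/docker-docker--pool and the 172.17.0.1 bridge subnet used in the example above; your thin pool name and subnet will differ:

    # Hypothetical example only; substitute your own thin pool device and bridge subnet
    OPTIONS="--log-level=error --storage-driver devicemapper --exec-opt native.cgroupdriver=cgroupfs --storage-opt dm.mountopt=discard --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --dns=172.17.0.1 --bip=172.17.0.1/16"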

Chapter 3: Configuring DRBD
The procedures in this chapter configure LVM and DRBD for a two-node high-availability cluster. The following list identifies the assumptions that inform the DRBD resource definition for Control Center:
- Each node has either one or two NICs. In dual-NIC hosts the private IP/hostnames are reserved for DRBD traffic. This is a recommended configuration, which enables real-time writes for disk synchronization between the active and passive nodes, and no contention with application traffic. However, it is possible to use DRBD with a single NIC.
- The default port number for DRBD traffic is 7789.
- All volumes should synchronize and fail over together. This is accomplished by creating a single resource definition.
- DRBD stores its metadata on each volume (meta-disk/internal), so the total amount of space reported on the logical device /dev/drbdn is always less than the amount of physical space available on the underlying primary partition.
- The syncer/rate key controls the rate, in bytes per second, at which DRBD synchronizes disks. Set the rate to 30% of the available replication bandwidth, which is the slowest of either the I/O subsystem or the network interface. The following example assumes 100MB/s available for total replication bandwidth (0.30 * 100MB/s = 30MB/s).

Installing cluster management
Perform this procedure to install the cluster management software.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, extract and install the cluster management packages.
  a Extract the packages.
      cd /tmp && tar xzf ./downloads.tgz
  b Install the packages.
      yum install ./downloads/*.rpm
4 On both nodes, install the Pacemaker resource agent for Control Center.

  Pacemaker uses resource agents (scripts) to implement a standardized interface for managing arbitrary resources in a cluster. Zenoss provides a Pacemaker resource agent to manage the Control Center master host.
    yum install \
      /opt/zenoss-repo-mirror/serviced-resource-agents-1.1.0-1.x86_64.rpm
5 On both nodes, start and enable the PCS daemon.
    systemctl start pcsd.service && systemctl enable pcsd.service
6 On both nodes, set the password of the hacluster account.
  The Pacemaker package creates the hacluster user account, which must have the same password on both nodes.
    passwd hacluster

Configuring Logical Volume Manager
Control Center application data is managed by a device mapper thin pool created with Logical Volume Manager (LVM). This procedure adjusts the LVM configuration for mirroring by DRBD.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Create a backup copy of the LVM configuration file.
    cp -p /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
3 Open /etc/lvm/lvm.conf with a text editor.
4 Edit the devices/filter configuration option to exclude the partition for Control Center application data.
  a Search for the following text, which marks the beginning of the section about the devices/filter configuration option:
      # Configuration option devices/filter
  b At the end of the section, remove the comment character (#) from the beginning of the following line:
      # filter = [ "a|.*/|" ]
    The line to edit is about 30 lines below the beginning of the section.
  c Exclude the partition for Control Center application data.
    Replace App-Data-Storage with the path of the block storage designated for Control Center application data:
      filter = ["r|App-Data-Storage|"]
    For example, if the value of App-Data-Storage in your environment is /dev/sdd, the result should look like the following line:
      filter = ["r|/dev/sdd|"]
5 Edit the devices/write_cache_state configuration option to disable caching.

  a Search for the following text, which marks the beginning of the section about the devices/write_cache_state configuration option:
      # Configuration option devices/write_cache_state
  b Set the value of the write_cache_state option to 0.
    The result should look like the following line:
      write_cache_state = 0
6 Edit the global/use_lvmetad configuration option to disable the metadata daemon.
  a Search for the following text, which marks the beginning of the section about the global/use_lvmetad configuration option:
      # Configuration option global/use_lvmetad
    The line to edit is about 27 lines below the beginning of the section.
  b Set the value of the use_lvmetad option to 0.
    The result should look like the following line:
      use_lvmetad = 0
7 Save the file and close the text editor.
8 Delete any stale cache entries.
    rm -f /etc/lvm/cache/.cache
9 Restart the host.
    reboot

Configuring DRBD
This procedure configures DRBD for deployments with either one or two NICs in each node.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, identify the storage areas to use.
    lsblk --output=name,size
  Record the paths of the storage areas in the following table. The information is needed in subsequent steps and procedures.

    Node        Isvcs-Storage        Metadata-Storage        App-Data-Storage

4 On both nodes, edit the DRBD configuration file.
  a Open /etc/drbd.d/global_common.conf with a text editor.

  b Add the following values to the global and common/net sections of the file.
      global {
         usage-count yes;
      }
      common {
         net {
            protocol C;
         }
      }
  c Save the file, and then close the editor.
5 On both nodes, create a resource definition for Control Center.
  a Open /etc/drbd.d/serviced-dfs.res with a text editor.
  b For a dual-NIC system, add the following content to the file.
    Replace the variables in the content with the actual values for your environment:
      resource serviced-dfs {
         volume 0 {
            device /dev/drbd0;
            disk Isvcs-Storage;
            meta-disk internal;
         }
         volume 1 {
            device /dev/drbd1;
            disk Metadata-Storage;
            meta-disk internal;
         }
         volume 2 {
            device /dev/drbd2;
            disk App-Data-Storage;
            meta-disk internal;
         }
         syncer {
            rate 30M;
         }
         net {
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
         }
         on Primary-Public-Name {
            address Primary-Private-IP:7789;
         }
         on Secondary-Public-Name {
            address Secondary-Private-IP:7789;
         }
      }
  c For a single-NIC system, add the following content to the file.
    Replace the variables in the content with the actual values for your environment:
      resource serviced-dfs {
         volume 0 {
            device /dev/drbd0;
            disk Isvcs-Storage;
            meta-disk internal;
         }
         volume 1 {

            device /dev/drbd1;
            disk Metadata-Storage;
            meta-disk internal;
         }
         volume 2 {
            device /dev/drbd2;
            disk App-Data-Storage;
            meta-disk internal;
         }
         syncer {
            rate 30M;
         }
         net {
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
         }
         on Primary-Public-Name {
            address Primary-Public-IP:7789;
         }
         on Secondary-Public-Name {
            address Secondary-Public-IP:7789;
         }
      }
  d Save the file, and then close the editor.
6 On both nodes, create device metadata and enable the new DRBD resource.
    drbdadm create-md all && drbdadm up all

Initializing DRBD
Perform this procedure to initialize DRBD and the mirrored storage areas.
Note: Unlike the preceding procedures, most of the steps in this procedure are performed on the primary node only.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 Synchronize the storage areas of both nodes.
  a Start the synchronization.
      drbdadm primary --force serviced-dfs
    The command may return right away, while the synchronization process continues running in the background. Depending on the sizes of the storage areas, this process can take several hours.
  b Monitor the progress of the synchronization.
      drbd-overview
    Do not proceed until the status is UpToDate/UpToDate, as in the following example output:
      0:serviced-dfs/0 Connected Primary/Secondary UpToDate/UpToDate
      1:serviced-dfs/1 Connected Primary/Secondary UpToDate/UpToDate
      2:serviced-dfs/2 Connected Primary/Secondary UpToDate/UpToDate

    The Primary/Secondary values show that the command was run on the primary node; otherwise, the values are Secondary/Primary. Likewise, the first value in the UpToDate/UpToDate field is the status of the node on which the command is run, and the second value is the status of the remote node.
3 Format the block storage for Control Center internal services data and for Control Center metadata.
  The following commands use the paths of the DRBD devices defined previously, not the block storage paths.
    mkfs.xfs /dev/drbd0
    mkfs.xfs /dev/drbd1
  The commands create XFS filesystems on the primary node, and DRBD mirrors the filesystems to the secondary node.
4 Create a device mapper thin pool for Control Center application data.
  The following command uses the path of the DRBD device defined previously, not the block storage path.
    serviced-storage -v create-thin-pool serviced /dev/drbd2
  On success, DRBD mirrors the device mapper thin pool to the secondary node.
5 Identify the size of the thin pool for application data.
  The size is required to set an accurate value for the SERVICED_DM_BASESIZE variable.
    lvs --options=lv_name,lv_size | grep serviced-pool
6 Configure Control Center storage variables.
  a Open /etc/default/serviced in a text editor.
  b Locate the line for the SERVICED_FS_TYPE variable, and then make a copy of the line, immediately below the original.
  c Remove the number sign character (#) from the beginning of the line.
  d Locate the line for the SERVICED_DM_THINPOOLDEV variable, and then make a copy of the line, immediately below the original.
  e Remove the number sign character (#) from the beginning of the line.
  f Set the value to the device mapper name of the thin pool for application data.
      SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool
  g Locate the line for the SERVICED_DM_BASESIZE variable, and then make a copy of the line, immediately below the original.
  h Remove the number sign character (#) from the beginning of the line.
  i Change the value, if necessary. Replace Fifty-Percent with the value that is less than or equal to 50% of the size of the thin pool for application data. Include the symbol for gigabytes, G:
      SERVICED_DM_BASESIZE=Fifty-PercentG
  j Save the file, and then close the editor.
7 In a separate window, log in to the secondary node as root, or as a user with superuser privileges, and then replicate the Control Center configuration on the secondary node.
  Use a utility like sum to compare the files and ensure their contents are identical.
8 On the primary node, monitor the progress of the synchronization.
    drbd-overview

  Note: Do not proceed until synchronization is complete.
9 On the primary node, deactivate the serviced volume group.
    vgchange -an serviced
10 On both nodes, stop DRBD.
    drbdadm down all
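As a worked example for step 6 of the preceding procedure, using hypothetical numbers: if lvs reports that serviced-pool is 200GB, then 50% of the pool is 100GB, and the resulting line in /etc/default/serviced would be:

    # Example only: 50% of a hypothetical 200GB thin pool
    SERVICED_DM_BASESIZE=100G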

Chapter 4: Configuring Control Center on master host nodes
This chapter includes the procedures for configuring Control Center on master host nodes, and describes the configuration options that apply to master hosts.
Many configuration choices depend on application requirements. Please review your application documentation before configuring Control Center.
Note: Perform the procedures in this chapter on the primary node and the secondary node.
This chapter includes synopses of the configuration variables that affect the master host. For more information about a variable, see Control Center configuration variables on page 98.

Control Center maintenance scripts on the master host
The scripts in the following list are installed when Control Center is installed, and are started either daily or weekly by anacron.
/etc/cron.hourly/serviced
  This script invokes logrotate hourly, to manage the files in /var/log/serviced.
  This script is required on the master host only.
/etc/cron.daily/serviced
  This script invokes logrotate daily, to manage the /var/log/serviced.access.log file.
  This script is required on the master host and on all delegate hosts.
/etc/cron.weekly/serviced-fstrim
  This script invokes fstrim weekly, to reclaim unused blocks in the application data thin pool.
  The life span of a solid-state drive (SSD) degrades when fstrim is run too frequently. If the block storage of the application data thin pool is an SSD, you can reduce the frequency at which this script is invoked, as long as the thin pool never runs out of free space. An identical copy of this script is located in /opt/serviced/bin.
  This script is required on the master host only.
/etc/cron.weekly/serviced-zenossdbpack
  This script invokes a serviced command weekly, which in turn invokes the database maintenance script for a Zenoss application. If the application is not installed or is offline, the command fails.
  This script is required on the master host only.
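As an optional check that is not described in the original text, you can confirm that these maintenance scripts are present on a master host node by listing them:

    # Optional verification (not in the original procedure)
    ls -l /etc/cron.hourly/serviced /etc/cron.daily/serviced \
      /etc/cron.weekly/serviced-fstrim /etc/cron.weekly/serviced-zenossdbpack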

User access control
Control Center provides a browser interface and a command-line interface.
To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. In addition, users must be members of the Control Center browser interface access group, which by default is the system group, wheel. To enhance security, you may change the browser interface access group from wheel to any other group.
To use the Control Center command-line interface (CLI) on a Control Center cluster host, a user must have a login account on the host, and the account must be a member of the serviced group. The serviced group is created when the Control Center RPM package is installed.
Note: Control Center supports using two different groups to control access to the browser interface and the CLI. You can enable access to both interfaces for the same users by choosing the serviced group as the browser interface access group.
Pluggable Authentication Modules (PAM) is supported and recommended for enabling access to both the browser interface and the command-line interface. However, the PAM configuration must include the sudo service. Control Center relies on the host's sudo configuration, and if no configuration is present, PAM defaults to the configuration for other, which is typically too restrictive for Control Center users. For more information about configuring PAM, refer to your operating system documentation.

Adding users to the default browser interface access group
Use this procedure to add users to the default browser interface access group of Control Center, wheel.
Note: Perform this procedure or the next procedure, but not both.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add a user to the wheel group.
  Replace User with the name of a login account on the master host.
    usermod -aG wheel User
  Repeat the preceding command for each user to add.

Configuring a regular group as the Control Center browser interface access group
Use this procedure to change the default browser interface access group of Control Center from wheel to a non-system group.
The following Control Center variables are used in this procedure:
SERVICED_ADMIN_GROUP
  Default: wheel
  The name of the Linux group on the serviced master host whose members are authorized to use the serviced browser interface. You may replace the default group with a group that does not have superuser privileges.
SERVICED_ALLOW_ROOT_LOGIN
  Default: 1 (true)