Zenoss Resource Manager Installation Guide


Zenoss Resource Manager Installation Guide
Release
Zenoss, Inc.

Zenoss Resource Manager Installation Guide

Copyright 2016 Zenoss, Inc. All rights reserved.

Zenoss and the Zenoss logo are trademarks or registered trademarks of Zenoss, Inc., in the United States and other countries. All other trademarks, logos, and service marks are the property of Zenoss or other third parties. Use of these marks is prohibited without the express written consent of Zenoss, Inc., or the third-party owner.

Amazon Web Services, AWS, and EC2 are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.
Flash is a registered trademark of Adobe Systems Incorporated.
Oracle, the Oracle logo, Java, and MySQL are registered trademarks of the Oracle Corporation and/or its affiliates.
Linux is a registered trademark of Linus Torvalds.
RabbitMQ is a trademark of VMware, Inc.
SNMP Informant is a trademark of Garth K. Williams (Informant Systems, Inc.).
Sybase is a registered trademark of Sybase, Inc.
Tomcat is a trademark of the Apache Software Foundation.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
Windows is a registered trademark of Microsoft Corporation in the United States and other countries.
All other companies and products mentioned are trademarks and property of their respective owners.

Part Number:

Zenoss, Inc.
Four Points Drive
Bldg 1 - Suite 300
Austin, Texas

Contents

About this guide... 5

Part I: Customized deployments... 7

Chapter 1: Installing on hosts with internet access... 8
  Installing a master host... 8
  Installing resource pool hosts
  ZooKeeper ensemble configuration
  Adding hosts to the default resource pool... 33
  Deploying Resource Manager

Chapter 2: Installing without internet access... 35
  Installing a master host
  Starting Control Center... 48
  Isolating the master host in a separate resource pool
  Installing resource pool hosts
  ZooKeeper ensemble configuration
  Adding hosts to the default resource pool... 63
  Deploying Resource Manager

Part II: High-availability deployments

Chapter 1: Creating a high-availability deployment with internet access
  Master host storage requirements
  Key variables used in this chapter
  Control Center on the master nodes... 67
  Initializing DRBD... 81
  Cluster management software... 83
  Verification procedures... 88
  Creating new resource pools... 93
  Adding master nodes to their resource pool
  Control Center on resource pool hosts
  Deploying Resource Manager
  ZooKeeper ensemble configuration

Chapter 2: Creating a high-availability deployment without internet access
  Master host storage requirements
  Key variables used in this chapter
  Downloading files for offline installation

  Control Center on the master nodes
  Initializing DRBD
  Cluster management software
  Verification procedures
  Creating new resource pools
  Adding master nodes to their resource pool
  Control Center on resource pool hosts
  Deploying Resource Manager
  ZooKeeper ensemble configuration

Part III: Appliance deployments

Chapter 1: Installing a Control Center master host
  Creating a virtual machine
  Configuring the Control Center host mode
  Edit a connection
  Set system hostname
  Adding the master host to a resource pool
  Deploying Resource Manager

Chapter 2: Adding storage for backups
  Mounting a remote file system for backups
  Identifying existing virtual disks
  Identifying new virtual disks
  Creating primary partitions
  Preparing a partition for backups

Chapter 3: Installing resource pool hosts
  Creating a virtual machine
  Configuring the virtual machine mode
  Edit a connection
  Set system hostname
  Editing the /etc/hosts file

Chapter 4: Configuring a multi-host Control Center cluster
  ZooKeeper ensemble configuration
  Enabling NTP on Microsoft Hyper-V guests
  Adding hosts to the default resource pool

About this guide

Zenoss Resource Manager Installation Guide provides detailed procedures for installing Zenoss Resource Manager (Resource Manager).

Note: Zenoss strongly recommends reviewing the Zenoss Resource Manager Planning Guide carefully before using this guide.

Related publications

  Zenoss Resource Manager Administration Guide: Provides an overview of Resource Manager architecture and features, as well as procedures and examples to help use the system.
  Zenoss Resource Manager Configuration Guide: Provides required and optional configuration procedures for Resource Manager, to prepare your deployment for monitoring in your environment.
  Zenoss Resource Manager Installation Guide: Provides detailed information and procedures for creating deployments of Control Center and Resource Manager.
  Zenoss Resource Manager Planning Guide: Provides both general and specific information for preparing to deploy Resource Manager.
  Zenoss Resource Manager Release Notes: Describes known issues, fixed issues, and late-breaking information not already provided in the published documentation set.
  Zenoss Resource Manager Upgrade Guide: Provides detailed information and procedures for upgrading deployments of Resource Manager.

Additional information and comments

If you have technical questions about this product that are not answered in this guide, please visit the Zenoss Support site or contact Zenoss Support.

Zenoss welcomes your comments and suggestions regarding our documentation. To share your comments, please send an email to docs@zenoss.com. In the email, include the document title and part number. The part number appears at the end of the list of trademarks, at the front of this guide.

Change history

The following list associates document part numbers and the important changes to this guide since the previous release. Some of the changes involve features or content, but others do not. For information about new or changed features, refer to the Zenoss Resource Manager Release Notes.

  Update release numbers.
  Update release numbers.
  Update release numbers.

  Update release numbers.
  Refine the procedure for creating the application data thin pool.
  Add support for Resource Manager.
  Add a substep to create the docker override directory.
  Add this document change history.
  Add chapters describing how to install the Resource Manager appliance. Chapters are organized into parts.
  Docker configuration steps now add the storage driver flag (-s devicemapper) to the /etc/sysconfig/docker file.
  Docker needs a longer startup timeout value, to work around a known Docker issue with the devicemapper driver. Docker configuration steps now include adding TimeoutSec=300.
  Rather than editing /lib/systemd/system/docker.service, Docker configuration steps now include adding a systemd override file.
  Add a symlink to /tmp in /var/lib/docker.
  Update the commands for starting and testing a ZooKeeper ensemble.
  Add a procedure for updating the SERVICED_ZK value on resource pool hosts that are not members of a ZooKeeper ensemble.
  Add a reference topic for the ZooKeeper variables required on hosts in a Control Center cluster.
  Add procedures for configuring an NTP server and clients for offline deployments.
  Add a step to install the Nmap Ncat package, which is used to check ZooKeeper ensemble status.
  Planning information is now in the Zenoss Resource Manager Planning Guide. Information about how to start and configure Resource Manager is now in the Zenoss Resource Manager Configuration Guide. New procedures are included, for installing without internet access, and for installing high-availability deployments.

Part I: Customized deployments

The chapters in this part describe how to install Control Center and Resource Manager on real or virtual hosts, with or without internet access. The instructions include the full range of options for customizing your deployment for your environment.

Chapter 1: Installing on hosts with internet access

The procedures in this chapter install Control Center and Resource Manager on one or more Red Hat Enterprise Linux (RHEL) 7.1 or 7.2 hosts, or one or more CentOS 7.1 or 7.2 hosts. To use the procedures in this chapter, all Control Center cluster hosts must have internet access.

You may create a single-host or a multi-host deployment. For production use, Zenoss strongly recommends creating a multi-host deployment that includes a minimum of three real or virtual machines. For more information about deploying Control Center and Resource Manager, refer to the Zenoss Resource Manager Planning Guide.

Note: For optimal results, review this chapter thoroughly before starting the installation process.

Installing a master host

Perform the procedures in this section to install Control Center and Resource Manager on a master host.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center master host.

1 Log in to the candidate host as root, or as a user with superuser privileges.

2 Verify that the host implements the 64-bit version of the x86 instruction set.

   uname -m

  If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
  If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.

3 Verify that name resolution works on this host.

   hostname -i

  If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.

4 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

   hostid

5 Determine whether the available, unused storage is sufficient.
  a Display the available storage devices.

      lsblk --output=name,size

  b Compare the available storage with the amount required for a Control Center master host. For more information, refer to the Zenoss Resource Manager Planning Guide.

6 Determine whether the available memory and swap is sufficient.
  a Display the available memory.

      free -h

  b Compare the available memory with the amount required for a master host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.

7 Update the operating system, if necessary.
  a Determine which release is installed.

      cat /etc/redhat-release

    If the result includes 7.0, perform the following substeps.
  b Update the operating system.

      yum update -y

  c Restart the system.

      reboot

Preparing storage for the master host

In addition to the storage required for its operating system, a Control Center master host requires the following storage areas:

  A local partition for Docker data, configured as a device mapper thin pool.
  A local partition for Control Center internal services data, formatted with the XFS file system.
  Note: Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.
  A local or remote primary partition for Resource Manager data, configured as a device mapper thin pool.
  A local primary partition, a remote primary partition, or a remote file server, for backups of Resource Manager data. The local or remote primary partition is formatted with the XFS file system. A remote file server must provide a file system that is compatible with XFS.
  Note: If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

For storage sizing information, refer to the Zenoss Resource Manager Planning Guide. For device mapper thin pools, no formatting is required; simply create primary partitions, which are configured in subsequent procedures. For more information, refer to the Zenoss Resource Manager Planning Guide.
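For reference, the following is a hypothetical lsblk listing for a master host that satisfies these storage requirements. The device names and sizes are illustrative only and vary by environment: here sda holds the operating system, and sdb, sdc, and sdd are unpartitioned devices available for the Docker thin pool, the internal services file system, and the Resource Manager thin pool.

   NAME   SIZE
   sda    100G
   sda1     1G
   sda2    99G
   sdb     50G
   sdc     50G
   sdd    200G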

To create the required storage, perform the following procedures.

Note: Data present on the primary partitions you select is destroyed in these procedures. Please ensure that the data is backed up elsewhere, or no longer needed, before proceeding.

Creating a file system for internal services

This procedure creates an XFS file system on a primary partition. For more information about primary partitions, refer to the Zenoss Resource Manager Planning Guide.

Note: Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.

1 Log in to the target host as root, or as a user with superuser privileges.

2 Identify the target primary partition for the file system to create.

   lsblk --output=name,size,type,fstype,mountpoint

  For more information about the output of the lsblk command, and about creating primary partitions, refer to the Zenoss Resource Manager Planning Guide.

3 Create an XFS file system. Replace Partition with the path of the target primary partition:

   mkfs -t xfs Partition

4 Add an entry to the /etc/fstab file. Replace Partition with the path of the primary partition used in the previous step:

   echo "Partition \
     /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab

5 Create the mount point for internal services data.

   mkdir -p /opt/serviced/var/isvcs

6 Mount the file system, and then verify that it mounted correctly.

   mount -a && mount | grep isvcs

  Example result:

   /dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Creating a file system for backups

To perform this procedure, you need a host with at least one unused primary partition, or a remote file server.

The Control Center master host requires local or remote storage space for backups of Control Center data. This procedure includes steps to create an XFS file system on a primary partition, if necessary, and steps to mount a file system for backups. For more information about primary partitions, refer to the Zenoss Resource Manager Planning Guide.

Note: If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

1 Log in to the target host as root, or as a user with superuser privileges.

2 Optional: Identify the target primary partition for the file system to create, if necessary. Skip this step if you are using a remote file server.

   lsblk --output=name,size,type,fstype,mountpoint

  For more information about the output of the lsblk command, and about creating primary partitions, refer to the Zenoss Resource Manager Planning Guide.

3 Optional: Create an XFS file system, if necessary. Skip this step if you are using a remote file server. Replace Partition with the path of the target primary partition:

   mkfs -t xfs Partition

4 Create an entry in the /etc/fstab file. Replace File-System-Specification with one of the following values:

  the path of the primary partition used in the previous step
  the remote server specification

   echo "File-System-Specification \
     /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab

5 Create the mount point for backup data.

   mkdir -p /opt/serviced/var/backups

6 Mount the file system, and then verify that it mounted correctly.

   mount -a && mount | grep backups

  Example result:

   /dev/sdb3 on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Preparing the master host operating system

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center master host.

1 Log in to the candidate master host as root, or as a user with superuser privileges.

2 Add an entry to /etc/hosts for localhost, if necessary.
  a Determine whether 127.0.0.1 is mapped to localhost.

      grep 127.0.0.1 /etc/hosts | grep localhost

    If the preceding commands return no result, perform the following substep.
  b Add an entry to /etc/hosts for localhost.

      echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
  a Determine whether the firewalld service is enabled.

      systemctl status firewalld.service

    If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
    If the result includes Active: active (running), the service is enabled. Perform the following substep.
  b Disable the firewalld service.

      systemctl stop firewalld && systemctl disable firewalld

    On success, the preceding commands display messages similar to the following example:

      rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
      rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired. By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

   mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
  a Determine whether SELinux is installed.

      test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

    If the preceding commands return a result, SELinux is installed.
  b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
  c Confirm the new setting.

      grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

   systemctl enable dnsmasq && systemctl start dnsmasq

7 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations. If you are installing a single-host deployment, skip this step.

   yum install -y nmap-ncat

8 Install and configure the NTP package.
  a Install the package.

      yum install -y ntp

  b Set the system time.

      ntpd -gq

  c Enable the ntpd daemon.

      systemctl enable ntpd

  d Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot. The following commands provide a workaround to ensure that it does.

      echo "systemctl start ntpd" >> /etc/rc.d/rc.local
      chmod +x /etc/rc.d/rc.local

9 Install the Zenoss repository package.
  a Install the package.

      rpm -ivh http://get.zenoss.io/yum/zenoss-repo-1-1.x86_64.rpm

  b Clean out the yum cache directory.

      yum clean all

10 Reboot the host.

   reboot
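After the reboot, it can be useful to spot-check the results of this procedure before continuing. The following commands are not part of the original procedure; they are a quick verification sketch, and the expected results in the comments assume the preceding steps completed successfully.

   systemctl is-active firewalld    # expect: inactive or unknown
   getenforce                       # expect: Disabled
   systemctl is-enabled dnsmasq     # expect: enabled
   systemctl is-active ntpd         # expect: active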

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Install Docker 1.9.0, and then disable accidental upgrades.
  a Add the Docker repository to the host's repository list.

      cat > /etc/yum.repos.d/docker.repo <<-EOF
      [dockerrepo]
      name=Docker Repository
      baseurl=https://yum.dockerproject.org/repo/main/centos/7
      enabled=1
      gpgcheck=1
      gpgkey=https://yum.dockerproject.org/gpg
      EOF

  b Install Docker.

      yum clean all && yum makecache fast
      yum install -y docker-engine

  c Open /etc/yum.repos.d/docker.repo with a text editor.
  d Change the value of the enabled key from 1 to 0.
  e Save the file and close the text editor.

3 Create a symbolic link for the Docker temporary directory. Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
  a Create the docker directory in /var/lib.

      mkdir /var/lib/docker

  b Create the link to /tmp.

      ln -s /tmp /var/lib/docker/tmp

4 Create a systemd override file for the Docker service definition.
  a Create the override directory.

      mkdir -p /etc/systemd/system/docker.service.d

  b Create the override file.

      cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
      [Service]
      TimeoutSec=300
      EnvironmentFile=-/etc/sysconfig/docker
      ExecStart=
      ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
      EOF

  c Reload the systemd manager configuration.

      systemctl daemon-reload

5 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

   yum clean all && yum makecache fast
   yum --enablerepo=zenoss-stable install -y serviced

6 Create a device mapper thin pool for Docker data.
  a Identify the primary partition for the thin pool to create.

      lsblk --output=name,size,type,fstype,mountpoint

  b Create the thin pool. Replace Path-To-Device with the path of an unused primary partition:

      serviced-storage create-thin-pool docker Path-To-Device

    On success, the result includes the name of the thin pool, which always starts with /dev/mapper. (An example appears after this procedure.)

7 Configure and start the Docker service.
  a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

      myDriver="-s devicemapper"
      myFix="--exec-opt native.cgroupdriver=cgroupfs"
      myFlag="--storage-opt dm.thinpooldev"
      myPool="Thin-Pool-Device"

  b Add the arguments to the Docker configuration file.

      echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
        >> /etc/sysconfig/docker

  c Start or restart Docker.

      systemctl restart docker

    The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.

8 Configure name resolution in containers. Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
  a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

      ip addr show docker0 | grep inet

  b Open /etc/sysconfig/docker in a text editor.
  c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

      --dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask

    For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.

    Note: Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.

  d Restart the Docker service.

      systemctl restart docker
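As an illustration of the results of steps 6 through 8, the following sketch shows a hypothetical thin pool creation and the resulting OPTIONS declaration in /etc/sysconfig/docker. The device path, thin pool name, and bridge subnet are examples only; use the values reported on your own host.

   serviced-storage create-thin-pool docker /dev/sdb1
   /dev/mapper/docker-docker--pool

   OPTIONS="-s devicemapper --exec-opt native.cgroupdriver=cgroupfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --dns=172.17.0.1 --bip=172.17.0.1/16"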

Installing Resource Manager

This procedure installs Resource Manager and configures the NFS server.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Install Resource Manager.

   yum --enablerepo=zenoss-stable install -y zenoss-resmgr-service

3 Authenticate to the Docker Hub repository. Replace User and Email with the values associated with your Docker Hub account.

   docker login -u User -e Email

  The docker command prompts you for your Docker Hub password, and saves a hash of your credentials in the $HOME/.dockercfg file (root user account).

4 Configure and restart the NFS server. Currently, an unresolved issue prevents the NFS server from starting correctly. The following commands provide a workaround to ensure that it does.
  a Open /lib/systemd/system/nfs-server.service with a text editor.
  b Change rpcbind.target to rpcbind.service on the following line:

      Requires= network.target proc-fs-nfsd.mount rpcbind.target

  c Reload the systemd manager configuration.

      systemctl daemon-reload

Configuring Control Center

This procedure creates a thin pool for application data and customizes key configuration variables of Control Center.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Configure Control Center to serve as the master and as an agent. The following variables configure serviced to serve as both master and agent:

  SERVICED_AGENT
    Default: 0 (false)
    Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

  SERVICED_MASTER
    Default: 0 (false)
    Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

  a Open /etc/default/serviced in a text editor.
  b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1. The following example shows the line to change:

      # SERVICED_AGENT=0

  c Remove the number sign character (#) from the beginning of the line.
  d Find the SERVICED_MASTER declaration, and then change the value from 0 to 1. The following example shows the line to change:

      # SERVICED_MASTER=0

  e Remove the number sign character (#) from the beginning of the line.
  f Save the file, and then close the editor.

3 Create a thin pool for Resource Manager data.
  a Identify the primary partition for the thin pool to create, and the amount of space available on the primary partition.

      lsblk --output=name,size,type,fstype,mountpoint

    For more information about the output of the lsblk command and primary partitions, refer to the Zenoss Resource Manager Planning Guide.
  b Create a variable for 50% of the space available on the primary partition for the thin pool to create. The thin pool stores application data and snapshots of the data. You can add storage to the pool at any time. Replace Half-Of-Available-Space with 50% of the space available in the primary partition, in gigabytes. Include the symbol for gigabytes (G) after the numeric value. (A worked example follows this procedure.)

      myFifty=Half-Of-Available-SpaceG

  c Create the thin pool. Replace Path-To-Device with the path of the target primary partition:

      serviced-storage create-thin-pool -o dm.basesize=$myFifty \
        serviced Path-To-Device

    On success, the result includes the name of the thin pool, which always starts with /dev/mapper.

4 Configure Control Center with the name of the thin pool for Resource Manager data. The Control Center configuration file is /etc/default/serviced. (For more information about serviced configuration options, refer to the Control Center online help.)
  a Open /etc/default/serviced in a text editor.
  b Locate the SERVICED_FS_TYPE declaration.
  c Remove the number sign character (#) from the beginning of the line.
  d Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE. Replace Thin-Pool-Name with the name of the thin pool created previously:

      SERVICED_DM_THINPOOLDEV=Thin-Pool-Name

  e Save the file, and then close the editor.

5 Optional: Specify an alternate private subnet for Control Center, if necessary. The default private subnet may already be in use in your environment. The following variable configures serviced to use an alternate subnet:

  SERVICED_VIRTUAL_ADDRESS_SUBNET
    Default: 10.3
    The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16.0/20, and 192.168.0/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

  a Open /etc/default/serviced in a text editor.
  b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

      # SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

  c Remove the number sign character (#) from the beginning of the line.
  d Save the file, and then close the editor.
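As a worked example of the sizing in step 3, assume lsblk shows an unused 200G primary partition at /dev/sdc1 (a hypothetical device). Half of 200G is 100G, so the variable and the thin pool command would be:

   myFifty=100G
   serviced-storage create-thin-pool -o dm.basesize=$myFifty \
     serviced /dev/sdc1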

User access control

Control Center provides a browser interface and a command-line interface.

To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. (Pluggable Authentication Modules (PAM) is supported.) In addition, users must be members of the Control Center administrative group, which by default is the system group, wheel. To enhance security, you may change the administrative group from wheel to any non-system group.

To use the Control Center command-line interface, users must have login accounts on the Control Center master host, and be members of the docker user group. Members of the wheel group, including root, are members of the docker group.

Adding users to the default administrative group

This procedure adds users to the default administrative group of Control Center, wheel. Performing this procedure enables users with superuser privileges to gain access to the Control Center browser interface.

Note: Perform this procedure or the next procedure, but not both.

1 Log in to the host as root, or as a user with superuser privileges.

2 Add users to the system group, wheel. Replace User with the name of a login account on the master host.

   usermod -aG wheel User

  Repeat the preceding command for each user to add.

Note: For information about using Pluggable Authentication Modules (PAM), refer to your operating system documentation.

Configuring a regular group as the Control Center administrative group

This procedure changes the default administrative group of Control Center from wheel to a non-system group.

Note: Perform this procedure or the previous procedure, but not both.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.

2 Create a variable for the group to designate as the administrative group. In this example, the name of the group to create is serviced. You may choose any name or use an existing group.

   GROUP=serviced

3 Create a new group, if necessary.

   groupadd $GROUP

4 Add one or more existing users to the new administrative group. Replace User with the name of a login account on the host:

   usermod -aG $GROUP User

  Repeat the preceding command for each user to add.

5 Specify the new administrative group in the serviced configuration file.

  The following variable specifies the administrative group:

  SERVICED_ADMIN_GROUP
    Default: wheel
    The name of the Linux group on the Control Center master host whose members are authorized to use the Control Center browser interface. You may replace the default group with a group that does not have superuser privileges.

  a Open /etc/default/serviced in a text editor.
  b Find the SERVICED_ADMIN_GROUP declaration, and then change the value from wheel to the name of the group you chose earlier. The following example shows the line to change:

      # SERVICED_ADMIN_GROUP=wheel

  c Remove the number sign character (#) from the beginning of the line.
  d Save the file, and then close the editor.

6 Optional: Prevent root users and members of the wheel group from gaining access to the Control Center browser interface, if desired. The following variable controls privileged logins:

  SERVICED_ALLOW_ROOT_LOGIN
    Default: 1 (true)
    Determines whether root, or members of the wheel group, may gain access to the Control Center browser interface.

  a Open /etc/default/serviced in a text editor.
  b Find the SERVICED_ALLOW_ROOT_LOGIN declaration, and then change the value from 1 to 0. The following example shows the line to change:

      # SERVICED_ALLOW_ROOT_LOGIN=1

  c Remove the number sign character (#) from the beginning of the line.
  d Save the file, and then close the editor.

Enabling use of the command-line interface

This procedure enables users to perform administrative tasks with the Control Center command-line interface by adding individual users to the docker group.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.

2 Add users to the Docker group, docker. Replace User with the name of a login account on the host.

   usermod -aG docker User

  Repeat the preceding command for each user to add.

Starting Control Center

This procedure starts the Control Center service, serviced.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Start serviced.

   systemctl start serviced

  To monitor progress, enter the following command:

   journalctl -flu serviced -o cat

  The serviced daemon invokes docker to pull its internal services images from Docker Hub. The Control Center browser and command-line interfaces are unavailable until the images are installed and the services are started. The process takes approximately 5-10 minutes. When the message Trying to discover my pool repeats, Control Center is ready for the next steps.

3 Optional: Add the master host to the default resource pool.

  Note: Perform this step only if you are installing a single-host deployment.

  Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

   serviced host add Hostname-Or-IP:4979 default

  If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.

Isolating the master host in a separate resource pool

Note: If you are configuring a single-host deployment, skip this procedure.

Control Center enables rapid recovery from application service failures. When Control Center internal services and application services share a host, application failures can limit recovery options. Zenoss strongly recommends isolating the Control Center master host in a separate resource pool.

This procedure creates a new resource pool for the Control Center master host, and then adds the master host to the pool.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Create a new resource pool named master.

   serviced pool add master

3 Add the master host to the master resource pool. Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

   serviced host add Hostname-Or-IP:4979 master

  If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.
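For example, with a hypothetical master host at 198.51.100.5, the two commands in this procedure would be entered as follows; substitute the address or resolvable hostname of your own master host.

   serviced pool add master
   serviced host add 198.51.100.5:4979 master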

Installing resource pool hosts

Note: If you are installing a single-host deployment, skip this section.

Control Center resource pool hosts run the application services scheduled for the resource pool to which they belong, and for which they have sufficient RAM and CPU resources. Resource Manager has two broad categories of application services: infrastructure and collection. The services associated with each category can run in the same resource pool, or can run in separate resource pools.

For improved reliability, two resource pool hosts are configured as nodes in an Apache ZooKeeper ensemble. The storage required for ensemble hosts is slightly different than the storage required for all other resource pool hosts: each ensemble host requires a separate primary partition for Control Center internal services data, in addition to the primary partition for Docker data. Unless the ZooKeeper service on the Control Center master host fails, their roles in the ZooKeeper ensemble do not affect their roles as Control Center resource pool hosts.

Note: The hosts for the ZooKeeper ensemble require static IP addresses, because ZooKeeper does not support hostnames in its configurations.

Repeat the procedures in the following sections for each host you wish to add to your Control Center deployment.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center resource pool host. Perform this procedure on each resource pool host in your deployment.

1 Log in to the candidate host as root, or as a user with superuser privileges.

2 Verify that the host implements the 64-bit version of the x86 instruction set.

   uname -m

  If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
  If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.

3 Verify that name resolution works on this host.

   hostname -i

  If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.

4 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

   hostid

5 Determine whether the available, unused storage is sufficient.
  a Display the available storage devices.

      lsblk --output=name,size

  b Compare the available storage with the amount required for a resource pool host in your deployment. In particular, resource pool hosts that are configured as nodes in a ZooKeeper ensemble require an additional primary partition for Control Center internal services data. For more information, refer to the Zenoss Resource Manager Planning Guide.

6 Determine whether the available memory and swap is sufficient.
  a Display the available memory.

      free -h

  b Compare the available memory with the amount required for a resource pool host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.

7 Update the operating system, if necessary.
  a Determine which release is installed.

      cat /etc/redhat-release

    If the result includes 7.0, perform the following substeps.
  b Update the operating system.

      yum update -y

  c Restart the system.

      reboot

Preparing a resource pool host

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center resource pool host.

1 Log in to the candidate resource pool host as root, or as a user with superuser privileges.

2 Add an entry to /etc/hosts for localhost, if necessary.
  a Determine whether 127.0.0.1 is mapped to localhost.

      grep 127.0.0.1 /etc/hosts | grep localhost

    If the preceding commands return no result, perform the following substep.
  b Add an entry to /etc/hosts for localhost.

      echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
  a Determine whether the firewalld service is enabled.

      systemctl status firewalld.service

    If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
    If the result includes Active: active (running), the service is enabled. Perform the following substep.
  b Disable the firewalld service.

      systemctl stop firewalld && systemctl disable firewalld

    On success, the preceding commands display messages similar to the following example:

      rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
      rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired. By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

   mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
  a Determine whether SELinux is installed.

      test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

    If the preceding commands return a result, SELinux is installed.
  b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
  c Confirm the new setting.

      grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

   systemctl enable dnsmasq && systemctl start dnsmasq

7 Install and configure the NTP package.
  a Install the package.

      yum install -y ntp

  b Set the system time.

      ntpd -gq

  c Enable the ntpd daemon.

      systemctl enable ntpd

  d Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot. The following commands provide a workaround to ensure that it does.

      echo "systemctl start ntpd" >> /etc/rc.d/rc.local
      chmod +x /etc/rc.d/rc.local

8 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations. Perform this step only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble.

   yum install -y nmap-ncat

9 Install the Zenoss repository package.
  a Install the package.

      rpm -ivh http://get.zenoss.io/yum/zenoss-repo-1-1.x86_64.rpm

  b Clean out the yum cache directory.

      yum clean all

10 Reboot the host.

   reboot

Creating a file system for Control Center internal services

This procedure creates an XFS file system on a primary partition.

Note: Perform this procedure only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble. No other resource pool hosts run Control Center internal services, so no other pool hosts need a partition for internal services data.

1 Log in to the target host as root, or as a user with superuser privileges.

2 Identify the target primary partition for the file system to create.

   lsblk --output=name,size,type,fstype,mountpoint

3 Create an XFS file system. Replace Isvcs-Partition with the path of the target primary partition:

   mkfs -t xfs Isvcs-Partition

4 Create the mount point for Control Center internal services data.

   mkdir -p /opt/serviced/var/isvcs

5 Add an entry to the /etc/fstab file. Replace Isvcs-Partition with the path of the primary partition used in the previous step:

   echo "Isvcs-Partition \
     /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab

6 Mount the file system, and then verify that it mounted correctly.

   mount -a && mount | grep isvcs

  Example result:

   /dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.

1 Log in to the resource pool host as root, or as a user with superuser privileges.

2 Install Docker 1.9.0, and then disable accidental upgrades.
  a Add the Docker repository to the host's repository list.

      cat > /etc/yum.repos.d/docker.repo <<-EOF
      [dockerrepo]
      name=Docker Repository
      baseurl=https://yum.dockerproject.org/repo/main/centos/7
      enabled=1
      gpgcheck=1
      gpgkey=https://yum.dockerproject.org/gpg
      EOF

  b Install Docker.

      yum clean all && yum makecache fast
      yum install -y docker-engine

  c Open /etc/yum.repos.d/docker.repo with a text editor.
  d Change the value of the enabled key from 1 to 0.
  e Save the file and close the text editor.

3 Create a symbolic link for the Docker temporary directory. Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
  a Create the docker directory in /var/lib.

      mkdir /var/lib/docker

  b Create the link to /tmp.

      ln -s /tmp /var/lib/docker/tmp

4 Create a systemd override file for the Docker service definition.
  a Create the override directory.

      mkdir -p /etc/systemd/system/docker.service.d

  b Create the override file.

      cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
      [Service]
      TimeoutSec=300
      EnvironmentFile=-/etc/sysconfig/docker
      ExecStart=
      ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
      EOF

  c Reload the systemd manager configuration.

      systemctl daemon-reload

5 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

   yum clean all && yum makecache fast
   yum --enablerepo=zenoss-stable install -y serviced

6 Create a device mapper thin pool for Docker data.
  a Identify the primary partition for the thin pool to create.

      lsblk --output=name,size,type,fstype,mountpoint

  b Create the thin pool. Replace Path-To-Device with the path of an unused primary partition:

      serviced-storage create-thin-pool docker Path-To-Device

    On success, the result includes the name of the thin pool, which always starts with /dev/mapper.

7 Configure and start the Docker service.
  a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

      myDriver="-s devicemapper"
      myFix="--exec-opt native.cgroupdriver=cgroupfs"
      myFlag="--storage-opt dm.thinpooldev"
      myPool="Thin-Pool-Device"

  b Add the arguments to the Docker configuration file.

      echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
        >> /etc/sysconfig/docker

  c Start or restart Docker.

      systemctl restart docker

    The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.

8 Configure name resolution in containers. Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
  a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

      ip addr show docker0 | grep inet

  b Open /etc/sysconfig/docker in a text editor.
  c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

      --dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask

    For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.

    Note: Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.

  d Restart the Docker service.

      systemctl restart docker
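After Docker restarts, one quick way to confirm that it is using the devicemapper driver and the new thin pool is the following check. This is not part of the original procedure, and the pool name shown is hypothetical; it varies with the partition used.

   docker info | grep -E 'Storage Driver|Pool Name'
   Storage Driver: devicemapper
    Pool Name: docker-docker--pool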

Configuring and starting Control Center

This procedure customizes key configuration variables of Control Center.

1 Log in to the resource pool host as root, or as a user with superuser privileges.

2 Configure Control Center as an agent of the master host. The following variable configures serviced to serve as agent:

  SERVICED_AGENT
    Default: 0 (false)
    Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

  SERVICED_MASTER
    Default: 0 (false)
    Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

  In addition, the following lines need to be edited, to replace {{SERVICED_MASTER_IP}} with the IP address of the master host:

   # SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
   # SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
   # SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
   # SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
   # SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
   # SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443

  a Open /etc/default/serviced in a text editor.
  b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1. The following example shows the line to change:

      # SERVICED_AGENT=0

  c Remove the number sign character (#) from the beginning of the line.
  d Find the SERVICED_MASTER declaration, and then remove the number sign character (#) from the beginning of the line.
  e Globally replace {{SERVICED_MASTER_IP}} with the IP address of the master host. (A scripted alternative is sketched after this procedure.)

    Note: Remove the number sign character (#) from the beginning of each variable declaration that includes the master IP address.

  f Save the file, and then close the editor.

3 Optional: Specify an alternate private subnet for Control Center, if necessary. The default private subnet may already be in use in your environment. The following variable configures serviced to use an alternate subnet:

  SERVICED_VIRTUAL_ADDRESS_SUBNET
    Default: 10.3
    The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16.0/20, and 192.168.0/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

  a Open /etc/default/serviced in a text editor.
  b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

      # SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

  c Remove the number sign character (#) from the beginning of the line.
  d Save the file, and then close the editor.

4 Start the Control Center service (serviced).

   systemctl start serviced

  To monitor progress, enter the following command:

   journalctl -flu serviced -o cat

To install additional resource pool hosts, return to Verifying candidate host resources on page 21.
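The global replacement in step 2e can be performed in the editor, or with a single sed command like the following sketch. The master address 198.51.100.5 is hypothetical; substitute your own. Note that this replaces only the placeholder; you must still remove the number sign character from each of the six declarations.

   sed -i 's/{{SERVICED_MASTER_IP}}/198.51.100.5/g' /etc/default/serviced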

ZooKeeper ensemble configuration

Note: If you are installing a single-host deployment, or if your deployment includes fewer than two resource pool hosts, skip this section.

Control Center relies on Apache ZooKeeper to coordinate its services. The procedures in this section create a ZooKeeper ensemble of 3 nodes. To perform these procedures, you need a Control Center master host and a minimum of two resource pool hosts. Each resource pool host requires a separate primary partition for Control Center internal services, and each should have a static IP address. For more information about storage requirements, refer to the Zenoss Resource Manager Planning Guide.

Note: Zenoss strongly recommends configuring a ZooKeeper ensemble for all production deployments. A ZooKeeper ensemble requires a minimum of 3 nodes, and 3 nodes is sufficient for most deployments. A 5-node configuration improves failover protection during maintenance windows. Ensembles larger than 5 nodes are not necessary. An odd number of nodes is recommended, and an even number of nodes is strongly discouraged.

Note: The Control Center ZooKeeper service requires consistently fast storage. Ideally, the primary partition for Control Center internal services is on a separate, high-performance device that has only one primary partition.

Control Center variables for ZooKeeper

The tables in this section associate the ZooKeeper-related Control Center variables to set in /etc/default/serviced with the roles that hosts play in a Control Center cluster.

Table 1: Control Center master host

  SERVICED_ISVCS_ZOOKEEPER_ID
    The unique identifier of a ZooKeeper ensemble node.
    Value: 1
  SERVICED_ISVCS_ZOOKEEPER_QUORUM
    The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
    Value: ZooKeeper-ID@IP-Address:2888:3888,...
  SERVICED_ZK
    The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
    Value: IP-Address:2181,...

Table 2: Control Center resource pool host and ZooKeeper ensemble node

  SERVICED_ISVCS_ZOOKEEPER_ID
    The unique identifier of a ZooKeeper ensemble node.
    Value: 2 or 3
  SERVICED_ISVCS_ZOOKEEPER_QUORUM
    The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
    Value: ZooKeeper-ID@IP-Address:2888:3888,...
  SERVICED_ISVCS_START
    The list of Control Center internal services to start and run on hosts other than the master host.
    Value: zookeeper
  SERVICED_ZK
    The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
    Value: IP-Address:2181,...

Table 3: Control Center resource pool host only

  SERVICED_ZK
    The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
    Value: IP-Address:2181,...
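Pulling the tables together, the following is a hypothetical set of declarations for the master host (node 1) of an ensemble whose nodes are 198.51.100.5 (master), 198.51.100.6, and 198.51.100.7; the addresses are examples only. As the procedures below describe, the quorum entry for the current host uses 0.0.0.0 instead of its own address.

   SERVICED_ISVCS_ZOOKEEPER_ID=1
   SERVICED_ZK=198.51.100.5:2181,198.51.100.6:2181,198.51.100.7:2181
   SERVICED_ISVCS_ZOOKEEPER_QUORUM=1@0.0.0.0:2888:3888,2@198.51.100.6:2888:3888,3@198.51.100.7:2888:3888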

Configuring the master host as a ZooKeeper node

This procedure configures the Control Center master host as a member of the ZooKeeper ensemble.

Note: For accuracy, this procedure constructs Control Center configuration variables in the shell and appends them to /etc/default/serviced. The last step is to move the variables from the end of the file to more appropriate locations.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Create a variable for each Control Center host to include in the ZooKeeper ensemble. The variables are used in subsequent steps.

  Note: Define the variables identically on the master host and on each resource pool host.

  Replace Master-Host-IP with the IP address of the Control Center master host, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

   node1=Master-Host-IP
   node2=Pool-Host-A-IP
   node3=Pool-Host-B-IP

  Note: ZooKeeper requires IP addresses for ensemble configuration.

3 Set the ZooKeeper node ID to 1.

   echo "SERVICED_ISVCS_ZOOKEEPER_ID=1" >> /etc/default/serviced

4 Specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

   echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
     >> /etc/default/serviced

5 Specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0. You may copy the following lines of text and paste them in your console:

   q1="1@0.0.0.0:2888:3888"
   q2="2@${node2}:2888:3888"
   q3="3@${node3}:2888:3888"
   echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
     >> /etc/default/serviced

6 Clean up the Control Center configuration file.
  a Open /etc/default/serviced with a text editor.
  b Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
  c Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master host.
  d Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
  e Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
  f Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
  g Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
  h Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
  i Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.

  j Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
  k Save the file, and then close the text editor.

7 Verify the ZooKeeper environment variables.

   egrep '^[^#]*SERVICED' /etc/default/serviced | egrep '(_ZOO|_ZK)'

Configuring a resource pool host as a ZooKeeper node

To perform this procedure, you need a resource pool host with an XFS file system on a separate partition, created previously.

This procedure configures a ZooKeeper ensemble on a resource pool host. Repeat this procedure on each Control Center resource pool host to add to the ZooKeeper ensemble.

1 Log in to the resource pool host as root, or as a user with superuser privileges.

2 Create a variable for each Control Center host to include in the ZooKeeper ensemble. The variables are used in subsequent steps.

  Note: Define the variables identically on the master host and on each resource pool host.

  Replace Master-Host-IP with the IP address of the Control Center master host, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

   node1=Master-Host-IP
   node2=Pool-Host-A-IP
   node3=Pool-Host-B-IP

  Note: ZooKeeper requires IP addresses for ensemble configuration.

3 Set the ID of this node in the ZooKeeper ensemble.

  For Pool-Host-A-IP (node2), use the following command:

   echo "SERVICED_ISVCS_ZOOKEEPER_ID=2" >> /etc/default/serviced

  For Pool-Host-B-IP (node3), use the following command:

   echo "SERVICED_ISVCS_ZOOKEEPER_ID=3" >> /etc/default/serviced

4 Specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

   echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
     >> /etc/default/serviced

5 Specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0.

  For Pool-Host-A-IP (node2), use the following commands:

   q1="1@${node1}:2888:3888"
   q2="2@0.0.0.0:2888:3888"
   q3="3@${node3}:2888:3888"
   echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
     >> /etc/default/serviced

  For Pool-Host-B-IP (node3), use the following commands:

   q1="1@${node1}:2888:3888"
   q2="2@${node2}:2888:3888"
   q3="3@0.0.0.0:2888:3888"
   echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
     >> /etc/default/serviced

6 Set the SERVICED_ISVCS_START variable, and clean up the Control Center configuration file.
  a Open /etc/default/serviced with a text editor.
  b Locate the SERVICED_ISVCS_START variable, and then delete all but zookeeper from its list of values.
  c Remove the number sign character (#) from the beginning of the line.
  d Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
  e Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master host.
  f Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
  g Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
  h Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
  i Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
  j Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
  k Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.
  l Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
  m Save the file, and then close the text editor.

7 Verify the ZooKeeper environment variables.

   egrep '^[^#]*SERVICED' /etc/default/serviced \
     | egrep '(_ZOO|_ZK|_STA)'

8 Pull the required Control Center ZooKeeper image from the master host.
  a Identify the image to pull.

      serviced version | grep IsvcsImages

    Example result:

      IsvcsImages: [zenoss/serviced-isvcs:v40 zenoss/isvcs-zookeeper:v3]

  b Pull the Control Center ZooKeeper image. Replace Isvcs-ZK-Image with the name and version number of the ZooKeeper image from the previous substep:

      docker pull Isvcs-ZK-Image

Starting a ZooKeeper ensemble

This procedure starts a ZooKeeper ensemble. The window of time for starting a ZooKeeper ensemble is relatively short. The goal of this procedure is to restart Control Center on each ensemble node at about the same time, so that each node can participate in electing the leader.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.

2 In a separate window, log in to the second node of the ZooKeeper ensemble (Pool-Host-A-IP).

3 In another separate window, log in to the third node of the ZooKeeper ensemble (Pool-Host-B-IP).

4 On all ensemble hosts, stop and start serviced.

   systemctl stop serviced && systemctl start serviced

5 On the master host, check the status of the ZooKeeper ensemble. (Typical output is shown after this procedure.)

   { echo stats; sleep 1; } | nc localhost 2181 | grep Mode
   { echo stats; sleep 1; } | nc Pool-Host-A-IP 2181 | grep Mode
   { echo stats; sleep 1; } | nc Pool-Host-B-IP 2181 | grep Mode

  If nc is not available, you can use telnet with interactive ZooKeeper commands.

6 Optional: Log in to the Control Center browser interface, and then start Resource Manager and related applications, if desired. The next procedure requires stopping Resource Manager.
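When the ensemble is healthy, each of the status queries in step 5 reports a Mode line: one node reports leader and the other two report follower (which node becomes the leader varies between elections). Hypothetical output for the three commands:

   Mode: follower
   Mode: leader
   Mode: follower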

Updating resource pool hosts

The default configuration of resource pool hosts sets the value of the SERVICED_ZK variable to the master host only. This procedure updates the setting to include the full ZooKeeper ensemble. Perform this procedure on each resource pool host in your Control Center cluster.

1 Log in to the resource pool host as root, or as a user with superuser privileges.

2 Update the variable.
  a Open /etc/default/serviced in a text editor.
  b Locate the SERVICED_ZK declaration, and then replace its value with the same value used in the ZooKeeper ensemble nodes.
  c Save the file, and then close the editor.

3 Restart Control Center.

   systemctl restart serviced

Adding hosts to the default resource pool

Note: If you are installing a single-host deployment, skip this section.

This procedure adds one or more resource pool hosts to the default resource pool.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.

2 Add a resource pool host. Replace Hostname-Or-IP with the hostname or IP address of the resource pool host to add:

   serviced host add Hostname-Or-IP:4979 default

  If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.

3 Repeat the preceding command for each resource pool host in your Control Center cluster.

Deploying Resource Manager

This procedure adds the Resource Manager application to the list of applications that Control Center manages, and pulls application images from Docker Hub.

1 Log in to the master host as root, or as a user with superuser privileges.

2 Add the Zenoss.resmgr application to Control Center.

   myPath=/opt/serviced/templates
   serviced template add $myPath/zenoss-resmgr-*.json

  On success, the serviced command returns the template ID.

3 Deploy the application. Replace Template-ID with the template identifier returned in the previous step, and replace Deployment-ID with a name for this deployment (for example, Dev or Test):

   serviced template deploy Template-ID default Deployment-ID

  Control Center pulls Resource Manager images into the local registry. To monitor progress, enter the following command:

   journalctl -flu serviced -o cat

Control Center and Resource Manager are now installed, and Resource Manager is ready to be configured for your environment. For more information, refer to the Zenoss Resource Manager Configuration Guide.
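As a concrete illustration of the deployment commands, the following session uses a hypothetical template identifier (the actual value is printed by serviced template add) and a deployment named Test:

   serviced template add /opt/serviced/templates/zenoss-resmgr-*.json
   f4e6cbe8-ecdb-4f2f-a51f-068d9c4d3883
   serviced template deploy f4e6cbe8-ecdb-4f2f-a51f-068d9c4d3883 default Test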

Installing without internet access

The procedures in this chapter install Control Center and Resource Manager on one or more Red Hat Enterprise Linux (RHEL) 7.1 or 7.2 hosts, or one or more CentOS 7.1 or 7.2 hosts. The procedures in this chapter support hosts that do not have internet access.

You may create a single-host or multi-host deployment. For production use, Zenoss strongly recommends creating a multi-host deployment that includes a minimum of three real or virtual machines. For more information about deploying Control Center and Resource Manager, refer to the Zenoss Resource Manager Planning Guide.

Control Center requires a common time source. If you have an NTP time server inside your firewall, you may configure the hosts in your Control Center cluster to use it. If not, then you may use the procedures in this chapter to configure an NTP time server on the Control Center master host, and to configure all the other cluster hosts to synchronize with the master. However, the procedures require IP addresses. Therefore, all of the hosts in your Control Center cluster require static IP addresses.

Note For optimal results, review this chapter thoroughly before starting the installation process.

Installing a master host

Perform the procedures in this section to install Control Center and Resource Manager on a master host.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center master host. To configure a private NTP cluster, the Control Center master host must have a static IP address.

1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.

uname -m

If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.

3 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

hostid

4 Determine whether the available, unused storage is sufficient.
a Display the available storage devices.

lsblk --output=NAME,SIZE

b Compare the available storage with the amount required for a Control Center master host. For more information, refer to the Zenoss Resource Manager Planning Guide.
5 Determine whether the available memory and swap is sufficient.
a Display the available memory.

free -h

b Compare the available memory with the amount required for a master host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Verify the operating system release.

cat /etc/redhat-release

If the result includes 7.0, select another host or update the operating system.
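The checks in this procedure can be run in one pass before committing to a host. This is a convenience sketch only, using the commands already introduced above; it does not replace the comparisons against the Planning Guide:

uname -m                  # expect x86_64
hostid                    # must be unique in the cluster
lsblk --output=NAME,SIZE  # compare with required storage
free -h                   # compare with required memory and swap
cat /etc/redhat-release   # expect 7.1 or 7.2, not 7.0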

Preparing storage for the master host

In addition to the storage required for its operating system, a Control Center master host requires the following storage areas:

A local partition for Docker data, configured as a device mapper thin pool.
A local partition for Control Center internal services data, formatted with the XFS file system.

Note Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.

A local or remote primary partition for Resource Manager data, configured as a device mapper thin pool.
A local primary partition, remote primary partition, or remote file server, for backups of Resource Manager data. The local or remote primary partition is formatted with the XFS file system. A remote file server must provide a file system that is compatible with XFS.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

For storage sizing information, refer to the Zenoss Resource Manager Planning Guide. For device mapper thin pools, no formatting is required; simply create primary partitions, which are configured in subsequent procedures. For more information, refer to the Zenoss Resource Manager Planning Guide.

To create the required storage, perform the following procedures.

Note Data present on the primary partitions you select is destroyed in these procedures. Ensure that the data is backed up elsewhere, or no longer needed, before proceeding.

Creating a file system for internal services

This procedure creates an XFS file system on a primary partition. For more information about primary partitions, refer to the Zenoss Resource Manager Planning Guide.

Note Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Identify the target primary partition for the file system to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

For more information about the output of the lsblk command, and about creating primary partitions, refer to the Zenoss Resource Manager Planning Guide.
3 Create an XFS file system. Replace Partition with the path of the target primary partition:

mkfs -t xfs Partition

4 Add an entry to the /etc/fstab file. Replace Partition with the path of the primary partition used in the previous step:

echo "Partition \
/opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab

5 Create the mount point for internal services data.

mkdir -p /opt/serviced/var/isvcs

6 Mount the file system, and then verify it mounted correctly.

mount -a && mount | grep isvcs

Example result:

/dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
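For example, with a hypothetical unused primary partition /dev/sdb1, the sequence in this procedure reduces to the following commands; the partition path is illustrative only:

mkfs -t xfs /dev/sdb1
echo "/dev/sdb1 /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab
mkdir -p /opt/serviced/var/isvcs
mount -a && mount | grep isvcs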

Creating a file system for backups

To perform this procedure, you need a host with at least one unused primary partition, or a remote file server.

The Control Center master host requires local or remote storage space for backups of Control Center data. This procedure includes steps to create an XFS file system on a primary partition, if necessary, and steps to mount a file system for backups. For more information about primary partitions, refer to the Zenoss Resource Manager Planning Guide.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Optional: Identify the target primary partition for the file system to create, if necessary. Skip this step if you are using a remote file server.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

For more information about the output of the lsblk command, and about creating primary partitions, refer to the Zenoss Resource Manager Planning Guide.
3 Optional: Create an XFS file system, if necessary. Skip this step if you are using a remote file server. Replace Partition with the path of the target primary partition:

mkfs -t xfs Partition

4 Create an entry in the /etc/fstab file. Replace File-System-Specification with one of the following values:

the path of the primary partition used in the previous step
the remote server specification

echo "File-System-Specification \
/opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab

5 Create the mount point for backup data.

mkdir -p /opt/serviced/var/backups

6 Mount the file system, and then verify it mounted correctly.

mount -a && mount | grep backups

Example result:

/dev/sdb3 on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
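When the backup storage is a remote file server, the File-System-Specification is a remote mount specification rather than a partition path, and the file system type field in the /etc/fstab entry must match the server protocol rather than xfs. The following entry is a hypothetical example for an NFS export; the server name and export path are illustrative only:

echo "filer.example.com:/exports/backups \
/opt/serviced/var/backups nfs defaults 0 0" >> /etc/fstab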

Downloading files for offline installation

This procedure describes how to download RPM packages and Docker image files to your workstation. To perform this procedure, you need:

A workstation with internet access.
A portable storage medium, such as a USB flash drive, with at least 5 GB of free space.
Permission to download the required files from the File Portal - Download Zenoss Enterprise Software site. You may request permission by filing a ticket at the Zenoss Support site.

1 In a web browser, navigate to the File Portal - Download Zenoss Enterprise Software site.
2 Log in with the account provided by Zenoss Support.
3 Download archive files to your workstation. Replace Version with the most recent version number available on the download page:

install-zenoss-hbase:vVersion.run
install-zenoss-isvcs-zookeeper:vVersion.run
install-zenoss-opentsdb:vVersion.run
install-zenoss-resmgr_5.1:5.1Version.run
install-zenoss-serviced-isvcs:vVersion.run
serviced-resource-agents-Version.x86_64.rpm

4 Download the RHEL/CentOS mirror package for your upgrade.

Note If you are planning to upgrade the operating system during your Control Center and Resource Manager upgrade, choose the mirror package that matches the RHEL/CentOS release to which you are upgrading, not the release that is installed now.

Replace Version with the most recent version number available on the download page, and replace Release with the version of RHEL/CentOS appropriate for your environment:

yum-mirror-centos7.Release-Version.x86_64.rpm

5 Copy the files to your portable storage medium.

Staging files for offline installation

Before performing this procedure, verify that approximately 4GB of temporary space is available on the file system where /root is located.

This procedure adds files for offline installation to the Control Center master host. The staged files are required in subsequent procedures.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Copy the archive files from your portable storage medium to /root.
3 Set the file permissions of the self-extracting archive files to execute.

chmod +x /root/*.run

4 Change directory to /root.

cd /root

5 Install the Resource Manager repository mirror.

yum install -y ./yum-mirror-*.x86_64.rpm

6 Optional: Delete the package file, if desired.

rm ./yum-mirror-*.x86_64.rpm
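As a quick sanity check before proceeding, you can confirm that the mirror repository is registered; the repository ID zenoss-mirror below matches the --enablerepo arguments used throughout this chapter, and this check is a convenience sketch rather than a step from this guide:

yum --disablerepo='*' --enablerepo=zenoss-mirror repolist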

Preparing the master host operating system

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center master host.

1 Log in to the candidate master host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.

grep 127.0.0.1 /etc/hosts | grep localhost

If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.

echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.

systemctl status firewalld.service

If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.

systemctl stop firewalld && systemctl disable firewalld

On success, the preceding commands display messages similar to the following example:

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired. By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.

test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.

grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

systemctl enable dnsmasq && systemctl start dnsmasq

7 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations. If you are installing a single-host deployment, skip this step.

yum --enablerepo=zenoss-mirror install -y nmap-ncat

8 Reboot the host.

reboot
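After the reboot, you can confirm that the SELinux change took effect (assuming SELinux was installed); the getenforce utility reports Disabled when the new operating mode is active:

getenforce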

Configuring an NTP master server

This procedure configures an NTP master server on the Control Center master host. If you have an NTP time server inside your firewall, you may configure the master host to use it; however, this procedure does not include that option.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.
2 Install the NTP package.

yum --enablerepo=zenoss-mirror install -y ntp

3 Create a backup of the NTP configuration file.

cp -p /etc/ntp.conf /etc/ntp.conf.orig

4 Edit the NTP configuration file.
a Open /etc/ntp.conf with a text editor.
b Replace all of the lines in the file with the following lines:

# Use the local clock
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008

# Give localhost full access rights
restrict 127.0.0.1

# Grant access to client hosts
restrict ADDRESS_RANGE mask NETMASK nomodify notrap

c Replace ADDRESS_RANGE with the range of IPv4 network addresses that are allowed to query this NTP server. For example, if the hosts in a Control Center cluster are assigned the addresses 203.0.113.10 through 203.0.113.13 (illustrative addresses from a block reserved for documentation), the value for ADDRESS_RANGE is 203.0.113.0.
d Replace NETMASK with the IPv4 network mask that corresponds with the address range. For example, the network mask for a 203.0.113.0/24 range is 255.255.255.0.
e Save the file and exit the editor.
5 Enable and start the NTP daemon.
a Enable the ntpd daemon.

systemctl enable ntpd

b Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot, and the following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

c Start ntpd.

systemctl start ntpd
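To verify that the server is answering queries, use the ntpq utility, which is installed with the ntp package; the local clock entry should appear in the peer list:

ntpq -p localhost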

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Install Docker.

yum clean all && yum makecache fast
yum install --enablerepo=zenoss-mirror -y docker-engine

3 Create a systemd override file for the Docker service definition.
a Create the override directory.

mkdir -p /etc/systemd/system/docker.service.d

b Create the override file.

cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF

c Reload the systemd manager configuration.

systemctl daemon-reload

4 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

yum clean all && yum makecache fast
yum --enablerepo=zenoss-mirror install -y serviced

5 Create a device mapper thin pool for Docker data.
a Identify the primary partition for the thin pool to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

b Create the thin pool. Replace Path-To-Device with the path of an unused primary partition:

serviced-storage create-thin-pool docker Path-To-Device

On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
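For example, with a hypothetical unused partition /dev/sdb2, the command and a representative result look like the following; the exact pool name on your system may differ:

serviced-storage create-thin-pool docker /dev/sdb2
/dev/mapper/docker-docker--pool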

6 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"

b Add the arguments to the Docker configuration file.

echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
>> /etc/sysconfig/docker

c Start or restart Docker.

systemctl restart docker

The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.
7 Configure name resolution in containers. Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

ip addr show docker0 | grep inet

b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

--dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask

For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.

Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.

d Restart the Docker service.

systemctl restart docker

8 Import the Control Center and Resource Manager images into the local docker registry. The images are contained in the self-extracting archive files that are staged in the /root directory.
a Change directory to /root.

cd /root

b Extract the images.

for image in install-*.run
do
echo -n "$image: "
eval ./$image
done

Image extraction begins when you press the y key. If you press the y key and then the Return key, the current image is extracted, but the next one is not.
c Optional: Delete the archive files, if desired.

rm -i ./install-*.run
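After both editing steps, the OPTIONS declaration in /etc/sysconfig/docker resembles the following single line; the thin pool name, subnet, and netmask shown are hypothetical values for illustration:

OPTIONS="-s devicemapper --exec-opt native.cgroupdriver=cgroupfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --dns=172.17.0.1 --bip=172.17.0.1/16"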

Installing Resource Manager

This procedure installs Resource Manager and configures the NFS server.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Install Resource Manager.

yum clean all && yum makecache fast
yum --enablerepo=zenoss-mirror install -y zenoss-resmgr-service

3 Configure and restart the NFS server. Currently, an unresolved issue prevents the NFS server from starting correctly. The following commands provide a workaround to ensure that it does.
a Open /lib/systemd/system/nfs-server.service with a text editor.
b Change rpcbind.target to rpcbind.service on the following line:

Requires= network.target proc-fs-nfsd.mount rpcbind.target

c Reload the systemd manager configuration.

systemctl daemon-reload
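To confirm the edit before reloading, display the modified line; the output should show rpcbind.service rather than rpcbind.target:

grep '^Requires=' /lib/systemd/system/nfs-server.service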

Configuring Control Center

This procedure creates a thin pool for application data and customizes key configuration variables of Control Center.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Configure Control Center to serve as the master and as an agent. The following variables configure serviced to serve as both master and agent:

SERVICED_AGENT
Default: 0 (false)
Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1. The following example shows the line to change:

# SERVICED_AGENT=0

c Remove the number sign character (#) from the beginning of the line.
d Find the SERVICED_MASTER declaration, and then change the value from 0 to 1. The following example shows the line to change:

# SERVICED_MASTER=0

e Remove the number sign character (#) from the beginning of the line.
f Save the file, and then close the editor.
3 Create a thin pool for Resource Manager data.
a Identify the primary partition for the thin pool to create, and the amount of space available on the primary partition.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

For more information about the output of the lsblk command and primary partitions, refer to the Zenoss Resource Manager Planning Guide.
b Create a variable for 50% of the space available on the primary partition for the thin pool to create. The thin pool stores application data and snapshots of the data. You can add storage to the pool at any time. Replace Half-Of-Available-Space with 50% of the space available in the primary partition, in gigabytes. Include the symbol for gigabytes (G) after the numeric value.

myFifty=Half-Of-Available-SpaceG

c Create the thin pool. Replace Path-To-Device with the path of the target primary partition:

serviced-storage create-thin-pool -o dm.basesize=$myFifty \
serviced Path-To-Device

On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
4 Configure Control Center with the name of the thin pool for Resource Manager data. The Control Center configuration file is /etc/default/serviced. (For more information about serviced configuration options, refer to the Control Center online help.)
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_FS_TYPE declaration, and then remove the number sign character (#) from the beginning of the line.
c Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE. Replace Thin-Pool-Name with the name of the thin pool created previously:

SERVICED_DM_THINPOOLDEV=Thin-Pool-Name

d Save the file, and then close the editor.
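For example, if lsblk shows that a hypothetical unused partition /dev/sdc1 has 200G available, 50% of the available space is 100G, and the commands are as follows; the partition path and size are illustrative only:

myFifty=100G
serviced-storage create-thin-pool -o dm.basesize=$myFifty \
serviced /dev/sdc1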

5 Optional: Specify an alternate private subnet for Control Center, if necessary. The default private subnet may already be in use in your environment. The following variable configures serviced to use an alternate subnet:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3
The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16/20, and 192.168/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.

User access control

Control Center provides a browser interface and a command-line interface.

To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. (Pluggable Authentication Modules (PAM) is supported.) In addition, users must be members of the Control Center administrative group, which by default is the system group, wheel. To enhance security, you may change the administrative group from wheel to any non-system group.

To use the Control Center command-line interface, users must have login accounts on the Control Center master host, and be members of the docker user group. Members of the wheel group, including root, are members of the docker group.

Adding users to the default administrative group

This procedure adds users to the default administrative group of Control Center, wheel. Performing this procedure enables users with superuser privileges to gain access to the Control Center browser interface.

Note Perform this procedure or the next procedure, but not both.

1 Log in to the host as root, or as a user with superuser privileges.
2 Add users to the system group, wheel. Replace User with the name of a login account on the master host.

usermod -aG wheel User

Repeat the preceding command for each user to add.

Note For information about using Pluggable Authentication Modules (PAM), refer to your operating system documentation.
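For example, to authorize a hypothetical login account named jdoe and then confirm the group membership:

usermod -aG wheel jdoe
id -nG jdoe    # the output should include wheel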

Configuring a regular group as the Control Center administrative group

This procedure changes the default administrative group of Control Center from wheel to a non-system group.

Note Perform this procedure or the previous procedure, but not both.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.
2 Create a variable for the group to designate as the administrative group. In this example, the name of the group to create is serviced. You may choose any name or use an existing group.

GROUP=serviced

3 Create a new group, if necessary.

groupadd $GROUP

4 Add one or more existing users to the new administrative group. Replace User with the name of a login account on the host:

usermod -aG $GROUP User

Repeat the preceding command for each user to add.
5 Specify the new administrative group in the serviced configuration file. The following variable specifies the administrative group:

SERVICED_ADMIN_GROUP
Default: wheel
The name of the Linux group on the Control Center master host whose members are authorized to use the Control Center browser interface. You may replace the default group with a group that does not have superuser privileges.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_ADMIN_GROUP declaration, and then change the value from wheel to the name of the group you chose earlier. The following example shows the line to change:

# SERVICED_ADMIN_GROUP=wheel

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
6 Optional: Prevent root users and members of the wheel group from gaining access to the Control Center browser interface, if desired. The following variable controls privileged logins:

SERVICED_ALLOW_ROOT_LOGIN
Default: 1 (true)
Determines whether root, or members of the wheel group, may gain access to the Control Center browser interface.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_ALLOW_ROOT_LOGIN declaration, and then change the value from 1 to 0. The following example shows the line to change:

# SERVICED_ALLOW_ROOT_LOGIN=1

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
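If you prefer to script the edits in this procedure, a sed substitution can uncomment and set a variable in one step. This is a convenience sketch, not a step from this guide; back up /etc/default/serviced before using it:

sed -i 's/^# *SERVICED_ADMIN_GROUP=.*/SERVICED_ADMIN_GROUP=serviced/' \
/etc/default/serviced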

Enabling use of the command-line interface

This procedure enables users to perform administrative tasks with the Control Center command-line interface by adding individual users to the docker group.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.
2 Add users to the Docker group, docker. Replace User with the name of a login account on the host.

usermod -aG docker User

Repeat the preceding command for each user to add.

Starting Control Center

This procedure starts the Control Center service, serviced.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Start serviced.

systemctl start serviced

To monitor progress, enter the following command:

journalctl -flu serviced -o cat

The Control Center browser and command-line interfaces are unavailable until the Control Center images are tagged and the internal services are started. The process takes approximately 3 minutes. When the message "Trying to discover my pool" repeats, Control Center is ready for the next steps.
3 Optional: Add the master host to the default resource pool.

Note Perform this step only if you are installing a single-host deployment.

Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

serviced host add Hostname-Or-IP:4979 default

If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.
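For a single-host deployment, you can confirm the registration with the host list subcommand; the exact output columns vary by release:

serviced host list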

Isolating the master host in a separate resource pool

Note If you are configuring a single-host deployment, skip this procedure.

Control Center enables rapid recovery from application service failures. When Control Center internal services and application services share a host, application failures can limit recovery options. Zenoss strongly recommends isolating the Control Center master host in a separate resource pool.

This procedure creates a new resource pool for the Control Center master host, and then adds the master host to the pool.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Create a new resource pool named master.

serviced pool add master

3 Add the master host to the master resource pool. Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

serviced host add Hostname-Or-IP:4979 master

If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.

Installing resource pool hosts

Note If you are installing a single-host deployment, skip this section.

Control Center resource pool hosts run the application services scheduled for the resource pool to which they belong, and for which they have sufficient RAM and CPU resources. Resource Manager has two broad categories of application services: infrastructure and collection. The services associated with each category can run in the same resource pool, or can run in separate resource pools.

For improved reliability, two resource pool hosts are configured as nodes in an Apache ZooKeeper ensemble. The storage required for ensemble hosts is slightly different than the storage required for all other resource pool hosts: each ensemble host requires a separate primary partition for Control Center internal services data, in addition to the primary partition for Docker data. Unless the ZooKeeper service on the Control Center master host fails, their roles in the ZooKeeper ensemble do not affect their roles as Control Center resource pool hosts.

Note The hosts for the ZooKeeper ensemble require static IP addresses, because ZooKeeper does not support hostnames in its configurations. Likewise, to configure a private NTP cluster, all resource pool hosts must have static IP addresses.

Repeat the procedures in the following sections for each host you wish to add to your Control Center deployment.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center resource pool host. Perform this procedure on each resource pool host in your deployment.

1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.

uname -m

If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Verify that name resolution works on this host.

hostname -i

If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.

4 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

hostid

5 Determine whether the available, unused storage is sufficient.
a Display the available storage devices.

lsblk --output=NAME,SIZE

b Compare the available storage with the amount required for a resource pool host in your deployment. In particular, resource pool hosts that are configured as nodes in a ZooKeeper ensemble require an additional primary partition for Control Center internal services data. For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Determine whether the available memory and swap is sufficient.
a Display the available memory.

free -h

b Compare the available memory with the amount required for a resource pool host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
7 Verify the operating system release.

cat /etc/redhat-release

If the result includes 7.0, select another host or upgrade the operating system.

Creating a file system for Control Center internal services

This procedure creates an XFS file system on a primary partition.

Note Perform this procedure only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble. No other resource pool hosts run Control Center internal services, so no other pool hosts need a partition for internal services data.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Identify the target primary partition for the file system to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

3 Create an XFS file system. Replace Isvcs-Partition with the path of the target primary partition:

mkfs -t xfs Isvcs-Partition

4 Create the mount point for Control Center internal services data.

mkdir -p /opt/serviced/var/isvcs

5 Add an entry to the /etc/fstab file. Replace Isvcs-Partition with the path of the primary partition used in the previous step:

echo "Isvcs-Partition \
/opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab

6 Mount the file system, and then verify it mounted correctly.

mount -a && mount | grep isvcs

Example result:

/dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Staging files for offline installation

To perform this procedure, you need the portable storage medium that contains the archive files used in installing the master host.

This procedure adds files for offline installation to a resource pool host. The files are required in subsequent procedures. Perform this procedure on each resource pool host in your deployment.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Copy yum-mirror-*.x86_64.rpm from your portable storage medium to /tmp.
3 Install the Resource Manager repository mirror.

yum install -y /tmp/yum-mirror-*.x86_64.rpm

4 Optional: Delete the package file, if desired.

rm /tmp/yum-mirror-*.x86_64.rpm

Preparing a resource pool host

Perform this procedure to prepare a RHEL/CentOS 7.1 or 7.2 host as a Control Center resource pool host.

1 Log in to the candidate resource pool host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.

grep 127.0.0.1 /etc/hosts | grep localhost

If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.

echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.

systemctl status firewalld.service

If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.

systemctl stop firewalld && systemctl disable firewalld

On success, the preceding commands display messages similar to the following example:

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired. By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.

test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.

grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

systemctl enable dnsmasq && systemctl start dnsmasq

7 Install and configure the NTP package.
a Install the package.

yum install -y ntp

b Set the system time.

ntpd -gq

c Enable the ntpd daemon.

systemctl enable ntpd

d Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot. The following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

8 Reboot the host.

reboot

Configuring an NTP client

This procedure configures a resource pool host to synchronize its clock with the NTP server on the Control Center master host. If you have an NTP time server inside your firewall, you may configure the host to use it; however, this procedure does not include that option.

1 Log in to the Control Center resource pool host as root, or as a user with superuser privileges.
2 Create a backup of the NTP configuration file.

cp -p /etc/ntp.conf /etc/ntp.conf.orig

3 Edit the NTP configuration file.
a Open /etc/ntp.conf with a text editor.
b Replace all of the lines in the file with the following lines:

# Point to the master time server
server MASTER_ADDRESS

restrict default ignore
restrict 127.0.0.1
restrict MASTER_ADDRESS mask 255.255.255.255 nomodify notrap noquery

driftfile /var/lib/ntp/drift

c Replace both instances of MASTER_ADDRESS with the IPv4 address of the host where the NTP server is running (the Control Center master host).
d Save the file and exit the editor.
4 Synchronize the clock with the master server.

ntpd -gq

5 Enable and start the NTP daemon.
a Enable the ntpd daemon.

systemctl enable ntpd

b Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot, and the following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

c Start ntpd.

systemctl start ntpd
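After ntpd starts, you can confirm that the client is tracking the master server; in the peer list, the master host's address should appear, eventually marked with an asterisk (*) when it is selected as the synchronization source:

ntpq -p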

Creating a file system for Control Center internal services

This procedure creates an XFS file system on a primary partition.

Note Perform this procedure only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble. No other resource pool hosts run Control Center internal services, so no other pool hosts need a partition for internal services data.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Identify the target primary partition for the file system to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

3 Create an XFS file system. Replace Isvcs-Partition with the path of the target primary partition:

mkfs -t xfs Isvcs-Partition

4 Create the mount point for Control Center internal services data.

mkdir -p /opt/serviced/var/isvcs

5 Add an entry to the /etc/fstab file. Replace Isvcs-Partition with the path of the primary partition used in the previous step:

echo "Isvcs-Partition \
/opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab

6 Mount the file system, and then verify it mounted correctly.

mount -a && mount | grep isvcs

Example result:

/dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Install Docker.

yum clean all && yum makecache fast
yum install --enablerepo=zenoss-mirror -y docker-engine

3 Create a symbolic link for the Docker temporary directory. Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
a Create the docker directory in /var/lib.

mkdir /var/lib/docker

b Create the link to /tmp.

ln -s /tmp /var/lib/docker/tmp

4 Create a systemd override file for the Docker service definition.
a Create the override directory.

mkdir -p /etc/systemd/system/docker.service.d

b Create the override file.

cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF

c Reload the systemd manager configuration.

systemctl daemon-reload

5 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

yum clean all && yum makecache fast
yum --enablerepo=zenoss-mirror install -y serviced

6 Create a device mapper thin pool for Docker data.
a Identify the primary partition for the thin pool to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

b Create the thin pool. Replace Path-To-Device with the path of an unused primary partition:

serviced-storage create-thin-pool docker Path-To-Device

On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
7 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"

b Add the arguments to the Docker configuration file.

echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
>> /etc/sysconfig/docker

c Start or restart Docker.

systemctl restart docker

The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.

8 Configure name resolution in containers. Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

ip addr show docker0 | grep inet

b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

--dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask

For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.

Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.

d Restart the Docker service.

systemctl restart docker

Configuring and starting Control Center

This procedure customizes key configuration variables of Control Center.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Configure Control Center as an agent of the master host. The following variable configures serviced to serve as agent:

SERVICED_AGENT
Default: 0 (false)
Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

In addition, the following lines need to be edited, to replace {{SERVICED_MASTER_IP}} with the IP address of the master host:

# SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
# SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
# SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
# SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
# SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
# SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1. The following example shows the line to change:

# SERVICED_AGENT=0

c Remove the number sign character (#) from the beginning of the line.
d Find the SERVICED_MASTER declaration, and then remove the number sign character (#) from the beginning of the line.
e Globally replace {{SERVICED_MASTER_IP}} with the IP address of the master host.

Note Remove the number sign character (#) from the beginning of each variable declaration that includes the master IP address.

f Save the file, and then close the editor.
3 Optional: Specify an alternate private subnet for Control Center, if necessary. The default private subnet may already be in use in your environment. The following variable configures serviced to use an alternate subnet:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3
The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16/20, and 192.168/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
4 Start the Control Center service (serviced).

systemctl start serviced

To monitor progress, enter the following command:

journalctl -flu serviced -o cat

To install additional resource pool hosts, return to Verifying candidate host resources on page 49.
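After the edits, the cluster-related declarations in /etc/default/serviced resemble the following lines, shown here with a hypothetical master IP address of 198.51.100.1:

SERVICED_AGENT=1
SERVICED_MASTER=0
SERVICED_ZK=198.51.100.1:2181
SERVICED_DOCKER_REGISTRY=198.51.100.1:5000
SERVICED_ENDPOINT=198.51.100.1:4979
SERVICED_LOG_ADDRESS=198.51.100.1:5042
SERVICED_LOGSTASH_ES=198.51.100.1:9100
SERVICED_STATS_PORT=198.51.100.1:8443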

ZooKeeper ensemble configuration

Note If you are installing a single-host deployment, or if your deployment includes fewer than two resource pool hosts, skip this section.

Control Center relies on Apache ZooKeeper to coordinate its services. The procedures in this section create a ZooKeeper ensemble of 3 nodes. To perform these procedures, you need a Control Center master host and a minimum of two resource pool hosts. Each resource pool host requires a separate primary partition for Control Center internal services, and each should have a static IP address. For more information about storage requirements, refer to the Zenoss Resource Manager Planning Guide.

Note Zenoss strongly recommends configuring a ZooKeeper ensemble for all production deployments. A ZooKeeper ensemble requires a minimum of 3 nodes, and 3 nodes is sufficient for most deployments. A 5-node configuration improves failover protection during maintenance windows. Ensembles larger than 5 nodes are not necessary. An odd number of nodes is recommended, and an even number of nodes is strongly discouraged.

Note The Control Center ZooKeeper service requires consistently fast storage. Ideally, the primary partition for Control Center internal services is on a separate, high-performance device that has only one primary partition.

Control Center variables for ZooKeeper

The tables in this section associate the ZooKeeper-related Control Center variables to set in /etc/default/serviced with the roles that hosts play in a Control Center cluster.

Table 4: Control Center master host

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node.
Value: 1

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Table 5: Control Center resource pool host and ZooKeeper ensemble node

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node.
Value: 2 or 3

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ISVCS_START
The list of Control Center internal services to start and run on hosts other than the master host.
Value: zookeeper

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Table 6: Control Center resource pool host only

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Configuring the master host as a ZooKeeper node

This procedure configures the Control Center master host as a member of the ZooKeeper ensemble.

Note For accuracy, this procedure constructs Control Center configuration variables in the shell and appends them to /etc/default/serviced. The last step is to move the variables from the end of the file to more appropriate locations.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Create a variable for each Control Center host to include in the ZooKeeper ensemble. The variables are used in subsequent steps.

Note Define the variables identically on the master host and on each resource pool host.

Replace Master-Host-IP with the IP address of the Control Center master host, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

node1=Master-Host-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP

Note ZooKeeper requires IP addresses for ensemble configuration.

3 Set the ZooKeeper node ID to 1.

echo "SERVICED_ISVCS_ZOOKEEPER_ID=1" >> /etc/default/serviced

4 Specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
>> /etc/default/serviced

5 Specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0. You may copy the following lines of text and paste them in your console:

q1="1@0.0.0.0:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

6 Clean up the Control Center configuration file.
a Open /etc/default/serviced with a text editor.
b Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
c Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master host.
d Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
e Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
f Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
g Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
h Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
i Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.
j Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
k Save the file, and then close the text editor.
7 Verify the ZooKeeper environment variables.

egrep '^[^#]*SERVICED' /etc/default/serviced | egrep '(_ZOO|_ZK)'
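For a hypothetical ensemble in which the master host is 198.51.100.1 and the pool hosts are 198.51.100.2 and 198.51.100.3, the verification on the master host displays lines like the following; the addresses are illustrative only:

SERVICED_ZK=198.51.100.1:2181,198.51.100.2:2181,198.51.100.3:2181
SERVICED_ISVCS_ZOOKEEPER_ID=1
SERVICED_ISVCS_ZOOKEEPER_QUORUM=1@0.0.0.0:2888:3888,2@198.51.100.2:2888:3888,3@198.51.100.3:2888:3888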

Configuring a resource pool host as a ZooKeeper node

To perform this procedure, you need a resource pool host with an XFS file system on a separate partition, created previously.

This procedure configures a ZooKeeper ensemble on a resource pool host. Repeat this procedure on each Control Center resource pool host to add to the ZooKeeper ensemble.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Create a variable for each Control Center host to include in the ZooKeeper ensemble. The variables are used in subsequent steps.

Note Define the variables identically on the master host and on each resource pool host.

Replace Master-Host-IP with the IP address of the Control Center master host, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

node1=Master-Host-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP

Note ZooKeeper requires IP addresses for ensemble configuration.

3 Set the ID of this node in the ZooKeeper ensemble.
For Pool-Host-A-IP (node2), use the following command:

echo "SERVICED_ISVCS_ZOOKEEPER_ID=2" >> /etc/default/serviced

For Pool-Host-B-IP (node3), use the following command:

echo "SERVICED_ISVCS_ZOOKEEPER_ID=3" >> /etc/default/serviced

4 Specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
>> /etc/default/serviced

5 Specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0.
For Pool-Host-A-IP (node2), use the following commands:

q1="1@${node1}:2888:3888"
q2="2@0.0.0.0:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

For Pool-Host-B-IP (node3), use the following commands:

q1="1@${node1}:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@0.0.0.0:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

6 Set the SERVICED_ISVCS_START variable, and clean up the Control Center configuration file.
a Open /etc/default/serviced with a text editor.
b Locate the SERVICED_ISVCS_START variable, and then delete all but zookeeper from its list of values.
c Remove the number sign character (#) from the beginning of the line.
d Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
e Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master host.
f Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
g Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
h Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
i Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
j Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
k Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.
l Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
m Save the file, and then close the text editor.
7 Verify the ZooKeeper environment variables.

egrep '^[^#]*SERVICED' /etc/default/serviced \
| egrep '(_ZOO|_ZK|_STA)'

8 Pull the required Control Center ZooKeeper image from the master host.
a Identify the image to pull.

serviced version | grep IsvcsImages

Example result:

IsvcsImages: [zenoss/serviced-isvcs:v40 zenoss/isvcs-zookeeper:v3]

b Pull the Control Center ZooKeeper image. Replace Isvcs-ZK-Image with the name and version number of the ZooKeeper image from the previous substep:

docker pull Isvcs-ZK-Image
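Using the example result above, the pull command would be:

docker pull zenoss/isvcs-zookeeper:v3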

Starting a ZooKeeper ensemble

This procedure starts a ZooKeeper ensemble.

The window of time for starting a ZooKeeper ensemble is relatively short. The goal of this procedure is to restart Control Center on each ensemble node at about the same time, so that each node can participate in electing the leader.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.
2 In a separate window, log in to the second node of the ZooKeeper ensemble (Pool-Host-A-IP).
3 In another separate window, log in to the third node of the ZooKeeper ensemble (Pool-Host-B-IP).
4 On all ensemble hosts, stop and start serviced.

systemctl stop serviced && systemctl start serviced

5 On the master host, check the status of the ZooKeeper ensemble.

{ echo stats; sleep 1; } | nc localhost 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-A-IP 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-B-IP 2181 | grep Mode

If nc is not available, you can use telnet with interactive ZooKeeper commands.
6 Optional: Log in to the Control Center browser interface, and then start Resource Manager and related applications, if desired. The next procedure requires stopping Resource Manager.

Updating resource pool hosts

The default configuration of resource pool hosts sets the value of the SERVICED_ZK variable to the master host only. This procedure updates the setting to include the full ZooKeeper ensemble. Perform this procedure on each resource pool host in your Control Center cluster.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Update the variable.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_ZK declaration, and then replace its value with the same value used in the ZooKeeper ensemble nodes.
c Save the file, and then close the editor.
3 Restart Control Center.

systemctl restart serviced

Adding hosts to the default resource pool

Note If you are installing a single-host deployment, skip this section.

This procedure adds one or more resource pool hosts to the default resource pool.

1 Log in to the Control Center master host as root, or as a user with superuser privileges.
2 Add a resource pool host. Replace Hostname-Or-IP with the hostname or IP address of the resource pool host to add:

serviced host add Hostname-Or-IP:4979 default

If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.
3 Repeat the preceding command for each resource pool host in your Control Center cluster.
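To confirm that each host joined the pool, list the cluster hosts from the master; the output format varies by release, but each resource pool host should appear with the pool named default:

serviced host list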

Deploying Resource Manager

This procedure adds the Resource Manager application to the list of applications that Control Center manages.

1 Log in to the master host as root, or as a user with superuser privileges.
2 Add the Zenoss.resmgr application to Control Center.

myPath=/opt/serviced/templates
serviced template add $myPath/zenoss-resmgr-*.json

On success, the serviced command returns the template ID.
3 Deploy the application. Replace Template-ID with the template identifier returned in the previous step, and replace Deployment-ID with a name for this deployment (for example, Dev or Test):

serviced template deploy Template-ID default Deployment-ID

Control Center tags the Resource Manager images in the local registry. To monitor progress, enter the following command:

journalctl -flu serviced -o cat

Control Center and Resource Manager are now installed, and Resource Manager is ready to be configured for your environment. For more information, refer to the Zenoss Resource Manager Configuration Guide.

Part II: High-availability deployments

The chapters in this part describe how to install Control Center and Resource Manager on real or virtual hosts, with or without internet access, in a high-availability deployment. The instructions include the full range of options for customizing your deployment for your environment.

Chapter 1: Creating a high-availability deployment with internet access

The procedures in this chapter create a high-availability deployment of Control Center and Resource Manager on Red Hat Enterprise Linux (RHEL) 7.1 or 7.2 hosts, or on CentOS 7.1 or 7.2 hosts. To use the procedures in this chapter, you must have a minimum of four hosts, and all of the hosts must have internet access. For more information about deploying Control Center and Resource Manager, refer to the Zenoss Resource Manager Planning Guide.

Note For optimal results, review this chapter thoroughly before starting the installation process.

Master host storage requirements

In addition to the storage required for its operating system, both Control Center master hosts in the failover cluster require the following storage areas:

- A local primary partition for Docker data, configured as a device mapper thin pool.
- A local primary partition for Control Center internal services data, formatted with the XFS file system.

Note Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.

- A local primary partition for Control Center metadata, formatted with the XFS file system.
- A local primary partition for Resource Manager data, configured as a device mapper thin pool.

Note This chapter includes procedures for configuring and formatting all required storage areas.

In addition, the primary node of the failover cluster requires a local primary partition, a remote primary partition, or a remote file server, for backups of Resource Manager data. The local or remote primary partition is formatted with the XFS file system. A remote file server must provide a file system that is compatible with XFS.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

For storage sizing information, refer to the Zenoss Resource Manager Planning Guide.
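As a concrete illustration, a hypothetical master node that satisfies these requirements might present a layout like the following. The device names, sizes, and the choice of dedicating a separate drive to internal services are assumptions for the example, not requirements:

NAME   SIZE   PURPOSE
sda     60G   operating system
sdb1    50G   Docker data (device mapper thin pool)
sdc1    30G   Control Center internal services data (XFS)
sdd1    10G   Control Center metadata (XFS)
sde1   300G   Resource Manager data (device mapper thin pool)
sdf1   150G   backups (XFS; primary node only)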

Key variables used in this chapter

The following tables associate important features of a high-availability deployment with the variables used in this chapter.

Feature                                                        Primary node            Secondary node
Public IP address of the node (static; known to
all machines in the Control Center cluster)                    Primary-Public-IP       Secondary-Public-IP
Public hostname of the node (returned by uname -n;
resolves to the public IP address)                             Primary-Public-Name     Secondary-Public-Name
Private IP address of the node (static;
dual-NIC systems only)                                         Primary-Private-IP      Secondary-Private-IP
Private hostname of the node (resolves to the
private IP address; dual-NIC systems only)                     Primary-Private-Name    Secondary-Private-Name

Feature                                                                    Variable name
Virtual IP address of the high-availability cluster
(static; known enterprise-wide)                                            HA-Virtual-IP
Virtual hostname of the high-availability cluster (known enterprise-wide)  HA-Virtual-Name
Public IP address of resource pool host A (static; for ZooKeeper ensemble) Pool-Host-A-IP
Public IP address of resource pool host B (static; for ZooKeeper ensemble) Pool-Host-B-IP
Primary partition for Docker data (not mirrored)                           Docker-Partition
Primary partition for Control Center internal services data (mirrored)     Isvcs-Partition
Primary partition for Control Center metadata (mirrored)                   Metadata-Partition
Primary partition for Control Center application data (mirrored)           App-Data-Partition
Primary partition for Control Center backups (not mirrored)                Backups-Partition

Control Center on the master nodes

A high-availability deployment features two Control Center master nodes that are configured for failover. One host is the primary node, and the other host is the secondary node. Their configurations differ somewhat, but are mostly the same.

Perform all of the procedures in this section on the primary node and on the secondary node.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center master host. Perform this procedure on the primary node and on the secondary node.

1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.

uname -m

If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

hostid

4 Determine whether the available, unused storage is sufficient.
a Display the available storage devices.

lsblk --output=NAME,SIZE

b Compare the available storage with the amount required for a Control Center master host. For more information, refer to the Zenoss Resource Manager Planning Guide.
5 Determine whether the available memory and swap is sufficient.
a Display the available memory.

free -h

b Compare the available memory with the amount required for a master host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Update the operating system, if necessary.
a Determine which release is installed.

cat /etc/redhat-release

If the result includes 7.0, perform the following substeps.
b Update the operating system.

yum update -y

c Restart the system.

reboot
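Because both nodes must have unique host identifiers, it can save a second login to compare them in one pass; a minimal sketch, assuming passwordless SSH access as root and the hypothetical hostnames shown:

for h in primary.example.com secondary.example.com; do
    echo -n "$h: "
    # hostid prints the numeric host identifier; the two values must differ
    ssh root@$h hostid
done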

Preparing the master host operating system

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center master host. Perform this procedure on the primary node and on the secondary node.

1 Log in to the host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.

grep 127.0.0.1 /etc/hosts | grep localhost

If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.

echo "127.0.0.1 localhost" >> /etc/hosts

3 Add the required hostnames and IP addresses of both the primary and the secondary node to the /etc/hosts file.
For a dual-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:

echo "Primary-Public-IP Primary-Public-Name.example.com \
  Primary-Public-Name" >> /etc/hosts
echo "Primary-Private-IP Primary-Private-Name.example.com \
  Primary-Private-Name" >> /etc/hosts
echo "Secondary-Public-IP Secondary-Public-Name.example.com \
  Secondary-Public-Name" >> /etc/hosts
echo "Secondary-Private-IP Secondary-Private-Name.example.com \
  Secondary-Private-Name" >> /etc/hosts

For a single-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:

echo "Primary-Public-IP Primary-Public-Name.example.com \
  Primary-Public-Name" >> /etc/hosts
echo "Secondary-Public-IP Secondary-Public-Name.example.com \
  Secondary-Public-Name" >> /etc/hosts

4 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.

systemctl status firewalld.service

If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.

systemctl stop firewalld && systemctl disable firewalld

On success, the preceding commands display messages similar to the following example:

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

5 Optional: Enable persistent storage for log files, if desired.
By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

mkdir -p /var/log/journal && systemctl restart systemd-journald

6 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.

test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.

grep '^SELINUX=' /etc/selinux/config

7 Enable and start the Dnsmasq package.

systemctl enable dnsmasq && systemctl start dnsmasq

8 Install and configure the NTP package.
a Install the package.

yum install -y ntp

b Set the system time.

ntpd -gq

c Enable the ntpd daemon.

systemctl enable ntpd

d Configure ntpd to start when the system starts.
Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot. The following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

9 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations.

yum install -y nmap-ncat

10 Install the Zenoss repository package.
a Install the package.

rpm -ivh http://get.zenoss.io/yum/zenoss-repo-1-1.x86_64.rpm

b Clean out the yum cache directory.

yum clean all

11 Remove any file system signature from the required primary partitions. Replace each variable name with the path of the primary partition designated for each storage area:

wipefs -a Docker-Partition
wipefs -a Isvcs-Partition
wipefs -a Metadata-Partition
wipefs -a App-Data-Partition
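To confirm that the signatures are gone, one option is to list the devices again; in the expected state, the FSTYPE column is empty for each of the four partitions (a quick check, reusing lsblk as in the earlier steps):

lsblk --output=NAME,SIZE,TYPE,FSTYPE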

12 Add mount points for XFS file systems, which are created in subsequent steps.

mkdir -p /opt/serviced/var/isvcs /opt/serviced/var/volumes

13 Reboot the host.

reboot

Configuring a storage area for backups

The Control Center master host requires local or remote storage space for backups of Control Center data. This procedure includes steps to create an XFS file system on a primary partition, if necessary, and steps to mount a file system for backups. For more information about backups, refer to the Zenoss Resource Manager Planning Guide.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 Optional: Remove any file system signature from the primary partition for Control Center backups, if necessary. If you are using a remote file server for backups, skip this step. Replace Backups-Partition with the path of the primary partition designated for Control Center backups:

wipefs -a Backups-Partition

3 Optional: Create an XFS file system, if necessary. Skip this step if you are using a remote file server. Replace Backups-Partition with the path of the primary partition designated for Control Center backups:

mkfs.xfs Backups-Partition

4 Create an entry in the /etc/fstab file. Replace File-System-Specification with one of the following values:
- the path of Backups-Partition, used in the previous step
- the remote server specification

echo "File-System-Specification \
  /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab

5 Create the mount point for backup data.

mkdir -p /opt/serviced/var/backups

6 Mount the file system, and then verify it mounted correctly.

mount -a && mount | grep backups

Example result:

/dev/sdb3 on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
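A quick way to double-check the mount and its remaining capacity (the sizes your system reports will differ):

df -h /opt/serviced/var/backups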

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center. Perform this procedure on the primary node and on the secondary node.

1 Log in to the host as root, or as a user with superuser privileges.
2 Install Docker 1.9.0, and then disable accidental upgrades.
a Add the Docker repository to the host's repository list.

cat > /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

b Install Docker 1.9.0.

yum clean all && yum makecache fast
yum install -y docker-engine

c Open /etc/yum.repos.d/docker.repo with a text editor.
d Change the value of the enabled key from 1 to 0.
e Save the file and close the text editor.
3 Create a symbolic link for the Docker temporary directory.
Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
a Create the docker directory in /var/lib.

mkdir /var/lib/docker

b Create the link to /tmp.

ln -s /tmp /var/lib/docker/tmp

4 Create a systemd override file for the Docker service definition.
a Create the override directory.

mkdir -p /etc/systemd/system/docker.service.d

b Create the override file.

cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF

c Reload the systemd manager configuration.

systemctl daemon-reload
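To confirm that systemd picked up the override, one option is to display the unit together with its drop-in files; the docker.conf content created above should appear after the stock unit definition:

systemctl cat docker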

5 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

yum clean all && yum makecache fast
yum --enablerepo=zenoss-stable install -y serviced

6 Disable automatic startup of Control Center by systemd. The cluster management software controls the serviced service.

systemctl disable serviced

7 Create a device mapper thin pool for Docker data. Replace Docker-Partition with the path of the primary partition designated for Docker data:

serviced-storage create-thin-pool docker Docker-Partition

On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
8 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"

b Add the arguments to the Docker configuration file.

echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
  >> /etc/sysconfig/docker

c Start or restart Docker.

systemctl restart docker

The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.
9 Authenticate to the Docker Hub repository. Replace USER and EMAIL with the values associated with your Docker Hub account.

docker login -u USER -e EMAIL

The docker command prompts you for your Docker Hub password, and saves a hash of your credentials in the $HOME/.dockercfg file (root user account).
10 Configure name resolution in containers.
Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.

a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

ip addr show docker0 | grep inet

b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

--dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask

For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.

Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.

d Restart the Docker service.

systemctl restart docker

11 Pull the required Control Center images from Docker Hub, and then stop and disable the Docker service.
a Identify the images to pull.

serviced version | grep IsvcsImages

Example result:

IsvcsImages: [zenoss/serviced-isvcs:v40 zenoss/isvcs-zookeeper:v3]

b Pull Control Center images. Replace Isvcs-Image-Name with one of the images named in the output of the previous substep:

docker pull Isvcs-Image-Name

Repeat the command for each required image.
c Stop and disable the Docker service. The cluster management software controls the Docker service.

systemctl stop docker && systemctl disable docker

Installing Resource Manager

This procedure installs Resource Manager and configures the NFS server. Perform this procedure on the primary node and on the secondary node.

1 Log in to the host as root, or as a user with superuser privileges.
2 Install Resource Manager.

yum --enablerepo=zenoss-stable install -y zenoss-resmgr-service
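The Resource Manager package places an application template on the host, which a later procedure deploys from /opt/serviced/templates. To confirm that it is present (the exact file name varies by release):

ls /opt/serviced/templates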

3 Configure and disable the NFS service.
Currently, an unresolved issue prevents the NFS server from starting correctly. The following commands provide a workaround to ensure that it does.
a Open /lib/systemd/system/nfs-server.service with a text editor.
b Change rpcbind.target to rpcbind.service on the following line:

Requires= network.target proc-fs-nfsd.mount rpcbind.target

c Reload the systemd manager configuration.

systemctl daemon-reload

d Stop and disable the NFS service. The cluster management software controls the NFS service.

systemctl stop nfs && systemctl disable nfs

Configuring Control Center

This procedure customizes key configuration variables of Control Center. Perform this procedure on the primary node and on the secondary node.

1 Log in to the host as root, or as a user with superuser privileges.
2 Configure Control Center to serve as both master and agent, and to use the virtual IP address of the high-availability cluster.
The following variables define the roles serviced can assume:

SERVICED_AGENT
Default: 0 (false)
Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

In addition, replace {{SERVICED_MASTER_IP}} with HA-Virtual-IP, the virtual IP address of the high-availability cluster, in the following lines:

# SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
# SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
# SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
# SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
# SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
# SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_AGENT declaration, and then change the value from 0 to 1.
c Remove the number sign character (#) from the beginning of the line.

d Locate the SERVICED_MASTER declaration, and then change the value from 0 to 1.
e Remove the number sign character (#) from the beginning of the line.
f Globally replace {{SERVICED_MASTER_IP}} with the virtual IP address of the high-availability cluster.

Note Remove the number sign character (#) from the beginning of each variable declaration that includes the IP address.

g Save the file, and then close the editor.
3 Configure Control Center to send its responses to the virtual IP address of the high-availability cluster.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_OUTBOUND_IP declaration, and then change its default value to HA-Virtual-IP. Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster:

SERVICED_OUTBOUND_IP=HA-Virtual-IP

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
4 Optional: Specify an alternate private network for Control Center, if necessary.
Control Center requires a 16-bit, private IPv4 network for virtual IP addresses, independent of the private network used in a dual-NIC DRBD configuration. The default network is 10.3/16. If the default network is already in use in your environment, you may select any valid IPv4 16-bit network.
The following variable configures serviced to use an alternate network:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3
The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16/20, and 192.168/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.

User access control

Control Center provides a browser interface and a command-line interface. To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. (Pluggable Authentication Modules (PAM) is supported.) In addition, users must be members of the Control Center administrative group, which by default is the system group, wheel. To enhance security, you may change the administrative group from wheel to any non-system group.

To use the Control Center command-line interface, users must have login accounts on the Control Center master host, and be members of the docker user group. Members of the wheel group, including root, are members of the docker group.

Adding users to the default administrative group

This procedure adds users to the default administrative group of Control Center, wheel. Performing this procedure enables users with superuser privileges to gain access to the Control Center browser interface.

Note Perform this procedure or the next procedure, but not both.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add users to the system group, wheel. Replace User with the name of a login account on the master host.

usermod -aG wheel User

Repeat the preceding command for each user to add.

Note For information about using Pluggable Authentication Modules (PAM), refer to your operating system documentation.

Configuring a regular group as the Control Center administrative group

This procedure changes the default administrative group of Control Center from wheel to a non-system group.

Note Perform this procedure or the previous procedure, but not both.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Create a variable for the group to designate as the administrative group. In this example, the name of the group to create is serviced. You may choose any name or use an existing group.

GROUP=serviced

3 Create a new group, if necessary.

groupadd $GROUP

4 Add one or more existing users to the new administrative group. Replace User with the name of a login account on the host:

usermod -aG $GROUP User

Repeat the preceding command for each user to add.
5 Specify the new administrative group in the serviced configuration file.
The following variable specifies the administrative group:

SERVICED_ADMIN_GROUP
Default: wheel
The name of the Linux group on the Control Center master host whose members are authorized to use the Control Center browser interface. You may replace the default group with a group that does not have superuser privileges.

a Open /etc/default/serviced in a text editor.

b Find the SERVICED_ADMIN_GROUP declaration, and then change the value from wheel to the name of the group you chose earlier. The following example shows the line to change:

# SERVICED_ADMIN_GROUP=wheel

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
6 Optional: Prevent root users and members of the wheel group from gaining access to the Control Center browser interface, if desired.
The following variable controls privileged logins:

SERVICED_ALLOW_ROOT_LOGIN
Default: 1 (true)
Determines whether root, or members of the wheel group, may gain access to the Control Center browser interface.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_ALLOW_ROOT_LOGIN declaration, and then change the value from 1 to 0. The following example shows the line to change:

# SERVICED_ALLOW_ROOT_LOGIN=1

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.

Enabling use of the command-line interface

This procedure enables users to perform administrative tasks with the Control Center command-line interface by adding individual users to the docker group.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add users to the Docker group, docker. Replace User with the name of a login account on the host.

usermod -aG docker User

Repeat the preceding command for each user to add.

Configuring Logical Volume Manager

Control Center application data is managed by a device mapper thin pool created with Logical Volume Manager (LVM). This procedure adjusts the LVM configuration for mirroring by DRBD.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Edit the LVM configuration file.
a Open /etc/lvm/lvm.conf with a text editor.
b Exclude the partition for Control Center application data. The line to edit is in the devices section.

Replace App-Data-Partition with the path of the primary partition designated for Control Center application data.

filter = ["r|App-Data-Partition|"]

c Disable caching and the metadata daemon. Set the value of the write_cache_state and use_lvmetad keys to 0.

write_cache_state = 0
use_lvmetad = 0

d Save the file and close the editor.
3 Delete any stale cache entries.

rm -f /etc/lvm/cache/.cache

4 Restart the host.

reboot

Installing DRBD

This procedure installs Distributed Replicated Block Device (DRBD) packages from the RPM repository for enterprise Linux packages, ELRepo.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add the ELRepo repository to the list of repositories.

url=https://www.elrepo.org
rpm --import $url/RPM-GPG-KEY-elrepo.org
rpm -Uvh $url/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum clean all

3 Install DRBD packages.

yum install -y drbd84-utils kmod-drbd84

DRBD configuration assumptions

The following list identifies the assumptions that inform the DRBD resource definition for Control Center:

- Each node has either one or two NICs. In dual-NIC hosts the private IP/hostnames are reserved for DRBD traffic. This is the recommended configuration, which enables real-time writes for disk synchronization between the active and passive nodes, and no contention with application traffic. However, it is possible to use DRBD with a single NIC.
- The default port number for DRBD traffic is 7789.
- All volumes should synchronize and fail over together. This is accomplished by creating a single resource definition.
- DRBD stores its metadata on each volume (meta-disk/internal), so the total amount of space reported on the logical device /dev/drbdn is always less than the amount of physical space available on the underlying primary partition.

- The syncer/rate key controls the rate, in bytes per second, at which DRBD synchronizes disks. Set the rate to 30% of the available replication bandwidth, which is the slowest of either the I/O subsystem or the network interface. The following example assumes 100MB/s available for total replication bandwidth (0.30 * 100MB/s = 30MB/s).

Configuring DRBD

This procedure configures DRBD for deployments with either one or two NICs in each node.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, identify the primary partitions to use.

lsblk --output=NAME,SIZE

Record the paths of the primary partitions in the following table. The information is needed in subsequent steps and procedures.

Node        Isvcs-Partition        Metadata-Partition        App-Data-Partition

4 On both nodes, edit the DRBD configuration file.
a Open /etc/drbd.d/global_common.conf with a text editor.
b Add the following values to the global and common/net sections of the file.

global {
  usage-count yes;
}
common {
  net {
    protocol C;
  }
}

c Save the file, and then close the editor.
5 On both nodes, create a resource definition for Control Center.
a Open /etc/drbd.d/serviced-dfs.res with a text editor.
b For a dual-NIC system, add the following content to the file. Replace the variables in the content with the actual values for your environment:

resource serviced-dfs {
  volume 0 {
    device /dev/drbd0;
    disk Isvcs-Partition;
    meta-disk internal;
  }
  volume 1 {
    device /dev/drbd1;
    disk Metadata-Partition;
    meta-disk internal;
  }
  volume 2 {
    device /dev/drbd2;
    disk App-Data-Partition;
    meta-disk internal;

  }
  syncer {
    rate 30M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on Primary-Public-IP {
    address Primary-Private-IP:7789;
  }
  on Secondary-Public-IP {
    address Secondary-Private-IP:7789;
  }
}

c For a single-NIC system, add the following content to the file. Replace the variables in the content with the actual values for your environment:

resource serviced-dfs {
  volume 0 {
    device /dev/drbd0;
    disk Isvcs-Partition;
    meta-disk internal;
  }
  volume 1 {
    device /dev/drbd1;
    disk Metadata-Partition;
    meta-disk internal;
  }
  volume 2 {
    device /dev/drbd2;
    disk App-Data-Partition;
    meta-disk internal;
  }
  syncer {
    rate 30M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on Primary-Public-IP {
    address Primary-Public-IP:7789;
  }
  on Secondary-Public-IP {
    address Secondary-Public-IP:7789;
  }
}

d Save the file, and then close the editor.
6 On both nodes, create device metadata and enable the new DRBD resource.

drbdadm create-md all && drbdadm up all
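At this point the resource is attached and connected but not yet synchronized. On either node, drbd-overview typically reports each volume as Secondary/Secondary and Inconsistent/Inconsistent until the initial synchronization in the next procedure runs; a sketch of the expected state:

drbd-overview

0:serviced-dfs/0 Connected Secondary/Secondary Inconsistent/Inconsistent
1:serviced-dfs/1 Connected Secondary/Secondary Inconsistent/Inconsistent
2:serviced-dfs/2 Connected Secondary/Secondary Inconsistent/Inconsistent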

Initializing DRBD

Perform this procedure to initialize DRBD and the mirrored storage areas.

Note Unlike the preceding procedures, most of the steps in this procedure are performed on the primary node only.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Synchronize the storage areas of both nodes.
a Start the synchronization.

drbdadm primary --force serviced-dfs

The command may return right away, while the synchronization process continues running in the background. Depending on the sizes of the partitions, this process can take several hours.
b Monitor the progress of the synchronization.

drbd-overview

Do not proceed until the status is UpToDate/UpToDate, as in the following example output:

0:serviced-dfs/0 Connected Primary/Secondary UpToDate/UpToDate
1:serviced-dfs/1 Connected Primary/Secondary UpToDate/UpToDate
2:serviced-dfs/2 Connected Primary/Secondary UpToDate/UpToDate

The Primary/Secondary values show that the command was run on the primary node; otherwise, the values are Secondary/Primary. Likewise, the first value in the UpToDate/UpToDate field is the status of the node on which the command is run, and the second value is the status of the remote node.
3 Format the partitions for Control Center internal services data and for Control Center metadata. The following commands use the paths of the DRBD devices defined previously, not the paths of the primary partitions.

mkfs.xfs /dev/drbd0
mkfs.xfs /dev/drbd1

The commands create XFS file systems on the primary node, and DRBD mirrors the file systems to the secondary node.
4 Create a device mapper thin pool for Control Center application data. Likewise, this command uses the path of the DRBD device defined previously.
a Create a variable for 50% of the space available on the DRBD device. The thin pool stores application data and snapshots of the data. You can add storage to the pool at any time. Replace Half-Of-Available-Space with 50% of the space available on the DRBD device, in gigabytes. Include the symbol for gigabytes (G) after the numeric value.

myFifty=Half-Of-Available-SpaceG

b Create the thin pool.

serviced-storage create-thin-pool -o dm.basesize=$myFifty \
  serviced /dev/drbd2 -v

On success, DRBD mirrors the device mapper thin pool to the secondary node.
5 Configure Control Center with the name of the new thin pool.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_FS_TYPE declaration.
c Remove the number sign character (#) from the beginning of the line.

d Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE.

SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool

e Save the file, and then close the editor.
6 Replicate the Control Center configuration on the secondary node.
a In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
b Open /etc/default/serviced in a text editor.
c Locate the SERVICED_FS_TYPE declaration.
d Remove the number sign character (#) from the beginning of the line.
e Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE. Replace Thin-Pool-Name with the name of the thin pool created previously:

SERVICED_DM_THINPOOLDEV=Thin-Pool-Name

f Save the file, and then close the editor.
7 On the primary node, monitor the progress of the synchronization.

drbd-overview

Note Do not proceed until synchronization is complete.

8 On both nodes, stop DRBD.

drbdadm down all

Cluster management software

Pacemaker is an open source cluster resource manager, and Corosync is a cluster infrastructure application for communication and membership services. The Pacemaker/Corosync daemon (pcs.d) communicates across nodes in the cluster. When pcs.d is installed, started, and configured, the majority of PCS commands can be run on either node in the cluster.

Installing and configuring the cluster management software

Perform this procedure to install and configure the cluster management software.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, install the cluster management software.

yum install -y corosync pacemaker pcs

4 On both nodes, install the Pacemaker resource agent for Control Center. Pacemaker uses resource agents (scripts) to implement a standardized interface for managing arbitrary resources in a cluster. Zenoss provides a Pacemaker resource agent to manage the Control Center master host in a high-availability cluster.

yum --enablerepo=zenoss-stable install -y serviced-resource-agents

5 On both nodes, start and enable the PCS daemon.

systemctl start pcsd.service && systemctl enable pcsd.service

6 On both nodes, set the password of the hacluster account. The Pacemaker package creates the hacluster user account, which must have the same password on both nodes.

passwd hacluster

Creating the cluster in standby mode

Perform this procedure to create the high-availability cluster in standby mode.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Authenticate the nodes.

pcs cluster auth Primary-Public-Name Secondary-Public-Name

When prompted, enter the password of the hacluster account.
3 Generate and synchronize an initial (empty) cluster definition.

pcs cluster setup --name serviced-ha \
  Primary-Public-Name Secondary-Public-Name

4 Start the PCS management agents on both nodes in the cluster. The cluster definition is empty, so starting the cluster management agents has no side effects.

pcs cluster start --all

The cluster management agents start, on both nodes.
5 Check the status.

pcs cluster status

The expected result is Online, for both nodes.
6 Put the cluster in standby mode. Pacemaker begins monitoring and managing the different resources as they are defined, which can cause problems; standby mode prevents the problems.

pcs cluster standby --all

7 Configure cluster services to start when the node starts. For more information about cluster startup options, refer to the Pacemaker documentation.

systemctl enable corosync; systemctl enable pacemaker

8 Replicate the configuration on the secondary node.
a In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
b Configure cluster services to start when the node starts.

systemctl enable corosync; systemctl enable pacemaker
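Before defining resources, it can be worth confirming that both nodes are in standby; one option (output formatting varies by pcs version) is to list the node states and check that both hostnames appear in the Standby list:

pcs status nodes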

Property and resource options

Pacemaker provides options to support cluster configurations from small and simple to large and complex. The following list identifies the options that support the two-node, active/passive configuration for Control Center.

resource-stickiness=100
Keep all resources bound to the same host.

no-quorum-policy=ignore
Pacemaker supports the notion of a voting quorum for clusters of three or more nodes. However, with just two nodes, if one fails, it does not make sense to have a quorum of one, therefore we disable quorums.

stonith-enabled=false
Fence or isolate a failed node. (The string "stonith" is an acronym for "shoot the other node in the head".) Set this option to false only during the initial setup and testing period. For production use, set it to true. For more information about fencing, refer to the Zenoss Resource Manager Planning Guide.

Setting resource and property defaults

Perform this procedure to set resource and property defaults for the high-availability cluster.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Set resource and property defaults.

pcs resource defaults resource-stickiness=100
pcs property set no-quorum-policy=ignore
pcs property set stonith-enabled=false

3 Check resource defaults.

pcs resource defaults

Example result:

resource-stickiness: 100

4 Check property defaults.

pcs property

Example result:

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: serviced-ha
 dc-version: efd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

Defining resources

This procedure defines the following logical resources required for the cluster:

- DRBD Master/Secondary DFS set
- Two mirrored file systems running on top of DRBD:
  /opt/serviced/var/isvcs
  /opt/serviced/var/volumes
- serviced logical volume group running on top of DRBD
- Manage serviced storage
- The floating virtual IP address of the cluster (HA-Virtual-IP), which the management software assigns to the active node
- Docker
- NFS
- Control Center

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 Define a resource for the DRBD device, and a clone of that resource to act as the master.
a On the primary node, define a resource for the DRBD device.

pcs resource create DFS ocf:linbit:drbd \
  drbd_resource=serviced-dfs \
  op monitor interval=30s role=Master \
  op monitor interval=60s role=Slave

b On the primary node, define a clone of that resource to act as the master.

pcs resource master DFSMaster DFS \
  master-max=1 master-node-max=1 \
  clone-max=2 clone-node-max=1 notify=true

For a master/slave resource, Pacemaker requires separate monitoring intervals for the different roles. In this case, Pacemaker checks the master every 30 seconds and the slave every 60 seconds.
4 Define the file systems that are mounted on the DRBD devices.
a On the primary node, define a resource for Control Center internal services data.

pcs resource create serviced-isvcs Filesystem \
  device=/dev/drbd/by-res/serviced-dfs/0 \
  directory=/opt/serviced/var/isvcs fstype=xfs

b On the primary node, define a resource for Control Center metadata.

pcs resource create serviced-volumes Filesystem \
  device=/dev/drbd/by-res/serviced-dfs/1 \
  directory=/opt/serviced/var/volumes fstype=xfs

In the preceding definitions, serviced-dfs is the name of the DRBD resource defined previously, in /etc/drbd.d/serviced-dfs.res.
5 On the primary node, define the logical volume for serviced that is backed by a DRBD device.

pcs resource create serviced-lvm ocf:heartbeat:LVM volgrpname=serviced

6 On the primary node, define the storage resource for serviced, to ensure that the device mapper device is deactivated and unmounted properly.

pcs resource create serviced-storage ocf:zenoss:serviced-storage

7 On the primary node, define the resource that represents the floating virtual IP address of the cluster.
For dual-NIC deployments, the definition includes the nic key-value pair, which specifies the name of the network interface that is used for all traffic except the private DRBD traffic between the primary and secondary nodes. For single-NIC deployments, omit the nic key-value pair.

For dual-NIC deployments, replace HA-Virtual-IP with the floating virtual IP address of the cluster, and replace HA-Virtual-IP-NIC with the name of the network interface that is bound to HA-Virtual-IP:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
  ip=HA-Virtual-IP nic=HA-Virtual-IP-NIC \
  cidr_netmask=32 op monitor interval=30s

For single-NIC deployments, replace HA-Virtual-IP with the floating virtual IP address of the cluster:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
  ip=HA-Virtual-IP cidr_netmask=32 op monitor interval=30s

8 Define the Docker resource.
a On the primary node, define the resource.

pcs resource create docker systemd:docker

b On both nodes, ensure that the automatic startup of Docker by systemd is disabled.

systemctl stop docker && systemctl disable docker

9 Define the NFS resource. Control Center uses NFS to share configuration in a multi-host deployment, and failover will not work properly if NFS is not stopped on the failed node.
a On the primary node, define the resource.

pcs resource create nfs systemd:nfs

b On the primary node, disable Pacemaker monitoring of NFS health. During normal operations, Control Center occasionally stops and restarts NFS, which could be misinterpreted by Pacemaker and trigger an unwanted failover.

pcs resource op remove nfs monitor interval=60s
pcs resource op add nfs monitor interval=0s

c On both nodes, ensure that the automatic startup of NFS by systemd is disabled.

systemctl stop nfs && systemctl disable nfs

10 Define the Control Center resource.
a On the primary node, define the resource.

pcs resource create serviced ocf:zenoss:serviced

b On both nodes, ensure that the automatic startup of serviced by systemd is disabled.

systemctl stop serviced && systemctl disable serviced

Pacemaker uses the default timeouts defined by the Pacemaker resource agent for Control Center to decide if serviced is unable to start or shut down correctly. In current versions of the Pacemaker resource agent for Control Center, the default values for the start and stop timeouts are 360 and 130 seconds, respectively.

The default startup and shutdown timeouts are based on the worst-case scenario. In practice, Control Center typically starts and stops in much less time. However, this does not mean that you should decrease these timeouts. There are potential edge cases, especially for startup, where Control Center may take longer than usual to start or stop. If the start/stop timeouts for Pacemaker are set too low, and Control Center encounters one of those edge cases, then Pacemaker takes unnecessary or incorrect actions. For example, if the startup timeout is artificially set too low, 2.5 minutes for example, and Control Center startup encounters an unusual case where it requires at least 3 minutes to start, then Pacemaker initiates failover prematurely.

Defining the Control Center resource group

The resources in a resource group are started in the order they appear in the group, and stopped in the reverse order they appear in the group. The start order is:

1 Mount the file systems (serviced-isvcs and serviced-volumes).
2 Start the serviced logical volume.
3 Manage serviced storage.
4 Enable the virtual IP address of the cluster.
5 Start Docker.
6 Start NFS.
7 Start Control Center.

In the event of a failover, Pacemaker stops the resources on the failed node in the reverse order they are defined before starting the resource group on the standby node.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Create the Control Center resource group.

pcs resource group add serviced-group \
  serviced-isvcs serviced-volumes \
  serviced-lvm serviced-storage \
  VirtualIP docker nfs \
  serviced

3 Define constraints for the Control Center resource group. Pacemaker resource constraints control when and where resources are deployed in a cluster.
a Ensure that serviced-group runs on the same node as DFSMaster.

pcs constraint colocation add serviced-group with DFSMaster \
  INFINITY with-rsc-role=Master

b Ensure that serviced-group is only started after DFSMaster is started.

pcs constraint order promote DFSMaster then \
  start serviced-group

Verification procedures

The cluster is created in standby mode while various configurations are created. Perform the procedures in the following sections to review the configurations and make adjustments as necessary.

Verifying the DRBD configuration

This procedure reviews the DRBD configuration.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On the primary node, display the full DRBD configuration.

drbdadm dump

The result should be consistent with the configuration created previously. For more information, see DRBD configuration assumptions, earlier in this chapter.
4 On the primary node, display the synchronization status of mirrored storage areas.

drbd-overview

Do not proceed until the synchronization is complete. The process is complete when the status of the devices is UpToDate/UpToDate.
5 On both nodes, stop DRBD.

drbdadm down all

Verifying the Pacemaker configuration

This procedure reviews the resource and property defaults for Pacemaker.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Check resource defaults.

pcs resource defaults

Example result:

resource-stickiness: 100

3 Check property defaults.

pcs property

Example result:

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: serviced-ha
 dc-version: efd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

Note Set the stonith-enabled option to false only during the initial setup and testing period. For production use, set it to true. For more information about fencing, refer to the Zenoss Resource Manager Planning Guide.

4 Review the resource constraints.

The ordering constraint should show that serviced-group starts after DFSMaster (the DRBD master). The colocation constraint should show that the serviced-group resource and DFSMaster are on the same active cluster node.

pcs constraint

Example result:

Location Constraints:
Ordering Constraints:
  promote DFSMaster then start serviced-group (kind:Mandatory)
Colocation Constraints:
  serviced-group with DFSMaster (score:INFINITY) (with-rsc-role:Master)

5 Review the ordering of the serviced-group resource group.

pcs resource show --full

The resources in a resource group are started in the order they appear in the group, and stopped in the reverse order they appear in the group. The correct start order is:

1 serviced-isvcs
2 serviced-volumes
3 serviced-lvm
4 serviced-storage
5 VirtualIP
6 docker
7 nfs
8 serviced

Verifying the Control Center configuration

This procedure verifies that the Control Center configuration is identical on both nodes.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, compute the checksum of the Control Center configuration file.

cksum /etc/default/serviced

If the result is identical on both nodes, the configurations are identical. Do not perform the next step.
If the result is not identical on both nodes, there may be a difference in their configurations; proceed to the next step.
4 Optional: On both nodes, display the customized variables, if necessary.

egrep '^[^#]*SERVICED' /etc/default/serviced | sort

Example result:

SERVICED_AGENT=1
SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool
SERVICED_DOCKER_REGISTRY=HA-Virtual-IP:5000
SERVICED_ENDPOINT=HA-Virtual-IP:4979
SERVICED_FS_TYPE=devicemapper

SERVICED_LOG_ADDRESS=HA-Virtual-IP:5042
SERVICED_LOGSTASH_ES=HA-Virtual-IP:9100
SERVICED_MASTER=1
SERVICED_OUTBOUND_IP=HA-Virtual-IP
SERVICED_STATS_PORT=HA-Virtual-IP:8443
SERVICED_ZK=HA-Virtual-IP:2181

Note There may only be insignificant differences between the files, such as an extra space at the beginning of a variable definition.

Verifying cluster startup

This procedure verifies the initial configuration by attempting to start the resources on one node only. With the other node in standby mode, Pacemaker does not automatically fail over to the other node.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On the primary node, determine which node is the primary DRBD node.

pcs status

Example result:

Cluster name: serviced-ha
Last updated: Mon Feb 22 11:37
Last change: Mon Feb 22 11:35 by root via crm_attribute on Secondary-Public-Name
Stack: corosync
Current DC: Primary-Public-Name (version efd) - partition with quorum
2 nodes and 10 resources configured

Node Primary-Public-Name: standby
Node Secondary-Public-Name: standby

Full list of resources:

Master/Slave Set: DFSMaster [DFS]
    Stopped: [ Primary-Public-Name Secondary-Public-Name ]
Resource Group: serviced-group
    serviced-isvcs   (ocf::heartbeat:Filesystem):    Stopped
    serviced-volumes (ocf::heartbeat:Filesystem):    Stopped
    serviced-lvm     (ocf::heartbeat:LVM):           Stopped
    serviced-storage (ocf::zenoss:serviced-storage): Stopped
    VirtualIP        (ocf::heartbeat:IPaddr2):       Stopped
    docker           (systemd:docker):               Stopped
    nfs              (systemd:nfs):                  Stopped
    serviced         (ocf::zenoss:serviced):         Stopped

PCSD Status:
  Primary-Public-Name: Online
  Secondary-Public-Name: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/enabled
  pcsd: active/enabled

The line that begins with Current DC identifies the primary node. Review all of the command output for errors.
4 Start DRBD.
a On the secondary node, enter the following command:

drbdadm up all

b On the primary node, enter the following commands:

drbdadm up all && drbdadm primary serviced-dfs

5 Start cluster resources. You can run pcs commands on either node.

pcs cluster unstandby Primary-Public-Name

6 Monitor the status of cluster resources.

watch pcs status

Monitor the status until all resources report Started. Resolve any issues before continuing.

Verifying cluster failover

This procedure simulates a failover.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Enable the DRBD secondary node.
a Take the secondary node out of standby mode. Replace Secondary-Public-Name with the public hostname of the secondary node:

pcs cluster unstandby Secondary-Public-Name

b Monitor the status of the secondary node.

pcs status

Do not continue until the status of the secondary node is Online.
3 Verify that DRBD has completely synchronized all three volumes on the secondary node.

drbd-overview

Example result:

0:serviced-dfs/0 Connected Primary/Secondary UpToDate/UpToDate
1:serviced-dfs/1 Connected Primary/Secondary UpToDate/UpToDate
2:serviced-dfs/2 Connected Primary/Secondary UpToDate/UpToDate

4 Force a failover. Pacemaker initiates a failover when the primary node is put in standby mode. Replace Primary-Public-Name with the public hostname of the primary node:

pcs cluster standby Primary-Public-Name

5 Monitor the cluster status.

pcs status

Repeat the preceding command until all resources report a status of Started. Resolve any issues before continuing.
6 Restore the cluster. Replace Primary-Public-Name with the public hostname of the primary node:

pcs cluster unstandby Primary-Public-Name

Creating new resource pools

This procedure creates a new resource pool for the Control Center master nodes, and one or more resource pools for other hosts.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Create a new resource pool named master.

serviced pool add master

3 Optional: Create additional resource pools, if desired. No additional resource pools are required. However, many users find it useful to have pool names such as infrastructure and collector-n for groups of resource pool hosts. Replace Pool-Name with the name of the pool to create:

serviced pool add Pool-Name

Repeat the preceding command as desired.

Adding master nodes to their resource pool

This procedure adds the Control Center master nodes to their resource pool, named master. The master nodes are added to the resource pool with their public hostnames, so that you can easily see which node is active when you log in to the Control Center browser interface.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.

uname -n

The result is either Primary-Public-Name or Secondary-Public-Name.
3 Add the current node to the master resource pool. Replace Node-Hostname with the public hostname of the current node:

serviced host add Node-Hostname:4979 master
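To confirm that the node registered, you can list the hosts known to Control Center; the node's public hostname should appear in the master pool (a quick check; columns vary by release):

serviced host list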

4 Force a failover. Replace Node-Hostname with the public hostname of the current node:

pcs cluster standby Node-Hostname

5 Monitor the cluster status.

watch pcs status

Do not proceed until all resources report a status of Started.
6 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
7 Display the public hostname of the current node.

uname -n

8 Add the current node to the master resource pool. Replace Node-Hostname with the public hostname of the current node:

serviced host add Node-Hostname:4979 master

9 Restore the cluster. Replace Standby-Node-Hostname with the public hostname of the node that is in standby mode:

pcs cluster unstandby Standby-Node-Hostname

Control Center on resource pool hosts

Control Center resource pool hosts run the application services scheduled for the resource pool to which they belong, and for which they have sufficient RAM and CPU resources. In a high-availability deployment, a resource pool host may belong to any resource pool other than master, and no application services are run in the master pool.

Resource Manager has two broad categories of application services: infrastructure and collection. The services associated with each category can run in the same resource pool, or can run in separate resource pools.

For improved reliability, two resource pool hosts are configured as nodes in an Apache ZooKeeper ensemble. The storage required for ensemble hosts is slightly different than the storage required for all other resource pool hosts: Each ensemble host requires a separate primary partition for Control Center internal services data, in addition to the primary partition for Docker data. Unless the ZooKeeper service on the Control Center master host fails, their roles in the ZooKeeper ensemble do not affect their roles as Control Center resource pool hosts.

Note The hosts for the ZooKeeper ensemble require static IP addresses, because ZooKeeper does not support hostnames in its configurations.

Repeat the procedures in the following sections for each host you wish to add to your Control Center deployment.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center resource pool host. Perform this procedure on each resource pool host in your deployment.

1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.

uname -m

If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Verify that name resolution works on this host.

hostname -i

If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.
4 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

hostid

5 Determine whether the available, unused storage is sufficient.
a Display the available storage devices.

lsblk --output=NAME,SIZE

b Compare the available storage with the amount required for a resource pool host in your deployment. In particular, resource pool hosts that are configured as nodes in a ZooKeeper ensemble require an additional primary partition for Control Center internal services data. For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Determine whether the available memory and swap is sufficient.
a Display the available memory.

free -h

b Compare the available memory with the amount required for a resource pool host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
7 Update the operating system, if necessary.
a Determine which release is installed.

cat /etc/redhat-release

If the result includes 7.0, perform the following substeps.
b Update the operating system.

yum update -y

c Restart the system.

reboot

Preparing a resource pool host

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center resource pool host.

Perform this procedure on each resource pool host in your deployment.
1 Log in to the candidate resource pool host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.

grep 127.0.0.1 /etc/hosts | grep localhost

If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.

echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.

systemctl status firewalld.service

If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.

systemctl stop firewalld && systemctl disable firewalld

On success, the preceding commands display messages similar to the following example:

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired.
By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.

test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.

grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

systemctl enable dnsmasq && systemctl start dnsmasq

7 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations. Perform this step only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble.

yum install -y nmap-ncat

8 Install and configure the NTP package.
a Install the package.

yum install -y ntp

b Set the system time.

ntpd -gq

c Enable the ntpd daemon.

systemctl enable ntpd

d Configure ntpd to start when the system starts.
Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot. The following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

9 Reboot the host.

reboot

Creating a file system for Control Center internal services

This procedure creates an XFS file system on a primary partition.

Note Perform this procedure only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble. No other resource pool hosts run Control Center internal services, so no other pool hosts need a partition for internal services data.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Identify the target primary partition for the file system to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

3 Create an XFS file system. Replace Isvcs-Partition with the path of the target primary partition:

mkfs -t xfs Isvcs-Partition

4 Create the mount point for Control Center internal services data.
mkdir -p /opt/serviced/var/isvcs
5 Add an entry to the /etc/fstab file.
Replace Isvcs-Partition with the path of the primary partition used in the previous step:
echo "Isvcs-Partition \
  /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab
6 Mount the file system, and then verify it mounted correctly.
mount -a && mount | grep isvcs
Example result:
/dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.
Perform this procedure on each resource pool host in your deployment.
1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Install Docker 1.9.0, and then disable accidental upgrades.
a Add the Docker repository to the host's repository list.
cat > /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
b Install Docker.
yum clean all && yum makecache fast
yum install -y docker-engine
c Open /etc/yum.repos.d/docker.repo with a text editor.
d Change the value of the enabled key from 1 to 0.
e Save the file and close the text editor.
3 Create a symbolic link for the Docker temporary directory.
Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
a Create the docker directory in /var/lib.
mkdir /var/lib/docker

b Create the link to /tmp.
ln -s /tmp /var/lib/docker/tmp
4 Create a systemd override file for the Docker service definition.
a Create the override directory.
mkdir -p /etc/systemd/system/docker.service.d
b Create the override file.
cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF
c Reload the systemd manager configuration.
systemctl daemon-reload
5 Install Control Center.
Control Center includes a utility that simplifies the process of creating a device mapper thin pool.
yum clean all && yum makecache fast
yum --enablerepo=zenoss-stable install -y serviced
6 Create a device mapper thin pool for Docker data.
a Identify the primary partition for the thin pool to create.
lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
b Create the thin pool.
Replace Path-To-Device with the path of an unused primary partition:
serviced-storage create-thin-pool docker Path-To-Device
On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
7 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file.
The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems.
Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:
myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"
b Add the arguments to the Docker configuration file.
echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
  >> /etc/sysconfig/docker

c Start or restart Docker.
systemctl restart docker
The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.
8 Configure name resolution in containers.
Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.
ip addr show docker0 | grep inet
b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration.
Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:
--dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask
For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.
Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.
d Restart the Docker service.
systemctl restart docker

Configuring and starting Control Center

This procedure customizes key configuration variables of Control Center.
Perform this procedure on each resource pool host in your deployment.
1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Configure Control Center as an agent of the master host.
The following variable configures serviced to serve as agent:

SERVICED_AGENT
Default: 0 (false)
Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

In addition, replace {{SERVICED_MASTER_IP}} with HA-Virtual-IP, the virtual IP address of the high-availability cluster, in the following lines:
# SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
# SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
# SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
# SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
# SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
# SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443
a Open /etc/default/serviced in a text editor.
b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1.
The following example shows the line to change:
# SERVICED_AGENT=0
c Remove the number sign character (#) from the beginning of the line.
d Find the SERVICED_MASTER declaration, and then remove the number sign character (#) from the beginning of the line.
e Globally replace {{SERVICED_MASTER_IP}} with the virtual IP address of the high-availability cluster (HA-Virtual-IP).
Note Remove the number sign character (#) from the beginning of each variable declaration that includes the virtual IP address.
f Save the file, and then close the editor.
3 Optional: Specify an alternate private network for Control Center, if necessary.
Control Center requires a 16-bit, private IPv4 network for virtual IP addresses, independent of the private network used in dual-NIC DRBD configurations. The default network is 10.3/16. If the default network is already in use in your environment, you may select any valid IPv4 16-bit network.
The following variable configures serviced to use an alternate network:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3
The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16/20, and 192.168/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value.
The following example shows the line to change:
# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3
c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
4 Start the Control Center service (serviced).
systemctl start serviced
To monitor progress, open a separate window to the host, and then enter the following command:
journalctl -flu serviced -o cat
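For reference, the following sketch shows how the uncommented declarations in /etc/default/serviced might read on a resource pool host after the preceding edits. The address 203.0.113.50 is a hypothetical HA-Virtual-IP used only for illustration; substitute your own value:
SERVICED_AGENT=1
SERVICED_MASTER=0
SERVICED_ZK=203.0.113.50:2181
SERVICED_DOCKER_REGISTRY=203.0.113.50:5000
SERVICED_ENDPOINT=203.0.113.50:4979
SERVICED_LOG_ADDRESS=203.0.113.50:5042
SERVICED_LOGSTASH_ES=203.0.113.50:9100
SERVICED_STATS_PORT=203.0.113.50:8443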

Deploying Resource Manager

This procedure adds all of the resource pool hosts to the Control Center cluster, and then deploys the Resource Manager application.
1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.
uname -n
The result is either Primary-Public-Name or Secondary-Public-Name.
3 Place the other node in standby mode.
This avoids potential conflicts and errors in the event of an unexpected serviced shutdown during the initial deployment.
Replace Other-Node-Hostname with the public hostname of the other node:
pcs cluster standby Other-Node-Hostname
4 Add resource pool hosts to resource pools.
Replace Hostname-Or-IP with the hostname or IP address of the resource pool host to add, and replace Resource-Pool-Name with the name of a resource pool created previously, or with default:
serviced host add Hostname-Or-IP:4979 Resource-Pool-Name
If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.
Repeat this step for each resource pool host in your deployment.
5 Add the Zenoss.resmgr application to Control Center.
myPath=/opt/serviced/templates
serviced template add $myPath/zenoss-resmgr-*.json
On success, the serviced command returns the template ID.
6 Deploy the application.
Replace Template-ID with the template identifier returned in the previous step, and replace Deployment-ID with a name for this deployment (for example, Dev or Test):
serviced template deploy Template-ID default Deployment-ID
Control Center pulls Resource Manager images into the local registry. To monitor progress, open a separate window, and enter the following command:
journalctl -flu serviced -o cat
7 Restore the cluster.
Replace Standby-Node-Hostname with the public hostname of the node that is in standby mode:
pcs cluster unstandby Standby-Node-Hostname
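To confirm the additions before moving on, you can list the hosts known to the cluster. This optional check is not part of the official procedure; it assumes the serviced command-line interface is available in your shell:
serviced host list
Each resource pool host added in step 4 should appear in the output, along with the resource pool to which it belongs.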

ZooKeeper ensemble configuration

Control Center relies on Apache ZooKeeper to coordinate its services. The configuration steps in this section create a ZooKeeper ensemble of 3 nodes. A ZooKeeper ensemble requires a minimum of 3 nodes, and 3 nodes is sufficient for most deployments. A 5-node configuration improves failover protection during maintenance windows. Ensembles larger than 5 nodes are not necessary. An odd number of nodes is recommended, and an even number of nodes is strongly discouraged.

Control Center variables for ZooKeeper

The tables in this section associate the ZooKeeper-related Control Center variables to set in /etc/default/serviced with the roles that hosts play in a Control Center cluster.

Table 7: Control Center master nodes

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node. Value: 1

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0. Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it. Value: IP-Address:2181,...

Table 8: Control Center resource pool host and ZooKeeper ensemble node

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node. Value: 2 or 3

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0. Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ISVCS_START
The list of Control Center internal services to start and run on hosts other than the master host. Value: zookeeper

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it. Value: IP-Address:2181,...

Table 9: Control Center resource pool host

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it. Value: IP-Address:2181,...

Configuring the master node as a ZooKeeper node

This procedure configures both Control Center master nodes as members of the ZooKeeper ensemble.

Note For accuracy, this procedure constructs Control Center configuration variables in the shell and appends them to /etc/default/serviced. The last step is to move the variables from the end of the file to more appropriate locations.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, create a variable for each Control Center host to include in the ZooKeeper ensemble.
The variables are used in subsequent steps.

Note Define the variables identically on both the primary and the secondary nodes, and on each resource pool host.

Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:
node1=HA-Virtual-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP

Note ZooKeeper requires IP addresses for ensemble configuration.

4 On both nodes, set the ZooKeeper node ID to 1.
echo "SERVICED_ISVCS_ZOOKEEPER_ID=1" >> /etc/default/serviced
5 On both nodes, specify the nodes in the ZooKeeper ensemble.
You may copy the following text and paste it in your console:
echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
  >> /etc/default/serviced
6 On both nodes, specify the nodes in the ZooKeeper quorum.
ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0.
You may copy the following lines of text and paste them in your console:
q1="1@0.0.0.0:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
  >> /etc/default/serviced
7 On both nodes, clean up the Control Center configuration file.
a Open /etc/default/serviced in a text editor.
b Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location.
The value of this declaration specifies 3 hosts.
c Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on.
The value of this declaration is just the master node.
d Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
e Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
f Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on.
This declaration is commented out.
g Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
h Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
i Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on.
This declaration is commented out.
j Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
k Save the file, and then close the editor.
8 On both hosts, verify the ZooKeeper environment variables.
egrep '^[^#]*SERVICED' /etc/default/serviced | egrep '(_ZOO|_ZK)'

Configuring a resource pool host as a ZooKeeper node

To perform this procedure, you need a resource pool host with an XFS file system on a separate partition.
This procedure configures a ZooKeeper ensemble on a resource pool host. Repeat this procedure on each Control Center resource pool host to add to the ZooKeeper ensemble.
1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Create a variable for each Control Center host to include in the ZooKeeper ensemble.
Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:
node1=HA-Virtual-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP
3 Set the ID of this node in the ZooKeeper ensemble.

For Pool-Host-A-IP (node2), use the following command:
echo "SERVICED_ISVCS_ZOOKEEPER_ID=2" >> /etc/default/serviced
For Pool-Host-B-IP (node3), use the following command:
echo "SERVICED_ISVCS_ZOOKEEPER_ID=3" >> /etc/default/serviced
4 Specify the nodes in the ZooKeeper ensemble.
You may copy the following text and paste it in your console:
echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
  >> /etc/default/serviced
5 Specify the nodes in the ZooKeeper quorum.
ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0.
For Pool-Host-A-IP (node2), use the following commands:
q1="1@${node1}:2888:3888"
q2="2@0.0.0.0:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
  >> /etc/default/serviced
For Pool-Host-B-IP (node3), use the following commands:
q1="1@${node1}:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@0.0.0.0:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
  >> /etc/default/serviced
6 Set the SERVICED_ISVCS_START variable, and clean up the Control Center configuration file.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_ISVCS_START variable, and then delete all but zookeeper from its list of values.
c Remove the number sign character (#) from the beginning of the line.
d Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location.
The value of this declaration specifies 3 hosts.
e Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on.
The value of this declaration is just the master node.
f Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
g Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
h Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on.
This declaration is commented out.
i Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.

j Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
k Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on.
This declaration is commented out.
l Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
m Save the file, and then close the editor.
7 Verify the ZooKeeper environment variables.
egrep '^[^#]*SERVICED' /etc/default/serviced \
  | egrep '(_ZOO|_ZK|_STA)'
8 Pull the required Control Center ZooKeeper image from the master host.
a Identify the image to pull.
serviced version | grep IsvcsImages
Example result:
IsvcsImages: [zenoss/serviced-isvcs:v40 zenoss/isvcs-zookeeper:v3]
b Pull the Control Center ZooKeeper image.
Replace Isvcs-ZK-Image with the name and version number of the ZooKeeper image from the previous substep:
docker pull Isvcs-ZK-Image

Starting a ZooKeeper ensemble

This procedure starts a ZooKeeper ensemble.
The window of time for starting a ZooKeeper ensemble is relatively short. The goal of this procedure is to restart Control Center on each ensemble node at about the same time, so that each node can participate in electing the leader.
1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.
uname -n
The result is either Primary-Public-Name or Secondary-Public-Name.
3 Place the other node in standby mode.
This avoids potential conflicts and errors in the event of an unexpected serviced shutdown during the ZooKeeper startup.
Replace Other-Node-Hostname with the public hostname of the other node:
pcs cluster standby Other-Node-Hostname
4 In a separate window, log in to the second node of the ZooKeeper ensemble (Pool-Host-A-IP).
5 In another separate window, log in to the third node of the ZooKeeper ensemble (Pool-Host-B-IP).

6 On all ensemble hosts, stop and start serviced.
systemctl stop serviced && systemctl start serviced
7 On the master host, check the status of the ZooKeeper ensemble.
{ echo stats; sleep 1; } | nc localhost 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-A-IP 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-B-IP 2181 | grep Mode
If nc is not available, you can use telnet with interactive ZooKeeper commands.
8 Restore the cluster.
Replace Other-Node-Hostname with the public hostname of the primary node:
pcs cluster unstandby Other-Node-Hostname

Updating resource pool hosts

The default configuration of resource pool hosts sets the value of the SERVICED_ZK variable to the master host only. This procedure updates the setting to include the full ZooKeeper ensemble.
Perform this procedure on each resource pool host in your Control Center cluster.
1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Update the variable.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_ZK declaration, and then replace its value with the same value used in the ZooKeeper ensemble nodes.
c Save the file, and then close the editor.
3 Restart Control Center.
systemctl restart serviced
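For example, assuming a hypothetical HA-Virtual-IP of 203.0.113.50 and ensemble pool hosts at 203.0.113.20 and 203.0.113.21 (illustrative addresses only), the updated declaration on every resource pool host would read:
SERVICED_ZK=203.0.113.50:2181,203.0.113.20:2181,203.0.113.21:2181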

Chapter 2: Creating a high-availability deployment without internet access

The procedures in this chapter create a high-availability deployment of Control Center and Resource Manager on Red Hat Enterprise Linux (RHEL) 7.1 or 7.2 hosts, or on CentOS 7.1 or 7.2 hosts. To use the procedures in this chapter, you must have a minimum of four hosts. None of the hosts require internet access.
For more information about deploying Control Center and Resource Manager, refer to the Zenoss Resource Manager Planning Guide.

Note For optimal results, review this chapter thoroughly before starting the installation process.

Master host storage requirements

In addition to the storage required for its operating system, both Control Center master hosts in the failover cluster require the following storage areas:

A local primary partition for Docker data, configured as a device mapper thin pool.
A local primary partition for Control Center internal services data, formatted with the XFS file system.
Note Control Center internal services include ZooKeeper, which requires consistently fast storage. Zenoss recommends using a separate, high-performance storage resource for Control Center internal services. For example, a drive that is configured with only one primary partition, which eliminates contention by other services.
A local primary partition for Control Center metadata, formatted with the XFS file system.
A local primary partition for Resource Manager data, configured as a device mapper thin pool.

Note This chapter includes procedures for configuring and formatting all required storage areas.

In addition, the primary node of the failover cluster requires a local primary partition, a remote primary partition, or a remote file server, for backups of Resource Manager data. The local or remote primary partition is formatted with the XFS file system. A remote file server must provide a file system that is compatible with XFS.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

For storage sizing information, refer to the Zenoss Resource Manager Planning Guide.
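As a point of reference, the following is a minimal sketch of one possible layout, using hypothetical device names and sizes (your devices and sizes will differ; consult the Planning Guide for sizing guidance):
lsblk --output=NAME,SIZE,TYPE,FSTYPE
# NAME    SIZE TYPE  (intended use; hypothetical)
# sdb     300G disk
#  sdb1    50G part  Docker-Partition (thin pool)
#  sdb2    50G part  Metadata-Partition (XFS)
#  sdb3   200G part  App-Data-Partition (thin pool)
# sdc      50G disk
#  sdc1    50G part  Isvcs-Partition (XFS, on a separate device)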

Key variables used in this chapter

The following tables associate important features of a high-availability deployment with the variables used in this chapter.

Feature (Variable Name: Primary Node / Secondary Node):
Public IP address of master node (static; known to all machines in the Control Center cluster): Primary-Public-IP / Secondary-Public-IP
Public hostname of master node (returned by uname -n; resolves to the public IP address): Primary-Public-Name / Secondary-Public-Name
Private IP address of master node (static; dual-NIC systems only): Primary-Private-IP / Secondary-Private-IP
Private hostname of master node (resolves to the private IP address; dual-NIC systems only): Primary-Private-Name / Secondary-Private-Name

Feature (Variable Name):
Virtual IP address of the high-availability cluster (static; known enterprise-wide): HA-Virtual-IP
Virtual hostname of the high-availability cluster (known enterprise-wide): HA-Virtual-Name
Public IP address of resource pool host A (static; for ZooKeeper ensemble): Pool-Host-A-IP
Public IP address of resource pool host B (static; for ZooKeeper ensemble): Pool-Host-B-IP
Primary partition for Docker data (not mirrored): Docker-Partition
Primary partition for Control Center internal services data (mirrored): Isvcs-Partition
Primary partition for Control Center metadata (mirrored): Metadata-Partition
Primary partition for Control Center application data (mirrored): App-Data-Partition
Primary partition for Control Center backups (not mirrored): Backups-Partition

Downloading files for offline installation

This procedure describes how to download RPM packages and Docker image files to your workstation. To perform this procedure, you need:

A workstation with internet access.
A portable storage medium, such as a USB flash drive, with at least 5 GB of free space.
Permission to download the required files from the File Portal - Download Zenoss Enterprise Software site. You may request permission by filing a ticket at the Zenoss Support site.

1 In a web browser, navigate to the File Portal - Download Zenoss Enterprise Software site.
2 Log in with the account provided by Zenoss Support.
3 Download archive files to your workstation.
Replace Version with the most recent version number available on the download page:
install-zenoss-hbase:vVersion.run
install-zenoss-isvcs-zookeeper:vVersion.run
install-zenoss-opentsdb:vVersion.run
install-zenoss-resmgr_5.1:5.1.Version.run
install-zenoss-serviced-isvcs:vVersion.run
serviced-resource-agents-Version.x86_64.rpm
4 Download the RHEL/CentOS mirror package for your upgrade.

Note If you are planning to upgrade the operating system during your Control Center and Resource Manager upgrade, choose the mirror package that matches the RHEL/CentOS release to which you are upgrading, not the release that is installed now.

Replace Version with the most recent version number available on the download page, and replace Release with the version of RHEL/CentOS appropriate for your environment:
yum-mirror-centos7.Release-Version.x86_64.rpm
5 Copy the files to your portable storage medium.

Control Center on the master nodes

A high-availability deployment features two Control Center master nodes that are configured for failover. One host is the primary node, and the other host is the secondary node. Their configurations differ somewhat, but are mostly the same.

Note Both master nodes require the following non-standard packages:
For DRBD: drbd84-utils and kmod-drbd84.
For Pacemaker/Corosync: corosync, pacemaker, and pcs.
The Control Center and Resource Manager offline artifacts do not include the preceding packages.

Perform all of the procedures in this section on the primary node and on the secondary node.

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center master host. Perform this procedure on the primary node and on the secondary node.
1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.
uname -m
If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Verify that the host's numeric identifier is unique.
Each host in a Control Center cluster must have a unique host identifier.
hostid
4 Determine whether the available, unused storage is sufficient.

a Display the available storage devices.
lsblk --output=NAME,SIZE
b Compare the available storage with the amount required for a Control Center master host.
For more information, refer to the Zenoss Resource Manager Planning Guide.
5 Determine whether the available memory and swap is sufficient.
a Display the available memory.
free -h
b Compare the available memory with the amount required for a master host in your deployment.
For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Verify the operating system release.
cat /etc/redhat-release
If the result includes 7.0, select another host or upgrade the operating system.
7 Determine whether required packages are installed.
for pkg in drbd84-utils kmod-drbd84 corosync pacemaker pcs
do
  echo "Result for $pkg:"
  rpm -qa | grep $pkg
done
Install missing packages before continuing.

Staging files for offline installation

Before performing this procedure, complete all of the steps in Downloading files for offline installation on page 110. In addition, verify that approximately 4GB of temporary space is available on the file system where /root is located.
This procedure adds files for offline installation to the master node. The staged files are required in subsequent procedures.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Copy the archive files from your portable storage medium to /root.
3 Set the file permissions of the self-extracting archive files to execute.
chmod +x /root/*.run
4 Change directory to /root.
cd /root
5 Install the Resource Manager repository mirror.
yum install -y ./yum-mirror-*.x86_64.rpm
6 Optional: Delete the package file, if desired.
rm ./yum-mirror-*.x86_64.rpm
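To confirm that the mirror installed correctly before continuing, you can ask yum to list it. This optional check assumes the repository identifier is zenoss-mirror, the identifier used by subsequent yum commands in this chapter:
yum repolist all | grep -i zenoss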

Preparing the master host operating system

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center master host.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.
grep 127.0.0.1 /etc/hosts | grep localhost
If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.
echo "127.0.0.1 localhost" >> /etc/hosts
3 Add the required hostnames and IP addresses of both the primary and the secondary node to the /etc/hosts file.
For a dual-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:
echo "Primary-Public-IP Primary-Public-Name.example.com \
  Primary-Public-Name" >> /etc/hosts
echo "Primary-Private-IP Primary-Private-Name.example.com \
  Primary-Private-Name" >> /etc/hosts
echo "Secondary-Public-IP Secondary-Public-Name.example.com \
  Secondary-Public-Name" >> /etc/hosts
echo "Secondary-Private-IP Secondary-Private-Name.example.com \
  Secondary-Private-Name" >> /etc/hosts
For a single-NIC system, replace each variable name with the values designated for each node, and replace example.com with the domain name of your organization:
echo "Primary-Public-IP Primary-Public-Name.example.com \
  Primary-Public-Name" >> /etc/hosts
echo "Secondary-Public-IP Secondary-Public-Name.example.com \
  Secondary-Public-Name" >> /etc/hosts
4 Disable the firewall, if necessary.
This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.
systemctl status firewalld.service
If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.
systemctl stop firewalld && systemctl disable firewalld
On success, the preceding commands display messages similar to the following example:
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
5 Optional: Enable persistent storage for log files, if desired.
By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.
mkdir -p /var/log/journal && systemctl restart systemd-journald
6 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.
test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config
If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled.
Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.
grep '^SELINUX=' /etc/selinux/config
7 Enable and start the Dnsmasq package.
systemctl enable dnsmasq && systemctl start dnsmasq
8 Install the Nmap Ncat utility.
The utility is used to verify ZooKeeper ensemble configurations.
yum --enablerepo=zenoss-mirror install -y nmap-ncat
9 Remove any file system signature from the required primary partitions.
Replace each variable name with the path of the primary partition designated for each storage area:
wipefs -a Docker-Partition
wipefs -a Isvcs-Partition
wipefs -a Metadata-Partition
wipefs -a App-Data-Partition
10 Add mount points for XFS file systems, which are created in subsequent steps.
mkdir -p /opt/serviced/var/isvcs /opt/serviced/var/volumes
11 Reboot the host.
reboot

Configuring an NTP master server

This procedure configures an NTP master server on the master nodes. If you have an NTP time server inside your firewall, you may configure the master nodes to use it; however, this procedure does not include that option.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the master node as root, or as a user with superuser privileges.
2 Install the NTP package.
yum --enablerepo=zenoss-mirror install -y ntp
3 Create a backup of the NTP configuration file.
cp -p /etc/ntp.conf /etc/ntp.conf.orig
4 Edit the NTP configuration file.
a Open /etc/ntp.conf with a text editor.
b Replace all of the lines in the file with the following lines:
# Use the local clock
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008

# Give localhost full access rights
restrict 127.0.0.1

# Grant access to client hosts
restrict Address-Range mask Netmask nomodify notrap
c Replace Address-Range with the range of IPv4 network addresses that are allowed to query this NTP server.
For example, if the hosts in a Control Center cluster are assigned addresses in the 203.0.113.0 network, the value for Address-Range is 203.0.113.0.
d Replace Netmask with the IPv4 network mask that corresponds with the address range.
For example, the network mask for 203.0.113.0 is 255.255.255.0.
e Save the file and exit the editor.
5 Enable and start the NTP daemon.
a Enable the ntpd daemon.
systemctl enable ntpd
b Configure ntpd to start when the system starts.
Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot, and the following commands provide a workaround to ensure that it does.
echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
c Start ntpd.
systemctl start ntpd
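To verify that the master server is serving time, you can query the daemon a few minutes after starting it. In the output, the LOCAL(0) entry should eventually be marked with an asterisk (*), indicating that it is the selected time source:
ntpq -p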

Configuring a storage area for backups

The Control Center master host requires local or remote storage space for backups of Control Center data. This procedure includes steps to create an XFS file system on a primary partition, if necessary, and steps to mount a file system for backups. For more information about backups, refer to the Zenoss Resource Manager Planning Guide.

Note If you are using a primary partition on a local device for backups, ensure that the primary partition for Control Center internal services data is not on the same device.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 Optional: Remove any file system signature from the primary partition for Control Center backups, if necessary.
If you are using a remote file server for backups, skip this step.
Replace Backups-Partition with the path of the primary partition designated for Control Center backups:
wipefs -a Backups-Partition
3 Optional: Create an XFS file system, if necessary.
Skip this step if you are using a remote file server.
Replace Backups-Partition with the path of the primary partition designated for Control Center backups:
mkfs.xfs Backups-Partition
4 Create an entry in the /etc/fstab file.
Replace File-System-Specification with one of the following values:
the path of Backups-Partition, used in the previous step
the remote server specification
echo "File-System-Specification \
  /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab
5 Create the mount point for backup data.
mkdir -p /opt/serviced/var/backups
6 Mount the file system, and then verify it mounted correctly.
mount -a && mount | grep backups
Example result:
/dev/sdb3 on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.

2 Install Docker.
yum clean all && yum makecache fast
yum install --enablerepo=zenoss-mirror -y docker-engine
3 Create a symbolic link for the Docker temporary directory.
Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
a Create the docker directory in /var/lib.
mkdir /var/lib/docker
b Create the link to /tmp.
ln -s /tmp /var/lib/docker/tmp
4 Create a systemd override file for the Docker service definition.
a Create the override directory.
mkdir -p /etc/systemd/system/docker.service.d
b Create the override file.
cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF
c Reload the systemd manager configuration.
systemctl daemon-reload
5 Install Control Center.
Control Center includes a utility that simplifies the process of creating a device mapper thin pool.
yum clean all && yum makecache fast
yum --enablerepo=zenoss-mirror install -y serviced
6 Disable automatic startup of Control Center by systemd.
The cluster management software controls the serviced service.
systemctl disable serviced
7 Create a device mapper thin pool for Docker data.
Replace Docker-Partition with the path of the primary partition designated for Docker data:
serviced-storage create-thin-pool docker Docker-Partition
On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
8 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file.

The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems.
Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:
myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"
b Add the arguments to the Docker configuration file.
echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
  >> /etc/sysconfig/docker
c Start or restart Docker.
systemctl restart docker
The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.
9 Configure name resolution in containers.
Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.
ip addr show docker0 | grep inet
b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration.
Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:
--dns=Bridge-Subnet --bip=Bridge-Subnet/Bridge-Netmask
For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --bip=172.17.0.1/16.
Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.
d Restart the Docker service.
systemctl restart docker
10 Import the Control Center and Resource Manager images into the local docker registry.
The images are contained in the self-extracting archive files that are staged in the /root directory.
a Change directory to /root.
cd /root
b Extract the images.
for image in install-*.run
do
  echo -n "$image: "
  eval ./$image
done
Image extraction begins when you press the y key. If you press the y key and then the Return key, the current image is extracted, but the next one is not.
c Optional: Delete the archive files, if desired.
rm -i ./install-*.run
11 Stop and disable the Docker service.
The cluster management software controls the Docker service.
systemctl stop docker && systemctl disable docker

Installing Resource Manager

This procedure installs Resource Manager and configures the NFS server.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Install Resource Manager.
yum install -y zenoss-resmgr-service
3 Configure and disable the NFS service.
Currently, an unresolved issue prevents the NFS server from starting correctly. The following commands provide a workaround to ensure that it does.
a Open /lib/systemd/system/nfs-server.service with a text editor.
b Change rpcbind.target to rpcbind.service on the following line:
Requires= network.target proc-fs-nfsd.mount rpcbind.target
c Reload the systemd manager configuration.
systemctl daemon-reload
d Stop and disable the NFS service.
The cluster management software controls the NFS service.
systemctl stop nfs && systemctl disable nfs

Configuring Control Center

This procedure customizes key configuration variables of Control Center.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Configure Control Center to serve as both master and agent, and to use the virtual IP address of the high-availability cluster.
The following variables define the roles serviced can assume:

SERVICED_AGENT
Default: 0 (false)

Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

In addition, replace {{SERVICED_MASTER_IP}} with HA-Virtual-IP, the virtual IP address of the high-availability cluster, in the following lines:
# SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
# SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
# SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
# SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
# SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
# SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_AGENT declaration, and then change the value from 0 to 1.
c Remove the number sign character (#) from the beginning of the line.
d Locate the SERVICED_MASTER declaration, and then change the value from 0 to 1.
e Remove the number sign character (#) from the beginning of the line.
f Globally replace {{SERVICED_MASTER_IP}} with the virtual IP address of the high-availability cluster.
Note Remove the number sign character (#) from the beginning of each variable declaration that includes the IP address.
g Save the file, and then close the editor.
3 Configure Control Center to send its responses to the virtual IP address of the high-availability cluster.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_OUTBOUND_IP declaration, and then change its default value to HA-Virtual-IP.
Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster:
SERVICED_OUTBOUND_IP=HA-Virtual-IP
c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
4 Optional: Specify an alternate private network for Control Center, if necessary.
Control Center requires a 16-bit, private IPv4 network for virtual IP addresses, independent of the private network used in dual-NIC DRBD configurations. The default network is 10.3/16. If the default network is already in use in your environment, you may select any valid IPv4 16-bit network.
The following variable configures serviced to use an alternate network:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3

The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 restricts private networks to the 10.0/24, 172.16/20, and 192.168/16 address spaces. However, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value.
The following example shows the line to change:
# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3
c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.

User access control

Control Center provides a browser interface and a command-line interface. To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. (Pluggable Authentication Modules (PAM) is supported.) In addition, users must be members of the Control Center administrative group, which by default is the system group, wheel. To enhance security, you may change the administrative group from wheel to any non-system group.
To use the Control Center command-line interface, users must have login accounts on the Control Center master host, and be members of the docker user group. Members of the wheel group, including root, are members of the docker group.

Adding users to the default administrative group

This procedure adds users to the default administrative group of Control Center, wheel. Performing this procedure enables users with superuser privileges to gain access to the Control Center browser interface.

Note Perform this procedure or the next procedure, but not both.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Add users to the system group, wheel.
Replace User with the name of a login account on the master host.
usermod -aG wheel User
Repeat the preceding command for each user to add.

Note For information about using Pluggable Authentication Modules (PAM), refer to your operating system documentation.

Configuring a regular group as the Control Center administrative group

This procedure changes the default administrative group of Control Center from wheel to a non-system group.

Note Perform this procedure or the previous procedure, but not both.

Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Create a variable for the group to designate as the administrative group.

In this example, the name of the group to create is serviced. You may choose any name or use an existing group.
GROUP=serviced
3 Create a new group, if necessary.
groupadd $GROUP
4 Add one or more existing users to the new administrative group.
Replace User with the name of a login account on the host:
usermod -aG $GROUP User
Repeat the preceding command for each user to add.
5 Specify the new administrative group in the serviced configuration file.
The following variable specifies the administrative group:

SERVICED_ADMIN_GROUP
Default: wheel
The name of the Linux group on the Control Center master host whose members are authorized to use the Control Center browser interface. You may replace the default group with a group that does not have superuser privileges.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_ADMIN_GROUP declaration, and then change the value from wheel to the name of the group you chose earlier.
The following example shows the line to change:
# SERVICED_ADMIN_GROUP=wheel
c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
6 Optional: Prevent root users and members of the wheel group from gaining access to the Control Center browser interface, if desired.
The following variable controls privileged logins:

SERVICED_ALLOW_ROOT_LOGIN
Default: 1 (true)
Determines whether root, or members of the wheel group, may gain access to the Control Center browser interface.

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_ALLOW_ROOT_LOGIN declaration, and then change the value from 1 to 0.
The following example shows the line to change:
# SERVICED_ALLOW_ROOT_LOGIN=1
c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.

Enabling use of the command-line interface

This procedure enables users to perform administrative tasks with the Control Center command-line interface by adding individual users to the docker group.
Perform this procedure on the primary node and on the secondary node.

1 Log in to the host as root, or as a user with superuser privileges.
2 Add users to the Docker group, docker.
Replace User with the name of a login account on the host.
usermod -aG docker User
Repeat the preceding command for each user to add.

Configuring Logical Volume Manager

Control Center application data is managed by a device mapper thin pool created with Logical Volume Manager (LVM). This procedure adjusts the LVM configuration for mirroring by DRBD.
Perform this procedure on the primary node and on the secondary node.
1 Log in to the host as root, or as a user with superuser privileges.
2 Edit the LVM configuration file.
a Open /etc/lvm/lvm.conf with a text editor.
b Exclude the partition for Control Center application data.
The line to edit is in the devices section. Replace App-Data-Partition with the path of the primary partition designated for Control Center application data:
filter = ["r|App-Data-Partition|"]
c Disable caching and the metadata daemon.
Set the value of the write_cache_state and use_lvmetad keys to 0.
write_cache_state = 0
use_lvmetad = 0
d Save the file and close the editor.
3 Delete any stale cache entries.
rm -f /etc/lvm/cache/.cache
4 Restart the host.
reboot

DRBD configuration assumptions

The following list identifies the assumptions that inform the DRBD resource definition for Control Center:

Each node has either one or two NICs. In dual-NIC hosts the private IP/hostnames are reserved for DRBD traffic. This is the recommended configuration, which enables real-time writes for disk synchronization between the active and passive nodes, and no contention with application traffic. However, it is possible to use DRBD with a single NIC.
The default port number for DRBD traffic is 7789.
All volumes should synchronize and failover together. This is accomplished by creating a single resource definition.

DRBD stores its metadata on each volume (meta-disk/internal), so the total amount of space reported on the logical device /dev/drbdn is always less than the amount of physical space available on the underlying primary partition.
The syncer/rate key controls the rate, in bytes per second, at which DRBD synchronizes disks. Set the rate to 30% of the available replication bandwidth, which is the slowest of either the I/O subsystem or the network interface. The following example assumes 100MB/s available for total replication bandwidth (0.30 * 100MB/s = 30MB/s).

Configuring DRBD

This procedure configures DRBD for deployments with either one or two NICs in each node.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, identify the primary partitions to use.
lsblk --output=NAME,SIZE
Record the paths of the primary partitions in the following table. The information is needed in subsequent steps and procedures.
Node | Isvcs-Partition | Metadata-Partition | App-Data-Partition
4 On both nodes, edit the DRBD configuration file.
a Open /etc/drbd.d/global_common.conf with a text editor.
b Add the following values to the global and common/net sections of the file.
global {
  usage-count yes;
}
common {
  net {
    protocol C;
  }
}
c Save the file, and then close the editor.
5 On both nodes, create a resource definition for Control Center.
a Open /etc/drbd.d/serviced-dfs.res with a text editor.
b For a dual-NIC system, add the following content to the file.
Replace the variables in the content with the actual values for your environment:
resource serviced-dfs {
  volume 0 {
    device /dev/drbd0;
    disk Isvcs-Partition;
    meta-disk internal;
  }
  volume 1 {
    device /dev/drbd1;
    disk Metadata-Partition;
    meta-disk internal;
  }
  volume 2 {
    device /dev/drbd2;
    disk App-Data-Partition;
    meta-disk internal;
  }
  syncer {
    rate 30M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on Primary-Public-Name {
    address Primary-Private-IP:7789;
  }
  on Secondary-Public-Name {
    address Secondary-Private-IP:7789;
  }
}
c For a single-NIC system, add the following content to the file.
Replace the variables in the content with the actual values for your environment:
resource serviced-dfs {
  volume 0 {
    device /dev/drbd0;
    disk Isvcs-Partition;
    meta-disk internal;
  }
  volume 1 {
    device /dev/drbd1;
    disk Metadata-Partition;
    meta-disk internal;
  }
  volume 2 {
    device /dev/drbd2;
    disk App-Data-Partition;
    meta-disk internal;
  }
  syncer {
    rate 30M;
  }
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on Primary-Public-Name {
    address Primary-Public-IP:7789;
  }
  on Secondary-Public-Name {
    address Secondary-Public-IP:7789;
  }
}
d Save the file, and then close the editor.
6 On both nodes, create device metadata and enable the new DRBD resource.
drbdadm create-md all && drbdadm up all
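Before initializing DRBD, you can optionally confirm that the two nodes are communicating. A quick check with drbd-overview (the same command used in the next procedure) should show each volume in the Connected state; the disks remain Inconsistent until the first synchronization completes:
drbd-overview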

Initializing DRBD

Perform this procedure to initialize DRBD and the mirrored storage areas.

Note Unlike the preceding procedures, most of the steps in this procedure are performed on the primary node only.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Synchronize the storage areas of both nodes.
a Start the synchronization.
drbdadm primary --force serviced-dfs
The command may return right away, while the synchronization process continues running in the background. Depending on the sizes of the partitions, this process can take several hours.
b Monitor the progress of the synchronization.
drbd-overview
Do not proceed until the status is UpToDate/UpToDate, as in the following example output:
0:serviced-dfs/0 Connected Primary/Secondary UpToDate/UpToDate
1:serviced-dfs/1 Connected Primary/Secondary UpToDate/UpToDate
2:serviced-dfs/2 Connected Primary/Secondary UpToDate/UpToDate
The Primary/Secondary values show that the command was run on the primary node; otherwise, the values are Secondary/Primary. Likewise, the first value in the UpToDate/UpToDate field is the status of the node on which the command is run, and the second value is the status of the remote node.
3 Format the partitions for Control Center internal services data and for Control Center metadata.
The following commands use the paths of the DRBD devices defined previously, not the paths of the primary partitions.
mkfs.xfs /dev/drbd0
mkfs.xfs /dev/drbd1
The commands create XFS file systems on the primary node, and DRBD mirrors the file systems to the secondary node.
4 Create a device mapper thin pool for Control Center application data.
Likewise, this command uses the path of the DRBD device defined previously.
a Create a variable for 50% of the space available on the DRBD device.
The thin pool stores application data and snapshots of the data. You can add storage to the pool at any time.
Replace Half-Of-Available-Space with 50% of the space available on the DRBD device, in gigabytes. Include the symbol for gigabytes (G) after the numeric value:
myFifty=Half-Of-Available-SpaceG
b Create the thin pool.
serviced-storage create-thin-pool -o dm.basesize=$myFifty \
  serviced /dev/drbd2 -v
On success, DRBD mirrors the device mapper thin pool to the secondary node.
5 Configure Control Center with the name of the new thin pool.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_FS_TYPE declaration.
c Remove the number sign character (#) from the beginning of the line.
d Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE.
SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool
e Save the file, and then close the editor.
6 Replicate the Control Center configuration on the secondary node.
a In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
b Open /etc/default/serviced in a text editor.
c Locate the SERVICED_FS_TYPE declaration.
d Remove the number sign character (#) from the beginning of the line.
e Add SERVICED_DM_THINPOOLDEV immediately after SERVICED_FS_TYPE.
Replace Thin-Pool-Name with the name of the thin pool created previously:
SERVICED_DM_THINPOOLDEV=Thin-Pool-Name
f Save the file, and then close the editor.
7 On the primary node, monitor the progress of the synchronization.
drbd-overview

Note Do not proceed until synchronization is complete.

8 On both nodes, stop DRBD.
drbdadm down all

Cluster management software

Pacemaker is an open source cluster resource manager, and Corosync is a cluster infrastructure application for communication and membership services. The Pacemaker/Corosync daemon (pcs.d) communicates across nodes in the cluster. When pcs.d is installed, started, and configured, the majority of PCS commands can be run on either node in the cluster.

Installing and configuring the cluster management software

Perform this procedure to install and configure the cluster management software.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, install the Pacemaker resource agent for Control Center.
Pacemaker uses resource agents (scripts) to implement a standardized interface for managing arbitrary resources in a cluster. Zenoss provides a Pacemaker resource agent to manage the Control Center master host in a high-availability cluster.
yum install -y /root/serviced-resource-agents-*.x86_64.rpm

4 Optional: Delete the package file, if desired.
rm /root/serviced-resource-agents-*.x86_64.rpm
5 On both nodes, start and enable the PCS daemon.
systemctl start pcsd.service && systemctl enable pcsd.service
6 On both nodes, set the password of the hacluster account.
The Pacemaker package creates the hacluster user account, which must have the same password on both nodes.
passwd hacluster

Creating the cluster in standby mode

Perform this procedure to create the high-availability cluster in standby mode.
1 Log in to the primary node as root, or as a user with superuser privileges.
2 Authenticate the nodes.
pcs cluster auth Primary-Public-Name Secondary-Public-Name
When prompted, enter the password of the hacluster account.
3 Generate and synchronize an initial (empty) cluster definition.
pcs cluster setup --name serviced-ha \
  Primary-Public-Name Secondary-Public-Name
4 Start the PCS management agents on both nodes in the cluster.
The cluster definition is empty, so starting the cluster management agents has no side effects.
pcs cluster start --all
The cluster management agents start, on both nodes.
5 Check the status.
pcs cluster status
The expected result is Online, for both nodes.
6 Put the cluster in standby mode.
Pacemaker begins monitoring and managing the different resources as they are defined, which can cause problems; standby mode prevents the problems.
pcs cluster standby --all
7 Configure cluster services to start when the node starts.
For more information about cluster startup options, refer to the Pacemaker documentation.
systemctl enable corosync; systemctl enable pacemaker
8 Replicate the configuration on the secondary node.
a In a separate window, log in to the secondary node as root, or as a user with superuser privileges.

b Configure cluster services to start when the node starts.

systemctl enable corosync; systemctl enable pacemaker

Property and resource options

Pacemaker provides options to support cluster configurations from small and simple to large and complex. The following list identifies the options that support the two-node, active/passive configuration for Control Center.

resource-stickiness=100
Keep all resources bound to the same host.

no-quorum-policy=ignore
Pacemaker supports the notion of a voting quorum for clusters of three or more nodes. However, with just two nodes, if one fails, a quorum of one does not make sense, so quorums are disabled.

stonith-enabled=false
Fence or isolate a failed node. (The string "stonith" is an acronym for "shoot the other node in the head".) Set this option to false only during the initial setup and testing period. For production use, set it to true. For more information about fencing, refer to the Zenoss Resource Manager Planning Guide.

Setting resource and property defaults

Perform this procedure to set resource and property defaults for the high-availability cluster.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Set resource and property defaults.

pcs resource defaults resource-stickiness=100
pcs property set no-quorum-policy=ignore
pcs property set stonith-enabled=false

3 Check resource defaults.

pcs resource defaults

Example result:

resource-stickiness: 100

4 Check property defaults.

pcs property

Example result:

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: serviced-ha
dc-version: ...
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false
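As the stonith-enabled description above notes, fencing should be disabled only during the initial setup and testing period. When the cluster is ready for production use, re-enable it; for example:

pcs property set stonith-enabled=true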

Defining resources

This procedure defines the following logical resources required for the cluster:

DRBD Master/Secondary DFS set
Two mirrored file systems running on top of DRBD:
  /opt/serviced/var/isvcs
  /opt/serviced/var/volumes
serviced logical volume group running on top of DRBD
Manage serviced storage
The floating virtual IP address of the cluster (HA-Virtual-IP), which the management software assigns to the active node
Docker
NFS
Control Center

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 Define a resource for the DRBD device, and a clone of that resource to act as the master.
a On the primary node, define a resource for the DRBD device.

pcs resource create DFS ocf:linbit:drbd \
drbd_resource=serviced-dfs \
op monitor interval=30s role=Master \
op monitor interval=60s role=Slave

b On the primary node, define a clone of that resource to act as the master.

pcs resource master DFSMaster DFS \
master-max=1 master-node-max=1 \
clone-max=2 clone-node-max=1 notify=true

For a master/slave resource, Pacemaker requires separate monitoring intervals for the different roles. In this case, Pacemaker checks the master every 30 seconds and the slave every 60 seconds.
4 Define the file systems that are mounted on the DRBD devices.
a On the primary node, define a resource for Control Center internal services data.

pcs resource create serviced-isvcs Filesystem \
device=/dev/drbd/by-res/serviced-dfs/0 \
directory=/opt/serviced/var/isvcs fstype=xfs

b On the primary node, define a resource for Control Center metadata.

pcs resource create serviced-volumes Filesystem \
device=/dev/drbd/by-res/serviced-dfs/1 \
directory=/opt/serviced/var/volumes fstype=xfs

In the preceding definitions, serviced-dfs is the name of the DRBD resource defined previously, in /etc/drbd.d/serviced-dfs.res.
5 On the primary node, define the logical volume for serviced that is backed by a DRBD device.

pcs resource create serviced-lvm ocf:heartbeat:LVM volgrpname=serviced
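At any point while defining resources, you can review what has been created so far. Because the cluster is still in standby mode, every resource should report as Stopped; for example:

pcs resource show --full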

6 On the primary node, define the storage resource for serviced, to ensure that the device mapper device is deactivated and unmounted properly.

pcs resource create serviced-storage ocf:zenoss:serviced-storage

7 On the primary node, define the resource that represents the floating virtual IP address of the cluster. For dual-NIC deployments, the definition includes the nic key-value pair, which specifies the name of the network interface that is used for all traffic except the private DRBD traffic between the primary and secondary nodes. For single-NIC deployments, omit the nic key-value pair.

For dual-NIC deployments, replace HA-Virtual-IP with the floating virtual IP address of the cluster, and replace HA-Virtual-IP-NIC with the name of the network interface that is bound to HA-Virtual-IP:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
ip=HA-Virtual-IP nic=HA-Virtual-IP-NIC \
cidr_netmask=32 op monitor interval=30s

For single-NIC deployments, replace HA-Virtual-IP with the floating virtual IP address of the cluster:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
ip=HA-Virtual-IP cidr_netmask=32 op monitor interval=30s

8 Define the Docker resource.
a On the primary node, define the resource.

pcs resource create docker systemd:docker

b On both nodes, ensure that the automatic startup of Docker by systemd is disabled.

systemctl stop docker && systemctl disable docker

9 Define the NFS resource. Control Center uses NFS to share configuration in a multi-host deployment, and failover will not work properly if NFS is not stopped on the failed node.
a On the primary node, define the resource.

pcs resource create nfs systemd:nfs

b On the primary node, disable Pacemaker monitoring of NFS health. During normal operations, Control Center occasionally stops and restarts NFS, which could be misinterpreted by Pacemaker and trigger an unwanted failover.

pcs resource op remove nfs monitor interval=60s
pcs resource op add nfs monitor interval=0s

c On both nodes, ensure that the automatic startup of NFS by systemd is disabled.

systemctl stop nfs && systemctl disable nfs

10 Define the Control Center resource.
a On the primary node, define the resource.

pcs resource create serviced ocf:zenoss:serviced

b On both nodes, ensure that the automatic startup of serviced by systemd is disabled.

systemctl stop serviced && systemctl disable serviced

Pacemaker uses the default timeouts defined by the Pacemaker resource agent for Control Center to decide if serviced is unable to start or shut down correctly. In current versions of the Pacemaker resource agent for Control Center, the default values for the start and stop timeouts are 360 and 130 seconds respectively.

The default startup and shutdown timeouts are based on the worst-case scenario. In practice, Control Center typically starts and stops in much less time. However, this does not mean that you should decrease these timeouts. There are potential edge cases, especially for startup, where Control Center may take longer than usual to start or stop. If the start/stop timeouts for Pacemaker are set too low, and Control Center encounters one of those edge cases, then Pacemaker takes unnecessary or incorrect actions. For example, if the startup timeout is artificially set too low, 2.5 minutes for example, and Control Center startup encounters an unusual case where it requires at least 3 minutes to start, then Pacemaker initiates failover prematurely.

Defining the Control Center resource group

The resources in a resource group are started in the order they appear in the group, and stopped in the reverse order they appear in the group. The start order is:

1 Mount the file systems (serviced-isvcs and serviced-volumes)
2 Start the serviced logical volume.
3 Manage serviced storage.
4 Enable the virtual IP address of the cluster.
5 Start Docker.
6 Start NFS.
7 Start Control Center.

In the event of a failover, Pacemaker stops the resources on the failed node in the reverse order they are defined before starting the resource group on the standby node.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Create the Control Center resource group.

pcs resource group add serviced-group \
serviced-isvcs serviced-volumes \
serviced-lvm serviced-storage \
VirtualIP docker nfs \
serviced

3 Define constraints for the Control Center resource group. Pacemaker resource constraints control when and where resources are deployed in a cluster.
a Ensure that serviced-group runs on the same node as DFSMaster.

pcs constraint colocation add serviced-group with DFSMaster \
INFINITY with-rsc-role=Master

b Ensure that serviced-group is only started after DFSMaster is started.

pcs constraint order promote DFSMaster then \
start serviced-group

Verification procedures

The cluster is created in standby mode while various configurations are created. Perform the procedures in the following sections to review the configurations and make adjustments as necessary.

Verifying the DRBD configuration

This procedure reviews the DRBD configuration.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On the primary node, display the full DRBD configuration.

drbdadm dump

The result should be consistent with the configuration created previously. For more information, see the DRBD configuration assumptions section.
4 On the primary node, display the synchronization status of mirrored storage areas.

drbd-overview

Do not proceed until the synchronization is complete. The process is complete when the status of the devices is UpToDate/UpToDate.
5 On both nodes, stop DRBD.

drbdadm down all

Verifying the Pacemaker configuration

This procedure reviews the resource and property defaults for Pacemaker.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Check resource defaults.

pcs resource defaults

Example result:

resource-stickiness: 100

3 Check property defaults.

pcs property

Example result:

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: serviced-ha
dc-version: ...
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false

Note Set the stonith-enabled option to false only during the initial setup and testing period. For production use, set it to true. For more information about fencing, refer to the Zenoss Resource Manager Planning Guide.

4 Review the resource constraints. The ordering constraint should show that serviced-group starts after DFSMaster (the DRBD master). The colocation constraint should show that the serviced-group resource and DFSMaster are on the same active cluster node.

pcs constraint

Example result:

Location Constraints:
Ordering Constraints:
  promote DFSMaster then start serviced-group (kind:Mandatory)
Colocation Constraints:
  serviced-group with DFSMaster (score:INFINITY) (with-rsc-role:Master)

5 Review the ordering of the serviced-group resource group.

pcs resource show --full

The resources in a resource group are started in the order they appear in the group, and stopped in the reverse order they appear in the group. The correct start order is:

1 serviced-isvcs
2 serviced-volumes
3 serviced-lvm
4 serviced-storage
5 VirtualIP
6 docker
7 nfs
8 serviced

Verifying the Control Center configuration

This procedure verifies that the Control Center configuration is identical on both nodes.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, compute the checksum of the Control Center configuration file.

cksum /etc/default/serviced

If the result is identical on both nodes, the configurations are identical. Do not perform the next step.
If the result is not identical on both nodes, there may be a difference in their configurations; proceed to the next step.
4 Optional: On both nodes, display the customized variables, if necessary.

egrep '^[^#]*SERVICED' /etc/default/serviced | sort

Example result:

SERVICED_AGENT=1
SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool
SERVICED_DOCKER_REGISTRY=HA-Virtual-IP:5000
SERVICED_ENDPOINT=HA-Virtual-IP:4979
SERVICED_FS_TYPE=devicemapper
SERVICED_LOG_ADDRESS=HA-Virtual-IP:5042
SERVICED_LOGSTASH_ES=HA-Virtual-IP:9100
SERVICED_MASTER=1
SERVICED_OUTBOUND_IP=HA-Virtual-IP
SERVICED_STATS_PORT=HA-Virtual-IP:8443
SERVICED_ZK=HA-Virtual-IP:2181

Note There may be only insignificant differences between the files, such as an extra space at the beginning of a variable definition.

Verifying cluster startup

This procedure verifies the initial configuration by attempting to start the resources on one node only. With the other node in standby mode, Pacemaker does not automatically fail over to the other node.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On the primary node, determine which node is the primary DRBD node.

pcs status

Example result:

Cluster name: serviced-ha
Last updated: Mon Feb 22 11:37:... Last change: Mon Feb 22 11:35:... by root via crm_attribute on Secondary-Public-Name
Stack: corosync
Current DC: Primary-Public-Name (version ...) - partition with quorum
2 nodes and 10 resources configured

Node Primary-Public-Name: standby
Node Secondary-Public-Name: standby

Full list of resources:

Master/Slave Set: DFSMaster [DFS]
  Stopped: [ Primary-Public-Name Secondary-Public-Name ]
Resource Group: serviced-group
  serviced-isvcs (ocf::heartbeat:Filesystem): Stopped
  serviced-volumes (ocf::heartbeat:Filesystem): Stopped
  serviced-lvm (ocf::heartbeat:LVM): Stopped
  serviced-storage (ocf::zenoss:serviced-storage): Stopped
  VirtualIP (ocf::heartbeat:IPaddr2): Stopped
  docker (systemd:docker): Stopped
  nfs (systemd:nfs): Stopped
  serviced (ocf::zenoss:serviced): Stopped

PCSD Status:
  Primary-Public-Name: Online
  Secondary-Public-Name: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/enabled
  pcsd: active/enabled

The line that begins with Current DC identifies the primary node. Review all of the command output for errors.
4 Start DRBD.
a On the secondary node, enter the following command:

drbdadm up all

b On the primary node, enter the following commands:

drbdadm up all && drbdadm primary serviced-dfs

5 Start cluster resources. You can run pcs commands on either node.

pcs cluster unstandby Primary-Public-Name

6 Monitor the status of cluster resources.

watch pcs status

Monitor the status until all resources report Started. Resolve any issues before continuing.

Verifying cluster failover

This procedure simulates a failover.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 Enable the DRBD secondary node.
a Take the secondary node out of standby mode. Replace Secondary-Public-Name with the public hostname of the secondary node:

pcs cluster unstandby Secondary-Public-Name

b Monitor the status of the secondary node.

pcs status

Do not continue until the status of the secondary node is Online.
3 Verify that DRBD has completely synchronized all three volumes on the secondary node.

drbd-overview

Example result:

0:serviced-dfs/0 Connected Primary/Secondary UpToDate/UpToDate
1:serviced-dfs/1 Connected Primary/Secondary UpToDate/UpToDate
2:serviced-dfs/2 Connected Primary/Secondary UpToDate/UpToDate

4 Force a failover.

Pacemaker initiates a failover when the primary node is put in standby mode. Replace Primary-Public-Name with the public hostname of the primary node:

pcs cluster standby Primary-Public-Name

5 Monitor the cluster status.

pcs status

Repeat the preceding command until all resources report a status of Started. Resolve any issues before continuing.
6 Restore the cluster. Replace Primary-Public-Name with the public hostname of the primary node:

pcs cluster unstandby Primary-Public-Name

Creating new resource pools

This procedure creates a new resource pool for the Control Center master nodes, and one or more resource pools for other hosts.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Create a new resource pool named master.

serviced pool add master

3 Optional: Create additional resource pools, if desired. No additional resource pools are required. However, many users find it useful to have pool names such as infrastructure and collector-n for groups of resource pool hosts. Replace Pool-Name with the name of the pool to create:

serviced pool add Pool-Name

Repeat the preceding command as desired.

Adding master nodes to their resource pool

This procedure adds the Control Center master nodes to their resource pool, named master. The master nodes are added to the resource pool with their public hostnames, so that you can easily see which node is active when you log in to the Control Center browser interface.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.

uname -n

The result is either Primary-Public-Name or Secondary-Public-Name.
3 Add the current node to the master resource pool.

Replace Node-Hostname with the public hostname of the current node:

serviced host add Node-Hostname:4979 master

4 Force a failover. Replace Node-Hostname with the public hostname of the current node:

pcs cluster standby Node-Hostname

5 Monitor the cluster status.

watch pcs status

Do not proceed until all resources report a status of Started.
6 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
7 Display the public hostname of the current node.

uname -n

8 Add the current node to the master resource pool. Replace Node-Hostname with the public hostname of the current node:

serviced host add Node-Hostname:4979 master

9 Restore the cluster. Replace Standby-Node-Hostname with the public hostname of the node that is in standby mode:

pcs cluster unstandby Standby-Node-Hostname

Control Center on resource pool hosts

Control Center resource pool hosts run the application services scheduled for the resource pool to which they belong, and for which they have sufficient RAM and CPU resources. In a high-availability deployment, a resource pool host may belong to any resource pool other than master, and no application services are run in the master pool.

Resource Manager has two broad categories of application services: Infrastructure and collection. The services associated with each category can run in the same resource pool, or can run in separate resource pools.

For improved reliability, two resource pool hosts are configured as nodes in an Apache ZooKeeper ensemble. The storage required for ensemble hosts is slightly different than the storage required for all other resource pool hosts: Each ensemble host requires a separate primary partition for Control Center internal services data, in addition to the primary partition for Docker data. Unless the ZooKeeper service on the Control Center master host fails, their roles in the ZooKeeper ensemble do not affect their roles as Control Center resource pool hosts.

Note The hosts for the ZooKeeper ensemble require static IP addresses, because ZooKeeper does not support hostnames in its configurations.

Repeat the procedures in the following sections for each host you wish to add to your Control Center deployment.
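Before preparing the resource pool hosts, you may want to confirm that both master nodes were registered by the preceding procedure. From a shell on the active node, list the hosts known to Control Center (a sketch using the serviced CLI; the output columns vary by release):

serviced host list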

Verifying candidate host resources

This procedure determines whether a host's hardware resources and operating system are sufficient to serve as a Control Center resource pool host. Perform this procedure on each resource pool host in your deployment.

1 Log in to the candidate host as root, or as a user with superuser privileges.
2 Verify that the host implements the 64-bit version of the x86 instruction set.

uname -m

If the output is x86_64, the architecture is 64-bit. Proceed to the next step.
If the output is i386/i486/i586/i686, the architecture is 32-bit. Stop this procedure and select a different host.
3 Verify that name resolution works on this host.

hostname -i

If the result is not a valid IPv4 address, add an entry for the host to the network nameserver, or to /etc/hosts.
4 Verify that the host's numeric identifier is unique. Each host in a Control Center cluster must have a unique host identifier.

hostid

5 Determine whether the available, unused storage is sufficient.
a Display the available storage devices.

lsblk --output=NAME,SIZE

b Compare the available storage with the amount required for a resource pool host in your deployment. In particular, resource pool hosts that are configured as nodes in a ZooKeeper ensemble require an additional primary partition for Control Center internal services data. For more information, refer to the Zenoss Resource Manager Planning Guide.
6 Determine whether the available memory and swap is sufficient.
a Display the available memory.

free -h

b Compare the available memory with the amount required for a resource pool host in your deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
7 Verify the operating system release.

cat /etc/redhat-release

If the result includes 7.0, select another host or upgrade the operating system.
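Example result for a supported host (a hypothetical but typical release string; any RHEL/CentOS 7.1 or 7.2 string is acceptable):

CentOS Linux release 7.2.1511 (Core)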

Staging files for offline installation

To perform this procedure, you need the portable storage medium that contains the archive files used in installing the master host.

This procedure adds files for offline installation to a resource pool host. The files are required in subsequent procedures. Perform this procedure on each resource pool host in your deployment.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Copy yum-mirror-*.x86_64.rpm from your portable storage medium to /tmp.
3 Install the Resource Manager repository mirror.

yum install -y /tmp/yum-mirror-*.x86_64.rpm

4 Optional: Delete the package file, if desired.

rm /tmp/yum-mirror-*.x86_64.rpm

Preparing a resource pool host

This procedure prepares a RHEL/CentOS 7.1 or 7.2 host as a Control Center resource pool host. Perform this procedure on each resource pool host in your deployment.

1 Log in to the candidate resource pool host as root, or as a user with superuser privileges.
2 Add an entry to /etc/hosts for localhost, if necessary.
a Determine whether 127.0.0.1 is mapped to localhost.

grep 127.0.0.1 /etc/hosts | grep localhost

If the preceding commands return no result, perform the following substep.
b Add an entry to /etc/hosts for localhost.

echo "127.0.0.1 localhost" >> /etc/hosts

3 Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, refer to the Zenoss Resource Manager Planning Guide.
a Determine whether the firewalld service is enabled.

systemctl status firewalld.service

If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
If the result includes Active: active (running), the service is enabled. Perform the following substep.
b Disable the firewalld service.

systemctl stop firewalld && systemctl disable firewalld

On success, the preceding commands display messages similar to the following example:

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'

4 Optional: Enable persistent storage for log files, if desired.

By default, RHEL/CentOS systems store log data only in memory or in a small ring-buffer in the /run/log/journal directory. By performing this step, log data persists and can be saved indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation.

mkdir -p /var/log/journal && systemctl restart systemd-journald

5 Disable Security-Enhanced Linux (SELinux), if installed.
a Determine whether SELinux is installed.

test -f /etc/selinux/config && grep '^SELINUX=' /etc/selinux/config

If the preceding commands return a result, SELinux is installed.
b Set the operating mode to disabled. Open /etc/selinux/config in a text editor, and change the value of the SELINUX variable to disabled.
c Confirm the new setting.

grep '^SELINUX=' /etc/selinux/config

6 Enable and start the Dnsmasq package.

systemctl enable dnsmasq && systemctl start dnsmasq

7 Install the Nmap Ncat utility. The utility is used to verify ZooKeeper ensemble configurations.

yum --enablerepo=zenoss-mirror install -y nmap-ncat

8 Reboot the host.

reboot

Configuring an NTP client

This procedure configures a resource pool host to synchronize its clock with the NTP server on the Control Center master host. If you have an NTP time server inside your firewall, you may configure the host to use it; however, this procedure does not include that option.

1 Log in to the Control Center resource pool host as root, or as a user with superuser privileges.
2 Create a backup of the NTP configuration file.

cp -p /etc/ntp.conf /etc/ntp.conf.orig

3 Edit the NTP configuration file.
a Open /etc/ntp.conf with a text editor.
b Replace all of the lines in the file with the following lines:

# Point to the master time server
server HA-Virtual-IP

restrict default ignore
restrict 127.0.0.1
restrict HA-Virtual-IP mask 255.255.255.255 nomodify notrap noquery

driftfile /var/lib/ntp/drift

c Replace both instances of HA-Virtual-IP with the virtual IP address of the high-availability cluster.
d Save the file and exit the editor.
4 Synchronize the clock with the master server.

ntpd -gq

5 Enable and start the NTP daemon.
a Enable the ntpd daemon.

systemctl enable ntpd

b Configure ntpd to start when the system starts. Currently, an unresolved issue associated with NTP prevents ntpd from restarting correctly after a reboot, and the following commands provide a workaround to ensure that it does.

echo "systemctl start ntpd" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

c Start ntpd.

systemctl start ntpd

Creating a file system for Control Center internal services

This procedure creates an XFS file system on a primary partition.

Note Perform this procedure only on the two resource pool hosts that are designated for use in the ZooKeeper ensemble. No other resource pool hosts run Control Center internal services, so no other pool hosts need a partition for internal services data.

1 Log in to the target host as root, or as a user with superuser privileges.
2 Identify the target primary partition for the file system to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

3 Create an XFS file system. Replace Isvcs-Partition with the path of the target primary partition:

mkfs -t xfs Isvcs-Partition

4 Create the mount point for Control Center internal services data.

mkdir -p /opt/serviced/var/isvcs

5 Add an entry to the /etc/fstab file. Replace Isvcs-Partition with the path of the primary partition used in the previous step:

echo "Isvcs-Partition \
/opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab
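For example, assuming the unused primary partition identified in step 2 is /dev/sdb1 (a hypothetical device name; substitute your own), steps 3 through 5 would be:

mkfs -t xfs /dev/sdb1
mkdir -p /opt/serviced/var/isvcs
echo "/dev/sdb1 \
/opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab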

6 Mount the file system, and then verify that it mounted correctly.

mount -a && mount | grep isvcs

Example result:

/dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Installing Docker and Control Center

This procedure installs and configures Docker, and installs Control Center. Perform this procedure on each resource pool host in your deployment.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Install Docker.

yum clean all && yum makecache fast
yum install --enablerepo=zenoss-mirror -y docker-engine

3 Create a symbolic link for the Docker temporary directory. Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.
a Create the docker directory in /var/lib.

mkdir /var/lib/docker

b Create the link to /tmp.

ln -s /tmp /var/lib/docker/tmp

4 Create a systemd override file for the Docker service definition.
a Create the override directory.

mkdir -p /etc/systemd/system/docker.service.d

b Create the override file.

cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
[Service]
TimeoutSec=300
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon \$OPTIONS -H fd://
EOF

c Reload the systemd manager configuration.

systemctl daemon-reload

5 Install Control Center. Control Center includes a utility that simplifies the process of creating a device mapper thin pool.

yum clean all && yum makecache fast

yum --enablerepo=zenoss-mirror install -y serviced

6 Create a device mapper thin pool for Docker data.
a Identify the primary partition for the thin pool to create.

lsblk --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

b Create the thin pool. Replace Path-To-Device with the path of an unused primary partition:

serviced-storage create-thin-pool docker Path-To-Device

On success, the result includes the name of the thin pool, which always starts with /dev/mapper.
7 Configure and start the Docker service.
a Create variables for adding arguments to the Docker configuration file. The --exec-opt argument is a workaround for a Docker issue on RHEL/CentOS 7.x systems. Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

myDriver="-s devicemapper"
myFix="--exec-opt native.cgroupdriver=cgroupfs"
myFlag="--storage-opt dm.thinpooldev"
myPool="Thin-Pool-Device"

b Add the arguments to the Docker configuration file.

echo 'OPTIONS="'$myDriver $myFix $myFlag'='$myPool'"' \
>> /etc/sysconfig/docker

c Start or restart Docker.

systemctl restart docker

The initial startup takes up to a minute, and may fail. If the startup fails, repeat the previous command.
8 Configure name resolution in containers. Each time it starts, docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.
a Identify the IPv4 subnet and netmask docker has selected for its virtual Ethernet bridge.

ip addr show docker0 | grep inet

b Open /etc/sysconfig/docker in a text editor.
c Add the following flags to the end of the OPTIONS declaration. Replace Bridge-Subnet with the IPv4 subnet docker selected for its virtual bridge, and replace Bridge-Netmask with the netmask docker selected:

--dns=Bridge-Subnet --ip=Bridge-Subnet/Bridge-Netmask

For example, if the bridge subnet and netmask is 172.17.0.1/16, the flags to add are --dns=172.17.0.1 --ip=172.17.0.1/16.

Note Leave a blank space after the end of the thin pool device name, and make sure the double quote character (") is at the end of the line.
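After the preceding edits, the OPTIONS declaration in /etc/sysconfig/docker might look like the following example; the thin pool name and bridge addresses here are hypothetical, so use the values reported on your own host:

OPTIONS="-s devicemapper --exec-opt native.cgroupdriver=cgroupfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --dns=172.17.0.1 --ip=172.17.0.1/16"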

d Restart the Docker service.

systemctl restart docker

Configuring and starting Control Center

This procedure customizes key configuration variables of Control Center. Perform this procedure on each resource pool host in your deployment.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Configure Control Center as an agent of the master host.

The following variable configures serviced to serve as agent:

SERVICED_AGENT
Default: 0 (false)
Determines whether a serviced instance performs agent tasks. Agents run application services scheduled for the resource pool to which they belong. The serviced instance configured as the master runs the scheduler. A serviced instance may be configured as agent and master, or just agent, or just master.

SERVICED_MASTER
Default: 0 (false)
Determines whether a serviced instance performs master tasks. The master runs the application services scheduler and other internal services, including the server for the Control Center browser interface. A serviced instance may be configured as agent and master, or just agent, or just master. Only one serviced instance in a Control Center cluster may be the master.

In addition, replace {{SERVICED_MASTER_IP}} with HA-Virtual-IP, the virtual IP address of the high-availability cluster, in the following lines:

# SERVICED_ZK={{SERVICED_MASTER_IP}}:2181
# SERVICED_DOCKER_REGISTRY={{SERVICED_MASTER_IP}}:5000
# SERVICED_ENDPOINT={{SERVICED_MASTER_IP}}:4979
# SERVICED_LOG_ADDRESS={{SERVICED_MASTER_IP}}:5042
# SERVICED_LOGSTASH_ES={{SERVICED_MASTER_IP}}:9100
# SERVICED_STATS_PORT={{SERVICED_MASTER_IP}}:8443

a Open /etc/default/serviced in a text editor.
b Find the SERVICED_AGENT declaration, and then change the value from 0 to 1. The following example shows the line to change:

# SERVICED_AGENT=0

c Remove the number sign character (#) from the beginning of the line.
d Find the SERVICED_MASTER declaration, and then remove the number sign character (#) from the beginning of the line.
e Globally replace {{SERVICED_MASTER_IP}} with the virtual IP address of the high-availability cluster (HA-Virtual-IP).

Note Remove the number sign character (#) from the beginning of each variable declaration that includes the virtual IP address.

f Save the file, and then close the editor.
3 Optional: Specify an alternate private network for Control Center, if necessary.

Control Center requires a 16-bit, private IPv4 network for virtual IP addresses, independent of the private network used in a dual-NIC DRBD configuration. The default network is 10.3/16. If the default network is already in use in your environment, you may select any valid IPv4 16-bit network.

The following variable configures serviced to use an alternate network:

SERVICED_VIRTUAL_ADDRESS_SUBNET
Default: 10.3
The 16-bit private subnet to use for serviced's virtual IPv4 addresses. RFC 1918 reserves the 10/8, 172.16/12, and 192.168/16 address spaces for private networks; however, serviced accepts any valid, 16-bit, IPv4 address space for its private network.

a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_VIRTUAL_ADDRESS_SUBNET declaration, and then change the value. The following example shows the line to change:

# SERVICED_VIRTUAL_ADDRESS_SUBNET=10.3

c Remove the number sign character (#) from the beginning of the line.
d Save the file, and then close the editor.
4 Start the Control Center service (serviced).

systemctl start serviced

To monitor progress, open a separate window to the host, and then enter the following command:

journalctl -flu serviced -o cat

Deploying Resource Manager

This procedure adds all of the resource pool hosts to the Control Center cluster, and then deploys the Resource Manager application.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.

uname -n

The result is either Primary-Public-Name or Secondary-Public-Name.
3 Place the other node in standby mode. This avoids potential conflicts and errors in the event of an unexpected serviced shutdown during the initial deployment. Replace Other-Node-Hostname with the public hostname of the other node:

pcs cluster standby Other-Node-Hostname

4 Add resource pool hosts to resource pools. Replace Hostname-Or-IP with the hostname or IP address of the resource pool host to add, and replace Resource-Pool-Name with the name of a resource pool created previously, or with default:

serviced host add Hostname-Or-IP:4979 Resource-Pool-Name

If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network. Repeat this step for each resource pool host in your deployment.
5 Add the Zenoss.resmgr application to Control Center.

mypath=/opt/serviced/templates
serviced template add $mypath/zenoss-resmgr-*.json

On success, the serviced command returns the template ID.
6 Deploy the application. Replace Template-ID with the template identifier returned in the previous step, and replace Deployment-ID with a name for this deployment (for example, Dev or Test):

serviced template deploy Template-ID default Deployment-ID

Control Center pulls Resource Manager images into the local registry. To monitor progress, open a separate window, and enter the following command:

journalctl -flu serviced -o cat

7 Restore the cluster. Replace Standby-Node-Hostname with the public hostname of the node that is in standby mode:

pcs cluster unstandby Standby-Node-Hostname

ZooKeeper ensemble configuration

Control Center relies on Apache ZooKeeper to coordinate its services. The configuration steps in this section create a ZooKeeper ensemble of 3 nodes. A ZooKeeper ensemble requires a minimum of 3 nodes, and 3 nodes is sufficient for most deployments. A 5-node configuration improves failover protection during maintenance windows. Ensembles larger than 5 nodes are not necessary. An odd number of nodes is recommended, and an even number of nodes is strongly discouraged.

Control Center variables for ZooKeeper

The tables in this section associate the ZooKeeper-related Control Center variables to set in /etc/default/serviced with the roles that hosts play in a Control Center cluster.

Table 10: Control Center master nodes

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node.
Value: 1

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Table 11: Control Center resource pool host and ZooKeeper ensemble node

SERVICED_ISVCS_ZOOKEEPER_ID
The unique identifier of a ZooKeeper ensemble node.
Value: 2 or 3

SERVICED_ISVCS_ZOOKEEPER_QUORUM
The ZooKeeper node ID, IP address, peer communications port, and leader communications port of each host in an ensemble. Each quorum definition must be unique, so the IP address of the "current" host is 0.0.0.0.
Value: ZooKeeper-ID@IP-Address:2888:3888,...

SERVICED_ISVCS_START
The list of Control Center internal services to start and run on hosts other than the master host.
Value: zookeeper

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Table 12: Control Center resource pool host

SERVICED_ZK
The list of endpoints in the Control Center ZooKeeper ensemble, separated by the comma character (,). Each endpoint includes the IP address of the ensemble node, and the port that Control Center uses to communicate with it.
Value: IP-Address:2181,...

Configuring a master node as a ZooKeeper node

This procedure configures both Control Center master nodes as members of the ZooKeeper ensemble.

Note For accuracy, this procedure constructs Control Center configuration variables in the shell and appends them to /etc/default/serviced. The last step is to move the variables from the end of the file to more appropriate locations.

1 Log in to the primary node as root, or as a user with superuser privileges.
2 In a separate window, log in to the secondary node as root, or as a user with superuser privileges.
3 On both nodes, create a variable for each Control Center host to include in the ZooKeeper ensemble. The variables are used in subsequent steps.

Note Define the variables identically on both the primary and the secondary nodes, and on each resource pool host.

Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

node1=HA-Virtual-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP

Note ZooKeeper requires IP addresses for ensemble configuration.

4 On both nodes, set the ZooKeeper node ID to 1.

echo "SERVICED_ISVCS_ZOOKEEPER_ID=1" >> /etc/default/serviced

5 On both nodes, specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
>> /etc/default/serviced

6 On both nodes, specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0. You may copy the following lines of text and paste them in your console:

q1="1@0.0.0.0:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

7 On both nodes, clean up the Control Center configuration file.
a Open /etc/default/serviced in a text editor.
b Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
c Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master node.
d Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
e Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
f Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
g Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
h Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.

i Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.
j Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
k Save the file, and then close the editor.
8 On both hosts, verify the ZooKeeper environment variables.

egrep '^[^#]*SERVICED' /etc/default/serviced | egrep '(_ZOO|_ZK)'

Configuring a resource pool host as a ZooKeeper node

To perform this procedure, you need a resource pool host with an XFS file system on a separate partition.

This procedure configures a ZooKeeper ensemble on a resource pool host. Repeat this procedure on each Control Center resource pool host to add to the ZooKeeper ensemble.

1 Log in to the resource pool host as root, or as a user with superuser privileges.
2 Create a variable for each Control Center host to include in the ZooKeeper ensemble. Replace HA-Virtual-IP with the virtual IP address of the high-availability cluster, and replace Pool-Host-A-IP and Pool-Host-B-IP with the IP addresses of the Control Center resource pool hosts to include in the ensemble:

node1=HA-Virtual-IP
node2=Pool-Host-A-IP
node3=Pool-Host-B-IP

3 Set the ID of this node in the ZooKeeper ensemble.

For Pool-Host-A-IP (node2), use the following command:

echo "SERVICED_ISVCS_ZOOKEEPER_ID=2" >> /etc/default/serviced

For Pool-Host-B-IP (node3), use the following command:

echo "SERVICED_ISVCS_ZOOKEEPER_ID=3" >> /etc/default/serviced

4 Specify the nodes in the ZooKeeper ensemble. You may copy the following text and paste it in your console:

echo "SERVICED_ZK=${node1}:2181,${node2}:2181,${node3}:2181" \
>> /etc/default/serviced

5 Specify the nodes in the ZooKeeper quorum. ZooKeeper requires a unique quorum definition for each node in its ensemble. To achieve this, replace the IP address of the current node with 0.0.0.0.

For Pool-Host-A-IP (node2), use the following commands:

q1="1@${node1}:2888:3888"
q2="2@0.0.0.0:2888:3888"
q3="3@${node3}:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

For Pool-Host-B-IP (node3), use the following commands:

q1="1@${node1}:2888:3888"
q2="2@${node2}:2888:3888"
q3="3@0.0.0.0:2888:3888"
echo "SERVICED_ISVCS_ZOOKEEPER_QUORUM=${q1},${q2},${q3}" \
>> /etc/default/serviced

6 Set the SERVICED_ISVCS_START variable, and clean up the Control Center configuration file.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_ISVCS_START variable, and then delete all but zookeeper from its list of values.
c Remove the number sign character (#) from the beginning of the line.
d Navigate to the end of the file, and cut the line that contains the SERVICED_ZK variable declaration at that location. The value of this declaration specifies 3 hosts.
e Locate the SERVICED_ZK variable near the beginning of the file, and then delete the line it is on. The value of this declaration is just the master node.
f Paste the SERVICED_ZK variable declaration from the end of the file in the location of the just-deleted declaration.
g Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration at that location.
h Locate the SERVICED_ISVCS_ZOOKEEPER_ID variable near the end of the file, and then delete the line it is on. This declaration is commented out.
i Paste the SERVICED_ISVCS_ZOOKEEPER_ID variable declaration from the end of the file in the location of the just-deleted declaration.
j Navigate to the end of the file, and cut the line that contains the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration at that location.
k Locate the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable near the end of the file, and then delete the line it is on. This declaration is commented out.
l Paste the SERVICED_ISVCS_ZOOKEEPER_QUORUM variable declaration from the end of the file in the location of the just-deleted declaration.
m Save the file, and then close the editor.
7 Verify the ZooKeeper environment variables.

egrep '^[^#]*SERVICED' /etc/default/serviced \
| egrep '(_ZOO|_ZK|_STA)'

8 Pull the required Control Center ZooKeeper image from the master host.
a Identify the image to pull.

serviced version | grep IsvcsImages

Example result:

IsvcsImages: [zenoss/serviced-isvcs:v40 zenoss/isvcs-zookeeper:v3]

b Pull the Control Center ZooKeeper image.

Replace Isvcs-ZK-Image with the name and version number of the ZooKeeper image from the previous substep:

docker pull Isvcs-ZK-Image

Starting a ZooKeeper ensemble

This procedure starts a ZooKeeper ensemble. The window of time for starting a ZooKeeper ensemble is relatively short. The goal of this procedure is to restart Control Center on each ensemble node at about the same time, so that each node can participate in electing the leader.

1 Use the virtual hostname (HA-Virtual-Name) or virtual IP address (HA-Virtual-IP) of the high-availability cluster to start a Bash shell on the Control Center master host as root, or as a user with superuser privileges.
2 Display the public hostname of the current node.

uname -n

The result is either Primary-Public-Name or Secondary-Public-Name.
3 Place the other node in standby mode. This avoids potential conflicts and errors in the event of an unexpected serviced shutdown during the ZooKeeper startup. Replace Other-Node-Hostname with the public hostname of the other node:

pcs cluster standby Other-Node-Hostname

4 In a separate window, log in to the second node of the ZooKeeper ensemble (Pool-Host-A-IP).
5 In another separate window, log in to the third node of the ZooKeeper ensemble (Pool-Host-B-IP).
6 On all ensemble hosts, stop and start serviced.

systemctl stop serviced && systemctl start serviced

7 On the master host, check the status of the ZooKeeper ensemble.

{ echo stats; sleep 1; } | nc localhost 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-A-IP 2181 | grep Mode
{ echo stats; sleep 1; } | nc Pool-Host-B-IP 2181 | grep Mode

If nc is not available, you can use telnet with interactive ZooKeeper commands.
8 Restore the cluster. Replace Other-Node-Hostname with the public hostname of the node that is in standby mode:

pcs cluster unstandby Other-Node-Hostname

Updating resource pool hosts

The default configuration of resource pool hosts sets the value of the SERVICED_ZK variable to the master host only. This procedure updates the setting to include the full ZooKeeper ensemble. Perform this procedure on each resource pool host in your Control Center cluster.

1 Log in to the resource pool host as root, or as a user with superuser privileges.

2 Update the variable.
a Open /etc/default/serviced in a text editor.
b Locate the SERVICED_ZK declaration, and then replace its value with the same value used in the ZooKeeper ensemble nodes.
c Save the file, and then close the editor.
3 Restart Control Center.

systemctl restart serviced
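To confirm the update on each resource pool host, you can display the new value; the addresses shown should match the ensemble endpoints configured previously:

grep '^SERVICED_ZK' /etc/default/serviced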

Part III: Appliance deployments

The chapters in this part describe how to install the Resource Manager appliance, a pre-configured virtual machine that is ready to deploy to your hypervisor. The instructions include a variety of options for customizing your deployment for your environment.

Installing a Control Center master host

This chapter describes how to install a Resource Manager appliance package as a Control Center master host, using either VMware vSphere or Microsoft Hyper-V. The procedures in this chapter configure a Control Center master host that functions as both master and agent. Perform the procedures in this chapter whether you are configuring a single-host or a multi-host deployment. (For more information about configuring a multi-host deployment, see Configuring a multi-host Control Center cluster on page 178.)

The procedures in this chapter do not include adding storage for backups created by Control Center. Hypervisor backups of Resource Manager hosts do not capture the information needed to restore a system successfully, and Zenoss strongly recommends using the Control Center backup and restore features instead of hypervisor backups. For more information about adding storage for backups, see Adding storage for backups on page 163. For more information about the Control Center backup and restore features, refer to the Zenoss Resource Manager Administration Guide.

Creating a virtual machine

You may create a virtual machine for the Resource Manager appliance with VMware vSphere or Microsoft Hyper-V. Choose one of the procedures in this section.

Creating a virtual machine with vSphere

To perform this task, you need:
A VMware vSphere client
Permission to download Resource Manager software from the Zenoss Support site

This procedure installs the Resource Manager OVA package as a virtual machine managed by vSphere Server version 5.0.0, using VMware vSphere Client. The procedure is slightly different with different versions of VMware vSphere Client.

Note VMware vSphere Client does not include a library that is needed to deploy compressed OVA files. You may uncompress the OVA package and then deploy it, or download and install the missing library. Zenoss recommends installing the library.

1 Download the Resource Manager OVA file from the Zenoss Support site to your workstation.
2 Use the VMware vSphere Client to log in to vCenter as root, or as a user with superuser privileges, and then display the Home view.

Figure 1: vSphere client Home view

3 From the File menu, select Deploy OVF Template...
4 In the Source panel, specify the path of the Resource Manager package, and then click Next.
5 In the OVF Template Details panel, click Next.
6 In the Name and Location panel, provide a name and a location for the server.
a In the Name field, enter a new name or use the default.
b In the Inventory Location area, select a data center for the virtual machine.
c Click Next.
7 In the Host / Cluster panel, select a host system, and then click Next.
8 In the Storage panel, select a storage system with sufficient space for your deployment, and then click Next.
9 In the Disk Format panel, select Thin Provision, and then click Next.
10 In the Ready to Complete panel, review the deployment settings, and then click Finish. Please do not check the check box labeled Power on after deployment.
11 Navigate to the new virtual machine's Getting Started or Summary tab, and then click the Edit virtual machine settings link.
12 Update the memory assigned to the machine.
a In the Virtual Machine Properties dialog, select Memory in the Hardware table.
b In the Memory Configuration area, set the Memory Size field to 16GB (multi-host deployments) or 32GB (single-host deployments). For single-host deployments, you may assign a greater amount of RAM.
13 Optional: Update the number of CPU sockets assigned to the machine, if desired. Only 4 CPUs are needed for multi-host deployments.
a In the Virtual Machine Properties dialog, select CPUs in the Hardware table.
b Set the Number of virtual sockets field to 4 (multi-host deployments), and set the Number of cores per socket field to 1.
14 At the bottom of the Virtual Machine Properties dialog, click the OK button.
15 On the new virtual machine's Getting Started tab, click the Power on virtual machine link, and then click the Console tab.

Creating a virtual machine with Hyper-V

To perform this task, you need:
A Microsoft Remote Desktop client
Administrator privileges on a Microsoft Hyper-V server
Permission to download Resource Manager software from the Zenoss Support site

This procedure installs the Resource Manager appliance as a virtual machine managed by Microsoft Hyper-V.

1 Use a Microsoft Remote Desktop client to log in to a Hyper-V host as Administrator, or as a user with Administrator privileges.
2 Download the Resource Manager ISO file from the Zenoss Support site to the Hyper-V host.
3 Start Hyper-V Manager.
4 In the left column, select a server to host the virtual machine.
5 From the Action menu, select New > Virtual Machine...
6 In the New Virtual Machine Wizard dialog, display the Specify Name and Location panel. If the first panel displayed is the Before You Begin panel, click Next.
7 In the Specify Name and Location panel, provide a name for the virtual machine, and then click Next.
8 In the Specify Generation panel, select Generation 1, and then click Next.
9 In the Assign Memory panel, enter 16384 (16GB; multi-host deployments) or 32768 (32GB; single-host deployments) in the Startup memory field, and then click Next. For single-host deployments, you may assign a greater amount of RAM.
10 In the Configure Networking panel, select a virtual switch, and then click Next.
11 In the Connect Virtual Hard Disk panel, select Create a virtual hard disk, enter 335 in the Size field, and then click Next.
12 In the Installation Options panel, specify the Resource Manager ISO package.
a Select Install an operating system from a bootable CD/DVD-ROM.
b Select Image file (.iso), and then specify the location of the Resource Manager ISO image file.
c Click Next.
13 In the Summary panel, review the virtual machine specification, and then click Finish. Hyper-V Manager creates the new virtual machine, and then closes the New Virtual Machine Wizard dialog.
14 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Settings...
15 In the Hardware area of the Settings dialog, select Processor.

Figure 2: Settings dialog, Processor selected

16 In the Processor area, enter 4 (multi-host deployments) or 8 (single-host deployments) in the Number of virtual processors field, and then click OK.
17 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Start.

Figure 3: Starting a virtual machine

18 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Connect.
19 In the Virtual Machine Connection window, press the Enter key.

Figure 4: Appliance installation start screen

The appliance installation process takes about 15 minutes, and should complete with no additional input.
20 Optional: Select the installation destination, if necessary. Occasionally, installation is interrupted with the Kickstart insufficient message.

Figure 5: Kickstart insufficient message

a In the SYSTEM area of the INSTALLATION SUMMARY page, click the INSTALLATION DESTINATION control.

Figure 6: INSTALLATION DESTINATION page

b On the INSTALLATION DESTINATION page, click the Done button, located at the upper-left corner of the page.
c On the INSTALLATION SUMMARY page, click the Begin Installation button, located at the bottom-right corner of the page.

Configuring the Control Center host mode

Perform this procedure immediately after creating and starting a Control Center host. All Control Center deployments must include one system configured as the master host.

1 Gain access to the console interface of the Control Center host through your hypervisor console interface.

Figure 7: Initial hypervisor console login prompt

2 Log in as the root user. The initial password is provided in the console.
3 The system prompts you to enter a new password for root.
4 The system prompts you to enter a new password for ccuser. The ccuser account is the default account for gaining access to the Control Center browser interface.
5 Select the master role for the host.
a In the Configure appliance menu, press the Tab key to select the Choose button.
b Press the Enter key. The system reboots.

Edit a connection

The default configuration for network connections is DHCP. To configure static IPv4 addressing, perform this procedure.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY.
2 Log in as the root user.
3 Select the NetworkManager TUI menu.
a In the Appliance Administration menu, select the Configure Network and DNS option.
b Press the Tab key to select the Run button.
c Press the Enter key.
4 On the NetworkManager TUI menu, select Edit a connection, and then press the Return key. The TUI displays the connections that are available on this host.

Figure 8: Example: Available connections

Note Do not modify the docker0 connection.

5 Use the down-arrow key to select the virtual connection, and then press the Return key.

Figure 9: Example: Edit Connection screen

Use the Tab key and the arrow keys to navigate among options in the Edit Connection screen, and use the Return key to toggle an option or to display a menu of options.
6 Optional: If the IPv4 CONFIGURATION area is not visible, select its display option (<Show>), and then press the Return key.
7 In the IPv4 CONFIGURATION area, select <Automatic>, and then press the Return key.

Figure 10: Example: IPv4 Configuration options

8 Configure static IPv4 networking.
a Use the down arrow key to select Manual, and then press the Return key.
b Use the Tab key or the down arrow key to select the <Add...> option next to Addresses, and then press the Return key.
c In the Addresses field, enter an IPv4 address for the virtual machine, and then press the Return key.
d Repeat the preceding two steps for the Gateway and DNS servers fields.
9 Use the Tab key or the down arrow key to select the <OK> option at the bottom of the Edit Connection screen, and then press the Return key.
10 In the available connections screen, use the Tab key to select the <Quit> option, and then press the Return key.
11 Reboot the operating system.
a In the Appliance Administration menu, use the down-arrow key to select the Reboot / Poweroff System option.
b Use the Down Arrow key to select Reboot.
c Press the Tab key to select OK, and then press the Return key.

Set system hostname

The default hostname of a Resource Manager appliance host is resmgr. To change the hostname, perform this procedure.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Select the NetworkManager TUI menu.
a In the Appliance Administration menu, select the Configure Network and DNS option.
b Press the Tab key to select the Run button.
c Press the Enter key.
3 Display the hostname entry field.
a In the NetworkManager TUI menu, use the down-arrow key to select Set system hostname.
b Press the Tab key to select the OK button.
c Press the Enter key.
4 In the Hostname field, enter the new hostname. You may enter either a hostname or a fully-qualified domain name.
5 Press the Tab key twice to select the OK button, and then press the Enter key.
6 In the confirmation dialog, press the Return key.
7 Reboot the operating system.
a In the Appliance Administration menu, use the down-arrow key to select the Reboot / Poweroff System option.
b Use the Down Arrow key to select Reboot.
c Press the Tab key to select OK, and then press the Return key.

Adding the master host to a resource pool

This procedure adds the Control Center master host to the default resource pool, or to a new resource pool, named master.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
b Press the Tab key to select Run, and then press the Return key. The menu is replaced by a command prompt similar to the following example:

[root@resmgr ~]#

3 Optional: Create a new resource pool, if necessary. For single-host deployments, skip this step. For multi-host deployments with at least two resource pool hosts, perform this step.

serviced pool add master

4 Add the master host to a resource pool. For single-host deployments, add the master host to the default resource pool. For multi-host deployments with at least two resource pool hosts, add the master host to the master resource pool.

Adding the master host to a resource pool

This procedure adds the Control Center master host to the default resource pool, or to a new resource pool, named master.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Optional: Create a new resource pool, if necessary.
   For single-host deployments, skip this step. For multi-host deployments with at least two resource pool hosts, perform this step.
   serviced pool add master
4 Add the master host to a resource pool.
   For single-host deployments, add the master host to the default resource pool. For multi-host deployments with at least two resource pool hosts, add the master host to the master resource pool.
   Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host, and replace Resource-Pool with default or master:
   serviced host add Hostname-Or-IP:4979 Resource-Pool
   If you enter a hostname, all hosts in your Control Center cluster must be able to resolve the name, either through an entry in /etc/hosts, or through a nameserver on your network.

Deploying Resource Manager

This procedure adds the Resource Manager application to the list of applications that Control Center manages.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Add the Zenoss.resmgr application to Control Center.
   mypath=/opt/serviced/templates
   serviced template add $mypath/zenoss-resmgr-*.json
   On success, the serviced command returns the template ID.
4 Deploy the application.
   Replace Template-ID with the template identifier returned in the previous step, and replace Deployment-ID with a name for this deployment (for example, Dev or Test):
   serviced template deploy Template-ID default Deployment-ID
   Control Center tags Resource Manager images in the local registry.

If you are installing a single-host deployment, proceed to the Zenoss Resource Manager Configuration Guide.
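Before moving on, you can confirm that the template and the application are registered. This is a hedged sketch using standard serviced subcommands; the exact output varies by Control Center release.

   serviced template list
   serviced service status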

Adding storage for backups

This chapter describes how to add storage for application data backups to an appliance-based Control Center master host. Most appliance-based deployments need additional storage for backups. On the Resource Manager appliance, the default partition for backup data is the same partition as the root (/) file system, which is not sized to store backups. You can use a remote file server for backups; you do not have to add a virtual disk device to the Control Center host, you can simply mount a remote file system.

Note: The procedures in this chapter do not include size recommendations for backups storage. For more information about sizing, refer to the Zenoss Resource Manager Planning Guide.

The procedures in this chapter may be performed only after the Control Center master host is installed and running.

Option                                  Procedure(s)
Add a remote file server for backups    Mounting a remote file system for backups
Add a virtual disk for backups          Identifying existing virtual disks
                                        Creating a virtual disk with vSphere
                                        Creating a virtual disk with Hyper-V
                                        Identifying new virtual disks
                                        Creating primary partitions
                                        Preparing a partition for backups

Mounting a remote file system for backups

This procedure mounts a remote file system for backups. To perform this procedure, you need a Linux-compatible remote file server, and the file system specification for the file system to mount.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Create an entry in the /etc/fstab file.
   Replace File-System-Specification with the remote server specification, and replace File-System-Type with the file system type (such as xfs):
   echo "File-System-Specification \
   /opt/serviced/var/backups File-System-Type \
   defaults 0 0" >> /etc/fstab
4 Mount the file system, and then verify it mounted correctly.
   mount -a && mount | grep backups
   Example result:
   fs12:/backups/zenoss on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
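For example, the following commands register and mount an export from a hypothetical NFS server. The server name fs12 and export path /backups/zenoss are illustrative, and nfs as the File-System-Type is an assumption; substitute the specification and type that your file server requires.

   # Hypothetical NFS export; replace with your own server and path.
   echo "fs12:/backups/zenoss \
   /opt/serviced/var/backups nfs \
   defaults 0 0" >> /etc/fstab
   mount -a && mount | grep backups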

Identifying existing virtual disks

This procedure identifies the virtual disks attached to an appliance-based master host.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Identify the virtual disks attached to the host.
   lsblk -pdo NAME,HCTL,SIZE
   Example output:
   NAME       HCTL        SIZE
   /dev/sda   2:0:0:0     293G
   /dev/sr0   1:0:0:0    1024M
   The example output shows two devices:
   - One disk drive (/dev/sda)
   - One CD-ROM drive (/dev/sr0)
   Make note of the disk devices for later comparison.

Creating a virtual disk with vSphere

To perform this task, you need a VMware vSphere client.

1 Use the VMware vSphere Client to log in to vCenter as root, or as a user with superuser privileges, and then display the Home Inventory view.
2 In the left column, right-click on the Control Center master host virtual machine, and then select Edit Settings...
3 On the Hardware tab, click the Add... button.
4 In the Add Hardware dialog, select Hard Disk, and then click the Next button.
5 In the Select a Disk pane, click the Create a new virtual disk radio button, and then click the Next button.
6 In the Create a Disk pane, configure the virtual disk.
   a In the Capacity area, set the disk size. For more information, refer to the Zenoss Resource Manager Planning Guide.
   b In the Disk Provisioning area, choose the option you prefer.
   c In the Location area, choose the option you prefer.
   d Click the Next button.
7 In the Advanced Options pane, configure the mode.
   a In the Mode area, check the Independent check box.
   b Click the Persistent radio button.
   c Click the Next button.
8 In the Ready to Complete pane, confirm the virtual disk configuration, and then click the Finish button.
9 At the bottom of the Virtual Machine Properties dialog, click the OK button.

Creating a virtual disk with Hyper-V

To perform this task, you need:
- A Microsoft Remote Desktop client
- Administrator privileges on a Microsoft Hyper-V server

In addition, the virtual machine to modify must be stopped.

1 Use a Microsoft Remote Desktop client to log in to a Hyper-V host as Administrator, or as a user with Administrator privileges.
2 Start Hyper-V Manager.
3 In the left column, select the server that is hosting the Control Center master host, and then right-click to select New > Hard Disk...
4 In the New Virtual Hard Disk Wizard dialog, navigate to the Choose Disk Format panel.
5 Click the VHDX radio button, and then click the Next button.
6 In the Choose Disk Type panel, click the Dynamically expanding radio button, and then click the Next button.
7 In the Specify Name and Location panel, enter a name for the disk in the Name field, and then click the Next button.
8 In the Configure Disk panel, click the Create a new blank virtual hard disk radio button, enter the disk size in the Size field, and then click the Next button.
   For more information, refer to the Zenoss Resource Manager Planning Guide.
9 In the Summary panel, review the virtual disk settings, and then click the Finish button.
10 In Hyper-V Manager, right-click the virtual machine of the Control Center master host, and then select Settings...
11 In the Settings dialog, select SCSI Controller from the Hardware list in the left column.
12 In the SCSI Controller area on the right side, select Hard Drive, and then click the Add button.
13 In the Hard Drive area, click the Virtual hard disk radio button, and then click the Browse button.
14 In the Open dialog, select the hard disk image created previously, and then click the Open button.
15 In the Settings dialog, click the OK button.

Identifying new virtual disks

This procedure identifies the newly-attached virtual disks of an appliance-based master host.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Rescan all SCSI storage.
   for h in $(ls /sys/class/scsi_host)
   do
     echo "- - -" > /sys/class/scsi_host/${h}/scan
   done
4 Identify the virtual disks attached to the host.
   lsblk -pdo NAME,HCTL,SIZE
   Example output:
   NAME       HCTL        SIZE
   /dev/sda   2:0:0:0     293G
   /dev/sdb   2:0:1:0     300G
   /dev/sr0   1:0:0:0    1024M
   Compared to the previous example output, this example output shows a new drive, /dev/sdb.
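If several disks are attached, comparing the device list from before and after the rescan makes the new device easier to spot. The following is a hedged convenience sketch, not part of the original procedure; it assumes you saved the earlier lsblk output to a file before attaching the disk.

   # Run before attaching the new virtual disk.
   lsblk -pdo NAME,HCTL,SIZE > /tmp/disks-before.txt
   # After attaching and rescanning, lines marked ">" are new devices.
   lsblk -pdo NAME,HCTL,SIZE | diff /tmp/disks-before.txt -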

Creating primary partitions

To perform this procedure, you need a host with at least one disk device.

This procedure demonstrates how to create primary partitions on a disk. Each primary partition may be formatted as a file system or swap space, used in a device mapper thin pool, or reserved for future use. Each disk must have one primary partition, and may have up to four.

Note: Data present on the disk you select is destroyed by this procedure. Please ensure that data present on the disk is backed up elsewhere, or no longer needed, before proceeding.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY.
2 Log in as the root user.
3 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
4 Start the partition table editor for the target disk.
   In this example, the target disk is /dev/sdb, and it has no entries in its partition table.
   fdisk /dev/sdb
   Figure 11: Initial screen
   The fdisk command provides a text user interface (TUI) for editing the partition table. The following list describes how to navigate through the interface:
   - To select an entry in the table, use the up and down arrow keys. The current entry is highlighted.
   - To select a command from the menu at the bottom of the interface, use the left and right arrow keys, or Tab and Shift-Tab. The current command is highlighted.
   - To choose a command, press the Enter key.
   - To return to the previous level of the menu, press the Esc key.
   - To exit the interface, select Quit from the menu, and then press the Enter key.
   For more information about fdisk, enter man fdisk.
5 Create a new partition.
   Repeat the following substeps for each primary partition to create. You may create up to four primary partitions on a disk.

   a Select the table entry with the value Free Space in the FS Type column.
   b Select [New], and then press the Enter key.
   c Select [Primary], and then press the Enter key.
   d At the Size (in MB) prompt, enter the size of the partition to create in megabytes, and then press the Enter key.
     To accept the default value, which is all of the free space on the disk, just press the Enter key.
   e Optional: Select [Beginning], and then press the Enter key.
     Note: If you created a single partition that uses all of the available disk space, skip this substep.
   Figure 12: One primary partition
6 Write the partition table to disk, and then exit the partition table editor.
   a Select [Write], and then press the Enter key.
   b At the Are you sure... prompt, enter yes, and then press the Enter key.
     You can ignore the warning about a bootable partition.
   c Select [Quit], and then press the Enter key.
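As an alternative to the interactive session, a single primary partition spanning the whole disk can be created non-interactively with parted. This is a hedged sketch, not the guide's procedure; /dev/sdb is the example disk from this chapter, and parted is assumed to be present on the appliance.

   # Destroys existing data on /dev/sdb: create an msdos label and one
   # primary partition covering the entire disk.
   parted --script /dev/sdb mklabel msdos mkpart primary 0% 100%
   # Verify that the new partition appears.
   lsblk -p /dev/sdb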

Preparing a partition for backups

To perform this procedure, you need an unused primary partition. This procedure prepares a partition for backups for a Control Center master host.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Identify the partition to prepare.
   Replace Device with the virtual disk added previously:
   lsblk -p --output=NAME,SIZE,TYPE Device
   Example output:
   NAME           SIZE  TYPE
   /dev/sdb       300G  disk
   └─/dev/sdb1    300G  part
   In this example, the partition to prepare is /dev/sdb1.
4 Create an XFS file system on the partition, and label the partition.
   Replace Partition with the partition identified previously:
   mkfs -t xfs -L BACKUPS Partition
5 Create an entry in the /etc/fstab file.
   Replace Partition with the partition identified previously:
   mypart=Partition
   echo "$mypart /opt/serviced/var/backups xfs defaults 0 0" \
     >> /etc/fstab
6 Mount the file system, and then verify it mounted correctly.
   mount -a && mount | grep backups
   Example result:
   /dev/sdb1 on /opt/serviced/var/backups type xfs (rw,relatime,attr2,inode64,noquota)
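Optionally, you can double-check the new file system, its label, and the available space. This is a hedged sketch, not part of the original procedure; blkid and df are part of the appliance's base system, and /dev/sdb1 is the example partition used above.

   # Should report TYPE="xfs" and LABEL="BACKUPS".
   blkid /dev/sdb1
   # Show the capacity available for backups.
   df -h /opt/serviced/var/backups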

Installing resource pool hosts

This chapter describes how to install appliance-based resource pool hosts. You may add as many resource pool hosts as you wish to a Control Center cluster.

Creating a virtual machine

You may create a virtual machine for the Resource Manager appliance with VMware vSphere or Microsoft Hyper-V. Choose one of the procedures in this section.

Creating a virtual machine with vSphere

To perform this task, you need a VMware vSphere client.

This procedure installs the Resource Manager OVA package as a virtual machine managed by vSphere Server version 5.0.0, using VMware vSphere Client. The procedure is slightly different with different versions of VMware vSphere Client.

1 Download the Resource Manager OVA file from the Zenoss Support site to your workstation, if necessary.
   Note: The same OVA package is used for both master host and resource pool host virtual machines.
2 Use the VMware vSphere Client to log in to vCenter as root, or as a user with superuser privileges, and then display the Home view.

Figure 13: vSphere Client Home view
3 From the File menu, select Deploy OVF Template...
4 In the Source panel, specify the path of the Resource Manager package, and then click Next.
5 In the OVF Template Details panel, click Next.
6 In the Name and Location panel, provide a name and a location for the server.
   a In the Name field, enter a new name.
   b In the Inventory Location area, select a data center for the virtual machine.
   c Click Next.
7 In the Host / Cluster panel, select a host system, and then click Next.
8 In the Storage panel, select a storage system with sufficient space for the virtual machine, and then click Next.
9 In the Disk Format panel, select Thin Provision, and then click Next.
10 In the Ready to Complete panel, review the deployment settings, and then click Finish.
   Please do not check the check box labeled Power on after deployment.
11 Navigate to the new virtual machine's Getting Started or Summary tab, and then click the Edit virtual machine settings link.
12 Update the memory assigned to the machine.
   a In the Virtual Machine Properties dialog, select Memory in the Hardware table.
   b In the Memory Configuration area, set the Memory Size field to 32GB.
   c At the bottom of the Virtual Machine Properties dialog, click the OK button.
13 On the new virtual machine's Getting Started tab, click the Power on virtual machine link.
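If you deploy many resource pool hosts, the same OVA can also be deployed from a command line with VMware's ovftool utility. This is a hedged sketch, not part of the guide; the file name, inventory locator, and option spellings are illustrative and vary by vCenter inventory and ovftool version. Steps 11 through 13 above (setting the memory to 32GB before first power-on) still apply however the OVA is deployed.

   # Hypothetical names throughout; substitute your own inventory paths.
   ovftool --acceptAllEulas --name=resmgr-pool-1 \
     --datastore=datastore1 --diskMode=thin \
     zenoss-resmgr.ova \
     'vi://administrator@vcenter.example.com/Datacenter1/host/Cluster1'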

Creating a virtual machine with Hyper-V

To perform this task, you need:
- A Microsoft Remote Desktop client
- Administrator privileges on a Microsoft Hyper-V server

This procedure installs the Resource Manager appliance as a virtual machine managed by Microsoft Hyper-V.

1 Use a Microsoft Remote Desktop client to log in to a Hyper-V host as Administrator, or as a user with Administrator privileges.
2 Download the Resource Manager ISO file from the Zenoss Support site to the Hyper-V host, if necessary.
   Note: The same ISO package is used for both master host and resource pool host virtual machines.
3 Start Hyper-V Manager.
4 In the left column, select a server to host the virtual machine.
5 From the Action menu, select New > Virtual Machine...
6 In the New Virtual Machine Wizard dialog, display the Specify Name and Location panel.
   If the first panel displayed is the Before You Begin panel, click Next.
7 In the Specify Name and Location panel, provide a name for the virtual machine, and then click Next.
8 In the Specify Generation panel, select Generation 1, and then click Next.
9 In the Assign Memory panel, enter 32768 (32GB) in the Startup memory field, and then click Next.
10 In the Configure Networking panel, select Cisco VIC Ethernet Interface - Virtual Switch, and then click Next.
11 In the Connect Virtual Hard Disk panel, select Create a virtual hard disk, enter 200 in the Size field, and then click Next.
12 In the Installation Options panel, specify the Resource Manager ISO package.
   a Select Install an operating system from a bootable CD/DVD-ROM.
   b Select Image file (.iso), and then specify the location of the Resource Manager ISO image file.
   c Click Next.
13 In the Summary panel, review the virtual machine specification, and then click Finish.
   Hyper-V Manager creates the new virtual machine, and then closes the New Virtual Machine Wizard dialog.
14 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Settings...
15 In the Hardware area of the Settings dialog, select Processor.
   Figure 14: Settings dialog, Processor selected
16 In the Processor area, enter 8 in the Number of virtual processors field, and then click OK.
17 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Start.

Figure 15: Starting a virtual machine
18 In the Virtual Machines area of Hyper-V Manager, select the new virtual machine, and then right-click to select Connect.
19 In the Virtual Machine Connection window, press the Enter key.
   Figure 16: Appliance installation start screen
   The appliance installation process takes about 15 minutes, and should complete with no additional input.
20 Optional: Select the installation destination, if necessary.
   Occasionally, installation is interrupted with the Kickstart insufficient message.
   Figure 17: Kickstart insufficient message
   a In the SYSTEM area of the INSTALLATION SUMMARY page, click the INSTALLATION DESTINATION control.
   Figure 18: INSTALLATION DESTINATION page

   b On the INSTALLATION DESTINATION page, click the Done button, located at the upper-left corner of the page.
   c On the INSTALLATION SUMMARY page, click the Begin Installation button, located at the bottom-right corner of the page.

Configuring the virtual machine mode

This procedure configures the new virtual machine as a resource pool host.

1 Gain access to the console interface of the Control Center host through your hypervisor console interface.
   Figure 19: Initial hypervisor console login prompt
2 Log in as the root user.
   The initial password is provided in the console.
3 The system prompts you to enter a new password for root.
4 The system prompts you to enter a new password for ccuser.
   The ccuser account is the default account for gaining access to the Control Center browser interface.
5 Select the Agent role for the virtual machine.
   a In the Configure appliance menu, press the down-arrow key to select Agent.
   b Press the Tab key to select the Choose button, and then press the Enter key.

   c In the IP field, enter the hostname, fully-qualified domain name, or IPv4 address of the master host.
     If you enter the hostname or fully-qualified domain name of the master host, you need an entry in the /etc/hosts file of the agent host, or a nameserver on your network, that resolves the name to its IPv4 address.
   d Press the Tab key to select the Ok button, and then press the Enter key.
   The system reboots.

Edit a connection

The default configuration for network connections is DHCP. To configure static IPv4 addressing, perform this procedure.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY.
2 Log in as the root user.
3 Select the NetworkManager TUI menu.
   a In the Appliance Administration menu, select the Configure Network and DNS option.
   b Press the Tab key to select the Run button.
   c Press the Enter key.
4 On the NetworkManager TUI menu, select Edit a connection, and then press the Return key.
   The TUI displays the connections that are available on this host.
   Figure 20: Example: Available connections
   Note: Do not modify the docker0 connection.
5 Use the down-arrow key to select the virtual connection, and then press the Return key.
   Figure 21: Example: Edit Connection screen
   Use the Tab key and the arrow keys to navigate among options in the Edit Connection screen, and use the Return key to toggle an option or to display a menu of options.
6 Optional: If the IPv4 CONFIGURATION area is not visible, select its display option (<Show>), and then press the Return key.
7 In the IPv4 CONFIGURATION area, select <Automatic>, and then press the Return key.
   Figure 22: Example: IPv4 Configuration options
8 Configure static IPv4 networking.

   a Use the down arrow key to select Manual, and then press the Return key.
   b Use the Tab key or the down arrow key to select the <Add...> option next to Addresses, and then press the Return key.
   c In the Addresses field, enter an IPv4 address for the virtual machine, and then press the Return key.
   d Repeat the preceding two steps for the Gateway and DNS servers fields.
9 Use the Tab key or the down arrow key to select the <OK> option at the bottom of the Edit Connection screen, and then press the Return key.
10 In the available connections screen, use the Tab key to select the <Quit> option, and then press the Return key.
11 Reboot the operating system.
   a In the Appliance Administration menu, use the down-arrow key to select the Reboot / Poweroff System option.
   b Use the Down Arrow key to select Reboot.
   c Press the Tab key to select OK, and then press the Return key.

Set system hostname

The default hostname of a Resource Manager appliance host is resmgr. To change the hostname, perform this procedure.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Select the NetworkManager TUI menu.
   a In the Appliance Administration menu, select the Configure Network and DNS option.
   b Press the Tab key to select the Run button.
   c Press the Enter key.
3 Display the hostname entry field.
   a In the NetworkManager TUI menu, use the down-arrow key to select Set system hostname.
   b Press the Tab key to select the OK button.
   c Press the Enter key.
4 In the Hostname field, enter the new hostname.
   You may enter either a hostname or a fully-qualified domain name.
5 Press the Tab key twice to select the OK button, and then press the Enter key.
6 In the confirmation dialog, press the Return key.
7 Reboot the operating system.
   a In the Appliance Administration menu, use the down-arrow key to select the Reboot / Poweroff System option.
   b Use the Down Arrow key to select Reboot.
   c Press the Tab key to select OK, and then press the Return key.

Editing the /etc/hosts file

This procedure is optional. Perform this procedure only if you use hostnames or fully-qualified domain names instead of IPv4 addresses, and only after all resource pool hosts are installed and renamed. Perform this procedure on the Control Center master host and on each resource pool host.

1 Gain access to the Control Center host, through the console interface of your hypervisor, or through a remote shell utility such as PuTTY, and then log in as root.
2 Start a command-line session as root.
   a In the Appliance Administration menu, use the down-arrow key to select Root Shell.
   b Press the Tab key to select Run, and then press the Return key.
   The menu is replaced by a command prompt similar to the following example:
   [root@resmgr ~]#
3 Open the /etc/hosts file in a text editor.
   The following steps use the nano editor.
   a Start the editor.
      nano /etc/hosts
   Figure 23: Example nano session
   Use the up-arrow and down-arrow keys to select lines, and the right-arrow and left-arrow keys to select characters on a line.
   b Optional: On resource pool hosts, the file may include two entries with the same IP address. Remove the first of the two entries, which maps the IP address to the resmgr hostname.
   c Add entries for the Control Center master host and for each resource pool host.
   d Save the file and exit the editor.
     To save, press Control-o. To exit, press Control-x.
4 Return to the Appliance Administration menu.
   exit
5 Exit the Appliance Administration menu.
   a Use the down-arrow key to select Exit.
   b Press the Tab key, and then press the Return key.
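The entries to add follow the standard hosts-file format. The following lines are purely illustrative (hypothetical hostnames, and addresses from the 203.0.113.0/24 documentation range); substitute the addresses and names of your own cluster.

   203.0.113.10  cc-master.example.com    cc-master
   203.0.113.11  pool-host-1.example.com  pool-host-1
   203.0.113.12  pool-host-2.example.com  pool-host-2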
