Setting Up a Highly Available Red Hat Enterprise Virtualization Manager (RHEV 3.1)


Author Names: Brandon Perkins, Chris Negus
Technical Review Team: Rob Washburn, Chris Keller, Mikkilineni Suresh Babu, Bryan Yount
5/16/2013

INTRODUCTION

To make your RHEV-M highly available, you can configure it to run as a service in an HA cluster. Red Hat Cluster Suite (RHCS) high availability clusters eliminate single points of failure, so if the node on which a service (which in this case includes the resources needed by RHEV-M) is running becomes inoperative, the service can start up again (fail over) on another cluster node with minimal interruption and no data loss.

Red Hat supports two options for making your RHEV-M 3.1 highly available:

RHEV-M as a highly available virtual machine: This approach (not covered in this tech brief) lets you configure a single RHEV-M as a virtual machine that is brought up on another host if the RHEV-M goes down. It offers simpler configuration, but can result in a longer downtime of a few minutes when a VM goes down. Read about this approach here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/s1-virt_machine_resources-ccs-ca.html

RHEV-M as a highly available service: This tech brief describes how to configure Red Hat Enterprise Virtualization Manager (RHEV-M) in a two-node, RHCS highly available (HA) cluster.

If you want further information about the various components covered in this guide, refer to the following:

RHEL 6 Cluster Administration Guide. Describes how to configure a Red Hat Enterprise Linux cluster. Refer to this guide for help extending or modifying your cluster: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html

Red Hat Enterprise Virtualization Installation Guide. Describes the non-clustered installation of a RHEV-M, as well as other RHEV components: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html-single/Installation_Guide/index.html

After completing the main content of this document to create the RHEV-M 3.1 HA cluster, refer to the following appendices for additional information:

Appendix A: Changing your RHEV-M Cluster: Describes how to make sure that your nodes stay synchronized when you make changes, such as setting new passwords and replacing certificates.

Appendix B: Updating the RHEV-M Cluster: Describes how to update RHEL, RHEV, and cluster software on your cluster nodes in a way that keeps your nodes functioning and synchronized.

Appendix C: Sample cluster.conf File: Contains a listing of the cluster.conf file that is produced from the cluster configuration done in this document.

NOTE: Although not strictly required, it is generally better to run at least a three-node cluster. Besides offering extra resources, the additional node makes it less likely you will end up in a "split-brain" condition, where both nodes believe they control the cluster.

NOTE: The procedures in this tech brief contain several long, complex commands. Consider copying this document, or plain-text copies of it, to the cluster nodes so you can copy and paste commands into the shell. In particular, it is critical that you get the names of directories exactly right when you set up shared storage. Copying and pasting directory names can help prevent errors.

UNDERSTANDING SYSTEM REQUIREMENTS

There are many different ways of setting up a high availability RHEV-M cluster. In our example, we used the following components:

Two cluster nodes. Install two machines with Red Hat Enterprise Linux 6 to act as cluster nodes.

A cluster web user interface. A Red Hat Enterprise Linux system (not on either of the cluster nodes) running the luci web-based high-availability administration application. You want this running on a system outside the cluster, so if either node goes down, the management interface is not affected.

Network storage. Shared network storage is required. This procedure shows how to use HA LVM from a RHEL 6 system, backed by iSCSI storage. (Fibre Channel and NFS are other technologies you could use instead of iSCSI.)

Red Hat products. This procedure combines components from Red Hat Enterprise Linux, Red Hat Cluster Suite, Red Hat Enterprise Virtualization, and (optionally) Red Hat Enterprise Linux Server Resilient Storage.

Using this information, set up two physical systems as cluster nodes (running ricci), another physical system that holds the cluster manager web user interface (running luci), and a final system or other shared storage device to contain the HA LVM storage.
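Our shared storage is backed by iSCSI. If yours is too, both nodes must discover and log in to the same target before the HA LVM volumes can be created. The following is a minimal sketch using iscsiadm; the portal address 192.168.0.100 and the target IQN are hypothetical placeholders for your own storage. Run it on both nodes:

# iscsiadm -m discovery -t sendtargets -p 192.168.0.100
# iscsiadm -m node -T iqn.2013-05.com.example:rhevm-storage -p 192.168.0.100 --login
# cat /proc/partitions

After logging in, the shared device (for example, /dev/sdb) should appear in /proc/partitions on each node.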

Figure 1 shows the layout of the systems used to test this procedure:

Figure 1: Example RHEV-M on HA cluster configuration

For our example, we used two NICs on the cluster nodes: one network for communication within the cluster, and one for the network facing the RHEV environment. We used a SAN and created a high-availability LVM volume group with multiple logical volumes that are shared by the cluster. The procedures that follow describe how to set up the cluster nodes, cluster web user interface, HA LVM storage, and the clustered service running the RHEV-M.

CONFIGURE CLUSTER NODES (RICCI)

Follow the steps below to install and configure two (or more) Red Hat Enterprise Linux systems as cluster nodes.

1. Choose cluster hardware. The computer used to run a cluster node for the RHEV-M must meet RHEL hardware requirements, as well as the more stringent RHEV-M requirements. Refer here for more information: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html-single/Installation_Guide/index.html#sect-Hardware_Requirements

2. Install Red Hat Enterprise Linux 6 Server. On both nodes, install RHEL as described in the Red Hat Enterprise Linux 6 Installation Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/index.html

3. Register RHEL. On both nodes, register with RHN and subscribe to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) (rhel-x86_64-server-6) base/parent channel with your Red Hat Network username and password:

# /usr/sbin/rhnreg_ks --serverurl=https://xmlrpc.rhn.redhat.com/XMLRPC \
--username=[username] --password=[password]
# rhn-channel -l
rhel-x86_64-server-6
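Cluster membership and the shared RHEV-M service name both depend on consistent name resolution. Before going further, it is worth confirming on each node that the node hostnames and the shared service hostname all resolve (the names below are the examples used throughout this document; substitute your own):

# for h in node1.example.com node2.example.com myrhevm.example.com; do getent hosts $h; done

Each name should print exactly one address, and the address for myrhevm.example.com must be the virtual IP you will later assign to the cluster service.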

4. Subscribe to RHEL channels. On both nodes, subscribe to the following child channels:

jbappplatform-6-x86_64-server-6-rpm - JBoss Application Platform (v 6 in rpm)
rhel-x86_64-server-6-rhevm-3.1 - Red Hat Enterprise Virtualization Manager (v.3.1 for 64-bit AMD64 / Intel64)
rhel-x86_64-server-ha-6 - Red Hat Enterprise Linux Server High Availability (v. 6 for 64-bit AMD64 / Intel64)
rhel-x86_64-server-supplementary-6 - Red Hat Enterprise Linux Server Supplementary Software (v. 6 for 64-bit AMD64 / Intel64)

To add these child channels, type the following command, replacing username and password with the user and password for your RHN account:

# /usr/sbin/rhn-channel -u [username] -p [password] \
-c jbappplatform-6-x86_64-server-6-rpm \
-c rhel-x86_64-server-6-rhevm-3.1 -c rhel-x86_64-server-ha-6 \
-c rhel-x86_64-server-supplementary-6 -a
# rhn-channel -l
jbappplatform-6-x86_64-server-6-rpm
rhel-x86_64-server-6
rhel-x86_64-server-6-rhevm-3.1
rhel-x86_64-server-ha-6
rhel-x86_64-server-supplementary-6

5. Update packages. On both nodes, to make sure you have the latest RHEL packages, run yum update, then reboot (rebooting is especially important if you get an updated kernel):

# yum update -y
# reboot

6. Get shared block device. Have some form of shared block device (with at least 25G of space), such as Fibre Channel or iSCSI, available to both of the nodes. For our example, we assume a shared iSCSI device that appears as /dev/sdb on both nodes. (A quick check that both nodes see the same LUN appears after this list of steps.)

7. File configuration management (optional). Steps in this procedure use commands to copy files from one node to other nodes, with the expectation that those files shouldn't change. To ensure that those files remain in sync, however, the more ideal case is to place the files and directories under configuration management (such as Puppet, Chef, CFEngine, or Ansible). Then, if a file that is not in a shared directory changes on one node, the CMS will either alert you or resolve the issue. If your company runs a Red Hat Satellite server, it can provide that functionality: https://access.redhat.com/documentation/en-US/Red_Hat_Network_Satellite/5.5/html/Reference_Guide/sect-Reference_Guide-Configuration.html

8. Install HA software. On both nodes, install the "High Availability" group of RPMs:

# yum -y groupinstall "High Availability"
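Since step 6 expects the same LUN to be visible on both nodes, you may want to verify that /dev/sdb really is the same device on node1 and node2 before partitioning it. A minimal check, assuming the example device name /dev/sdb; run the following on each node and compare the identifiers:

# scsi_id --whitelisted --device=/dev/sdb

If the two nodes print different IDs, they are not looking at the same LUN.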

9. Create firewall rules. On both nodes, make sure that the RHEV-M service is protected by enabling the firewall and opening the ports needed for the clustered RHEV-M to work properly. We created a complete firewall file that includes separate rule chains to allow connections to the Network File System (NFS), RHEV Manager (RHEVM), and HA cluster (RHHA) services. Copy and paste the following rules into the /etc/sysconfig/iptables file on each node. (The NFS rules assume the fixed lockd ports LOCKD_TCPPORT=32803 and LOCKD_UDPPORT=32769 from /etc/sysconfig/nfs; the exact cluster port list depends on your configuration, so check the "Enabling IP Ports" section of the Cluster Administration Guide and adjust as needed.)

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [34:3794]
:NFS - [0:0]
:RHEVM - [0:0]
:RHHA - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j NFS
-A INPUT -j RHEVM
-A INPUT -j RHHA
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A NFS -p tcp -m state --state NEW -m multiport --dports 111,892,875 -j ACCEPT
-A NFS -p tcp -m state --state NEW -m multiport --dports 662,2049,32803 -j ACCEPT
-A NFS -p udp -m state --state NEW -m multiport --dports 111,892,875,662,32769 -j ACCEPT
-A RHEVM -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RHEVM -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RHHA -d 224.0.0.0/4 -p udp -j ACCEPT
-A RHHA -p igmp -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 40040,40042,41040 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 41966,41967,41968,41969 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 16851,11111,21064 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 50008,50009,8084 -j ACCEPT
-A RHHA -p udp -m state --state NEW -m multiport --dports 6809,50007,5404,5405 -j ACCEPT
COMMIT

10. Enable firewall. On both nodes, with the firewall rules in place, enable and start the iptables service:

# chkconfig iptables on && service iptables start

11. Start ricci service. On both nodes, start the ricci daemon and configure it to start on boot:

# chkconfig ricci on && service ricci start

12. Change ricci password. On both nodes, set the password for the user ricci:

# passwd ricci
Changing password for user ricci.
New password: ********
Retype new password: ********

Create Shared Filesystems

On node1, create the logical volumes and filesystems on the shared block storage using standard LVM2 and file system commands. Replace /dev/sdb with the device name for your shared block storage device.

# fdisk /dev/sdb
...
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-45771, default 1): ENTER
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-45771, default 45771): ENTER
Using default value 45771
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): w

# pvcreate /dev/sdb1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
# vgcreate RHEVMVolGroup /dev/sdb1
Volume group "RHEVMVolGroup" successfully created
# for i in lv_share_jasperreports_server_pro lv_share_ovirt_engine_dwh \
lv_share_ovirt_engine_reports lv_share_ovirt_engine; do lvcreate \
-L1.00g -n $i RHEVMVolGroup; done
Logical volume "lv_share_jasperreports_server_pro" created
Logical volume "lv_share_ovirt_engine_dwh" created
Logical volume "lv_share_ovirt_engine_reports" created
Logical volume "lv_share_ovirt_engine" created
# lvcreate -L10.00g -n lv_lib_exports RHEVMVolGroup
Logical volume "lv_lib_exports" created
# lvcreate -L2.00g -n lv_lib_ovirt_engine RHEVMVolGroup
Logical volume "lv_lib_ovirt_engine" created
# lvcreate -L5.00g -n lv_lib_pgsql RHEVMVolGroup
Logical volume "lv_lib_pgsql" created
# for i in $(ls -1 /dev/RHEVMVolGroup/lv_*); do mkfs.ext4 $i; done
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
...
This filesystem will be automatically checked every 26 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
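Before mounting anything, it can be worth confirming the volume group and all seven logical volumes look as expected. This quick check uses only standard LVM reporting commands:

# pvs /dev/sdb1
# vgs RHEVMVolGroup
# lvs RHEVMVolGroup

You should see the four 1G "share" volumes plus the 10G exports, 2G ovirt-engine, and 5G pgsql volumes created above.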

Create Filesystem Mount Points

On both cluster nodes, before adding the filesystem resources, we need to create all mount points needed for the shared LVM volumes we created.

1. Create shared mount points. On both nodes, create the shared mount points as follows:

# for i in /usr/share/jasperreports-server-pro \
/usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
/usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
/var/lib/pgsql; do mkdir -p $i; done

2. Check mount directories. Check that the shared mount point directories exist:

# for i in /usr/share/jasperreports-server-pro \
/usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
/usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
/var/lib/pgsql; do ls -d $i; done
/usr/share/jasperreports-server-pro
/usr/share/ovirt-engine-dwh
/usr/share/ovirt-engine-reports
/usr/share/ovirt-engine
/var/lib/exports
/var/lib/ovirt-engine
/var/lib/pgsql

3. Temporarily mount shared directories. On node1 ONLY, mount the shared directory volumes so that the RHEV-M software you are about to install is installed on the shared directories.

# mount /dev/mapper/RHEVMVolGroup-lv_share_jasperreports_server_pro \
/usr/share/jasperreports-server-pro
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_dwh \
/usr/share/ovirt-engine-dwh
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_reports \
/usr/share/ovirt-engine-reports
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine \
/usr/share/ovirt-engine
# mount /dev/mapper/RHEVMVolGroup-lv_lib_exports /var/lib/exports
# mount /dev/mapper/RHEVMVolGroup-lv_lib_ovirt_engine /var/lib/ovirt-engine
# mount /dev/mapper/RHEVMVolGroup-lv_lib_pgsql /var/lib/pgsql
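With the volumes temporarily mounted on node1, a quick way to confirm that all seven are mounted where the installer expects them (and that nothing was mistyped) is:

# mount | grep RHEVMVolGroup

You should see seven entries, one for each mount point created in step 1.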

Install RHEV-M Software

On both nodes, you need to install the rhevm software. On the first node only, you will configure the RHEV-M (rhevm-setup). Additionally, on the second node, you will remove the local directories that are replaced by the shared directories.

1. Install RHEV-M. On both nodes, install rhevm-setup:

# yum -y install rhevm-setup
Loaded plugins: rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package rhevm-setup.noarch (el6ev) will be installed
...
zip.x86_64 0:3.0-1.el6
Complete!

This pulls in a number of dependencies, including JBoss AS, which comprises hundreds of RPMs, so the process may take some time.

2. Verify Java. On both nodes, verify that the java alternative is pointing to version 1.7:

# stat -c %N /etc/alternatives/java
`/etc/alternatives/java' -> `/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java'

If this is not the case, run the following:

# alternatives --set java /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java

3. Remove directories from node2 ONLY. On node2 ONLY, remove the contents of the local directories that will ultimately be replaced by the shared directories created from node1:

# for i in /usr/share/jasperreports-server-pro \
/usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
/usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
/var/lib/pgsql; do rm -rf $i && mkdir -p $i && ls -d $i; done
/usr/share/jasperreports-server-pro
/usr/share/ovirt-engine-dwh
/usr/share/ovirt-engine-reports
/usr/share/ovirt-engine
/var/lib/exports
/var/lib/ovirt-engine
/var/lib/pgsql

4. Make ovirt logs cluster-safe. On both nodes, modify /etc/cron.daily/ovirt-cron to be cluster-safe, so log rotation only runs on the node where the shared filesystem is mounted. Add the following lines AFTER "#!/bin/sh":

if [ ! -d /usr/share/ovirt-engine/lost+found ]; then
exit 0
fi

As a result, the file should look as follows when you use the head command to display the beginning of the file:

# head /etc/cron.daily/ovirt-cron
#!/bin/sh
if [ ! -d /usr/share/ovirt-engine/lost+found ]; then
exit 0
fi
#compress log4j log files, delete old ones
/usr/share/ovirt-engine/scripts/ovirtlogrot.sh /var/log/ovirt-engine 480 > /dev/null
EXITVALUE=$?
...
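If you prefer not to edit the file by hand, a one-line GNU sed edit can insert the same guard after the first line. This is a convenience sketch, so verify the result with head afterward:

# sed -i '1a if [ ! -d /usr/share/ovirt-engine/lost+found ]; then\n    exit 0\nfi' /etc/cron.daily/ovirt-cron
# head -5 /etc/cron.daily/ovirt-cron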

Run rhevm-setup

RHEV Manager is now installed on both machines but is not configured.

1. Run rhevm-setup on node1 ONLY. On node1 ONLY, run the rhevm-setup command.

WARNING: The "Host fully qualified domain name" must be the hostname you use to contact the RHEV-M service (regardless of the node it is running on). It should not be the local hostname of an individual node. This fqdn must map to the IP address you configure for the RHEV-M service when you set up the cluster, later in this procedure.

# rhevm-setup
Welcome to RHEV Manager setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine service...
RHEV Manager uses httpd to proxy requests to the application server. It looks like the httpd installed locally is being actively used. The installer can override current configuration. Alternatively you can use JBoss directly (on ports higher than 1024)
Do you wish to override current httpd configuration and restart the service? ['yes'|'no'] [yes] : yes
HTTP Port [80] : 80
HTTPS Port [443] : 443
Host fully qualified domain name. Note: this name should be fully resolvable [node1.example.com] : myrhevm.example.com   <- Use shared hostname!!!
Enter a password for an internal RHEV Manager administrator user (admin@internal) : ********
Confirm password : ********
Organization Name for the Certificate [node1.example.com] : Example.com
The default storage type you will be using ['NFS'|'FC'|'ISCSI'|'POSIXFS'] [NFS] : NFS
Enter DB type for installation ['remote'|'local'] [local] : local
Enter a password for a local RHEV Manager DB admin user (engine) : ********
Confirm password : ********
Configure NFS share on this server to be used as an ISO Domain? ['yes'|'no'] [yes] : yes
Local ISO domain path [/var/lib/exports/iso] : /var/lib/exports/iso
Firewall ports need to be opened. The installer can configure iptables automatically overriding the current configuration. The old configuration will be backed up. Alternately you can configure the firewall later using an example iptables file found under /etc/ovirt-engine/iptables.example
Configure iptables? ['yes'|'no']: no   <- iptables already done manually
RHEV Manager will be installed using the following configuration:
=================================================================
override-httpd-config: yes
http-port: 80
https-port: 443
host-fqdn: myrhevm.example.com
auth-pass: ********
org-name: Example.com
default-dc-type: NFS
db-remote-install: local
db-local-pass: ********

nfs-mp: /var/lib/exports/iso
config-nfs: yes
override-iptables: no
Proceed with the configuration listed above? (yes|no): yes
Installing:
Configuring RHEV Manager...   [ DONE ]
...
**** Installation completed successfully ******
(Please allow RHEV Manager a few moments to start up...)
**** To access RHEV Manager browse to http://myrhevm.example.com
Additional information:
...

2. Add RHEV-M authentication (optional). At this point, you can authenticate to the RHEV-M via a web browser on the real interface, not the floating virtual IP, using the admin account and password you just entered. If you want to configure centralized IPA or Active Directory authentication for your RHEV-M, you can do so with the rhevm-manage-domains command. Here is an example of an IPA configuration (the syntax is the same for adding an Active Directory server):

WARNING: If this command is run later on, the /etc/ovirt-engine/krb5.conf file will need to be synchronized between nodes.

# rhevm-manage-domains -action=add -domain=ipaserver.example.com -user=admin -provider=ipa -interactive
Enter password: ********
# rhevm-manage-domains -action=validate
Domain ipaserver.example.com is valid
Manage Domains completed successfully

3. Setup NFS Shared Resource for ISO Domain (optional). If, during rhevm-setup, 'yes' was selected for the question "Configure NFS share on this server to be used as an ISO Domain?", this step must be completed. If the NFS share will be located on another machine, this step can be skipped.

Copy /etc/sysconfig/nfs. From node1, copy /etc/sysconfig/nfs to node2:

# scp /etc/sysconfig/nfs node2:/etc/sysconfig/nfs

Add NFS firewall rules. On both nodes, allow inbound access to the NFS-related ports, as was done in the firewall rules created earlier.

Copy /etc/exports (optional). Copy the /etc/exports from node1 to node2 for consistency:

# scp /etc/exports node2:/etc/exports

Or revert the file to its native state on both nodes (as NFS exporting is handled by RHEL-HA):

# cp /dev/null /etc/exports

Set SELinux context. On both nodes, set and persist the SELinux context for the export directory:

# semanage fcontext -a -t public_content_rw_t "/var/lib/exports/iso(/.*)?"
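Note that semanage fcontext only records the labeling rule; it does not relabel files that already exist. On node1, where the ISO domain directory is currently mounted, you can then apply and verify the context:

# restorecon -Rv /var/lib/exports/iso
# ls -Zd /var/lib/exports/iso

The directory should now show the public_content_rw_t type.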

4. Synchronize configuration between nodes. Several configuration files that were modified on node1 during the rhevm-setup process need to be copied to node2. Follow these steps to do that:

Turn on SELinux boolean. RHEV-M requires Apache HTTP Daemon scripts and modules to connect to the network using TCP. On node2, therefore, you need to turn on the httpd_can_network_connect boolean:

# setsebool -P httpd_can_network_connect 1

Copy files from node1 to node2. From node1, copy configuration files and keys to node2 as follows (be sure to replace node2 with the hostname of the other computer in your cluster):

WARNING: Whenever any of these configuration files change, they should be re-synced with the other nodes in the cluster by running this command again. Changing passwords and certificates or running commands such as rhevm-manage-domains can result in changes to some of these files. See Appendix A for details.

# for i in /etc/httpd/conf.d/ovirt-engine.conf \
/etc/httpd/conf.d/ssl.conf /etc/httpd/conf/httpd.conf \
/etc/ovirt-engine/ /etc/pki/ovirt-engine/ \
/etc/sysconfig/ovirt-engine \
/etc/yum/pluginconf.d/versionlock.list; do rsync -e ssh -avx $i \
node2:$i; done

5. Verify RHEV-M is sane. At this point, you should be able to test that all is well with RHEV-M running on node1 by using the IP address or hostname of node1 (the virtual IP resource is not yet available). From a web browser, go to the RHEV-M URL using your own node1 hostname or IP address (for example, http://node1.example.com). Select the Web Admin Portal and log in as admin. If you can see the RHEV-M Web Administration page, you can begin to prepare your nodes to be added to a cluster. If you configured RHEV-M to use external authentication, you can now add a domain user as an administrator from the Users tab of the web administrative interface.

6. Shut down services on node1. On node1, immediately stop the httpd, ovirt-engine, postgresql, and nfs services:

# for i in httpd ovirt-engine postgresql nfs; do service $i stop; done
Stopping httpd: [ OK ]
Stopping engine-service: [ OK ]
Stopping postgresql service: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]

7. Turn off services. Turn off the httpd, ovirt-engine, postgresql, and nfs services so they don't start up automatically on reboot (the cluster is configured later to start these services as needed):

# for i in httpd ovirt-engine postgresql nfs; do chkconfig $i off && chkconfig --list $i; done
httpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ovirt-engine 0:off 1:off 2:off 3:off 4:off 5:off 6:off
postgresql 0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfs 0:off 1:off 2:off 3:off 4:off 5:off 6:off
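Because the copy in step 4 matters so much, it is worth verifying it. A minimal sketch, run from node1, that compares two of the synced files against node2 (extend the file list to taste):

# for i in /etc/httpd/conf.d/ovirt-engine.conf /etc/sysconfig/ovirt-engine; do \
diff <(ssh node2 cat $i) $i > /dev/null && echo "$i: in sync" || echo "$i: DIFFERS"; done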

8. Unmount filesystems from node1. To prepare node1 so it can be added to the cluster, unmount the temporarily mounted file systems (so that the cluster manager can handle those tasks):

# umount /usr/share/jasperreports-server-pro
# umount /usr/share/ovirt-engine-dwh
# umount /usr/share/ovirt-engine-reports
# umount /usr/share/ovirt-engine
# umount /var/lib/exports
# umount /var/lib/ovirt-engine
# umount /var/lib/pgsql

At this point, both nodes are configured and the RHEV-M has been verified to work directly on node1. Now that the filesystems are created and the software is installed on them, the next step is to prepare the filesystems to become highly available.

Setting Up Highly Available LVM Storage

There are many ways of sharing filesystems. Highly Available LVM (HA-LVM) can be set up to use one of two methods for achieving its mandate of exclusive logical volume activation:

LVM tagging. The first method uses local machine locking and LVM "tags". This method has the advantage of not requiring any LVM cluster packages; however, there are more steps involved in setting it up, and it does not prevent an administrator from mistakenly removing a logical volume from a node in the cluster where it is not active.

CLVM. The second method uses the Clustered Logical Volume Manager (CLVM), but it will only ever activate the logical volumes exclusively. This has the advantage of easier setup and better prevention of administrative mistakes (like removing a logical volume that is in use). In order to use CLVM, the High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be running.

Only ONE HA-LVM method can be used to set up the HA RHEV-M! Those two choices are described below.

Choice #1: Set up HA-LVM failover (tagging method)

To set up HA-LVM failover by using tags in the /etc/lvm/lvm.conf file, perform the following steps:

1. Set locking type. On both nodes, ensure that the parameter "locking_type" in the global section of /etc/lvm/lvm.conf is set to the value "1":

# lvmconf --disable-cluster
# grep '^ locking_type = ' /etc/lvm/lvm.conf
locking_type = 1

2. Edit volume_list in lvm.conf. Edit the "volume_list" field in /etc/lvm/lvm.conf. Include the name of your root volume group and your hostname. (The hostname MUST exactly match the name of the local node, preceded by "@", that will go into the /etc/cluster/cluster.conf file you will configure later in this procedure when you set up your cluster.) Below are sample entries from /etc/lvm/lvm.conf on node1:

# grep ' volume_list = ' /etc/lvm/lvm.conf

# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "vg_node1", "@node1.example.com" ]

and from /etc/lvm/lvm.conf on node2:

# grep ' volume_list = ' /etc/lvm/lvm.conf
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "vg_node2", "@node2.example.com" ]

This tag will be used to activate shared volume groups or logical volumes. DO NOT include the names of any volume groups that are to be shared using HA-LVM.

3. Rebuild initrd. On both cluster nodes, to include the changes from lvm.conf in your initial RAM disk, update the initrd. With the latest kernel running on your system, run the following command:

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

4. Reboot nodes. Reboot both cluster nodes to make sure the correct initial RAM disk is being used.
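After the reboot, you can sanity-check that the volume_list restriction is enforced: with the tagging method in place, an attempt to activate a shared logical volume that does not match volume_list should be refused. A minimal check on node1 (the command is expected to fail; the exact message varies by LVM version):

# lvchange -ay RHEVMVolGroup/lv_lib_pgsql
Not activating RHEVMVolGroup/lv_lib_pgsql since it does not pass activation filter.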

With the cluster nodes prepared, the next step is to set up the cluster on the Highly Available Cluster Manager (luci), then create the resources and cluster service that allow that service to fail over to different nodes when necessary.

Choice #2: Set up HA-LVM failover (CLVM)

To set up HA-LVM failover by using the CLVM variant (instead of the HA-LVM tagging method described in "Set up HA-LVM failover (tagging method)"), perform the following steps:

1. Identify logical volume group as highly available. On both nodes, change the logical volume group to identify it as a clustered volume group. For example, given the name RHEVMVolGroup, you would type the following:

# vgchange -cy RHEVMVolGroup
Volume group "RHEVMVolGroup" successfully changed

2. Subscribe to Resilient Storage. Install the Resilient Storage Add-On by subscribing to the rhel-x86_64-server-rs-6 (Red Hat Enterprise Linux Server Resilient Storage (v. 6 for 64-bit AMD64 / Intel64)) child channel:

# /usr/sbin/rhn-channel -u [user] -p [passwd] -c rhel-x86_64-server-rs-6 -a
# rhn-channel -l
jbappplatform-6-x86_64-server-6-rpm
rhel-x86_64-server-6
rhel-x86_64-server-6-rhevm-3.1
rhel-x86_64-server-ha-6
rhel-x86_64-server-rs-6   <- Resilient Storage channel
rhel-x86_64-server-supplementary-6

3. Install Resilient Storage. On both nodes, install the "Resilient Storage" group of RPMs:

# yum -y groupinstall "Resilient Storage"
Loaded plugins: rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Group Process
Package ccs (el6.x86_64) already installed and latest version
Resolving Dependencies
...
Complete!

4. Set locking type. On both nodes, ensure that the parameter "locking_type" in the global section of /etc/lvm/lvm.conf is set to the value "3". You can do that with the lvmconf command as follows:

# lvmconf --enable-cluster
# grep '^ locking_type = ' /etc/lvm/lvm.conf
locking_type = 3

5. Rebuild initrd. To include the changes from lvm.conf in your initial RAM disk, update the initrd on all your cluster nodes. With the latest kernel running on your system, run the following command:

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

6. Reboot nodes. Reboot both cluster nodes to make sure the correct initial RAM disk is being used.

7. Start clvmd. On both nodes, start the clvmd daemon and configure it to start on boot:

# chkconfig clvmd on && service clvmd start

8. Deactivate the logical volumes. On only one node, deactivate the shared logical volumes so the cluster can manage their activation exclusively. For example, given the name RHEVMVolGroup, you would type the following:

# for i in $(ls -1 /dev/RHEVMVolGroup/lv_*); do lvchange -an $i; done
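If you chose the CLVM method, a quick hedged check that the clustered flag from step 1 took effect: vgs marks clustered volume groups with a "c" in the last position of the attribute string:

# vgs -o vg_name,vg_attr RHEVMVolGroup
  VG            Attr
  RHEVMVolGroup wz--nc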

CONFIGURE CLUSTER FROM HA MANAGER (LUCI)

If you don't already have an existing RHEL 6 luci installation, install the cluster manager (luci) on a RHEL system other than your cluster nodes. Then begin configuring the cluster as described below:

1. Install Red Hat Enterprise Linux 6 Server. On the machine that is to run luci, install RHEL.

2. Register RHEL. On the luci machine, register with RHN and subscribe to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) (rhel-x86_64-server-6) base/parent channel:

# /usr/sbin/rhnreg_ks --serverurl=https://xmlrpc.rhn.redhat.com/XMLRPC \
--username=[username] --password=[password]
# rhn-channel -l
rhel-x86_64-server-6

3. Subscribe to RHEL HA channel. On luci, subscribe to the following child channel:

rhel-x86_64-server-ha-6 - Red Hat Enterprise Linux Server High Availability (v. 6 for 64-bit AMD64 / Intel64)

To add this child channel, type the following command, replacing username and password with the user and password for your RHN account:

# /usr/sbin/rhn-channel -u [username] -p [password] \
-c rhel-x86_64-server-ha-6 -a
# rhn-channel -l
rhel-x86_64-server-6
rhel-x86_64-server-ha-6

4. Install luci. Install the luci RPMs:

# yum -y install luci

5. Start luci. Start the luci daemon:

# chkconfig luci on
# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `luci.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
Writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd: [ OK ]
Start luci... [ OK ]
Point your web browser to https://luci.example.com:8084 (or equivalent) to access luci

6. Log in to luci. As instructed by the start-up script, point your web browser to the address shown (https://luci.example.com:8084 in this example) and log in as the root user, as prompted.

Create the Cluster in luci

Once you are logged into luci, you need to create a cluster and add the two cluster nodes to it.

1. Name the cluster. Select Manage Clusters -> Create, then fill in the Cluster Name (for example, RHEVMCluster).

2. Identify cluster nodes. Fill in the Node Name (fully-qualified domain name or name in /etc/hosts) and Password (the password for the user ricci) for the first cluster node. Click the Add Another Node button and add the same information for the second cluster node. (Repeat if you decided to create more than two nodes.)

3. Add cluster options. Select the following options, then click the Create Cluster button:

Use the Same Password for All Nodes: Select this check box.
Download Packages: Select this radio button.
Reboot Nodes Before Joining Cluster: Select this check box.
Enable Shared Storage Support: Leave this unchecked.

After you click the Create Cluster button, if the nodes can be contacted, luci will set up each cluster node, downloading packages as needed, and add each node to the cluster. When each node is set up, the High Availability Management screen appears.

4. Create failover domain. Click the Failover Domains tab. Click the Add button and fill in the following information as prompted:

Name. Fill in any name you like (such as prefer_node1).

Prioritized. Check this box.
Restricted. Check this box.
Member. Click the Member box for each node.
Priority. Add a "1" for node1 and a "2" for node2 under the Priority column.

Click Create to apply the changes to the failover domain.

5. Add Fence Devices. Configure appropriate fence devices for the hardware you have. Add a fence device and instance for each node. These settings will be particular to your hardware and software configuration. Refer to the Cluster Administration Guide, and to the fence device articles on the Red Hat Customer Portal, for help with configuring fence devices: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Cluster_Administration/

Add Resources for the Cluster

With the cluster created, next create several resources to put in the new cluster service (named rhevm). These include an IP address for the service, shared file systems, and other resources.

Add IP Address Resource

1. Add an IP address resource. Select the Resources tab, then click Add and choose IP Address.

2. Fill in IP address information. Enter the following:

IP Address. Fill in a valid IP address. Ultimately, this is the IP address used from a web browser to access the RHEV-M; it is the address the shared hostname (myrhevm.example.com in our example) must resolve to.
Monitor Link. Check this box.

3. Submit information. Click the Submit button.

Create HA LVM Resource

From the High Availability Management (luci) web interface, create an HA LVM resource for the shared volume group created in the "Setting Up Highly Available LVM Storage" section. Start by selecting the cluster (RHEVMCluster), then do the following:

1. Add HA LVM resource. Click on Resources, then click Add and select HA LVM.

2. Fill in HA LVM information. Enter the following:

Name. Fill in RHEVM HA LVM.
Volume Group Name. Fill in RHEVMVolGroup.
Logical Volume Name. Leave this blank.

3. Submit. Press the "Submit" button.

Create Shared Logical Volume Resources

From the luci web interface, add seven file system resources using the values in Table 1 below.

WARNING: It is critical that you get all the mount point names exactly as shown! For any of the mount point directories that are not shared (because you typed the name wrong), the files in that directory will be installed only on the local disk of the first node and will not be shared. A service may work on one node but fail to run on another. Double-check all mount point names! (A quick sanity check appears after the steps below Table 1.)

Table 1 - Filesystem Resource Information

Name: lv_share_jasperreports_server_pro
  Filesystem Type: ext4
  Mount Point: /usr/share/jasperreports-server-pro
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_jasperreports_server_pro

Name: lv_share_ovirt_engine_dwh
  Filesystem Type: ext4
  Mount Point: /usr/share/ovirt-engine-dwh
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_dwh

Name: lv_share_ovirt_engine_reports
  Filesystem Type: ext4
  Mount Point: /usr/share/ovirt-engine-reports
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_reports

Name: lv_share_ovirt_engine
  Filesystem Type: ext4
  Mount Point: /usr/share/ovirt-engine
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine

Name: lv_lib_exports
  Filesystem Type: ext4
  Mount Point: /var/lib/exports
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_exports

Name: lv_lib_ovirt_engine
  Filesystem Type: ext4
  Mount Point: /var/lib/ovirt-engine
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_ovirt_engine

Name: lv_lib_pgsql
  Filesystem Type: ext4
  Mount Point: /var/lib/pgsql
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_pgsql

1. Add Filesystem resource. Click on Resources, then click Add and select Filesystem.

2. Fill in Filesystem information. Enter the following (if you can, copy and paste from the table):

Name. Fill in the Name from Table 1.
Filesystem Type. Fill in the Filesystem Type from Table 1.
Mount Point. Fill in the Mount Point from Table 1. (This value is critical!)
Device, FS label, or UUID. Fill in the Device, FS label, or UUID from Table 1.
Mount options. Leave blank.
Filesystem ID (optional). Leave blank.
Reboot host node if unmount fails. Check this box.

3. Submit. Press the "Submit" button.

4. Repeat until all filesystems from Table 1 are created.
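As the warning before Table 1 suggests, typos in mount points and device names are the most common failure here. A minimal sanity check you can run on a node: list the logical volume names and compare them against Table 1, then confirm each mount point directory exists. (On a node where HA-LVM has deactivated the volumes, the /dev/mapper entries will be absent; that is expected.)

# lvs --noheadings -o lv_name RHEVMVolGroup
# for d in /usr/share/jasperreports-server-pro /usr/share/ovirt-engine-dwh \
/usr/share/ovirt-engine-reports /usr/share/ovirt-engine /var/lib/exports \
/var/lib/ovirt-engine /var/lib/pgsql; do test -d $d || echo "missing: $d"; done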

Create NFS Resources (optional)

From the luci web interface, select the two-node cluster, then add the NFS resources that represent your ISO domain.

1. Add NFS Client resource. Click on Resources, then click Add and select NFS Client.

2. Fill in NFS Client information. Enter the following:

Name. Enter rhev iso clients.
Target Hostname, Wildcard, or Netgroup. Enter *

WARNING: In a production environment, the wildcard * should NOT be used. The hosts you allow here should be restricted to the smallest group of systems/networks possible.

Allow Recovery of This NFS Client. Check this box.
Options. Enter rw.
Submit. Press the "Submit" button.

3. Add NFS v3 Export resource. Click on Resources, then click Add and select NFS v3 Export.

4. Fill in NFS v3 Export information. Enter the following:

Name. Enter rhev iso exports.
Submit. Press the "Submit" button.

Create Resources for Services Started by rhevm

From the luci web interface, select the two-node cluster. Then add the several service resources needed to run the RHEV-M.

Add postgresql Service Script

Create a resource for the postgresql service:

1. Add Script resource for postgresql. Click on Resources, then click Add and select Script.

2. Fill in Script information. Enter the following:

Name. Fill in postgresql.
Full path to script file. Fill in /etc/rc.d/init.d/postgresql.

3. Submit. Press the "Submit" button.

Add JBoss AS ovirt Engine Service Script

Create a resource for the ovirt-engine service:

1. Add Script resource for ovirt-engine. Click on Resources, then click Add and select Script.

2. Fill in Script information. Enter the following:

Name. Fill in ovirt-engine.
Full path to script file. Fill in /etc/rc.d/init.d/ovirt-engine.

3. Submit. Press the "Submit" button.

Add Apache HTTP Daemon Service Script

Create a resource for the Apache service:

1. Add Apache resource for httpd. Click on Resources, then click Add and select Apache.

2. Fill in Apache information. Enter the following:

Name. Fill in httpd.
Shutdown Wait (seconds). Change this to something that works for your environment, such as 5.

3. Leave all other options as they are.

4. Submit. Press the "Submit" button.

Create rhevm Cluster Service and Add Resources

Next, create the rhevm service and add each of the resources (created earlier) to the rhevm service. From luci, with RHEVMCluster still selected, add the new rhevm service as follows:

NOTE: After you click the Submit button (assuming you selected Automatically Start this Service as described below), the rhevm service will try to start. We have you click the Submit button after each step so you can see whether that step causes an error. If an error occurs, recheck that the offending resource is correct, or go to the "Trying the RHEV-M Cluster Service" section for troubleshooting tips.

1. Add a Service Group. Click on the Service Groups tab and select Add.

2. Fill in Service Group information.

Service name. Assign a name to the service (for example, rhevm).
Automatically start this service. Check this box.
Failover Domain. Select the prefer_node1 domain you created earlier.
Recovery Policy. Select Relocate.

3. Submit. Press the "Submit" button.

Add Resources to rhevm Cluster Service

1. Add the IP address resource.

Select the rhevm Service Group. Click on "Service Groups" and select rhevm.
Select the Add Resource button at the bottom of the screen and select the IP Address resource you created earlier.
Submit. Press the "Submit" button.

2. Add the RHEVM HA LVM resource.

Select the rhevm Service Group. Click on "Service Groups" and select rhevm.
Select the Add Resource button at the bottom of the screen and select "RHEVM HA LVM".
Submit. Press the "Submit" button.

3. Add Filesystem resources. Add all seven filesystem resources (represented by the Name field in Table 1) to the "rhevm" service group:

Select the rhevm Service Group. Click on "Service Groups" and select rhevm.
Select the Add Resource button at the bottom of the screen and select the Name of the first Filesystem resource (see Table 1).
Repeat. Repeat these bullet items until all seven filesystem resources are added.
Submit. Press the "Submit" button.

4. Add Script resources. Add the script resources to the "rhevm" service group. For "Name", use each of these service resource names: postgresql, ovirt-engine, and httpd:

Select the rhevm Service Group. Click on "Service Groups" and select rhevm.
Select the Add Resource button and select the Name of the first script resource.
Repeat. Repeat these bullets until all three resources are added.
Submit. Press the "Submit" button.
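At any point while building the service group, you can confirm from a node's shell what luci has actually written to /etc/cluster/cluster.conf. The ccs command (installed with the High Availability package group) can list the configured resources and services; a hedged example, run against node1 (you will be prompted for the ricci password):

# ccs -h node1.example.com --lsservices

Alternatively, simply view /etc/cluster/cluster.conf on a node; Appendix C contains the complete listing produced by this document's configuration.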

5. Add RHEV ISO Exports and Clients resources (optional).

Select the rhevm Service Group. Click on "Service Groups" and select rhevm.
Find lv_lib_exports. From the resources on the rhevm service page, find the "lv_lib_exports" Filesystem resource and select Add Child Resource inside that block. (In other words, you want to make the new resource dependent on lv_lib_exports.)
Choose Select a Resource Type and select the "rhev iso exports" entry.
Find NFS v3 Export. At the bottom of the NFS v3 Export resource you just added, select the "Add Child Resource" inside that block.
Choose Select a Resource Type and select the "rhev iso clients" entry.
Submit. Press the "Submit" button.

At this point, you should be able to test that the basic rhevm service is running on the cluster. We recommend that you try:

Accessing the RHEV-M from your web browser
Moving the rhevm service to another node

Then try to access the RHEV-M again, as described in the next section. After that, you can add the remaining resources (ovirt Event Notifier, Data Warehouse, and Reports) to your RHEV-M.

TRYING THE RHEV-M CLUSTER SERVICE

Assuming you set the rhevm service to automatically start, you should be able to access the RHEV-M from your web browser, then test that it is still accessible when you move it to a different node.

1. Access the RHEV-M. From a web browser, open the Red Hat Enterprise Virtualization Manager (RHEV-M) using the hostname or IP address you used to identify the service (not the direct name or IP address of a node), for example, https://myrhevm.example.com.

2. Log in to the RHEV-M. Select the Web Admin Portal and, when prompted, log in to the RHEV-M. If you can successfully log into the RHEV-M, you can proceed to testing the cluster.

3. Check where the rhevm service is running. From a shell on either cluster node, check where the rhevm service is currently active:

# clustat -s rhevm
Service Name       Owner (Last)          State
service:rhevm      node1.example.com     started

4. Move rhevm to a different cluster node. From either node, relocate the rhevm service to another cluster node:

# clusvcadm -r rhevm
Trying to relocate service:rhevm...success
service:rhevm is now running on node2.example.com

5. Log in to the RHEV-M again. Try again to log in to the RHEV-M from a web browser. If it works, then the service was able to successfully relocate to another node.
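For a rough measure of the interruption during relocation, you can poll the shared hostname while the service moves. This is a convenience sketch (myrhevm.example.com is the example service name used above); run it from a machine outside the cluster and stop it with Ctrl-C:

# while true; do echo -n "$(date +%T) "; \
curl -k -s -o /dev/null -w "%{http_code}\n" https://myrhevm.example.com/; sleep 2; done

During the relocation you should briefly see failures (curl reports 000), returning to 200 once the service has started on the other node.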

If the rhevm service appears to be working, continue on to configuring additional RHEV-M services.

SET UP ADDITIONAL RHEV-M SERVICES

Besides the basic service, you can also set up additional RHEV-M services and add them to the cluster. These services include the ovirt event notification service, Data Warehouse for ovirt, and Reports.

Setup ovirt event notification service (Optional)

On both nodes, do the following:

Modify /etc/ovirt-engine/notifier/notifier.conf to change "MAIL_SERVER=" as follows (replacing localhost with a valid SMTP server if localhost is not configured as an MTA):

MAIL_SERVER=localhost

On luci, do the following:

1. Add the resource. Click on Resources and click on Add.
2. Add Script. Click Select a Resource Type and select Script. Then fill in the following information:
Name. Use engine-notifierd.
Full Path to Script File. Use /etc/rc.d/init.d/engine-notifierd.
3. Submit. Press the "Submit" button.
4. Add to rhevm service. Click on Service Groups.
5. Click on rhevm.
6. Near the bottom of the screen, select the Add Resource button.
7. Click Select a Resource Type and select the engine-notifierd resource.
8. Submit. Press the "Submit" button.

Test the engine-notifierd service

Do the following to test the engine-notifierd service:

1. Select some events to be notified about, using a valid email address.
2. Trigger those events (for example, by moving a host into and out of maintenance mode).
3. Check that you receive the emails. (Note: unless you configure otherwise, the email will appear to come from the actual node that sent the alert, not the virtual IP. We consider this good, but postfix/sendmail can be configured to send from the virtual IP as well.)
4. Relocate the rhevm service group, either through luci or with clusvcadm.
5. Once the service is completely moved to the other node, trigger the same events from step 2.
6. Again check that you receive the email.
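The notifier.conf change at the top of this section must be identical on both nodes. A convenience sketch to make and then re-check the edit on each node, assuming a hypothetical SMTP relay named smtp.example.com:

# sed -i 's/^MAIL_SERVER=.*/MAIL_SERVER=smtp.example.com/' /etc/ovirt-engine/notifier/notifier.conf
# grep '^MAIL_SERVER=' /etc/ovirt-engine/notifier/notifier.conf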

Set Up Data Warehouse for ovirt

On the cluster nodes, do the following:

1. Freeze the "rhevm" service group:

# clusvcadm -Z rhevm
Local machine freezing service:rhevm...success

2. On both nodes, install the rhevm-dwh RPM:

# yum -y install rhevm-dwh

3. On both nodes, modify /etc/cron.hourly/ovirt_engine_dwh_watchdog.cron to be cluster-safe. Add the following lines AFTER #!/bin/bash:

if [ ! -d /usr/share/ovirt-engine-dwh/lost+found ]; then
exit 0
fi

so that it should look like:

#!/bin/bash
#
if [ ! -d /usr/share/ovirt-engine-dwh/lost+found ]; then
exit 0
fi
# ETL functions library
. /usr/share/ovirt-engine-dwh/etl/etl-common-functions.sh

4. On the node that is NOT running the rhevm service group, remove files that are served by our HA LVM:

# /bin/rm -r /usr/share/ovirt-engine-dwh/*

5. On the node that IS running the rhevm service group, run the ovirt engine DWH setup script:

# rhevm-dwh-setup
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine...   [ DONE ]
Setting DB connectivity... [ DONE ]
Upgrade DB...              [ DONE ]
Starting ovirt-engine...   [ DONE ]
Starting ovirt-etl...      [ DONE ]
Successfully installed rhevm-dwh.
The installation log file is available at: /var/log/ovirt-engine/rhevm-dwh-setup-ccyy_mm_dd_hh_mm_ss.log

NOTE: If there is an ERROR while starting ovirt-etl, modify the "RUN_PROPERTIES" variable of the /usr/share/ovirt-engine-dwh/etl/history_service.sh file:

#RUN_PROPERTIES="-Xms256M -Xmx1024M -Djavax.net.ssl.trustStore=/etc/pki/ovirt-engine/.keystore -Djavax.net.ssl.trustStorePassword=mypass"
RUN_PROPERTIES=" -Djsse.enableSNIExtension=false -Xms256M -Xmx1024M -Djavax.net.ssl.trustStore=/etc/pki/ovirt-engine/.keystore -Djavax.net.ssl.trustStorePassword=mypass"

See the related article on the Red Hat Customer Portal for more information. After making the modification, run this step again.

6. On the node that IS running the rhevm service group, copy configuration files from that node to the other node (for example, node1 to node2; be sure to change node2 in the command below to the name of your other cluster node!):

# for i in /etc/sysconfig/ovirt-engine \
/etc/ovirt-engine/ovirt-engine-dwh/default.properties; do rsync -e ssh -avx \
$i node2:$i; done

7. On both nodes, disable the ovirt-engine-dwhd service from starting automatically:

# chkconfig ovirt-engine-dwhd off
# chkconfig --list ovirt-engine-dwhd
ovirt-engine-dwhd 0:off 1:off 2:off 3:off 4:off 5:off 6:off

On luci, do the following:

1. Add the resource. Click on Resources and click on Add.
2. Add Script. Click Select a Resource Type and select Script. Then fill in the following information:
Name. Use ovirt-engine-dwhd.
Full Path to Script File. Use /etc/rc.d/init.d/ovirt-engine-dwhd.
3. Submit. Press the "Submit" button.
4. Add to rhevm service. Click on Service Groups and click on rhevm.
5. Add ovirt-engine-dwhd as a child of httpd. Find the httpd resource and select the Add Child Resource button.
6. Click Select a Resource Type and select the ovirt-engine-dwhd resource.
7. Submit. Press the "Submit" button.
8. Unfreeze rhevm. On the node where rhevm is frozen, unfreeze the rhevm service group:

# clusvcadm -U rhevm
Local machine unfreezing service:rhevm...success

9. Relocate rhevm. On the node where rhevm is running, relocate the rhevm service group:

# clusvcadm -r rhevm

10. Check ovirt-engine-dwhd. On the node running the rhevm service, check that the service is running:

# service ovirt-engine-dwhd status

Set up Reports for ovirt

1. Install rhevm-reports. On both nodes, with the rhevm service group frozen (freeze it again with clusvcadm -Z rhevm if needed), install the rhevm-reports RPM; this will also pull in a few other dependencies:

# yum -y install rhevm-reports

2. On the node that is NOT running the rhevm service group, remove files that are served by our HA LVM:

# /bin/rm -r /usr/share/jasperreports-server-pro/* \
/usr/share/ovirt-engine-reports/*

3. On the node that IS running the rhevm service group, execute rhevm-reports-setup:

# rhevm-reports-setup
Welcome to ovirt-engine-reports setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine...   [ DONE ]
Please choose a password for the admin users (rhevm-admin and superuser): *********
Re-type password: *********
...
Successfully installed ovirt-engine-reports.
The installation log file is available at: /var/log/ovirt-engine/ovirt-engine-reports-setup-ccyy_mm_dd_hh_mm_ss.log

4. On either node, unfreeze the rhevm service group:

# clusvcadm -U rhevm
Local machine unfreezing service:rhevm...success

5. Verify reporting. Log in to the Reports Portal interface and verify that basic reporting works.

6. On the node that IS running the rhevm service group, copy configuration files from that node to the other node (for example, node1 to node2):

# for i in /etc/sysconfig/ovirt-engine \
/etc/ovirt-engine/jrs-deployment.version; do rsync -e ssh -avx \
$i node2:$i; done

7. Relocate rhevm. On either node, relocate the rhevm service group:

# clusvcadm -r rhevm

8. Check Reports Portal. Log in to the Reports Portal again and verify that basic reporting works.

At this point, your RHEV-M 3.1 HA cluster is complete.
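As a final smoke test with the full stack in place, it can be useful to relocate the service once more and confirm that all the clustered pieces come up together on the new owner. A short sketch, using only the commands introduced above; run the first two from either node, then the loop on the new owner (add engine-notifierd to the list if you configured it):

# clusvcadm -r rhevm
# clustat -s rhevm
# for s in postgresql ovirt-engine httpd ovirt-engine-dwhd; do service $s status; done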

Figure 2 shows the resources that have been added to luci before they are added to the rhevm service group:

Figure 2: Resources created for the rhevm Service Group

APPENDIX A: CHANGING THE RHEV-M CLUSTER

Most changes to your RHEV-M configuration are stored in shared directories, so when another node takes over the cluster, it automatically gets all the latest data. There are, however, a few ways you might change your RHEV-M that are not automatically propagated to other nodes. In those cases, you need to either use file configuration management tools (such as Puppet, as mentioned earlier) or manually copy files from the node where the files were changed to the other nodes. Here are some examples:

Changing RHEV-M's postgres User Passwords

To change passwords for your postgres database users, refer to the related article on the Red Hat Customer Portal. Assuming you are changing postgres user passwords on node1, you need to run the following commands to make the necessary changes and get the nodes in sync:

1. Freeze the rhevm service. From node1, type the following as root:

# clusvcadm -Z rhevm

2. Change the postgres password. On node1 (assuming the rhevm service is frozen there), run the procedure from the article mentioned above.

3. Copy files from node1 to node2. Assuming rhevm is frozen on node1, run the following command (substituting the hostname of your second node for node2):
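A minimal sketch of that synchronization, assuming the password change touched files under /etc/ovirt-engine/ (where RHEV-M 3.1 keeps its database credentials) and /etc/pki/ovirt-engine/; adjust the list to whatever files the article changes:

# for i in /etc/ovirt-engine/ /etc/pki/ovirt-engine/; do rsync -e ssh -avx $i node2:$i; done

When the files are in sync, unfreeze the service:

# clusvcadm -U rhevm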


Redhat OpenStack 5.0 and PLUMgrid OpenStack Networking Suite 2.0 Installation Hands-on lab guide Redhat OpenStack 5.0 and PLUMgrid OpenStack Networking Suite 2.0 Installation Hands-on lab guide Oded Nahum Principal Systems Engineer PLUMgrid EMEA November 2014 Page 1 Page 2 Table of Contents Table

More information

SysadminSG RHCSA Study Guide

SysadminSG RHCSA Study Guide SysadminSG RHCSA Study Guide This is the RHCSA Study Guide for the System Administration Study Group. The study guide is intended to be printed by those who wish to study common tasks performed by many

More information

Red Hat Virtualization 4.2

Red Hat Virtualization 4.2 Red Hat Virtualization 4.2 Upgrade Guide Update and upgrade tasks for Red Hat Virtualization Last Updated: 2018-06-12 Red Hat Virtualization 4.2 Upgrade Guide Update and upgrade tasks for Red Hat Virtualization

More information

Production Installation and Configuration. Openfiler NSA

Production Installation and Configuration. Openfiler NSA Production Installation and Configuration Openfiler NSA Table of Content 1. INTRODUCTION... 3 1.1. PURPOSE OF DOCUMENT... 3 1.2. INTENDED AUDIENCE... 3 1.3. SCOPE OF THIS GUIDE... 3 2. OPENFILER INSTALLATION...

More information

Red Hat CloudForms 4.6

Red Hat CloudForms 4.6 Red Hat CloudForms 4.6 Installing Red Hat CloudForms on Red Hat Virtualization How to install and configure Red Hat CloudForms on a Red Hat Virtualization environment Last Updated: 2018-08-07 Red Hat

More information

Xcalar Installation Guide

Xcalar Installation Guide Xcalar Installation Guide Publication date: 2018-03-16 www.xcalar.com Copyright 2018 Xcalar, Inc. All rights reserved. Table of Contents Xcalar installation overview 5 Audience 5 Overview of the Xcalar

More information

Red Hat Enterprise Linux 5 Cluster Administration. Configuring and Managing a Red Hat Cluster Edition 5

Red Hat Enterprise Linux 5 Cluster Administration. Configuring and Managing a Red Hat Cluster Edition 5 Red Hat Enterprise Linux 5 Cluster Administration Configuring and Managing a Red Hat Cluster Edition 5 Red Hat Enterprise Linux 5 Cluster Administration Configuring and Managing a Red Hat Cluster Edition

More information

Dell Storage Compellent Integration Tools for VMware

Dell Storage Compellent Integration Tools for VMware Dell Storage Compellent Integration Tools for VMware Version 4.0 Administrator s Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your

More information

Method of Procedure to Upgrade RMS OS to Red Hat Enterprise Linux 6.7

Method of Procedure to Upgrade RMS OS to Red Hat Enterprise Linux 6.7 First Published: November 20, 2015 Contents Scope of MOP... 4 Release Components... 4 Pre Requisites... 4 Assumptions... 4 Process Information... 5 Upgrade Timing... 5 Requirements... 5 Pre Maintenance...

More information

RHEL Clustering and Storage Management. 5 Days

RHEL Clustering and Storage Management. 5 Days QWERTYUIOP{ RHEL Clustering and Storage Management 5 Days This hands on course covers the high availability clustering and storage management technologies found in Red Hat Enterprise Linux 6+. Each student

More information

Getting Started with ovirt 3.3

Getting Started with ovirt 3.3 Getting Started with ovirt 3.3 Alexey Lesovsky Chapter No. 3 "Configuring ovirt" In this package, you will find: A Biography of the author of the book A preview chapter from the book, Chapter NO.3 "Configuring

More information

Unit 2: Manage Files Graphically with Nautilus Objective: Manage files graphically and access remote systems with Nautilus

Unit 2: Manage Files Graphically with Nautilus Objective: Manage files graphically and access remote systems with Nautilus Linux system administrator-i Unit 1: Get Started with the GNOME Graphical Desktop Objective: Get started with GNOME and edit text files with gedit Unit 2: Manage Files Graphically with Nautilus Objective:

More information

iscsi storage is used as shared storage in Redhat cluster, VMware vsphere, Redhat Enterprise Virtualization Manager, Ovirt, etc.

iscsi storage is used as shared storage in Redhat cluster, VMware vsphere, Redhat Enterprise Virtualization Manager, Ovirt, etc. Configure iscsi Target & Initiator on CentOS 7 / RHEL7 iscsi stands for Internet Small Computer Systems Interface, IP-based storage, works on top of internet protocol by carrying SCSI commands over IP

More information

VIRTUAL GPU LICENSE SERVER VERSION , , AND 5.1.0

VIRTUAL GPU LICENSE SERVER VERSION , , AND 5.1.0 VIRTUAL GPU LICENSE SERVER VERSION 2018.10, 2018.06, AND 5.1.0 DU-07754-001 _v7.0 through 7.2 March 2019 User Guide TABLE OF CONTENTS Chapter 1. Introduction to the NVIDIA vgpu Software License Server...

More information

BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION. Yaniv Kaul Director, SW engineering June 2016

BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION. Yaniv Kaul Director, SW engineering June 2016 BRINGING HOST LIFE CYCLE AND CONTENT MANAGEMENT INTO RED HAT ENTERPRISE VIRTUALIZATION Yaniv Kaul Director, SW engineering June 2016 HOSTS IN A RHEV SYSTEM Host functionality Hosts run the KVM hypervisor

More information

Using Fluentd as an alternative to Splunk

Using Fluentd as an alternative to Splunk Using Fluentd as an alternative to Splunk As infrastructure within organizations grows in size and the number of hosts, the cost of Splunk may become prohibitive. I created this document to demonstrate,

More information

Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018

Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018 Downloading and installing Db2 Developer Community Edition on Red Hat Enterprise Linux Roger E. Sanders Yujing Ke Published on October 24, 2018 This guide will help you download and install IBM Db2 software,

More information

Vendor: RedHat. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator - RHCSA. Version: Demo

Vendor: RedHat. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator - RHCSA. Version: Demo Vendor: RedHat Exam Code: EX200 Exam Name: Red Hat Certified System Administrator - RHCSA Version: Demo EX200 Exam A QUESTION NO: 1 CRECT TEXT Configure your Host Name, IP Address, Gateway and DNS. Host

More information

Red Hat Enterprise Linux 5 Configuration Example - NFS Over GFS

Red Hat Enterprise Linux 5 Configuration Example - NFS Over GFS Red Hat Enterprise Linux 5 Configuration Example - NFS Over GFS Configuring NFS over GFS in a Red Hat Cluster Edition 3 Landmann Red Hat Enterprise Linux 5 Configuration Example - NFS Over GFS Configuring

More information

Reset the Admin Password with the ExtraHop Rescue CD

Reset the Admin Password with the ExtraHop Rescue CD Reset the Admin Password with the ExtraHop Rescue CD Published: 2018-01-19 This guide explains how to reset the administration password on physical and virtual ExtraHop appliances with the ExtraHop Rescue

More information

Scrutinizer Virtual Appliance Deployment Guide Page i. Scrutinizer Virtual Appliance Deployment Guide. plixer

Scrutinizer Virtual Appliance Deployment Guide Page i. Scrutinizer Virtual Appliance Deployment Guide. plixer Scrutinizer Virtual Appliance Deployment Guide Page i Scrutinizer Virtual Appliance Deployment Guide Contents What you need to know about deploying a Scrutinizer virtual appliance.. 1 System Requirements..................................2

More information

Proficy Plant Applications 7.0 Quick Install Guide (And Best Practices)

Proficy Plant Applications 7.0 Quick Install Guide (And Best Practices) Proficy Plant Applications 7.0 Quick Install Guide (And Best Practices) Installation Instructions Based on: Windows Server 2016 x64 Operating System SQL Server 2016 Standard (where applicable) Microsoft

More information

Deploy the ExtraHop Discover Appliance 1100

Deploy the ExtraHop Discover Appliance 1100 Deploy the ExtraHop Discover Appliance 1100 Published: 2018-07-17 The following procedures explain how to deploy an ExtraHop Discover appliance 1100. System requirements Your environment must meet the

More information

EX200.Lead2pass.Exam.24q. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator RHCSA. Version 14.0

EX200.Lead2pass.Exam.24q. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator RHCSA. Version 14.0 EX200.Lead2pass.Exam.24q Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 14.0 http://www.gratisexam.com/ Exam Code: EX200 Exam Name: Red Hat Certified System Administrator RHCSA Version

More information

Red Hat.Actualtests.EX200.v by.Dixon.22q. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator (RHCSA) Exam

Red Hat.Actualtests.EX200.v by.Dixon.22q. Exam Code: EX200. Exam Name: Red Hat Certified System Administrator (RHCSA) Exam Red Hat.Actualtests.EX200.v2014-12-02.by.Dixon.22q Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 14.5 http://www.gratisexam.com/ Exam Code: EX200 Exam Name: Red Hat Certified System

More information

VIRTUAL GPU LICENSE SERVER VERSION AND 5.1.0

VIRTUAL GPU LICENSE SERVER VERSION AND 5.1.0 VIRTUAL GPU LICENSE SERVER VERSION 2018.06 AND 5.1.0 DU-07754-001 _v6.0 through 6.2 July 2018 User Guide TABLE OF CONTENTS Chapter 1. Introduction to the NVIDIA vgpu Software License Server... 1 1.1. Overview

More information

Setting Up Identity Management

Setting Up Identity Management APPENDIX D Setting Up Identity Management To prepare for the RHCSA and RHCE exams, you need to use a server that provides Lightweight Directory Access Protocol (LDAP) and Kerberos services. The configuration

More information

Red Hat CloudForms 4.2

Red Hat CloudForms 4.2 Red Hat CloudForms 4.2 Installing Red Hat CloudForms on Red Hat Virtualization How to install and configure Red Hat CloudForms on a Red Hat Virtualization environment Last Updated: 2017-12-18 Red Hat

More information

Seltestengine EX200 24q

Seltestengine EX200 24q Seltestengine EX200 24q Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 22.5 http://www.gratisexam.com/ Red Hat EX200 Red Hat Certified System AdministratorRHCSA Nicely written Questions

More information

TimeIPS Server. IPS256T Virtual Machine. Installation Guide

TimeIPS Server. IPS256T Virtual Machine. Installation Guide TimeIPS Server IPS256T Virtual Machine Installation Guide TimeIPS License Notification The terms and conditions applicable to the license of the TimeIPS software, sale of TimeIPS hardware and the provision

More information

Control Center Planning Guide

Control Center Planning Guide Control Center Planning Guide Release 1.4.2 Zenoss, Inc. www.zenoss.com Control Center Planning Guide Copyright 2017 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo are trademarks

More information

VMware Identity Manager Connector Installation and Configuration (Legacy Mode)

VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager Connector Installation and Configuration (Legacy Mode) VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until

More information

Changing user login password on templates

Changing user login password on templates Changing user login password on templates 1. Attach an ISO via the cloudstack interface and boot the VM to rescue mode. Click on attach iso icon highlighted below: A popup window appears from which select

More information

EX200 EX200. Red Hat Certified System Administrator RHCSA

EX200 EX200. Red Hat Certified System Administrator RHCSA EX200 Number: EX200 Passing Score: 800 Time Limit: 120 min File Version: 14.0 http://www.gratisexam.com/ EX200 Red Hat Certified System Administrator RHCSA EX200 QUESTION 1 Configure your Host Name, IP

More information

Data Warehouse: User Computer Configuration Guide

Data Warehouse: User Computer Configuration Guide University of Texas at San Antonio Data Warehouse: User Computer Configuration Guide Sponsored by the Vice Provost of Institutional Effectiveness DOCUMENT HISTORY This is an on-line document. Paper copies

More information

"Charting the Course... RHCE Rapid Track Course. Course Summary

Charting the Course... RHCE Rapid Track Course. Course Summary Course Summary Description This course is carefully designed to match the topics found in the Red Hat RH299 exam prep course but also features the added benefit of an entire extra day of comprehensive

More information

At course completion. Overview. Audience profile. Course Outline. : 55187B: Linux System Administration. Course Outline :: 55187B::

At course completion. Overview. Audience profile. Course Outline. : 55187B: Linux System Administration. Course Outline :: 55187B:: Module Title Duration : 55187B: Linux System Administration : 4 days Overview This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional

More information

Lockdown & support access guide

Lockdown & support access guide Lockdown & support access guide How to lock down your cloud, and enable the OnApp support team to help you with troubleshooting and ticket resolution. Document version 1.4 Document release date 21 st February

More information

About Backup and Restore, on page 1 Supported Backup and Restore Procedures, on page 3

About Backup and Restore, on page 1 Supported Backup and Restore Procedures, on page 3 About, on page 1 Supported Procedures, on page 3 Back Up Automation Data Using the GUI, on page 4 Restore Automation Data Using the GUI, on page 6 Schedule a Backup of Automation Data Using the GUI, on

More information

example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost :80 <VirtualHost :80> DocumentRoot /var/www/html/

example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost :80 <VirtualHost :80> DocumentRoot /var/www/html/ example.com index.html # vim /etc/httpd/conf/httpd.conf NameVirtualHost 192.168.0.254:80 DocumentRoot /var/www/html/ ServerName station.domain40.example.com

More information

Course 55187B Linux System Administration

Course 55187B Linux System Administration Course Outline Module 1: System Startup and Shutdown This module explains how to manage startup and shutdown processes in Linux. Understanding the Boot Sequence The Grand Unified Boot Loader GRUB Configuration

More information

Notes for Installing RedHawk Linux 7.0 with Red Hat Enterprise Linux 7.0. Installation Notes. March 22 nd, 2015

Notes for Installing RedHawk Linux 7.0 with Red Hat Enterprise Linux 7.0. Installation Notes. March 22 nd, 2015 Notes for Installing RedHawk Linux 7.0 with Red Hat Enterprise Linux 7.0 Installation Notes March 22 nd, 2015 This page intentionally left blank 1. Introduction RedHawk Linux is supplied with CentOS Linux

More information

Upgrade Tool Guide. July

Upgrade Tool Guide. July Upgrade Tool Guide July 2015 http://www.liveaction.com 4.X to 5.0 The Upgrade Guide from 4.X to 5.0 consists of three parts: Upgrading the LiveAction Server Upgrading the LiveAction Node Upgrading the

More information

Isilon InsightIQ. Version Installation Guide

Isilon InsightIQ. Version Installation Guide Isilon InsightIQ Version 4.1.0 Installation Guide Copyright 2009-2016 EMC Corporation All rights reserved. Published October 2016 Dell believes the information in this publication is accurate as of its

More information

Getting Started with Pentaho and Cloudera QuickStart VM

Getting Started with Pentaho and Cloudera QuickStart VM Getting Started with Pentaho and Cloudera QuickStart VM This page intentionally left blank. Contents Overview... 1 Before You Begin... 1 Prerequisites... 1 Use Case: Development Sandbox for Pentaho and

More information

How to Deploy vcenter on the HX Data Platform

How to Deploy vcenter on the HX Data Platform First Published: 2016-07-11 Last Modified: 2019-01-08 vcenter on HyperFlex Cisco HX Data Platform deployment, including installation and cluster configuration and management, requires a vcenter server

More information

Red Hat System Administration I - RH124

Red Hat System Administration I - RH124 Course outline Red Hat System Administration I - RH124 Access the command line Log in to a Linux system and run simple commands using the shell. Manage files from the command line Copy, move, create, delete,

More information

Nested Home Lab Setting up Shared Storage

Nested Home Lab Setting up Shared Storage Nested Home Lab Setting up Shared Storage Andy Fox VCI VCAP-DCA VCP3 VCP4 Over the years teaching vsphere, several peers, colleagues and students have asked me how I setup shared storage in my nested test

More information

This section describes the procedures needed to add a new disk to a VM. vmkfstools -c 4g /vmfs/volumes/datastore_name/vmname/xxxx.

This section describes the procedures needed to add a new disk to a VM. vmkfstools -c 4g /vmfs/volumes/datastore_name/vmname/xxxx. Adding a New Disk, page 1 Mounting the Replication Set from Disk to tmpfs After Deployment, page 3 Manage Disks to Accommodate Increased Subscriber Load, page 5 Adding a New Disk This section describes

More information

Troubleshooting Cisco APIC-EM Single and Multi-Host

Troubleshooting Cisco APIC-EM Single and Multi-Host Troubleshooting Cisco APIC-EM Single and Multi-Host The following information may be used to troubleshoot Cisco APIC-EM single and multi-host: Recovery Procedures for Cisco APIC-EM Node Failures, page

More information

NetXplorer. Installation Guide. Centralized NetEnforcer Management Software P/N D R3

NetXplorer. Installation Guide. Centralized NetEnforcer Management Software P/N D R3 NetXplorer Centralized NetEnforcer Management Software Installation Guide P/N D357006 R3 Important Notice Important Notice Allot Communications Ltd. ("Allot") is not a party to the purchase agreement

More information

SECURE Gateway with Microsoft Azure Installation Guide. Version Document Revision 1.0

SECURE  Gateway with Microsoft Azure Installation Guide. Version Document Revision 1.0 SECURE Email Gateway with Microsoft Azure Installation Guide Version 4.7.0 Document Revision 1.0 Copyright Revision 1.0, November, 2017 Published by Clearswift Ltd. 1995 2017 Clearswift Ltd. All rights

More information

As this method focuses on working with LVM, we will first confirm that our partition type is actually Linux LVM by running the below command.

As this method focuses on working with LVM, we will first confirm that our partition type is actually Linux LVM by running the below command. How to Increase the size of a Linux LVM by adding a new disk This post will cover how to increase the disk space for a VMware virtual machine running Linux that is using logical volume manager (LVM). First

More information

RHEV in the weeds - special sauce! Marc Skinner

RHEV in the weeds - special sauce! Marc Skinner RHEV in the weeds - special sauce! Marc Skinner Twin Cities Users Group :: Q3/2013 Introduction RHEV = Red Hat Enterprise Vitualization RHEV Manager = Red Hat Enterprise Hypervisor Manager DATACENTER VIRTUALIZATION

More information

Juniper Secure Analytics Patch Release Notes

Juniper Secure Analytics Patch Release Notes Juniper Secure Analytics Patch Release Notes 7.3.0 January 2018 7.3.0.20171205025101 patch resolves several known issues in Juniper Secure Analytics (JSA). Contents Administrator Notes..................................................

More information

Red Hat Satellite 6.3

Red Hat Satellite 6.3 Red Hat Satellite 6.3 Upgrading and Updating Red Hat Satellite Upgrading and updating Red Hat Satellite Server and Capsule Server Last Updated: 2018-07-12 Red Hat Satellite 6.3 Upgrading and Updating

More information

Red Hat Enterprise Linux 7 Getting Started with Cockpit

Red Hat Enterprise Linux 7 Getting Started with Cockpit Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started with Cockpit Red Hat Enterprise Linux Documentation Team Red Hat Enterprise Linux 7 Getting Started with Cockpit Getting Started

More information

Errata and Commentary Final, Submitted to Curriculum. ~]$ restorecon.ssh/authorized_keys

Errata and Commentary Final, Submitted to Curriculum. ~]$ restorecon.ssh/authorized_keys Page 1 p12 p15 (277) p15 (277) Connecting to Your Virtual Machines A console connection (e.g., virt-manager, virt-viewer or virsh console) is required to view boot sequence messages during a cluster node

More information

Data Protection Guide

Data Protection Guide SnapCenter Software 4.0 Data Protection Guide For VMs and Datastores using the SnapCenter Plug-in for VMware vsphere March 2018 215-12931_C0 doccomments@netapp.com Table of Contents 3 Contents Deciding

More information

Cisco Prime Service Catalog Virtual Appliance Quick Start Guide 2

Cisco Prime Service Catalog Virtual Appliance Quick Start Guide 2 Cisco Prime Service Catalog 11.1.1 Virtual Appliance Quick Start Guide Cisco Prime Service Catalog 11.1.1 Virtual Appliance Quick Start Guide 2 Introduction 2 Before You Begin 2 Preparing the Virtual Appliance

More information

cpouta exercises

cpouta exercises CSC Bioweek. 8.2. 2018 cpouta exercises 1 Table of Contents cpouta exercises 8.2. 2018 1. Launching a virtual machine... 2 1.1 Login to cpouta interface in... 2 1.2 Create your own SSH key pair... 2 A.

More information

Exam LFCS/Course 55187B Linux System Administration

Exam LFCS/Course 55187B Linux System Administration Exam LFCS/Course 55187B Linux System Administration About this course This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional

More information

Red Hat Gluster Storage 3.3

Red Hat Gluster Storage 3.3 Red Hat Gluster Storage 3.3 Quick Start Guide Getting Started with Web Administration Last Updated: 2017-12-15 Red Hat Gluster Storage 3.3 Quick Start Guide Getting Started with Web Administration Rakesh

More information

Dell EMC ME4 Series vsphere Client Plug-in

Dell EMC ME4 Series vsphere Client Plug-in Dell EMC ME4 Series vsphere Client Plug-in User's Guide Regulatory Model: E09J, E10J, E11J Regulatory Type: E09J001, E10J001, E11J001 Notes, cautions, and warnings NOTE: A NOTE indicates important information

More information

Dell Storage Integration Tools for VMware

Dell Storage Integration Tools for VMware Dell Storage Integration Tools for VMware Version 4.1 Administrator s Guide Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your product. CAUTION:

More information

"Charting the Course... MOC B: Linux System Administration. Course Summary

Charting the Course... MOC B: Linux System Administration. Course Summary Description Course Summary This four-day instructor-led course is designed to provide students with the necessary skills and abilities to work as a professional Linux system administrator. The course covers

More information

Deploying VMware Identity Manager in the DMZ. JULY 2018 VMware Identity Manager 3.2

Deploying VMware Identity Manager in the DMZ. JULY 2018 VMware Identity Manager 3.2 Deploying VMware Identity Manager in the DMZ JULY 2018 VMware Identity Manager 3.2 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have

More information

Installation Guide. Cimatron Site Manager 2.0 Release Note

Installation Guide. Cimatron Site Manager 2.0 Release Note Installation Guide Cimatron Site Manager 2.0 Release Note Installation Guide i Table of Contents Introduction... 1 Cimatron Site Manager Components... 2 Installation... 3 Hardware Requirements... 3 Software

More information

Virtual Appliance User s Guide

Virtual Appliance User s Guide Cast Iron Integration Appliance Virtual Appliance User s Guide Version 4.5 July 2009 Cast Iron Virtual Appliance User s Guide Version 4.5 July 2009 Copyright 2009 Cast Iron Systems. All rights reserved.

More information

HySecure Quick Start Guide. HySecure 5.0

HySecure Quick Start Guide. HySecure 5.0 HySecure Quick Start Guide HySecure 5.0 Last Updated: 25 May 2017 2012-2017 Propalms Technologies Private Limited. All rights reserved. The information contained in this document represents the current

More information

Clearswift SECURE Gateway Installation & Getting Started Guide. Version 4.3 Document Revision 1.0

Clearswift SECURE  Gateway Installation & Getting Started Guide. Version 4.3 Document Revision 1.0 Clearswift SECURE Email Gateway Installation & Getting Started Guide Version 4.3 Document Revision 1.0 Copyright Revision 1.1, March, 2016 Published by Clearswift Ltd. 1995 2016 Clearswift Ltd. All rights

More information

Zenoss Resource Manager Upgrade Guide

Zenoss Resource Manager Upgrade Guide Zenoss Resource Manager Upgrade Guide Release 6.2.0 Zenoss, Inc. www.zenoss.com Zenoss Resource Manager Upgrade Guide Copyright 2018 Zenoss, Inc. All rights reserved. Zenoss, Own IT, and the Zenoss logo

More information

FastTrack to Red Hat Linux System Administrator Course Overview

FastTrack to Red Hat Linux System Administrator Course Overview Course Overview This highly practical instructor led training course is designed to give experienced LINUX/UNIX administrators practical experience in the administration of a LINUX system to a level required

More information

Cisco Prime Performance 1.3 Installation Requirements

Cisco Prime Performance 1.3 Installation Requirements 1 CHAPTER Cisco Prime Performance 1.3 Installation Requirements The following topics provide the hardware and software requirements for installing Cisco Prime Performance Manager 1.3: Gateway and Unit

More information

Red Hat Quay 2.9 Deploy Red Hat Quay - Basic

Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Deploy Red Hat Quay Last Updated: 2018-09-14 Red Hat Quay 2.9 Deploy Red Hat Quay - Basic Deploy Red Hat Quay Legal Notice Copyright 2018 Red Hat, Inc. The

More information

Fedora 12 Essentials

Fedora 12 Essentials Fedora 12 Essentials 2 Fedora 12 Essentials First Edition 2010 Payload Media. This ebook is provided for personal use only. Unauthorized use, reproduction and/or distribution strictly prohibited. All rights

More information

VII. Corente Services SSL Client

VII. Corente Services SSL Client VII. Corente Services SSL Client Corente Release 9.1 Manual 9.1.1 Copyright 2014, Oracle and/or its affiliates. All rights reserved. Table of Contents Preface... 5 I. Introduction... 6 Chapter 1. Requirements...

More information

Booting a Galaxy Instance

Booting a Galaxy Instance Booting a Galaxy Instance Create Security Groups First time Only Create Security Group for Galaxy Name the group galaxy Click Manage Rules for galaxy Click Add Rule Choose HTTPS and Click Add Repeat Security

More information

Oracle Ksplice for Oracle Linux

Oracle Ksplice for Oracle Linux Oracle Ksplice for Oracle Linux Oracle Corporation Oracle Ksplice Oracle Ksplice is an exciting new addition to the Oracle Linux Premier Support subscription. The Oracle Ksplice technology allows customers

More information

User Manual op5 System 3.1

User Manual op5 System 3.1 User Manual op5 System 3.1 Table of Contents 1 Introduction... 2 2 Fundamentals... 2 2.1 op5 System... 2 2.2 System access... 2 2.2.1 The portal page... 2 2.2.2 Console and SSH access... 3 2.3 System accounts...

More information