Automated Instance Failover Using the IBM DB2 High Availability Instance Configuration Utility (db2haicu) on Shared Storage (AIX/Linux)

Date: March 8, 2010
Version: 1.0
Authors: Abhishek Iyer, Neeraj Sharma

Abstract: This is a step-by-step guide to setting up an end-to-end highly available DB2 instance on shared storage with Tivoli System Automation (TSA), using the db2haicu utility on Linux. The same procedure applies to AIX when the equivalent AIX commands are used.

Table of Contents

1. Introduction and Overview
2. Before we begin
   2.1 Hardware Configuration used
   2.2 Software Configuration used
   2.3 Overall Architecture
       2.3.1 Hardware Topology
       2.3.2 Network Topology
3. Pre-Configuration steps
   3.1 Configuring the /etc/hosts file
   3.2 Configuring the db2nodes.cfg file
   3.3 Configuring the /etc/services file
4. Configuring the Standby Node
   4.1 NFS Server settings (configuring /etc/exports file)
   4.2 NFS Client settings (updating /etc/fstab file)
   4.3 Storage failover settings (updating /etc/fstab file)
5. Configuring a DB2 Instance for HA using db2haicu
   5.1 Procedure of running db2haicu
   5.2 Appendix for db2haicu
6. Configuring the NFS Server for HA
7. Post Configuration steps (at the customer site)
8. Cluster monitoring
9. Lessons learnt during HA implementations
   9.1 Hostname conflict
   9.2 Prevent auto-mount of shared file systems
   9.3 Preventing file system consistency checks at boot time

1. Introduction and Overview
This document serves as an end-to-end guide for configuring a database instance as a highly available (HA) instance on shared storage. A highly available database instance on shared storage is typically needed in a Balanced Warehouse (BCU) environment, where one standby node acts as the failover node for all the data and admin nodes. This is discussed further in the Overall Architecture section below. The implementation described in this document is based on the DB2 High Availability (HA) feature and the DB2 High Availability Instance Configuration Utility (db2haicu), available in DB2 Version 9.5 and later. The utility uses the Tivoli System Automation (TSA) cluster manager to configure the shared database instance. The utility can be used in two modes:
Interactive mode: the user provides all the required inputs step by step, as prompted on the command line by the utility.
XML mode: all the inputs are written into an XML file, which the utility parses to extract the required data.
This document explains how to configure a shared instance on shared storage using the step-by-step interactive mode (in section 5 below).

2. Before we begin
It is important that you go through the setup information below before moving on to the actual HA configuration steps. The hardware used in the current implementation is a D5100 Balanced Warehouse (BCU), which has the following nodes (Linux servers):
1 admin node (admin0)
3 data nodes (data01, data02 and data03)
1 standby node (stdby0)
1 management node (mgmt0)

2.1 Hardware Configuration used
Each node is an x3650 server with a Quad-Core Intel Xeon X5470 processor (3.33 GHz). All nodes have 32 GB of memory except the management node, which has 8 GB. The admin and data nodes each have 4 external hard disks of 146 GB capacity.

2.2 Software Configuration used
DB2 Enterprise Server Edition for Linux (with the Database Partitioning Feature)
IBM DB2 Warehouse Edition (DWE)
IBM Tivoli System Automation for Multiplatforms (TSA)
Operating system: SUSE Linux Enterprise Server (SLES) 10, SMP kernel
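As a quick sanity check of the installed levels before starting, the following can be run on each node; this is only a sketch, assuming an RPM-based system and the bculinux instance owner for db2level:

db2level                              # as the instance owner: reports the installed DB2 level
rpm -qa | grep -i -e sam -e rsct      # reports the installed TSA (SAM) and RSCT packages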

2.3 Overall Architecture
This section describes the overall architecture, in terms of hardware and network topology, of the highly available database cluster under implementation.

2.3.1 Hardware Topology
In a typical D5100 Balanced Warehouse environment, the standby node is designed to be the failover node for the admin node as well as the data nodes. The management node is not a part of the HA cluster, as it is only used to manage the other nodes using the cluster systems management utilities; we will therefore not refer to the management node again in this document. As mentioned in the hardware configuration above, each of the admin and data nodes has its own storage disks, which are connected through fiber-optic cables (shown by solid red lines in Figure 1 below). The standby node is configured to take control of the storage mount points of the failed node in the event of a failover (shown by dotted red lines in Figure 1 below). Even though the database instance spans all of the admin and data nodes in a Balanced Warehouse, an external application would typically connect only to the admin node, which internally acts as the coordinator node. An NFS server runs on the admin node, and the data nodes are NFS clients. The communication between the admin and data nodes takes place over the Gigabit Ethernet network (shown in purple lines in Figure 1 below). If a data node fails over to the standby node, the standby node must start functioning as an NFS client; if the admin node fails over, the standby node must function as the NFS server and take over the role of the coordinator node. The step-by-step configuration of the standby node for each of these failover scenarios is described in detail in the following sections.

Figure 1: Hardware Topology

2.3.2 Network Topology
A D5100 Balanced Warehouse typically has the following networks:

Cluster management network. This network supports the management, monitoring and administration of the cluster. The management node (mgmt0) uses this network to manage the other nodes using the cluster systems management utilities. This network may or may not be made highly available; in the current implementation it is on its own subnet (shown in brown lines in Figure 1 above) and we will be making it highly available.

Baseboard management controller (BMC). A service processor network is linked to the cluster management network. The service processor, called the baseboard management controller (BMC), provides alerting, monitoring and control of the servers. It uses one of the integrated network ports, which is shared with the cluster management network.

DB2 fast communication manager (FCM) network. The DB2 FCM network is used for internal database communication between database partitions on different physical servers. This Gigabit Ethernet network carries FCM traffic as well as the NFS-shared instance directory used in a DB2 with Database Partitioning Feature (DPF) configuration. This network is made highly available, as all data transfers between the nodes happen over it. In the current implementation, this network is on its own subnet (shown in purple lines in Figure 1 above).

Corporate network (optional). This network allows external applications and clients to access and query the database. Typically, external applications only need to connect to the admin node, which internally coordinates with all the other data nodes over the FCM network, but in some cases, for more flexibility, the data nodes are also made reachable on the corporate network. In the current implementation, only the admin and standby nodes are reachable on the corporate network, on its own subnet (shown with green lines in Figure 1 above). Standby is made available on the corporate network to provide an alternate route for external applications in case the admin node goes down.

3. Pre-Configuration steps
There are some pre-configuration steps that must be completed to ensure that the HA configuration is successful.

3.1 Configuring the /etc/hosts file
All the nodes in the cluster must have matching entries in the /etc/hosts file, to ensure that all hosts are mutually resolvable. Make sure the entries follow the format sketched below.
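The listing below is only an illustrative sketch of that format: the addresses, subnets and host aliases (including the management-network aliases ending in "mgt") are hypothetical placeholders rather than values taken from the original environment, and must be replaced with the values used at your site.

# FCM network (bond0) - one entry per database node
172.16.10.10   admin0
172.16.10.11   data01
172.16.10.12   data02
172.16.10.13   data03
172.16.10.14   stdby0
# Cluster management network (bond1)
172.16.20.10   admin0mgt
172.16.20.11   data01mgt
172.16.20.12   data02mgt
172.16.20.13   data03mgt
172.16.20.14   stdby0mgt
172.16.20.15   mgmt0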

Entries for all the networks for each database node must be exactly the same on all nodes in the cluster.

3.2 Configuring the db2nodes.cfg file
Depending on the number of database partitions, the db2nodes.cfg file under the ~/sqllib/ directory must have contents in the format shown below, identical across all nodes. Typically, in a D5100 Balanced Warehouse, the db2nodes.cfg file is present under the /dbhome directory, which is NFS shared from admin0:/shared_dbhome to all the nodes. In the current implementation there are 13 partitions in total: 1 on the admin node and 4 on each of the three data nodes. Hence the /dbhome/bculinux/sqllib/db2nodes.cfg file contains 13 entries, one per partition, each of the form partition number, host name and logical port; a sketch follows.
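A sketch of what a 13-partition db2nodes.cfg could look like in this layout. Partition 0 on admin0 and partitions 9 to 12 on data03 follow from the lssam output shown later in this document; the exact split of partitions 1 to 8 across data01 and data02, and the omission of an optional fourth netname column, are assumptions to be checked against your own configuration.

0 admin0 0
1 data01 0
2 data01 1
3 data01 2
4 data01 3
5 data02 0
6 data02 1
7 data02 2
8 data02 3
9 data03 0
10 data03 1
11 data03 2
12 data03 3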

3.3 Configuring the /etc/services file
All the nodes in the cluster must have the following entries in the /etc/services file to enable DB2 communication both across and within partitions. The first entry below (db2c_bculinux) is the port used for external communication with the node. The remaining entries (from DB2_bculinux 60000/tcp to DB2_bculinux_END 60012/tcp) are the ports used for inter-partition communication on a node. In the current example, since the standby node must be able to take over the admin node and the 3 data nodes with 4 partitions each, the maximum number of partitions that could run on the standby is 13. Hence 13 ports, 60000 to 60012, are reserved in this particular case. Also, since a BCU requires all the nodes to have the same configuration, the same 13 ports must be reserved on every node in the cluster. Please ensure that all these port numbers are unique in the /etc/services file and are not used for any other communication.

db2c_bculinux 50001/tcp
DB2_bculinux 60000/tcp
DB2_bculinux_1 60001/tcp
DB2_bculinux_2 60002/tcp
DB2_bculinux_3 60003/tcp
DB2_bculinux_4 60004/tcp
DB2_bculinux_5 60005/tcp
DB2_bculinux_6 60006/tcp
DB2_bculinux_7 60007/tcp
DB2_bculinux_8 60008/tcp
DB2_bculinux_9 60009/tcp
DB2_bculinux_10 60010/tcp
DB2_bculinux_11 60011/tcp
DB2_bculinux_END 60012/tcp

DB2 port settings in /etc/services

4. Configuring the Standby Node
This section describes the settings needed so that all the storage in the cluster is visible and available to the standby node, and mountable there in the event of a failover. It also covers the settings needed on the standby node so that it can act as the NFS server (in case the admin node goes down) and/or as an NFS client (in case any data node goes down).

4.1 NFS Server settings (configuring /etc/exports file)
As mentioned before, in a D5100 Balanced Warehouse the DB2 instance-owning admin node (admin0) acts as an NFS server for all the nodes in the cluster (including itself), which act as NFS clients. Typically there are two directories that are NFS shared across all the nodes:
/shared_dbhome: the DB2 instance home directory
/shared_home: the home directory for all non-DB2 users
The NFS server export options in a D5100 Balanced Warehouse are: rw,sync,fsid=x,no_root_squash
Open the /etc/exports file on the admin node to confirm this. In case the admin node goes down, the standby node must be able to take over as the NFS server for all the nodes. Hence, manually edit the /etc/exports file on the standby node so that it has entries identical to those on the admin node; example entries for both directories are sketched after the next subsection.

4.2 NFS Client settings (updating /etc/fstab file)
Since all the nodes in the cluster (including the admin node) act as NFS clients, the standby node must also be configured to act as an NFS client in the event of a failover. The NFS client mount options in a D5100 Balanced Warehouse are: rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock
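The sketches below illustrate the corresponding entries. The client specification (*) and the fsid values on the export lines are placeholders, the mount options are the ones quoted above, and the NFS server is still admin0 at this stage (it is switched to the HA-NFS virtual IP in section 6); adapt all of these to your site.

/etc/exports on admin0 (and, identically, on stdby0):

/shared_dbhome  *(rw,sync,fsid=1,no_root_squash)
/shared_home    *(rw,sync,fsid=2,no_root_squash)

/etc/fstab NFS client entries on every node:

admin0:/shared_dbhome  /dbhome  nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0
admin0:/shared_home    /home    nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0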

Check the /etc/fstab file on all the nodes (including admin) for NFS client entries like the ones sketched above. Then manually edit the /etc/fstab file on the standby node and add identical entries. Create the /dbhome directory on standby and manually mount both /home and /dbhome there, for example:

mkdir /dbhome
mount /home
mount /dbhome

4.3 Storage failover settings (updating /etc/fstab file)
This section describes the settings needed to ensure failover of the storage mount points to the standby node. First we check whether all file systems are visible and available to standby, and then we configure standby to make them mountable in the event of a failover.

Verify that all the logical volumes from all the nodes are visible to standby. As root, issue the following command on standby:

lvscan

You should see a list of all the logical volumes from all the nodes in the HA group.

Verify that all these logical volumes are available on standby. As root, issue the following command on standby:

lvdisplay

For every logical volume listed, you should see that the LV Status is reported as "available". If any logical volume is marked as not available, reboot the standby node and check again.

Define the file systems on the standby node and configure its /etc/fstab file so that standby is able to mount the respective storage in the event of a failover. Since any of the admin and data nodes can fail over to the standby node, the file systems and the /etc/fstab file on the standby node must be configured to be identical to those of the admin and data nodes. In the current example we define the following file systems on standby:

Define the file systems for the DB2 instance home directory, the user home directory, and the NFS control directory (which are present on the admin node).

Define the file system for the staging space (present on the admin node).

Define the file systems for the database partitions (partition 0 on the admin node and the data partitions on the data nodes), and add the corresponding entries to the /etc/fstab file. In the current implementation, the /etc/fstab file on standby ends up with one ext3 entry per partition file system, in addition to the entries for the NFS control, shared home, instance home and staging file systems; a sketch follows this discussion.

It is extremely important that the noauto mount option is used for each of the ext3 file systems in the /etc/fstab file. This prevents the system from auto-mounting the file systems after a reboot; TSA takes care of mounting the required file systems based on which nodes are up. For instance, initially the admin and data nodes have all of their respective file systems manually mounted. If data01 goes down, TSA mounts all of its mount points that need to be transferred onto standby. Once data01 comes back up and standby is taken offline, TSA again ensures that mount control is transferred back to data01.
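An illustrative sketch of the resulting ext3 entries on standby. The volume group and logical volume names for the staging and partition file systems are hypothetical (take the real device names from the lvscan output on standby), the /db2fs/bculinux/NODE00nn mount points are inferred from the TSA mount resource names shown later in this document, and noauto is the one option that must appear on every line.

/dev/vgnfs/lvnfsvarlibnfs    /varlibnfs                ext3  noauto  0 0
/dev/vgnfs/lvnfshome         /shared_home              ext3  noauto  0 0
/dev/vgnfs/lvnfsdbhome       /shared_dbhome            ext3  noauto  0 0
/dev/vgstage/lvstage         /stage                    ext3  noauto  0 0
/dev/vgnode0000/lvnode0000   /db2fs/bculinux/NODE0000  ext3  noauto  0 0
/dev/vgnode0001/lvnode0001   /db2fs/bculinux/NODE0001  ext3  noauto  0 0
(... one entry per database partition, up to /db2fs/bculinux/NODE0012 ...)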

As part of the D5100 Balanced Warehouse configuration, the admin and data nodes should already have this noauto option set in their respective /etc/fstab files. If not, set the noauto option in /etc/fstab across all nodes.

5. Configuring a DB2 Instance for HA using db2haicu
Now that all the required pre-configuration is complete, we configure the DB2 instance for high availability using the db2haicu utility. Recall that db2haicu can be run in two modes, XML mode and interactive mode. This document covers the configuration using the step-by-step interactive mode on the command line.

5.1 Procedure of running db2haicu
1. Prepare the cluster nodes for db2haicu. Run the preprpnode command on all admin, data and standby nodes as root, passing it the host names of all the cluster nodes, for example:

preprpnode admin0 data01 data02 data03 stdby0

On a D5100 Balanced Warehouse you can also issue this on all the nodes at once from the mgmt node as root, using the cluster systems management utility dsh.

2. Activate the database. On the admin node, as the instance owner (in this case bculinux), issue:

db2 activate db BCUDB

BCUDB is the database name used in the current implementation.

3. Run db2haicu. On the admin node, as the instance owner, issue the db2haicu command. Once you run db2haicu, you are prompted step by step for the inputs required for the HA configuration. Below is a sample execution of db2haicu as used in the current implementation. Please note that in the current implementation, bond0 had been created from two network ports on the FCM network (on each node) and bond1 from two network ports on the cluster management network (on each node). Typically, this utility is run in-house, i.e. before the system is shipped to the customer, so the corporate network is not yet available; once the setup is delivered to the customer, additional configuration is needed to make the corporate network highly available, which is covered in the Post Configuration steps section.

Text in RED indicates user inputs. Text in BLUE indicates questions prompted by the system/utility. Text in BLACK indicates informational messages from the system/utility.

14 dbhaicu Welcome to the DB High Availability Instance Configuration Utility (dbhaicu). You can find detailed diagnostic information in the DB server diagnostic log file called dbdiag.log. Also, you can use the utility called dbpd to query the status of the cluster domains you create. For more information about configuring your clustered environment using dbhaicu, see the topic called 'DB High Availability Instance Configuration Utility (dbhaicu)' in the DB Information Center. dbhaicu determined the current DB database manager instance is bculinux. The cluster configuration that follows will apply to this instance. dbhaicu is collecting information on your current setup. This step may take some time as dbhaicu will need to activate all databases for the instance to discover all paths... When you use dbhaicu to configure your clustered environment, you create cluster domains. For more information, see the topic 'Creating a cluster domain with dbhaicu' in the DB Information Center. dbhaicu is searching the current machine for an existing active cluster domain... dbhaicu did not find a cluster domain on this machine. dbhaicu will now query the system for information about cluster nodes to create a new cluster domain... dbhaicu did not find a cluster domain on this machine. To continue configuring your clustered environment for high availability, you must create a cluster domain; otherwise, dbhaicu will exit. Create a domain and continue? [] Create a unique name for the new domain: ha_domain Nodes must now be added to the new domain. How many cluster nodes will the domain ha_domain contain? 5 Enter the host name of a machine to add to the domain: admin0 Enter the host name of a machine to add to the domain: stdby0 Enter the host name of a machine to add to the domain: data0 Enter the host name of a machine to add to the domain: data0 Enter the host name of a machine to add to the domain: data03 dbhaicu can now create a new domain containing the 5 machines that you specified. If you choose not to create a domain now, dbhaicu will exit. Create the domain now? []

15 Creating domain ha_domain in the cluster... Creating domain ha_domain in the cluster was successful. You can now configure a quorum device for the domain. For more information, see the topic "Quorum devices" in the DB Information Center. If you do not configure a quorum device for the domain, then a human operator will have to manually intervene if subsets of machines in the cluster lose connectivity. Configure a quorum device for the domain called ha_domain? [] The following is a list of supported quorum device types:. Network Quorum Enter the number corresponding to the quorum device type to be used: [] Specify the network address of the quorum device: Refer to the appendix for details on quorum device Configuring quorum device for domain ha_domain... Configuring quorum device for domain ha_domain was successful. The cluster manager found 0 network interface cards on the machines in the domain. You can use dbhaicu to create networks for these network interface cards. For more information, see the topic 'Creating networks with dbhaicu' in the DB Information Center. Create networks for these network interface cards? [] Enter the name of the network for the network interface card: bond0 on cluster node: admin0. Create a new public network for this network interface card.. Create a new private network for this network interface card. Enter selection: Refer to the appendix below for more details Are you sure you want to add the network interface card bond0 on cluster node admin0 to the network db_private_network_0? [] Adding network interface card bond0 on cluster node admin0 to the network db_private_network_0... Adding network interface card bond0 on cluster node admin0 to the network db_private_network_0 was successful. Enter the name of the network for the network interface card: bond0 on cluster node: data0. db_private_network_0. Create a new public network for this network interface card. 3. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond0 on

16 cluster node data0 to the network db_private_network_0? [] Adding network interface card bond0 on cluster node data0 to the network db_private_network_0... Adding network interface card bond0 on cluster node data0 to the network db_private_network_0 was successful. Enter the name of the network for the network interface card: bond0 on cluster node: data0. db_private_network_0. Create a new public network for this network interface card. 3. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond0 on cluster node data0 to the network db_private_network_0? [] Adding network interface card bond0 on cluster node data0 to the network db_private_network_0... Adding network interface card bond0 on cluster node data0 to the network db_private_network_0 was successful. Enter the name of the network for the network interface card: bond0 on cluster node: data03. db_private_network_0. Create a new public network for this network interface card. 3. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond0 on cluster node data03 to the network db_private_network_0? [] Adding network interface card bond0 on cluster node data03 to the network db_private_network_0... Adding network interface card bond0 on cluster node data03 to the network db_private_network_0 was successful. Enter the name of the network for the network interface card: bond0 on cluster node: stdby0. db_private_network_0. Create a new public network for this network interface card. 3. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond0 on cluster node stdby0 to the network db_private_network_0? [] Adding network interface card bond0 on cluster node stdby0 to the network db_private_network_0... Adding network interface card bond0 on cluster node stdby0 to the network db_private_network_0 was successful. Enter the name of the network for the network interface card: bond on

17 cluster node: stdby0. db_private_network_0. Create a new public network for this network interface card. 3. Create a new private network for this network interface card. Enter selection: 3 Create a separate private network for bond Are you sure you want to add the network interface card bond on cluster node data03 to the network db_private_network_? [] Adding network interface card bond on cluster node data03 to the network db_private_network_... Adding network interface card bond on cluster node data03 to the network db_private_network_ was successful. Enter the name of the network for the network interface card: bond on cluster node: data0. db_private_network_. db_private_network_0 3. Create a new public network for this network interface card. 4. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond on cluster node data0 to the network db_private_network_? [] Adding network interface card bond on cluster node data0 to the network db _public_network_... Adding network interface card bond on cluster node data0 to the network db _public_network_ was successful. Enter the name of the network for the network interface card: bond on cluster n ode: data0. db_private_network_. db_private_network_0 3. Create a new public network for this network interface card. 4. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card bond on cluster node data0 to the network db_private_network_? [] Adding network interface card bond on cluster node data0 to the network db_private_network_... Adding network interface card bond on cluster node data0 to the network db_private_network_ was successful. Enter the name of the network for the network interface card: bond on cluster node: admin0. db_private_network_. db_private_network_0 3. Create a new public network for this network interface card. 4. Create a new private network for this network interface card. Enter selection:

18 Are you sure you want to add the network interface card bond on cluster node admin0 to the network db_private_network_? [] Adding network interface card bond on cluster node admin0 to the network db_private_network_... Adding network interface card bond on cluster node admin0 to the network db_private_network_ was successful. Retrieving high availability configuration parameter for instance bculinux... The cluster manager name configuration parameter (high availability configuration parameter) is not set. For more information, see the topic "cluster_mgr - Cluster manager name configuration parameter" in the DB Information Center. Do you want to set the high availability configuration parameter? The following are valid settings for the high availability configuration parameter:.tsa.vendor Enter a value for the high availability configuration parameter: [] Setting a high availability configuration parameter for instance bculinux to TSA. Now you need to configure the failover policy for the instance bculinux. The failover policy determines the machines on which the cluster manager will restart the database manager if the database manager is brought offline unexpectedly. The following are the available failover policies:. Local Restart -- during failover, the database manager will restart in place on the local machine. Round Robin -- during failover, the database manager will restart on any machine in the cluster domain 3. Mutual Takeover -- during failover, the database partitions on one machine will failover to a specific machine and vice versa (used with DPF instances) 4. M+N -- during failover, the database partitions on one machine will failover to any other machine in the cluster domain (used with DPF instances) 5. Custom -- during failover, the database manager will restart on a machine from a user-specified list Enter your selection: 4 You can identify mount points that are noncritical for failover. For more information, see the topic 'Identifying mount points that are noncritical for failover' in the DB Information Center. Are there any mount points that you want to designate as noncritical? [] The following DB database partitions can be made highly available: DB database partition number 0 DB database partition number DB database partition number DB database partition number 3 DB database partition number 4

19 DB database partition number 5 DB database partition number 6 DB database partition number 7 DB database partition number 8 DB database partition number 9 DB database partition number 0 DB database partition number DB database partition number Do you want to make all these DB database partitions highly available? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 0. Should the cluster node data0 be designated as an idle node for DB database partition 0? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 0? [] For all partitions we choose stdby0 as the idle node Should the cluster node data03 be designated as an idle node for DB database partition 0? [] Should the cluster node data0 be designated as an idle node for DB database partition 0? [] Adding DB database partition 0 to the cluster... Adding DB database partition 0 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 0? [] For details on virtual IP, refer to the appendix M+N failover policy was chosen. You will need to specify the idle nodes for database partition. Should the cluster node admin0 be designated as an idle node for DB database partition? [] Should the cluster node stdby0 be designated as an idle node for DB database partition? [] Should the cluster node data03 be designated as an idle node for DB database partition? []

20 Should the cluster node data0 be designated as an idle node for DB database partition? [] Adding DB database partition to the cluster... Adding DB database partition to the cluster was successful. Do you want to configure a virtual IP address for the DB partition:? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition. Should the cluster node admin0 be designated as an idle node for DB database partition? [] Should the cluster node stdby0 be designated as an idle node for DB database partition? [] Should the cluster node data03 be designated as an idle node for DB database partition? [] Should the cluster node data0 be designated as an idle node for DB database partition? [] Adding DB database partition to the cluster... Adding DB database partition to the cluster was successful. Do you want to configure a virtual IP address for the DB partition:? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 3. Should the cluster node admin0 be designated as an idle node for DB database partition 3? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 3? [] Should the cluster node data03 be designated as an idle node for DB

21 database partition 3? [] Should the cluster node data0 be designated as an idle node for DB database partition 3? [] Adding DB database partition 3 to the cluster... Adding DB database partition 3 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 3? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 4. Should the cluster node admin0 be designated as an idle node for DB database partition 4? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 4? [] Should the cluster node data03 be designated as an idle node for DB database partition 4? [] Should the cluster node data0 be designated as an idle node for DB database partition 4? [] Adding DB database partition 4 to the cluster... Adding DB database partition 4 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 4? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 5. Should the cluster node data0 be designated as an idle node for DB database partition 5? [] Should the cluster node admin0 be designated as an idle node for DB database partition 5? []

22 Should the cluster node stdby0 be designated as an idle node for DB database partition 5? [] Should the cluster node data03 be designated as an idle node for DB database partition 5? [] Adding DB database partition 5 to the cluster... Adding DB database partition 5 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 5? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 6. Should the cluster node data0 be designated as an idle node for DB database partition 6? [] Should the cluster node admin0 be designated as an idle node for DB database partition 6? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 6? [] Should the cluster node data03 be designated as an idle node for DB database partition 6? [] Adding DB database partition 6 to the cluster... Adding DB database partition 6 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 6? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 7. Should the cluster node data0 be designated as an idle node for DB database partition 7? [] Should the cluster node admin0 be designated as an idle node for DB database partition 7? []

23 Should the cluster node stdby0 be designated as an idle node for DB database partition 7? [] Should the cluster node data03 be designated as an idle node for DB database partition 7? [] Adding DB database partition 7 to the cluster... Adding DB database partition 7 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 7? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 8. Should the cluster node data0 be designated as an idle node for DB database partition 8? [] Should the cluster node admin0 be designated as an idle node for DB database partition 8? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 8? [] Should the cluster node data03 be designated as an idle node for DB database partition 8? [] Adding DB database partition 8 to the cluster... Adding DB database partition 8 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 8? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 9. Should the cluster node data0 be designated as an idle node for DB database partition 9? [] Should the cluster node admin0 be designated as an idle node for DB database partition 9? []

24 Should the cluster node stdby0 be designated as an idle node for DB database partition 9? [] Should the cluster node data0 be designated as an idle node for DB database partition 9? [] Adding DB database partition 9 to the cluster... Adding DB database partition 9 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 9? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition 0. Should the cluster node data0 be designated as an idle node for DB database partition 0? [] Should the cluster node admin0 be designated as an idle node for DB database partition 0? [] Should the cluster node stdby0 be designated as an idle node for DB database partition 0? [] Should the cluster node data0 be designated as an idle node for DB database partition 0? [] Adding DB database partition 0 to the cluster... Adding DB database partition 0 to the cluster was successful. Do you want to configure a virtual IP address for the DB partition: 0? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition. Should the cluster node data0 be designated as an idle node for DB database partition? [] Should the cluster node admin0 be designated as an idle node for DB database partition? []

25 Should the cluster node stdby0 be designated as an idle node for DB database partition? [] Should the cluster node data0 be designated as an idle node for DB database partition? [] Adding DB database partition to the cluster... Adding DB database partition to the cluster was successful. Do you want to configure a virtual IP address for the DB partition:? [] M+N failover policy was chosen. You will need to specify the idle nodes for database partition. Should the cluster node data0 be designated as an idle node for DB database partition? [] Should the cluster node admin0 be designated as an idle node for DB database partition? [] Should the cluster node stdby0 be designated as an idle node for DB database partition? [] Should the cluster node data0 be designated as an idle node for DB database partition? [] Adding DB database partition to the cluster... Adding DB database partition to the cluster was successful. Do you want to configure a virtual IP address for the DB partition:? [] The following databases can be made highly available: Database: BCUDB Do you want to make all active databases highly available? [] Adding database BCUDB to the cluster domain... Adding database BCUDB to the cluster domain was successful.

26 All cluster configurations have been completed successfully. dbhaicu exiting Check status of the cluster: Once dbhaicu exists, you can use the lssam command to check the status of the cluster. The details on how to interpret output would be covered in the Cluster monitoring section below. For now, just check that it shows Online for all instance partitions and storage mount points on respective nodes and Offline for all instance partitions and storage mount points on standby as illustrated below: lssam Online IBM.ResourceGroup:db_bculinux_0-rg Nominal=Online - Online IBM.Application:db_bculinux_0-rs - Online IBM.Application:db_bculinux_0-rs:admin0 '- Offline IBM.Application:db_bculinux_0-rs:stdby0 - Online IBM.Application:dbmnt-dbfs_bculinux_NODE0000-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0000-rs:admin0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE0000-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_0-rg Nominal=Online - Online IBM.Application:db_bculinux_0-rs - Online IBM.Application:db_bculinux_0-rs:data03 '- Offline IBM.Application:db_bculinux_0-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE000-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE000-rs:data03 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE000-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_-rg Nominal=Online - Online IBM.Application:db_bculinux_-rs - Online IBM.Application:db_bculinux_-rs:data03 '- Offline IBM.Application:db_bculinux_-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE00-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE00-rs:data03 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE00-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_-rg Nominal=Online - Online IBM.Application:db_bculinux_-rs - Online IBM.Application:db_bculinux_-rs:data03 '- Offline IBM.Application:db_bculinux_-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE00-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE00-rs:data03 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE00-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_-rg Nominal=Online - Online IBM.Application:db_bculinux_-rs - Online IBM.Application:db_bculinux_-rs:data0 '- Offline IBM.Application:db_bculinux_-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE000-rs

27 - Online IBM.Application:dbmntdbfs_bculinux_NODE000-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE000-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_-rg Nominal=Online - Online IBM.Application:db_bculinux_-rs - Online IBM.Application:db_bculinux_-rs:data0 '- Offline IBM.Application:db_bculinux_-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE000-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE000-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE000-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_3-rg Nominal=Online - Online IBM.Application:db_bculinux_3-rs - Online IBM.Application:db_bculinux_3-rs:data0 '- Offline IBM.Application:db_bculinux_3-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0003-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0003-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE0003-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_4-rg Nominal=Online - Online IBM.Application:db_bculinux_4-rs - Online IBM.Application:db_bculinux_4-rs:data0 '- Offline IBM.Application:db_bculinux_4-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0004-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0004-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE0004-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_5-rg Nominal=Online - Online IBM.Application:db_bculinux_5-rs - Online IBM.Application:db_bculinux_5-rs:data0 '- Offline IBM.Application:db_bculinux_5-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0005-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0005-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE0005-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_6-rg Nominal=Online - Online IBM.Application:db_bculinux_6-rs - Online IBM.Application:db_bculinux_6-rs:data0 '- Offline IBM.Application:db_bculinux_6-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0006-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0006-rs:data0 '- Offline IBM.Application:dbmntdbfs_bculinux_NODE0006-rs:stdby0 Online IBM.ResourceGroup:db_bculinux_7-rg Nominal=Online - Online IBM.Application:db_bculinux_7-rs - Online IBM.Application:db_bculinux_7-rs:data0 '- Offline IBM.Application:db_bculinux_7-rs:stdby0 '- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0007-rs - Online IBM.Application:dbmntdbfs_bculinux_NODE0007-rs:data0 '- Offline IBM.Application:dbmnt-

dbfs_bculinux_node0007-rs:stdby0
Online IBM.ResourceGroup:db_bculinux_8-rg Nominal=Online
- Online IBM.Application:db_bculinux_8-rs
- Online IBM.Application:db_bculinux_8-rs:data0
'- Offline IBM.Application:db_bculinux_8-rs:stdby0
'- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0008-rs
- Online IBM.Application:dbmntdbfs_bculinux_NODE0008-rs:data0
'- Offline IBM.Application:dbmntdbfs_bculinux_NODE0008-rs:stdby0
Online IBM.ResourceGroup:db_bculinux_9-rg Nominal=Online
- Online IBM.Application:db_bculinux_9-rs
- Online IBM.Application:db_bculinux_9-rs:data03
'- Offline IBM.Application:db_bculinux_9-rs:stdby0
'- Online IBM.Application:dbmnt-dbfs_bculinux_NODE0009-rs
- Online IBM.Application:dbmntdbfs_bculinux_NODE0009-rs:data03
'- Offline IBM.Application:dbmntdbfs_bculinux_NODE0009-rs:stdby0

5. Setting the instance name in profiles.reg on standby. On the standby node, verify that the registry file profiles.reg in the DB2 installation directory (/opt/ibm/dwe/db2/<version>) contains the name of the DB2 instance (bculinux). If necessary, add the instance name to this file.

6. Taking resource groups offline and online. Once the instance has been made highly available, you can use the chrg -o <state> command to take the resource groups online and offline. For example, to take all the resource groups offline you can put the following commands in a file and run it as a script:

chrg -o Offline db2_bculinux_0-rg
chrg -o Offline db2_bculinux_1-rg
chrg -o Offline db2_bculinux_2-rg
chrg -o Offline db2_bculinux_3-rg
chrg -o Offline db2_bculinux_4-rg
chrg -o Offline db2_bculinux_5-rg
chrg -o Offline db2_bculinux_6-rg
chrg -o Offline db2_bculinux_7-rg
chrg -o Offline db2_bculinux_8-rg
chrg -o Offline db2_bculinux_9-rg
chrg -o Offline db2_bculinux_10-rg
chrg -o Offline db2_bculinux_11-rg
chrg -o Offline db2_bculinux_12-rg
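Instead of listing each resource group by hand, the same state change can be scripted with a loop. This is a minimal sketch, run as root, assuming the resource groups follow the db2_bculinux_<partition>-rg naming used for this instance:

for x in $(seq 0 12); do
    chrg -o Offline db2_bculinux_${x}-rg    # use "-o Online" for the online script
done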

Similarly, the online script for this environment would contain:

chrg -o Online db2_bculinux_0-rg
chrg -o Online db2_bculinux_1-rg
chrg -o Online db2_bculinux_2-rg
chrg -o Online db2_bculinux_3-rg
chrg -o Online db2_bculinux_4-rg
chrg -o Online db2_bculinux_5-rg
chrg -o Online db2_bculinux_6-rg
chrg -o Online db2_bculinux_7-rg
chrg -o Online db2_bculinux_8-rg
chrg -o Online db2_bculinux_9-rg
chrg -o Online db2_bculinux_10-rg
chrg -o Online db2_bculinux_11-rg
chrg -o Online db2_bculinux_12-rg

Although these commands return immediately, it takes some time for the resource groups to be brought online or offline. You can use the lssam command to monitor the status of the resource groups. You can also check the HA domain and the nodes that are part of it by using the lsrpdomain and lsrpnode commands, as shown below:

lsrpdomain
Name       OpState  RSCTActiveVersion  MixedVersions  TSPort  GSPort
ha_domain  Online   2.5.x.x            No             12347   12348

lsrpnode
Name    OpState  RSCTVersion
admin0  Online   2.5.x.x
stdby0  Online   2.5.x.x
data01  Online   2.5.x.x
data02  Online   2.5.x.x
data03  Online   2.5.x.x

5.2 Appendix for db2haicu

Network quorum device: A network quorum device is an IP address to which every cluster domain node can connect (ping) at all times. In the current implementation, the FCM network gateway IP is used, on the assumption that as long as the FCM network segment is up and running, the gateway will always be pingable. No special software needs to be installed on the quorum device; it simply has to be reachable (pingable) from all the nodes at all times.
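Before settling on a quorum address, it is worth confirming that every node in the domain can actually reach it. This is a minimal sketch, assuming root ssh access between the nodes and a hypothetical gateway address of 172.16.10.1 (substitute the real FCM gateway/quorum IP):

for node in admin0 data01 data02 data03 stdby0; do
    ssh $node "ping -c 2 172.16.10.1 > /dev/null" \
        && echo "$node: quorum device reachable" \
        || echo "$node: quorum device NOT reachable"
done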

Public vs. private network: If the networks that you are trying to make highly available are private networks (internal to the warehouse setup), like the FCM or the cluster management network, you can choose to create a private network equivalency (e.g. db2_private_network_0). For public networks, i.e. the networks that external applications use to connect to the setup, like the corporate network, you can choose to create a public network equivalency (e.g. db2_public_network_0).

Virtual IP address: This is the highly available IP address that external clients/applications use to connect to the database. Hence, this address should be configured on the subnet that is exposed to the external clients/applications, and only for database partition 0 on the administration BCU. As in the current implementation, if db2haicu is run before the system is put on the corporate network, configuration of the virtual IP is not required at this stage; once the system is on the corporate network, this additional configuration can be done by running db2haicu again. This is covered in the Post Configuration steps section.

6. Configuring the NFS Server for HA
As mentioned earlier, the admin node in a D5100 Balanced Warehouse acts as an NFS server for all the other nodes (including itself). We will now see how to make this NFS server highly available, to ensure that even if the admin node goes down, the NFS server keeps running on the standby node. Recall that the two directories that are NFS shared across all nodes are /shared_dbhome and /shared_home.

Procedure

1. The NFS server is already running on the admin node, so before re-configuring it for high availability, take it offline using the following sequence of steps:
a. Take the DB2 instance offline.
b. Un-mount all of the NFS client mounts (/dbhome and /home) on all nodes.
c. Take the NFS server offline.

2. Obtain an unused IP address in the same subnet (on the FCM network) as the admin node; it will be used by the HA NFS server as its virtual IP address. In the current implementation, this address serves as the NFS server virtual IP.

3. Since TSA starts the services required to run the NFS server automatically, these services must be turned off on the admin and standby nodes. In the event of a failure on the admin node, TSA automatically starts the NFS services on the standby node to avoid downtime. If the NFS services were to start automatically at boot time, the failed node (admin) would attempt to start another NFS server after it is restored, even though the NFS server is already running on the standby node. To prevent this, ensure that the NFS server does not start automatically at boot time on either the admin or the standby node by disabling the corresponding boot-time services, as sketched below.
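A minimal sketch of disabling the NFS server at boot, run as root on both admin0 and stdby0. The init-script name (nfsserver) is the usual SLES name but is an assumption here; list the NFS-related services on your system first and disable each one that is enabled:

chkconfig --list | grep -i nfs    # see which NFS-related services are enabled at boot
chkconfig nfsserver off           # prevent the NFS server from starting at boot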

4. There is a small file system on the lvnfsvarlibnfs logical volume that is created on the admin node during the initial setup phase of the D5100 Balanced Warehouse. On the admin node, mount this file system on /varlibnfs, copy all the files from /var/lib/nfs into it, and then un-mount it again.

5. On the standby node, create the /varlibnfs directory first. Then, on both the admin and standby nodes, back up the original /var/lib/nfs directory and create a link to the /varlibnfs mount point, for example:

mv /var/lib/nfs /var/lib/nfs.original
ln -s /varlibnfs /var/lib/nfs

6. Verify that these conditions are still true on the admin and standby nodes:
On both servers, the shared_home file system exists on /dev/vgnfs/lvnfshome and the shared_dbhome file system exists on /dev/vgnfs/lvnfsdbhome.
On both servers, the /shared_home and /shared_dbhome mount points exist.
On both servers, the /etc/exports file includes the entries for the shared home directories (/shared_dbhome and /shared_home, as described in section 4.1).
On both servers, the /etc/fstab file includes the ext3 entries for the /shared_home and /shared_dbhome file systems.
On both servers, the /etc/fstab file also includes an entry for the /varlibnfs file system (on the lvnfsvarlibnfs logical volume).
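A quick way to spot-check the conditions in step 6; a sketch to run as root on both admin0 and stdby0, assuming the logical volume names quoted above:

ls -d /shared_home /shared_dbhome /varlibnfs                 # mount points exist
grep shared_ /etc/exports                                    # export entries are present
grep -E 'lvnfshome|lvnfsdbhome|lvnfsvarlibnfs' /etc/fstab    # ext3 entries are present
ls -ld /var/lib/nfs                                          # should be a symbolic link to /varlibnfs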

7. On all the NFS clients (all nodes), modify the /etc/fstab entries for the shared directories to use the HA-NFS service IP address. We initially added entries of the form:

admin0:/shared_dbhome  /dbhome  nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0
admin0:/shared_home    /home    nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0

Modify these entries on all nodes so that they reference the NFS server virtual IP instead, for example:

<HA-NFS virtual IP>:/shared_dbhome  /dbhome  nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0
<HA-NFS virtual IP>:/shared_home    /home    nfs  rw,hard,bg,intr,suid,tcp,nfsvers=3,timeo=600,nolock  0 0

8. In the directory /usr/sbin/rsct/sapolicies/nfsserver on the admin node, edit the sa-nfsserver.conf file, changing the following lines.

In the nodes field, add the host names of both servers:

# --list of nodes in the NFS server cluster
nodes="admin0 stdby0"

Change the IP address to the virtual IP address and netmask of the NFS server chosen before:

# --IP address and netmask for NFS server
ip_1="<HA-NFS virtual IP>,<netmask>"

Add the network interface name used by the NFS server and the host name of each server:

# --List of network interfaces ServiceIP ip_x depends on.
# Entries are lists of the form <network-interface-name>:<node-name>,...
nieq_1="bond0:admin0,bond0:stdby0"

Add the mount points for the varlibnfs, shared_home and shared_dbhome file systems:

# --common local mountpoint for shared data
# If more instances of <data_>, add more rows, like: data_tmp, data_proj...

# Note: the keywords need to be unique!
data_varlibnfs="/varlibnfs"
data_work="/shared_dbhome"
data_home="/shared_home"

This configuration file must be identical on the admin node and the standby node. Therefore, copy the file over to the standby node:

# scp sa-nfsserver.conf stdby0mgt:/usr/sbin/rsct/sapolicies/nfsserver

9. The sam.policies package comes with two versions of the nfsserver control script. On both the admin node and the standby node, make the DB2 version of the script the active version using the following commands:

# mv /usr/sbin/rsct/sapolicies/nfsserver/nfsserverctrl-server \
/usr/sbin/rsct/sapolicies/nfsserver/nfsserverctrl-server.original
# cp /usr/sbin/rsct/sapolicies/nfsserver/nfsserverctrl-server.db2 \
/usr/sbin/rsct/sapolicies/nfsserver/nfsserverctrl-server

10. On one of the servers, change to the /usr/sbin/rsct/sapolicies/nfsserver directory and then run the automatic configuration script to create the highly available NFS resources:

# cd /usr/sbin/rsct/sapolicies/nfsserver
# /usr/sbin/rsct/sapolicies/nfsserver/cfgnfsserver -p

11. Bring up the highly available NFS server:

$ chrg -o Online SA-nfsserver-rg

Although this command returns immediately, it takes some time for the NFS server to come online. Verify the status of the resource groups by issuing the lssam command. After the resource groups have been brought online, your output for SA-nfsserver-rg should look similar to the following:

Online IBM.ResourceGroup:SA-nfsserver-rg Nominal=Online
    |- Online IBM.Application:SA-nfsserver-data-home
        |- Online IBM.Application:SA-nfsserver-data-home:admin0
        '- Offline IBM.Application:SA-nfsserver-data-home:stdby0
    |- Online IBM.Application:SA-nfsserver-data-varlibnfs
        |- Online IBM.Application:SA-nfsserver-data-varlibnfs:admin0
        '- Offline IBM.Application:SA-nfsserver-data-varlibnfs:stdby0

    |- Online IBM.Application:SA-nfsserver-data-work
        |- Online IBM.Application:SA-nfsserver-data-work:admin0
        '- Offline IBM.Application:SA-nfsserver-data-work:stdby0
    |- Online IBM.Application:SA-nfsserver-server
        |- Online IBM.Application:SA-nfsserver-server:admin0
        '- Offline IBM.Application:SA-nfsserver-server:stdby0
    '- Online IBM.ServiceIP:SA-nfsserver-ip-1
        |- Online IBM.ServiceIP:SA-nfsserver-ip-1:admin0
        '- Offline IBM.ServiceIP:SA-nfsserver-ip-1:stdby0

12. Manually mount the client NFS mount points on all servers. Verify that the /home and /dbhome directories are mounted on both the admin node and the standby node, and that the /home and /dbhome directories are readable and writable by each server.

13. To verify the configuration, use the following command to move the location of the NFS server from admin0 to stdby0:

rgreq -o move SA-nfsserver-rg

Verify that this command executes successfully. Issue the lssam command and verify that the NFS server resources are offline on admin0 and online on stdby0. Then issue the same command again to move the NFS server from stdby0 back to admin0.

14. Create dependencies between the DB2 partitions and the NFS server by issuing the following commands from a script as the root user:

# for the DB2 resources: create one relationship per data partition 'x'
for x in $(seq 1 12); do
    mkrel -S IBM.Application:db2_bculinux_${x}-rs \
          -G IBM.Application:SA-nfsserver-server \
          -p DependsOnAny db2_bculinux_${x}-rs_dependson_sa-nfsserver-server-rel
done

15. Bring the DB2 instance back online and verify that all resources can start.

7. Post Configuration steps (at the customer site)
Once the setup has been delivered to the customer site and put on the corporate network, some additional steps must be carried out to create a public equivalency for the corporate network and make it highly available. You need to find an unused IP address on the corporate network to be used as a virtual IP (the highly available IP address that external clients/applications will use to connect to the database). This additional setup does not disturb the initial configuration; you simply run db2haicu again as the instance owner and create the new equivalencies.

Procedure

35 . Run the dbhaicu tool as the instance owner and select option. Add or remove a network interface. Do you want to add or remove network interface cards to or from a network? []. Add. Remove Enter the name of the network interface card: eth Enter the logical port name of the corporate network of admin0 Enter the host name of the cluster node which hosts the network interface card eth: admin0 Enter the name of the network for the network interface card: eth on cluster node: admin0. SA-nfsserver-nieq-. db_private_network_0 3. Create a new public network for this network interface card. 4. Create a new private network for this network interface card. Enter selection: 3 We create a public network equivalency for the corporate network Are you sure you want to add the network interface card eth on cluster node admin0 to the network db_public_network_0? [] Adding network interface card eth on cluster node admin0 to the network db_public_network_0... Adding network interface card eth on cluster node admin0 to the network db_public_network_0 was successful. Do you want to add another network interface card to a network? [] Enter the name of the network interface card: eth Enter the logical port name of the corporate n/w of stdby0 Enter the host name of the cluster node which hosts the network interface card eth: stdby0 Enter the name of the network for the network interface card: eth on cluster node: stdby0. db_public_network_0. SA-nfsserver-nieq- 3. db_private_network_0 4. Create a new public network for this network interface card. 5. Create a new private network for this network interface card. Enter selection: Are you sure you want to add the network interface card eth on cluster node stdby0 to the network db_public_network_0? [] Adding network interface card eth on cluster node stdby0 to the

36 network db_public_network_0... Adding network interface card eth on cluster node stdby0 to the network db_public_network_0 was successful. We now need to configure the virtual IP which will be used as the highly available IP by the external applications and clients. This would be configured only on the database partition 0 (admin node). Find an unused IP address on the corporate network and run dbhaicu as the instance owner. Select option 6. Add or remove an IP address. Do you want to add or remove IP addresses to or from the cluster? []. Add. Remove Which DB database partition do you want this IP address to be associated with? 0 Enter the virtual IP address: Enter the subnet mask for the virtual IP address : [ ] Select the network for the virtual IP :. db_public_network_0. SA-nfsserver-nieq- 3. db_private_network_0 Enter selection: Adding virtual IP address to the domain... Adding virtual IP address to the domain was successful. Do you want to add another virtual IP address? [] 3. Create dependencies between database partition 0 and the corporate network equivalency created before. Take the DB resources offline and then run the following command as the root user to create the dependency: mkrel -S IBM.Application:db_bculinux_0-rs -G IBM.Equivalency:db_public_network_0 -p DependsOn db_bculinux_0-rs_dependson_db_public_network_0-rel 4. Create the network quorum device for the corporate network. Run the dbhaicu tool and select option 0. Create a new quorum device for the domain. Specify the gateway IP of the corporate network (in this case )
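Once these post-configuration steps are complete, two quick checks confirm that the corporate-network pieces are in place; a sketch, with the virtual IP left as a placeholder for the address chosen above:

lssam | grep -i serviceip            # the new IBM.ServiceIP resource should show Online on the admin node
ping -c 2 <corporate virtual IP>     # run from a client machine on the corporate network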

This is how the network would look after the above HA configuration:

8. Cluster monitoring

This section describes how to interpret the output of the lssam command that we used earlier and how to monitor the cluster once it is configured. We will also discuss how the lssam output indicates that a data node or the admin node has successfully failed over to the standby node. To explain how to interpret the lssam output, let's take a snippet of the output we got once our HA configuration was done.

Online IBM.ResourceGroup:db2_bculinux_9-rg Nominal=Online
        |- Online IBM.Application:db2_bculinux_9-rs
                |- Online IBM.Application:db2_bculinux_9-rs:data03
                '- Offline IBM.Application:db2_bculinux_9-rs:stdby0
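For comparison, if this database partition were failed over from data03 to the standby node, the same snippet would be expected to show the member resource online on stdby0 and offline on data03, along the lines of the following sketch (not captured output):

Online IBM.ResourceGroup:db2_bculinux_9-rg Nominal=Online
        |- Online IBM.Application:db2_bculinux_9-rs
                |- Offline IBM.Application:db2_bculinux_9-rs:data03
                '- Online IBM.Application:db2_bculinux_9-rs:stdby0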

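For routine monitoring once the cluster is in production, the standard RSCT and SA MP commands are usually sufficient; the -top option of lssam, where supported by your SA MP level, refreshes the view periodically:

   lsrpdomain      # peer domain name and whether it is online
   lsrpnode        # state of each cluster node
   lssam           # state of all resource groups, resources, and equivalencies
   lssam -top      # continuously refreshed view (if supported)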