SUSE Linux Enterprise High Availability Extension


SUSE Linux Enterprise High Availability Extension 11 SP2
High Availability Guide
March 29, 2012

High Availability Guide
List of Authors: Tanja Roth, Thomas Schraitle

Copyright Novell, Inc. and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For Novell trademarks, see the Novell Trademark and Service Mark list. All other third party trademarks are the property of their respective owners. A trademark symbol (®, ™, etc.) denotes a Novell trademark; an asterisk (*) denotes a third party trademark.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither Novell, Inc., SUSE LINUX Products GmbH, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.

Contents

About This Guide

Part I  Installation and Setup
  1  Product Overview
  2  System Requirements
  3  Installation and Basic Setup

Part II  Configuration and Administration
  4  Configuration and Administration Basics
  5  Configuring and Managing Cluster Resources (GUI)
  6  Configuring and Managing Cluster Resources (Web Interface)
  7  Configuring and Managing Cluster Resources (Command Line)
  8  Adding or Modifying Resource Agents
  9  Fencing and STONITH
  10 Access Control Lists
  11 Network Device Bonding
  12 Load Balancing with Linux Virtual Server
  13 Multi-Site Clusters (Geo Clusters)

Part III  Storage and Data Replication
  14 OCFS2
  15 Distributed Replicated Block Device (DRBD)
  16 Cluster Logical Volume Manager (clvm)
  17 Storage Protection
  18 Samba Clustering
  19 Disaster Recovery with ReaR

Part IV  Troubleshooting and Reference
  20 Troubleshooting
  21 HA OCF Agents

Part V  Appendix
  A  Example of Setting Up a Simple Testing Resource
  B  Example Configuration for OCFS2 and clvm
  C  Cluster Management Tools
  D  Upgrading Your Cluster to the Latest Product Version
  E  What's New?
  Terminology
  F  GNU Licenses


About This Guide

SUSE Linux Enterprise High Availability Extension is an integrated suite of open source clustering technologies that enables you to implement highly available physical and virtual Linux clusters. For quick and efficient configuration and administration, the High Availability Extension includes both a graphical user interface (GUI) and a command line interface (CLI). Additionally, it comes with the HA Web Konsole (Hawk), which also allows you to administer your Linux cluster via a Web interface.

This guide is intended for administrators who need to set up, configure, and maintain High Availability (HA) clusters. Both approaches (GUI and CLI) are covered in detail to help administrators choose the appropriate tool that matches their needs for performing the key tasks.

This guide is divided into the following parts:

Installation and Setup
Before starting to install and configure your cluster, make yourself familiar with cluster fundamentals and architecture and get an overview of the key features and benefits. Learn which hardware and software requirements must be met and what preparations to take before executing the next steps. Perform the installation and basic setup of your HA cluster using YaST.

Configuration and Administration
Add, configure and manage cluster resources, using either the graphical user interface (Pacemaker GUI), the Web interface (HA Web Konsole), or the command line interface (crm shell). To avoid unauthorized access to the cluster configuration, define roles and assign them to certain users for fine-grained control. Learn how to make use of load balancing and fencing, or how to set up a multi-site cluster. If you consider writing your own resource agents or modifying existing ones, get some background information on how to create different types of resource agents.

Storage and Data Replication
SUSE Linux Enterprise High Availability Extension ships with a cluster-aware file system and volume manager: Oracle Cluster File System (OCFS2) and the clustered Logical Volume Manager (clvm). For replication of your data, use DRBD* (Distributed Replicated Block Device) to mirror the data of a High Availability service from the active node of a cluster to its standby node. Furthermore, a clustered Samba server also provides a High Availability solution for heterogeneous environments.

Troubleshooting and Reference
Managing your own cluster requires you to perform a certain amount of troubleshooting. Learn about the most common problems and how to fix them. Find a comprehensive reference of the OCF agents shipped with this product.

Appendix
Lists the new features and behavior changes of the High Availability Extension since the last release. Learn how to migrate your cluster to the most recent release version and find an example of setting up a simple testing resource.

Many chapters in this manual contain links to additional documentation resources. These include additional documentation that is available on the system as well as documentation available on the Internet. For an overview of the documentation available for your product and the latest documentation updates, refer to

1 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests
For services and support options available for your product, refer to To report bugs for a product component, log into the Novell Customer Center from support.novell.com/ and select My Support > Service Request.

User Comments
We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to and enter your comments there.

Mail
For feedback on the documentation of this product, you can also send a mail to doc-team@suse.de. Make sure to include the document title, the product version, and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

2 Documentation Conventions

The following typographical conventions are used in this manual:

/etc/passwd: directory names and filenames

placeholder: replace placeholder with the actual value

PATH: the environment variable PATH

ls, --help: commands, options, and parameters

user: users or groups

Alt, Alt + F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

File, File > Save As: menu items, buttons

amd64 em64t: This paragraph is only relevant for the architectures amd64, em64t, and ipf. The arrows mark the beginning and the end of the text block.

Dancing Penguins (Chapter Penguins, Another Manual): This is a reference to a chapter in another manual.

3 About the Making of This Manual

This book is written in Novdoc, a subset of DocBook (see The XML source files were validated by xmllint, processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through XEP from RenderX. The project's home page can be found at

Part I. Installation and Setup


1 Product Overview

SUSE Linux Enterprise High Availability Extension is an integrated suite of open source clustering technologies that enables you to implement highly available physical and virtual Linux clusters, and to eliminate single points of failure. It ensures the high availability and manageability of critical network resources including data, applications, and services. Thus, it helps you maintain business continuity, protect data integrity, and reduce unplanned downtime for your mission-critical Linux workloads.

It ships with essential monitoring, messaging, and cluster resource management functionality (supporting failover, failback, and migration (load balancing) of individually managed cluster resources). The High Availability Extension is available as add-on to SUSE Linux Enterprise Server 11 SP2.

This chapter introduces the main product features and benefits of the High Availability Extension. Inside you will find several example clusters and learn about the components making up a cluster. The last section provides an overview of the architecture, describing the individual architecture layers and processes within the cluster. For explanations of some common terms used in the context of High Availability clusters, refer to Terminology (page 455).

1.1 Key Features

SUSE Linux Enterprise High Availability Extension helps you ensure and manage the availability of your network resources. The following sections highlight some of the key features:

1.1.1 Wide Range of Clustering Scenarios

The High Availability Extension supports the following scenarios:

Active/active configurations

Active/passive configurations: N+1, N+M, N to 1, N to M

Hybrid physical and virtual clusters, allowing virtual servers to be clustered with physical servers. This improves service availability and resource utilization.

Local clusters

Metro clusters ("stretched" local clusters)

Multi-site clusters (geographically dispersed clusters)

Your cluster can contain up to 32 Linux servers. Any server in the cluster can restart resources (applications, services, IP addresses, and file systems) from a failed server in the cluster.

1.1.2 Flexibility

The High Availability Extension ships with Corosync/OpenAIS messaging and membership layer and Pacemaker Cluster Resource Manager. Using Pacemaker, administrators can continually monitor the health and status of their resources, manage dependencies, and automatically stop and start services based on highly configurable rules and policies. The High Availability Extension allows you to tailor a cluster to the specific applications and hardware infrastructure that fit your organization. Time-dependent configuration enables services to automatically migrate back to repaired nodes at specified times.

1.1.3 Storage and Data Replication

With the High Availability Extension you can dynamically assign and reassign server storage as needed. It supports Fibre Channel or iSCSI storage area networks (SANs). Shared disk systems are also supported, but they are not a requirement. SUSE Linux Enterprise High Availability Extension also comes with a cluster-aware file system and volume manager: Oracle Cluster File System (OCFS2) and the clustered Logical Volume Manager (clvm). For replication of your data, you can use DRBD* (Distributed Replicated Block Device) to mirror the data of a High Availability service from the active node of a cluster to its standby node. Furthermore, SUSE Linux Enterprise High Availability Extension also supports CTDB (Clustered Trivial Database), a technology for Samba clustering.

1.1.4 Support for Virtualized Environments

SUSE Linux Enterprise High Availability Extension supports the mixed clustering of both physical and virtual Linux servers. SUSE Linux Enterprise Server 11 SP2 ships with Xen, an open source virtualization hypervisor, and with KVM (Kernel-based Virtual Machine), a virtualization software for Linux which is based on hardware virtualization extensions. The cluster resource manager in the High Availability Extension can recognize, monitor, and manage services running within virtual servers, as well as services running in physical servers. Guest systems can be managed as services by the cluster.

1.1.5 Support of Local, Metro, and Multi-Site Clusters

SUSE Linux Enterprise High Availability Extension has been extended to support different geographical scenarios. Support for multi-site clusters is available as a separate option to SUSE Linux Enterprise High Availability Extension.

Local Clusters
A single cluster in one location (for example, all nodes are located in one data center). The cluster uses multicast or unicast for communication between the nodes and manages failover internally. Network latency can be neglected. Storage is typically accessed synchronously by all nodes.

Metro Clusters
A single cluster that can stretch over multiple buildings or data centers, with all sites connected by Fibre Channel. The cluster uses multicast or unicast for communication between the nodes and manages failover internally. Network latency is usually low (<5 ms for distances of approximately 20 miles). Storage is frequently replicated (mirroring or synchronous replication).

18 Multi-Site Clusters (Geo Clusters) (page 219) Multiple, geographically dispersed sites with a local cluster each. The sites communicate via IP. Failover across the sites is coordinated by a higher-level entity. Multi-site clusters have to cope with limited network bandwidth and high latency. Storage is replicated asynchronously. The greater the geographical distance between individual cluster nodes, the more factors may potentially disturb the high availability of services the cluster provides. Network latency, limited bandwidth and access to storage are the main challenges for long-distance clusters Resource Agents SUSE Linux Enterprise High Availability Extension includes a huge number of resource agents to manage resources such as Apache, IPv4, IPv6 and many more. It also ships with resource agents for popular third party applications such as IBM WebSphere Application Server. For a list of Open Cluster Framework (OCF) resource agents included with your product, refer to Chapter 21, HA OCF Agents (page 313) User-friendly Administration Tools The High Availability Extension ships with a set of powerful tools for basic installation and setup of your cluster as well as effective configuration and administration: YaST A graphical user interface for general system installation and administration. Use it to install the High Availability Extension on top of SUSE Linux Enterprise Server as described in Section 3.3, Installation as Add-on (page 27). YaST also provides the following modules in the High Availability category to help configure your cluster or individual components: Cluster: Basic cluster setup. For details, refer to Section 3.5, Manual Cluster Setup (YaST) (page 31). DRBD: Configuration of a Distributed Replicated Block Device. 6 High Availability Guide

IP Load Balancing: Configuration of load balancing with Linux Virtual Server. For details, refer to Chapter 12, Load Balancing with Linux Virtual Server (page 207).

Pacemaker GUI
Installable graphical user interface for easy configuration and administration of your cluster. Guides you through the creation and configuration of resources and lets you execute management tasks like starting, stopping or migrating resources. For details, refer to Chapter 5, Configuring and Managing Cluster Resources (GUI) (page 75).

HA Web Konsole (Hawk)
A Web-based user interface with which you can administer your Linux cluster from non-Linux machines. It is also an ideal solution in case your system does not provide a graphical user interface. It guides you through the creation and configuration of resources and lets you execute management tasks like starting, stopping or migrating resources. For details, refer to Chapter 6, Configuring and Managing Cluster Resources (Web Interface) (page 109).

crm Shell
A powerful unified command line interface to configure resources and execute all monitoring or administration tasks. For details, refer to Chapter 7, Configuring and Managing Cluster Resources (Command Line) (page 151).

1.2 Benefits

The High Availability Extension allows you to configure up to 32 Linux servers into a high-availability cluster (HA cluster), where resources can be dynamically switched or moved to any server in the cluster. Resources can be configured to automatically migrate in the event of a server failure, or they can be moved manually to troubleshoot hardware or balance the workload.

The High Availability Extension provides high availability from commodity components. Lower costs are obtained through the consolidation of applications and operations onto a cluster. The High Availability Extension also allows you to centrally manage the complete cluster and to adjust resources to meet changing workload requirements (thus, manually load balance the cluster). Allowing clusters of more than two nodes also provides savings by allowing several nodes to share a hot spare.

An equally important benefit is the potential reduction of unplanned service outages as well as planned outages for software and hardware maintenance and upgrades.

Reasons that you would want to implement a cluster include:

Increased availability
Improved performance
Low cost of operation
Scalability
Disaster recovery
Data protection
Server consolidation
Storage consolidation

Shared disk fault tolerance can be obtained by implementing RAID on the shared disk subsystem.

The following scenario illustrates some of the benefits the High Availability Extension can provide.

Example Cluster Scenario

Suppose you have configured a three-server cluster, with a Web server installed on each of the three servers in the cluster. Each of the servers in the cluster hosts two Web sites. All the data, graphics, and Web page content for each Web site are stored on a shared disk subsystem connected to each of the servers in the cluster. The following figure depicts how this setup might look.

Figure 1.1 Three-Server Cluster

During normal cluster operation, each server is in constant communication with the other servers in the cluster and performs periodic polling of all registered resources to detect failure.

Suppose Web Server 1 experiences hardware or software problems and the users depending on Web Server 1 for Internet access, e-mail, and information lose their connections. The following figure shows how resources are moved when Web Server 1 fails.

Figure 1.2 Three-Server Cluster after One Server Fails

Web Site A moves to Web Server 2 and Web Site B moves to Web Server 3. IP addresses and certificates also move to Web Server 2 and Web Server 3.

When you configured the cluster, you decided where the Web sites hosted on each Web server would go should a failure occur. In the previous example, you configured Web Site A to move to Web Server 2 and Web Site B to move to Web Server 3. This way, the workload once handled by Web Server 1 continues to be available and is evenly distributed between any surviving cluster members.

When Web Server 1 failed, the High Availability Extension software did the following:

Detected a failure and verified with STONITH that Web Server 1 was really dead. STONITH is an acronym for Shoot The Other Node In The Head and is a means of bringing down misbehaving nodes to prevent them from causing trouble in the cluster.

Remounted the shared data directories that were formerly mounted on Web Server 1 on Web Server 2 and Web Server 3.

Restarted applications that were running on Web Server 1 on Web Server 2 and Web Server 3.

Transferred IP addresses to Web Server 2 and Web Server 3.

In this example, the failover process happened quickly and users regained access to Web site information within seconds, and in most cases, without needing to log in again.

Now suppose the problems with Web Server 1 are resolved, and Web Server 1 is returned to a normal operating state. Web Site A and Web Site B can either automatically fail back (move back) to Web Server 1, or they can stay where they are. This depends on how you configured the resources for them. Migrating the services back to Web Server 1 will incur some downtime, so the High Availability Extension also allows you to defer the migration until a period when it will cause little or no service interruption. There are advantages and disadvantages to both alternatives.

The High Availability Extension also provides resource migration capabilities. You can move applications, Web sites, etc. to other servers in your cluster as required for system management.

For example, you could have manually moved Web Site A or Web Site B from Web Server 1 to either of the other servers in the cluster. You might want to do this to upgrade or perform scheduled maintenance on Web Server 1, or to increase the performance or accessibility of the Web sites.

1.3 Cluster Configurations: Storage

Cluster configurations with the High Availability Extension might or might not include a shared disk subsystem. The shared disk subsystem can be connected via high-speed Fibre Channel cards, cables, and switches, or it can be configured to use iSCSI. If a server fails, another designated server in the cluster automatically mounts the shared disk directories that were previously mounted on the failed server. This gives network users continuous access to the directories on the shared disk subsystem.

IMPORTANT: Shared Disk Subsystem with clvm
When using a shared disk subsystem with clvm, that subsystem must be connected to all servers in the cluster from which it needs to be accessed.

Typical resources might include data, applications, and services. The following figure shows how a typical Fibre Channel cluster configuration might look.

Figure 1.3 Typical Fibre Channel Cluster Configuration

Although Fibre Channel provides the best performance, you can also configure your cluster to use iSCSI. iSCSI is an alternative to Fibre Channel that can be used to create a low-cost Storage Area Network (SAN). The following figure shows how a typical iSCSI cluster configuration might look.

Figure 1.4 Typical iSCSI Cluster Configuration

Although most clusters include a shared disk subsystem, it is also possible to create a cluster without a shared disk subsystem. The following figure shows how a cluster without a shared disk subsystem might look.

25 Figure 1.5 Typical Cluster Configuration Without Shared Storage 1.4 Architecture This section provides a brief overview of the High Availability Extension architecture. It identifies and provides information on the architectural components, and describes how those components interoperate Architecture Layers The High Availability Extension has a layered architecture. Figure 1.6, Architecture (page 14) illustrates the different layers and their associated components. Product Overview 13

26 Figure 1.6 Architecture Messaging and Infrastructure Layer The primary or first layer is the messaging/infrastructure layer, also known as the Corosync/OpenAIS layer. This layer contains components that send out the messages containing I'm alive signals, as well as other information. The program of the High Availability Extension resides in the messaging/infrastructure layer Resource Allocation Layer The next layer is the resource allocation layer. This layer is the most complex, and consists of the following components: Cluster Resource Manager (CRM) Every action taken in the resource allocation layer passes through the Cluster Resource Manager. If other components of the resource allocation layer (or components 14 High Availability Guide

27 which are in a higher layer) need to communicate, they do so through the local CRM. On every node, the CRM maintains the Cluster Information Base (CIB). Cluster Information Base (CIB) The Cluster Information Base is an in-memory XML representation of the entire cluster configuration and current status. It contains definitions of all cluster options, nodes, resources, constraints and the relationship to each other. The CIB also synchronizes updates to all cluster nodes. There is one master CIB in the cluster, maintained by the Designated Coordinator (DC). All other nodes contain a CIB replica. Designated Coordinator (DC) One CRM in the cluster is elected as DC. The DC is the only entity in the cluster that can decide that a cluster-wide change needs to be performed, such as fencing a node or moving resources around. The DC is also the node where the master copy of the CIB is kept. All other nodes get their configuration and resource allocation information from the current DC. The DC is elected from all nodes in the cluster after a membership change. Policy Engine (PE) Whenever the Designated Coordinator needs to make a cluster-wide change (react to a new CIB), the Policy Engine calculates the next state of the cluster based on the current state and configuration. The PE also produces a transition graph containing a list of (resource) actions and dependencies to achieve the next cluster state. The PE always runs on the DC. Local Resource Manager (LRM) The LRM calls the local Resource Agents (see Section , Resource Layer (page 15)) on behalf of the CRM. It can thus perform start / stop / monitor operations and report the result to the CRM. It also hides the difference between the supported script standards for Resource Agents (OCF, LSB, Heartbeat Version 1). The LRM is the authoritative source for all resource-related information on its local node Resource Layer The highest layer is the Resource Layer. The Resource Layer includes one or more Resource Agents (RA). Resource Agents are programs (usually shell scripts) that have been written to start, stop, and monitor a certain kind of service (a resource). Resource Agents are called only by the LRM. Third parties can include their own agents in a Product Overview 15

28 defined location in the file system and thus provide out-of-the-box cluster integration for their own software Process Flow SUSE Linux Enterprise High Availability Extension uses Pacemaker as CRM. The CRM is implemented as daemon (crmd) that has an instance on each cluster node. Pacemaker centralizes all cluster decision-making by electing one of the crmd instances to act as a master. Should the elected crmd process (or the node it is on) fail, a new one is established. A CIB, reflecting the cluster s configuration and current state of all resources in the cluster is kept on each node. The contents of the CIB are automatically kept in sync across the entire cluster. Many actions performed in the cluster will cause a cluster-wide change. These actions can include things like adding or removing a cluster resource or changing resource constraints. It is important to understand what happens in the cluster when you perform such an action. For example, suppose you want to add a cluster IP address resource. To do this, you can use one of the command line tools or the GUI to modify the CIB. It is not required to perform the actions on the DC, you can use either tool on any node in the cluster and they will be relayed to the DC. The DC will then replicate the CIB change to all cluster nodes. Based on the information in the CIB, the PE then computes the ideal state of the cluster and how it should be achieved and feeds a list of instructions to the DC. The DC sends commands via the messaging/infrastructure layer which are received by the crmd peers on other nodes. Each crmd uses its LRM (implemented as lrmd) to perform resource modifications. The lrmd is not cluster-aware and interacts directly with resource agents (scripts). All peer nodes report the results of their operations back to the DC. Once the DC concludes that all necessary operations are successfully performed in the cluster, the cluster will go back to the idle state and wait for further events. If any operation was not carried out as planned, the PE is invoked again with the new information recorded in the CIB. 16 High Availability Guide
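As a minimal sketch of such a change, the following crm shell command (run on any cluster node) would add an IP address resource; the resource name and the IP address used here are placeholder values, not examples taken from this guide:

crm configure primitive myIP ocf:heartbeat:IPaddr2 \
  params ip=192.168.1.100 op monitor interval=10s

Once the command is committed, the resulting CIB update is replicated by the DC and the PE schedules the actions needed to start the resource on a suitable node, as described above.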

29 In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this Pacemaker comes with a fencing subsystem, stonithd. STONITH is an acronym for Shoot The Other Node In The Head and is usually implemented with a remote power switch. In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure. However, stonithd takes care of understanding the STONITH topology such that its clients simply request a node be fenced and it does the rest. Product Overview 17
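To illustrate how a STONITH device appears as a regular, monitorable cluster resource, the following crm shell sketch configures the SBD fencing agent. It assumes that a watchdog and a shared SBD partition have already been prepared as described in Chapter 17, Storage Protection (page 271); the device path is a placeholder you must replace:

crm configure primitive stonith-sbd stonith:external/sbd \
  params sbd_device="/dev/disk/by-id/<your-SBD-partition>" \
  op monitor interval="15s"

Like any other resource, the fencing device can then be monitored for failure, while stonithd decides when and how it is actually used.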


31 System Requirements 2 The following section informs you about system requirements and some prerequisites for SUSE Linux Enterprise High Availability Extension, including hardware, software, shared disk system, and other essentials. 2.1 Hardware Requirements The following list specifies hardware requirements for a cluster based on SUSE Linux Enterprise High Availability Extension. These requirements represent the minimum hardware configuration. Additional hardware might be necessary, depending on how you intend to use your cluster. 1 to 32 Linux servers with software as specified in Section 2.2, Software Requirements (page 20). The servers do not require identical hardware (memory, disk space, etc.), but they must have the same architecture. Cross-platform clusters are not supported. At least two TCP/IP communication media. Cluster nodes use multicast or unicast for communication so the network equipment must support the communication means you want to use. The communication media should support a data rate of 100 Mbit/s or higher. Preferably, the Ethernet channels should be bonded. Optional: A shared disk subsystem connected to all servers in the cluster from where it needs to be accessed. System Requirements 19

A STONITH mechanism. STONITH is an acronym for Shoot The Other Node In The Head. A STONITH device is a power switch which the cluster uses to reset nodes that are thought to be dead or behaving in a strange manner. Resetting non-heartbeating nodes is the only reliable way to ensure that no data corruption is performed by nodes that hang and only appear to be dead.

IMPORTANT: No Support Without STONITH
A cluster without STONITH is not supported. For more information, refer to Chapter 9, Fencing and STONITH (page 181).

2.2 Software Requirements

Ensure that the following software requirements are met:

SUSE Linux Enterprise Server 11 SP2 with all available online updates installed on all nodes that will be part of the cluster.

SUSE Linux Enterprise High Availability Extension 11 SP2 including all available online updates installed on all nodes that will be part of the cluster.

2.3 Shared Disk System Requirements

A shared disk system (Storage Area Network, or SAN) is recommended for your cluster if you want data to be highly available. If a shared disk subsystem is used, ensure the following:

The shared disk system is properly set up and functional according to the manufacturer's instructions.

The disks contained in the shared disk system should be configured to use mirroring or RAID to add fault tolerance to the shared disk system. Hardware-based RAID is recommended. Host-based software RAID is not supported for all configurations.

If you are using iSCSI for shared disk system access, ensure that you have properly configured iSCSI initiators and targets.

33 When using DRBD* to implement a mirroring RAID system that distributes data across two machines, make sure to only access the device provided by DRBD, and never the backing device. Use the same (bonded) NICs that the rest of the cluster uses to leverage the redundancy provided there. 2.4 Other Requirements Time Synchronization Cluster nodes should synchronize to an NTP server outside the cluster. For more information, see the SUSE Linux Enterprise Server Administration Guide, available at Refer to the chapter Time Synchronization with NTP. If nodes are not synchronized, log files or cluster reports are very hard to analyze. NIC Names Must be identical on all nodes. Hostname and IP Address Configure hostname resolution by editing the /etc/hosts file on each server in the cluster. To ensure that cluster communication is not slowed down or tampered with by any DNS: Use static IP addresses. List all cluster nodes in this file with their fully qualified hostname and short hostname. It is essential that members of the cluster are able to find each other by name. If the names are not available, internal cluster communication will fail. For more information, see the SUSE Linux Enterprise Server Administration Guide, available at Refer to chapter Basic Networking > Configuring Hostname and DNS. SSH All cluster nodes must be able to access each other via SSH. Tools like hb_report (for troubleshooting) and the history explorer require passwordless SSH access between the nodes, otherwise they can only collect data from the current node. System Requirements 21

34 NOTE: Regulatory Requirements If passwordless SSH access does not comply with regulatory requirements, you can use the following work-around for hb_report: Create a user that can log in without a password (for example, using public key authentication). Configure sudo for this user so it does not require a root password. Start hb_report from command line with the -u option to specify the user's name. For more information, see the hb_report man page. For the history explorer there is currently no alternative for passwordless login. 22 High Availability Guide
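A rough sketch of this work-around could look as follows; the user name and the sudoers rule are examples only, so adapt them to your policies and consult the hb_report man page for the exact options:

useradd -m hbreport              # dedicated user with public key SSH access
# grant passwordless sudo, for example via visudo:
#   hbreport ALL=(ALL) NOPASSWD: ALL
hb_report -u hbreport -f 10:00 /tmp/hb_report-example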

35 Installation and Basic Setup 3 This chapter describes how to install and set up SUSE Linux Enterprise High Availability Extension 11 SP2 from scratch. Choose between an automatic setup which allows you to have a cluster up and running within a few minutes (with the choice to adjust any options later on) or decide for a manual setup, allowing you to set your individual options right at the beginning. Refer to chapter Appendix D, Upgrading Your Cluster to the Latest Product Version (page 439) if you want to migrate an existing cluster that runs an older version of SUSE Linux Enterprise High Availability Extension or if you want to update any software packages on nodes that are part of a running cluster. 3.1 Definition of Terms Existing Cluster The term existing cluster is used to refer to any cluster which consists of at least one node. Existing clusters have a basic Corosync configuration that defines the communication channels, but they do not necessarily have resource configuration yet. Multicast A technology used for a one-to-many communication within a network that can be used for cluster communication. If multicast does not comply with your corporate IT policy, use unicast instead. Installation and Basic Setup 23

36 NOTE: Switches and Multicast If you want to use multicast for cluster communication, make sure your switches support multicast. Multicast Address (mcastaddr) IP address to use for multicasting (either IPv4 or IPv6). You can use any multicast address in your private network. Multicast Port (mcastport) The port to use for cluster communication. Unicast A technology for sending messages to a single network destination. In Corosync, unicast is implemented as UDP-unicast (UDPU). Bind Network Address (bindnetaddr) The network address to bind to. To ease sharing configuration files across the cluster, OpenAIS uses network interface netmask to mask only the address bits that are used for routing the network. NOTE: Network Address for All Nodes As the same Corosync configuration will be used on all nodes, make sure to use a network address as bindnetaddr, not the address of a specific network interface. Redundant Ring Protocol (RRP) Corosync supports the Totem Redundant Ring Protocol. It allows the use of multiple redundant local-area networks for resilience against partial or total network faults. This way, cluster communication can still be kept up as long as a single network is operational. For more information, refer to icdcs02.pdf. When having defined redundant communication channels in Corosync, use RRP to tell the cluster how to use these interfaces. RRP can have three modes (rrp_mode): if set to active, Corosync uses both interfaces actively. If set to passive, Corosync uses the second interface only if the first ring fails. If rrp_mode is set to none, RRP is disabled. 24 High Availability Guide

37 Csync2 A synchronization tool that can be used to replicate configuration files across all nodes in the cluster. Csync2 can handle any number of hosts, sorted into synchronization groups. Each synchronization group has its own list of member hosts and its include/exclude patterns that define which files should be synchronized in the synchronization group. The groups, the hostnames belonging to each group, and the include/exclude rules for each group are specified in the Csync2 configuration file, /etc/csync2/csync2.cfg. For authentication, Csync2 uses the IP addresses and pre-shared keys within a synchronization group. You need to generate one key file for each synchronization group and copy it to all group members. For more information about Csync2, refer to csync2/paper.pdf conntrack Tools To synchronize the connection status between cluster nodes, the High Availability Extension uses the conntrack-tools. They allow to interact with the in-kernel Connection Tracking System for enabling stateful packet inspection for iptables. For detailed information, refer to AutoYaST AutoYaST is a system for installing one or more SUSE Linux Enterprise systems automatically and without user intervention. On SUSE Linux Enterprise you can create an AutoYaST profile that contains installation and configuration data. The profile tells AutoYaST what to install and how to configure the installed system to get a ready-to-use system in the end. This profile can then be used for mass deployment in different ways (for example, to clone existing cluster nodes). For detailed instructions on how to use AutoYaST in various scenarios, see the SUSE Linux Enterprise 11 SP2 Deployment Guide, available at Refer to chapter Automated Installation. Installation and Basic Setup 25
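For orientation, a minimal /etc/csync2/csync2.cfg for one synchronization group might look like the following sketch. The group name, hostnames, and included files are examples only; the YaST cluster module generates a suitable configuration for you, see Section 3.5.5, Transferring the Configuration to All Nodes (page 39):

group ha_group
{
  host node1 node2;
  key /etc/csync2/key_hagroup;
  include /etc/corosync/corosync.conf;
  include /etc/corosync/authkey;
  include /etc/csync2/csync2.cfg;
}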

38 3.2 Overview The following basic steps are needed for installation and initial cluster setup. 1 Installation as Add-on (page 27): Install the software packages with YaST. Alternatively, you can install them from the command line with zypper: zypper in -t pattern sles_ha 2 Initial Cluster Setup: After installing the software on all nodes that will be part of your cluster, the following steps are needed to initially configure the cluster. 2a Defining the Communication Channels (page 32) 2b Optional: Defining Authentication Settings (page 37) 2c Configuring Services (page 38) 2d Transferring the Configuration to All Nodes (page 39). Whereas the configuration of Csync2 is done on one node only, the services Csync2 and xinetd need to be started on all nodes. 2e Optional: Synchronizing Connection Status Between Cluster Nodes (page 42) 2f Bringing the Cluster Online (page 43). The OpenAIS/Corosync service needs to be started on all nodes. The cluster setup steps can either be executed automatically (with bootstrap scripts) or manually (with the YaST cluster module or from command line). If you decide for an automatic cluster setup, refer to Section 3.4, Automatic Cluster Setup (sleha-bootstrap) (page 28). For a manual setup (or for adjusting any options after the automatic setup), refer to Section 3.5, Manual Cluster Setup (YaST) (page 31). 26 High Availability Guide

39 You can also use a combination of both setup methods, for example: set up one node with YaST cluster and then use sleha-join to integrate more nodes. Existing nodes can also be cloned for mass deployment with AutoYaST. The cloned nodes will have the same packages installed and the same system configuration. For details, refer to Section 3.6, Mass Deployment with AutoYaST (page 44). 3.3 Installation as Add-on The packages needed for configuring and managing a cluster with the High Availability Extension are included in the High Availability installation pattern. This pattern is only available after SUSE Linux Enterprise High Availability Extension has been installed as add-on to SUSE Linux Enterprise Server. For information on how to install add-on products, see the SUSE Linux Enterprise 11 SP2 Deployment Guide, available at Refer to chapter Installing Add-On Products. Procedure 3.1 Installing the High Availability Pattern 1 Start YaST as root user and select Software > Software Management. Alternatively, start the YaST package manager as root on a command line with yast2 sw_single. 2 From the Filter list, select Patterns and activate the High Availability pattern in the pattern list. 3 Click Accept to start installing the packages. NOTE The software packages needed for High Availability clusters are not automatically copied to the cluster nodes. 4 Install the High Availability pattern on all machines that will be part of your cluster. If you do not want to install SUSE Linux Enterprise Server 11 SP2 and SUSE Linux Enterprise High Availability Extension 11 SP2 manually on all nodes that Installation and Basic Setup 27

40 will be part of your cluster, use AutoYaST to clone existing nodes. For more information, refer to Section 3.6, Mass Deployment with AutoYaST (page 44). 3.4 Automatic Cluster Setup (sleha-bootstrap) The sleha-bootstrap package provides everything you need to get a one-node cluster up and running and to make other nodes join with the following basic steps: Automatically Setting Up the First Node (page 29) With sleha-init, define the basic parameters needed for cluster communication and (optionally) set up a STONITH mechanism to protect your shared storage. This leaves you with a running one-node cluster. Adding Nodes to an Existing Cluster (page 30) With sleha-join, add more nodes to your cluster. Both commands execute bootstrap scripts that require only a minimum of time and manual intervention. Any options set during the bootstrap process can be modified later with the YaST cluster module. Before starting the automatic setup, make sure that the following prerequisites are fulfilled on all nodes that will participate in the cluster: Prerequisites The requirements listed in Section 2.2, Software Requirements (page 20) and Section 2.4, Other Requirements (page 21) are fulfilled. The sleha-bootstrap package is installed. The network is configured according to your needs. For example, a private network is available for cluster communication and network device bonding is configured. For information on bonding, refer to Chapter 11, Network Device Bonding (page 203). If you want to use SBD for your shared storage, you need one shared block device for SBD. The block device need not be formatted. For more information, refer to Chapter 17, Storage Protection (page 271). 28 High Availability Guide

41 All nodes must be able to see the shared storage via the same paths (/dev/disk/ by-path/... or /dev/disk/by-id/...). Procedure 3.2 Automatically Setting Up the First Node The sleha-init command checks for configuration of NTP and guides you through configuration of the cluster communication layer (Corosync), and (optionally) through the configuration of SBD to protect your shared storage: 1 Log in as root to the physical or virtual machine you want to use as cluster node. 2 Start the bootstrap script by executing sleha-init If NTP has not been configured to start at boot time, a message appears. If you decide to continue anyway, the script will automatically generate keys for SSH access and for the Csync2 synchronization tool and start the services needed for both. 3 To configure the cluster communication layer (Corosync): 3a Enter a network address to bind to. By default, the script will propose the network address of eth0. Alternatively, enter a different network address, for example the address of bond0. 3b Enter a multicast address. The script proposes a random address that you can use as default. 3c Enter a multicast port. The script proposes 5405 as default. 4 To configure SBD (optional), enter a persistent path to the partition of your block device that you want to use for SBD. The path must be consistent across all nodes in the cluster. Finally, the script will start the OpenAIS service to bring the one-node cluster online and enable the Web management interface Hawk. The URL to use for Hawk is displayed on the screen. 5 For any details of the setup process, check /var/log/sleha-bootstrap.log. Installation and Basic Setup 29
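If you prefer the command line to the graphical tools for a first check, you can, for example, display the cluster status once with the following command; the exact output depends on your setup:

crm_mon -1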

42 You now have a running one-node cluster. If you want to check the cluster status or start managing resources, proceed by logging in to one of the user interfaces, Hawk or the Pacemaker GUI. For more information, refer to Chapter 6, Configuring and Managing Cluster Resources (Web Interface) (page 109) and Chapter 5, Configuring and Managing Cluster Resources (GUI) (page 75). IMPORTANT: Secure Password The bootstrap procedure creates a linux user named hacluster with the password linux. You need it for logging in to the Pacemaker GUI or Hawk. Replace the default password with a secure one as soon as possible: passwd hacluster Procedure 3.3 Adding Nodes to an Existing Cluster If you have a cluster up and running (with one or more nodes), add more cluster nodes with the sleha-join bootstrap script. The script only needs access to an existing cluster node and will complete the basic setup on the current machine automatically. If you have configured the existing cluster nodes with the YaST cluster module, make sure the following prerequisites are fulfilled before you run sleha-join: The root user on the existing nodes has SSH keys in place for passwordless login. Csync2 is configured on the existing nodes. For details, refer to Procedure 3.8, Configuring Csync2 with YaST (page 39). If you are logged in to the first node via Hawk, you can follow the changes in cluster status and view the resources being activated in the Web interface. 1 Log in as root to the physical or virtual machine supposed to join the cluster. 2 Start the bootstrap script by executing: sleha-join If NTP has not been configured to start at boot time, a message appears. 3 If you decide to continue anyway, you will be prompted for the IP address of an existing node. Enter the IP address. 30 High Availability Guide

43 4 If you have not already configured a passwordless SSH access between both machines, you will also be prompted for the root password of the existing node. After logging in to the specified node, the script will copy the Corosync configuration, configure SSH and Csync2, and will bring the current machine online as new cluster node. Apart from that, it will start the service needed for Hawk. If you have configured shared storage with OCFS2, it will also automatically create the mountpoint directory for the OCFS2 file system. 5 Repeat the steps above for all machines you want to add to the cluster. 6 For details of the process, check /var/log/sleha-bootstrap.log. IMPORTANT: Check no-quorum-policy After adding all nodes, check if you need to adjust the no-quorum-policy in the global cluster options. This is especially important for two-node clusters. For more information, refer to Section 4.1.1, Option no-quorum-policy (page 50). 3.5 Manual Cluster Setup (YaST) See Section 3.2, Overview (page 26) for an overview of all steps for initial setup YaST Cluster Module The following sections guide you through each of the setup steps, using the YaST cluster module. To access it, start YaST as root and select High Availability > Cluster. Alternatively, start the module from command line with yast2 cluster. If you start the cluster module for the first time, it appears as wizard, guiding you through all the steps necessary for basic setup. Otherwise, click the categories on the left panel to access the configuration options for each step. Installation and Basic Setup 31

Figure 3.1 YaST Cluster Module Overview

The YaST cluster module automatically opens the ports in the firewall that are needed for cluster communication on the current machine. The configuration is written to /etc/sysconfig/SuSEfirewall2.d/services/cluster. Note that some options in the YaST cluster module apply only to the current node, whereas others may automatically be transferred to all nodes. Find detailed information about this in the following sections.

3.5.2 Defining the Communication Channels

For successful communication between the cluster nodes, define at least one communication channel.

IMPORTANT: Redundant Communication Paths
However, it is highly recommended to set up cluster communication via two or more redundant paths. This can be done via:

Network Device Bonding (page 203).

A second communication channel in Corosync. For details, see Procedure 3.5, Defining a Redundant Communication Channel (page 35).

If possible, choose network device bonding.

Procedure 3.4 Defining the First Communication Channel

For communication between the cluster nodes, use either multicast (UDP) or unicast (UDPU).

1 In the YaST cluster module, switch to the Communication Channels category.

2 To use multicast:

2a Set the Transport protocol to UDP.

2b Define the Bind Network Address. Set the value to the subnet you will use for cluster multicast.

2c Define the Multicast Address.

2d Define the Multicast Port.

With the values entered above, you have now defined one communication channel for the cluster. In multicast mode, the same bindnetaddr, mcastaddr, and mcastport will be used for all cluster nodes. All nodes in the cluster will know each other by using the same multicast address. For different clusters, use different multicast addresses.

46 Figure 3.2 YaST Cluster Multicast Configuration 3 To use unicast: 3a Set the Transport protocol to UDPU. 3b Define the Bind Network Address. Set the value to the subnet you will use for cluster unicast. 3c Define the Multicast Port. 3d For unicast communication, Corosync needs to know the IP addresses of all nodes in the cluster. For each node that will be part of the cluster, click Add and enter its IP address. To modify or remove any addresses of cluster members, use the Edit or Del buttons. 34 High Availability Guide

47 Figure 3.3 YaST Cluster Unicast Configuration 4 Activate Auto Generate Node ID to automatically generate a unique ID for every cluster node. 5 If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf. 6 If needed, define a second communication channel as described below. Or click Next and proceed with Procedure 3.6, Enabling Secure Authentication (page 37). Procedure 3.5 Defining a Redundant Communication Channel If network device bonding cannot be used for any reason, the second best choice is to define a redundant communication channel (a second ring) in Corosync. That way, two physically separate networks can be used for communication. In case one network fails, the cluster nodes can still communicate via the other network. IMPORTANT: Redundant Rings and /etc/hosts If multiple rings are configured, each node can have multiple IP addresses. This needs to be reflected in the /etc/hosts file of all nodes. Installation and Basic Setup 35

48 1 In the YaST cluster module, switch to the Communication Channels category. 2 Activate Redundant Channel. The redundant channel must use the same protocol as the first communication channel you defined. 3 If you use multicast, define the Bind Network Address, the Multicast Address and the Multicast Port for the redundant channel. If you use unicast, define the Bind Network Address, the Multicast Port and enter the IP addresses of all nodes that will be part of the cluster. Now you have defined an additional communication channel in Corosync that will form a second token-passing ring. In /etc/corosync/corosync.conf, the primary ring (the first channel you have configured) gets the ringnumber 0, the second ring (redundant channel) the ringnumber 1. 4 To tell Corosync how and when to use the different channels, select the rrp_mode you want to use (active or passive). For more information about the modes, refer to Redundant Ring Protocol (RRP) (page 24) or click Help. As soon as RRP is used, the Stream Control Transmission Protocol (SCTP) is used for communication between the nodes (instead of TCP). The High Availability Extension monitors the status of the current rings and automatically re-enables redundant rings after faults. Alternatively, you can also check the ring status manually with corosync-cfgtool. View the available options with -h. If only one communication channel is defined, rrp_mode is automatically disabled (value none). 5 If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf. 6 For further cluster configuration, click Next and proceed with Section 3.5.3, Defining Authentication Settings (page 37). Find an example file for a UDP setup in /etc/corosync/corosync.conf.example. An example for UDPU setup is available in /etc/corosync/ corosync.conf.example.udpu. 36 High Availability Guide
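As a rough sketch of what the generated /etc/corosync/corosync.conf may contain for two rings, the totem section could resemble the following; all addresses, ports, and the chosen rrp_mode are example values, not defaults from this guide, so compare them against the example files mentioned above:

totem {
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0
    mcastaddr: 239.255.1.1
    mcastport: 5405
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.0.0.0
    mcastaddr: 239.255.2.1
    mcastport: 5407
  }
}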

49 3.5.3 Defining Authentication Settings The next step is to define the authentication settings for the cluster. You can use HMAC/SHA1 authentication that requires a shared secret used to protect and authenticate messages. The authentication key (password) you specify will be used on all nodes in the cluster. Procedure 3.6 Enabling Secure Authentication 1 In the YaST cluster module, switch to the Security category. 2 Activate Enable Security Auth. 3 For a newly created cluster, click Generate Auth Key File. An authentication key is created and written to /etc/corosync/authkey. Figure 3.4 YaST Cluster Security If you want the current machine to join an existing cluster, do not generate a new key file. Instead, copy the /etc/corosync/authkey from one of the nodes to the current machine (either manually or with Csync2). 4 If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf. Installation and Basic Setup 37

5 For further cluster configuration, proceed with Section 3.5.4, Configuring Services (page 38).

3.5.4 Configuring Services

In the YaST cluster module, define whether to start certain services on a node at boot time. You can also use the module to start and stop the services manually. To bring the cluster nodes online and start the cluster resource manager, OpenAIS must be running as a service.

Procedure 3.7 Enabling OpenAIS and mgmtd

1 In the YaST cluster module, switch to the Service category.

2 To start OpenAIS each time this cluster node is booted, select the respective option in the Booting group. If you select Off in the Booting group, you must start OpenAIS manually each time this node is booted. To start OpenAIS manually, use the rcopenais start command.

3 If you want to use the Pacemaker GUI for configuring, managing and monitoring cluster resources, activate Enable mgmtd. This daemon is needed for the GUI.

4 To start or stop OpenAIS immediately, click the respective button.

5 If you modified any options for an existing cluster node, confirm your changes and close the cluster module. Note that the configuration only applies to the current machine, not to all cluster nodes.

Figure 3.5 YaST Cluster Services

6 For further cluster configuration, click Next and proceed with Section 3.5.5, Transferring the Configuration to All Nodes (page 39).

3.5.5 Transferring the Configuration to All Nodes

Instead of copying the resulting configuration files to all nodes manually, use the csync2 tool for replication across all nodes in the cluster. This requires the following basic steps:

1 Configuring Csync2 with YaST (page 39).

2 Synchronizing the Configuration Files with Csync2 (page 40).

Procedure 3.8 Configuring Csync2 with YaST

1 In the YaST cluster module, switch to the Csync2 category.

2 To specify the synchronization group, click Add in the Sync Host group and enter the local hostnames of all nodes in your cluster. For each node, you must use exactly the strings that are returned by the hostname command.

3 Click Generate Pre-Shared-Keys to create a key file for the synchronization group. The key file is written to /etc/csync2/key_hagroup. After it has been created, it must be copied manually to all members of the cluster.

4 To populate the Sync File list with the files that usually need to be synchronized among all nodes, click Add Suggested Files.

Figure 3.6 YaST Cluster Csync2

5 If you want to Edit, Add or Remove files from the list of files to be synchronized, use the respective buttons. You must enter the absolute pathname for each file.

6 Activate Csync2 by clicking Turn Csync2 ON. This will execute chkconfig csync2 to start Csync2 automatically at boot time.

7 If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST then writes the Csync2 configuration to /etc/csync2/csync2.cfg. To start the synchronization process now, proceed with Procedure 3.9, Synchronizing the Configuration Files with Csync2 (page 40).

8 For further cluster configuration, click Next and proceed with Section 3.5.6, Synchronizing Connection Status Between Cluster Nodes (page 42).

Procedure 3.9 Synchronizing the Configuration Files with Csync2

To successfully synchronize the files with Csync2, make sure that the following prerequisites are met:

The same Csync2 configuration is available on all nodes. Copy the file /etc/csync2/csync2.cfg manually to all nodes after you have configured it as described in Procedure 3.8, Configuring Csync2 with YaST (page 39). It is recommended to include this file in the list of files to be synchronized with Csync2.

Copy the /etc/csync2/key_hagroup file you have generated on one node in Step 3 (page 40) to all nodes in the cluster as it is needed for authentication by Csync2. However, do not regenerate the file on the other nodes as it needs to be the same file on all nodes.

Both Csync2 and xinetd must be running on all nodes.

NOTE: Starting Services at Boot Time

Execute the following commands on all nodes to make both services start automatically at boot time and to start xinetd now:

chkconfig csync2 on
chkconfig xinetd on
rcxinetd start

1 On the node that you want to copy the configuration from, execute the following command:

csync2 -xv

This will synchronize all the files once by pushing them to the other nodes. If all files are synchronized successfully, Csync2 will finish with no errors.

If one or several files that are to be synchronized have been modified on other nodes (not only on the current one), Csync2 will report a conflict. You will get an output similar to the one below:

While syncing file /etc/corosync/corosync.conf:
ERROR from peer hex-14: File is also marked dirty here!
Finished with 1 errors.

2 If you are sure that the file version on the current node is the best one, you can resolve the conflict by forcing this file and resynchronizing:

csync2 -f /etc/corosync/corosync.conf
csync2 -x
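For reference, a generated csync2.cfg for a two-node synchronization group might look roughly like the following sketch; the host names and the list of included files are only examples, and the file written by YaST will differ depending on your selections:

group ha_group
{
    host alice;
    host bob;
    key /etc/csync2/key_hagroup;
    include /etc/corosync/corosync.conf;
    include /etc/corosync/authkey;
    include /etc/csync2/csync2.cfg;
}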

For more information on the Csync2 options, run csync2 -help.

NOTE: Pushing Synchronization After Any Changes

Csync2 only pushes changes. It does not continuously synchronize files between the nodes. Each time you update files that need to be synchronized, you have to push the changes to the other nodes: Run csync2 -xv on the node where you did the changes. If you run the command on any of the other nodes with unchanged files, nothing will happen.

3.5.6 Synchronizing Connection Status Between Cluster Nodes

To enable stateful packet inspection for iptables, configure and use the conntrack tools with the following basic steps:

1 Configuring the conntrackd with YaST (page 42).

2 Configuring a resource for conntrackd (class: ocf, provider: heartbeat). If you use Hawk to add the resource, use the default values proposed by Hawk.

After configuring the conntrack tools, you can use them for load balancing with Linux Virtual Server.

Procedure 3.10 Configuring the conntrackd with YaST

Use the YaST cluster module to configure the user-space conntrackd. It needs a dedicated network interface that is not used for other communication channels. The daemon can be started via a resource agent afterward.

1 In the YaST cluster module, switch to the Configure conntrackd category.

2 Select a Dedicated Interface for synchronizing the connection status. The IPv4 address of the selected interface is automatically detected and shown in YaST. It must already be configured and it must support multicast.

3 Define the Multicast Address to be used for synchronizing the connection status.

4 In Group Number, define a numeric ID for the group to synchronize the connection status to.

5 Click Generate /etc/conntrackd/conntrackd.conf to create the configuration file for conntrackd.

6 Confirm your changes and close the cluster module.

If you have done the initial cluster setup exclusively with the YaST cluster module, you have now completed the basic configuration steps. Proceed with Section 3.5.7, Bringing the Cluster Online (page 43).

Figure 3.7 YaST Cluster conntrackd

3.5.7 Bringing the Cluster Online

After the initial cluster configuration is done, start the OpenAIS/Corosync service on each cluster node to bring the stack online:

Procedure 3.11 Starting OpenAIS/Corosync and Checking the Status

1 Log in to an existing node.

2 Check if the service is already running:

rcopenais status

If not, start OpenAIS/Corosync now:

rcopenais start

3 Repeat the steps above for each of the cluster nodes.

4 On one of the nodes, check the cluster status with the following command:

crm_mon

If all nodes are online, the output should be similar to the following:

============
Last updated: Thu Nov 10 13:37:
Last change: Wed Nov 9 17:40: by hacluster via cibadmin on auckland
Stack: openais
Current DC: auckland - partition with quorum
Version: d8fad5daab7e f144cc765def
2 Nodes configured, 2 expected votes
8 Resources configured.
============
Online: [ e231 e229 ]

This output indicates that the cluster resource manager is started and is ready to manage resources.

After the basic configuration is done and the nodes are online, you can start to configure cluster resources, using one of the cluster management tools like the crm shell, the Pacemaker GUI, or the HA Web Konsole. For more information, refer to the following chapters.

3.6 Mass Deployment with AutoYaST

The following procedure is suitable for deploying cluster nodes which are clones of an already existing node. The cloned nodes will have the same packages installed and the same system configuration.

Procedure 3.12 Cloning a Cluster Node with AutoYaST

IMPORTANT: Identical Hardware

This scenario assumes you are rolling out SUSE Linux Enterprise High Availability Extension 11 SP2 to a set of machines with exactly the same hardware configuration.

If you need to deploy cluster nodes on non-identical hardware, refer to the Rule-Based Autoinstallation section in the SUSE Linux Enterprise 11 SP2 Deployment Guide, available from the SUSE Linux Enterprise documentation pages.

1 Make sure the node you want to clone is correctly installed and configured. For details, refer to Section 3.3, Installation as Add-on (page 27), and Section 3.4, Automatic Cluster Setup (sleha-bootstrap) (page 28) or Section 3.5, Manual Cluster Setup (YaST) (page 31), respectively.

2 Follow the description outlined in the SUSE Linux Enterprise 11 SP2 Deployment Guide for simple mass installation. This includes the following basic steps:

2a Creating an AutoYaST profile. Use the AutoYaST GUI to create and modify a profile based on the existing system configuration. In AutoYaST, choose the High Availability module and click the Clone button. If needed, adjust the configuration in the other modules and save the resulting control file as XML.

2b Determining the source of the AutoYaST profile and the parameter to pass to the installation routines for the other nodes.

2c Determining the source of the SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension installation data.

2d Determining and setting up the boot scenario for autoinstallation.

2e Passing the command line to the installation routines, either by adding the parameters manually or by creating an info file.

2f Starting and monitoring the autoinstallation process.
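As an illustration of Step 2e, the boot parameters passed to the installation routines might look similar to the following; the server address and paths are purely hypothetical:

autoyast=http://192.168.1.100/profiles/ha-node.xml install=http://192.168.1.100/repo/SLES11-SP2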

After the clone has been successfully installed, execute the following steps to make the cloned node join the cluster:

Procedure 3.13 Bringing the Cloned Node Online

1 Transfer the key configuration files from the already configured nodes to the cloned node with Csync2 as described in Section 3.5.5, Transferring the Configuration to All Nodes (page 39).

2 To bring the node online, start the OpenAIS service on the cloned node as described in Section 3.5.7, Bringing the Cluster Online (page 43).

The cloned node will now join the cluster because the /etc/corosync/corosync.conf file has been applied to the cloned node via Csync2. The CIB is automatically synchronized among the cluster nodes.

Part II. Configuration and Administration


4 Configuration and Administration Basics

The main purpose of an HA cluster is to manage user services. Typical examples of user services are an Apache web server or a database. From the user's point of view, the services do something specific when ordered to do so. To the cluster, however, they are just resources which may be started or stopped; the nature of the service is irrelevant to the cluster.

In this chapter, we will introduce some basic concepts you need to know when configuring resources and administering your cluster. The following chapters show you how to execute the main configuration and administration tasks with each of the management tools the High Availability Extension provides.

4.1 Global Cluster Options

Global cluster options control how the cluster behaves when confronted with certain situations. They are grouped into sets and can be viewed and modified with cluster management tools like the Pacemaker GUI and the crm shell.

The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup:

Option no-quorum-policy (page 50)

Option stonith-enabled (page 51)

Learn how to adjust those parameters with the cluster management tools of your choice:

Hawk: Procedure 6.2, Modifying Global Cluster Options (page 114)

Pacemaker GUI: Procedure 5.1, Modifying Global Cluster Options (page 79)

crm shell: Section 7.2, Configuring Global Cluster Options (page 158)

Option no-quorum-policy

This global option defines what to do when the cluster does not have quorum (no majority of nodes is part of the partition). Allowed values are:

ignore
The quorum state does not influence the cluster behavior at all; resource management is continued. This setting is useful for the following scenarios:

Two-node clusters: Since a single node failure would always result in a loss of majority, usually you want the cluster to carry on regardless. Resource integrity is ensured using fencing, which also prevents split brain scenarios.

Resource-driven clusters: For local clusters with redundant communication channels, a split brain scenario only has a certain probability. Thus, a loss of communication with a node most likely indicates that the node has crashed, and that the surviving nodes should recover and start serving the resources again. If no-quorum-policy is set to ignore, a 4-node cluster can sustain concurrent failure of three nodes before service is lost, whereas with the other settings, it would lose quorum after concurrent failure of two nodes.

freeze
If quorum is lost, the cluster freezes. Resource management is continued: running resources are not stopped (but possibly restarted in response to monitor events), but no further resources are started within the affected partition.

This setting is recommended for clusters where certain resources depend on communication with other nodes (for example, OCFS2 mounts). In this case, the default setting no-quorum-policy=stop is not useful, as it would lead to the following scenario:

Stopping those resources would not be possible while the peer nodes are unreachable. Instead, an attempt to stop them would eventually time out and cause a stop failure, triggering escalated recovery and fencing.

stop (default value)
If quorum is lost, all resources in the affected cluster partition are stopped in an orderly fashion.

suicide
Fence all nodes in the affected cluster partition.

Option stonith-enabled

This global option defines whether to apply fencing, allowing STONITH devices to shoot failed nodes and nodes with resources that cannot be stopped. By default, this global option is set to true, because for normal cluster operation it is necessary to use STONITH devices. According to the default value, the cluster will refuse to start any resources if no STONITH resources have been defined.

If you need to disable fencing for any reason, set stonith-enabled to false.

IMPORTANT: No Support Without STONITH

A cluster without STONITH enabled is not supported.

For an overview of all global cluster options and their default values, see Pacemaker Explained, available from the Pacemaker project documentation. Refer to the section Available Cluster Options.

4.2 Cluster Resources

As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times.

4.2.1 Resource Management

Before you can use a resource in the cluster, it must be set up. For example, if you want to use an Apache server as a cluster resource, set up the Apache server first and complete the Apache configuration before starting the respective resource in your cluster.

If a resource has specific environment requirements, make sure they are present and identical on all cluster nodes. This kind of configuration is not managed by the High Availability Extension. You must do this yourself.

NOTE: Do Not Touch Services Managed by the Cluster

When managing a resource with the High Availability Extension, the same resource must not be started or stopped otherwise (outside of the cluster, for example manually or on boot or reboot). The High Availability Extension software is responsible for all service start or stop actions.

However, if you want to check if the service is configured properly, start it manually, but make sure that it is stopped again before High Availability takes over.

After having configured the resources in the cluster, use the cluster management tools to start, stop, clean up, remove or migrate any resources manually. For details on how to do so with your preferred cluster management tool:

Hawk: Chapter 6, Configuring and Managing Cluster Resources (Web Interface) (page 109)

Pacemaker GUI: Chapter 5, Configuring and Managing Cluster Resources (GUI) (page 75)

crm shell: Chapter 7, Configuring and Managing Cluster Resources (Command Line) (page 151)

4.2.2 Supported Resource Agent Classes

For each cluster resource you add, you need to define the standard that the resource agent conforms to. Resource agents abstract the services they provide and present an accurate status to the cluster, which allows the cluster to be non-committal about the resources it manages.

The cluster relies on the resource agent to react appropriately when given a start, stop or monitor command. Typically, resource agents come in the form of shell scripts. The High Availability Extension supports the following classes of resource agents:

Legacy Heartbeat 1 Resource Agents
Heartbeat version 1 came with its own style of resource agents. As many people have written their own agents based on its conventions, these resource agents are still supported. However, it is recommended to migrate your configurations to High Availability OCF RAs if possible.

Linux Standards Base (LSB) Scripts
LSB resource agents are generally provided by the operating system/distribution and are found in /etc/init.d. To be used with the cluster, they must conform to the LSB init script specification. For example, they must have several actions implemented, which are, at minimum, start, stop, restart, reload, force-reload, and status. For more information, refer to the LSB specification of init script actions.

The configuration of those services is not standardized. If you intend to use an LSB script with High Availability, make sure that you understand how the relevant script is configured. Often you can find information about this in the documentation of the relevant package in /usr/share/doc/packages/packagename.

Open Cluster Framework (OCF) Resource Agents
OCF resource agents are best suited for use with High Availability, especially when you need master resources or special monitoring abilities. The agents are generally located in /usr/lib/ocf/resource.d/provider/. Their functionality is similar to that of LSB scripts. However, the configuration is always done with environment variables which allow them to accept and process parameters easily. The OCF specification (as it relates to resource agents) is available from the Open Cluster Framework project. OCF specifications have strict definitions of which exit codes must be returned by actions; see Section 8.3, OCF Return Codes and Failure Recovery (page 177). The cluster follows these specifications exactly.

For a detailed list of all available OCF RAs, refer to Chapter 21, HA OCF Agents (page 313).
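You can also query the resource agents known to the cluster directly from the crm shell, for example:

crm ra classes
crm ra list ocf heartbeat
crm ra list lsb

The output of the second and third command depends on the packages installed on your system.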

All OCF Resource Agents are required to have at least the actions start, stop, status, monitor, and meta-data. The meta-data action retrieves information about how to configure the agent. For example, if you want to know more about the IPaddr agent by the provider heartbeat, use the following command:

OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/heartbeat/IPaddr meta-data

The output is information in XML format, including several sections (general description, available parameters, available actions for the agent).

STONITH Resource Agents
This class is used exclusively for fencing related resources. For more information, see Chapter 9, Fencing and STONITH (page 181).

The agents supplied with the High Availability Extension are written to OCF specifications.

4.2.3 Types of Resources

The following types of resources can be created:

Primitives
A primitive resource is the most basic type of resource. Learn how to create primitive resources with your preferred cluster management tool:

Hawk: Procedure 6.4, Adding Primitive Resources (page 117)

Pacemaker GUI: Procedure 5.2, Adding Primitive Resources (page 80)

crm shell: Section 7.3.1, Creating Cluster Resources (page 159)

Groups
Groups contain a set of resources that need to be located together, started sequentially and stopped in the reverse order. For more information, refer to the section Groups (page 56).

Clones
Clones are resources that can be active on multiple hosts. Any resource can be cloned, provided the respective resource agent supports it. For more information, refer to the section Clones (page 57).

Masters
Masters are a special type of clone resource that can have multiple modes. For more information, refer to the section Masters (page 58).

4.2.4 Resource Templates

If you want to create lots of resources with similar configurations, defining a resource template is the easiest way. Once defined, it can be referenced in primitives or in certain types of constraints, as described in Section 4.4.3, Resource Templates and Constraints (page 67).

If a template is referenced in a primitive, the primitive will inherit all operations, instance attributes (parameters), meta attributes, and utilization attributes defined in the template. Additionally, you can define specific operations or attributes for your primitive. If any of these are defined in both the template and the primitive, the values defined in the primitive will take precedence over the ones defined in the template.

Learn how to define resource templates with your preferred cluster configuration tool:

Hawk: Section 6.3.4, Using Resource Templates (page 120)

crm shell: Section 7.3.2, Creating Resource Templates (page 160)

4.2.5 Advanced Resource Types

Whereas primitives are the simplest kind of resources and therefore easy to configure, you will probably also need more advanced resource types for cluster configuration, such as groups, clones or masters.
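As a brief sketch of how a resource template can be used (entered within crm configure; the resource names and parameter values are only examples):

rsc_template web-server ocf:heartbeat:apache \
    params configfile="/etc/apache2/httpd.conf" \
    op monitor interval="30s" timeout="60s"
primitive web1 @web-server \
    params port="8001"

The primitive web1 inherits the operations and attributes defined in the template and only overrides or adds what is specified for the primitive itself.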

Groups

Some cluster resources are dependent on other components or resources and require that each component or resource starts in a specific order and runs together on the same server with resources it depends on. To simplify this configuration, you can use groups.

Example 4.1 Resource Group for a Web Server

An example of a resource group would be a Web server that requires an IP address and a file system. In this case, each component is a separate cluster resource that is combined into a cluster resource group. The resource group would then run on a server or servers, and in case of a software or hardware malfunction, fail over to another server in the cluster, the same as an individual cluster resource.

Figure 4.1 Group Resource

Groups have the following properties:

Starting and Stopping
Resources are started in the order they appear in and stopped in the reverse order.

Dependency
If a resource in the group cannot run anywhere, then none of the resources located after that resource in the group is allowed to run.

Contents
Groups may only contain a collection of primitive cluster resources. Groups must contain at least one resource; otherwise the configuration is not valid. To refer to the child of a group resource, use the child's ID instead of the group's ID.

Constraints
Although it is possible to reference the group's children in constraints, it is usually preferable to use the group's name instead.

Stickiness
Stickiness is additive in groups. Every active member of the group will contribute its stickiness value to the group's total. So if the default resource-stickiness is 100 and a group has seven members (five of which are active), then the group as a whole will prefer its current location with a score of 500.

Resource Monitoring
To enable resource monitoring for a group, you must configure monitoring separately for each resource in the group that you want monitored.

Learn how to create groups with your preferred cluster management tool:

Hawk: Procedure 6.14, Adding a Resource Group (page 132)

Pacemaker GUI: Procedure 5.13, Adding a Resource Group (page 98)

crm shell: Section 7.3.9, Configuring a Cluster Resource Group (page 168)

Clones

You may want certain resources to run simultaneously on multiple nodes in your cluster. To do this you must configure a resource as a clone. Examples of resources that might be configured as clones include STONITH and cluster file systems like OCFS2. You can clone any resource, provided it is supported by the resource's resource agent. Clone resources may even be configured differently depending on which nodes they are hosted on.

There are three types of resource clones:

Anonymous Clones
These are the simplest type of clones. They behave identically anywhere they are running. Because of this, there can only be one instance of an anonymous clone active per machine.

Globally Unique Clones
These resources are distinct entities. An instance of the clone running on one node is not equivalent to another instance on another node; nor would any two instances on the same node be equivalent.

Stateful Clones
Active instances of these resources are divided into two states, active and passive. These are also sometimes referred to as primary and secondary, or master and slave. Stateful clones can be either anonymous or globally unique. See also the section Masters (page 58).

Clones must contain exactly one group or one regular resource. When configuring resource monitoring or constraints, clones have different requirements than simple resources. For details, see Pacemaker Explained, available from the Pacemaker project documentation. Refer to the section Clones - Resources That Should be Active on Multiple Hosts.

Learn how to create clones with your preferred cluster management tool:

Hawk: Procedure 6.15, Adding or Modifying Clones (page 133)

Pacemaker GUI: Procedure 5.15, Adding or Modifying Clones (page 102)

crm shell: the section Configuring a Clone Resource (page 169)

Masters

Masters are a specialization of clones that allow the instances to be in one of two operating modes (master or slave). Masters must contain exactly one group or one regular resource.

When configuring resource monitoring or constraints, masters have different requirements than simple resources. For details, see Pacemaker Explained, available from the Pacemaker project documentation. Refer to the section Multi-state - Resources That Have Multiple Modes.

4.2.6 Resource Options (Meta Attributes)

For each resource you add, you can define options. Options are used by the cluster to decide how your resource should behave; they tell the CRM how to treat a specific resource. Resource options can be set with the crm_resource --meta command or with the Pacemaker GUI as described in Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82). Alternatively, use Hawk: Procedure 6.4, Adding Primitive Resources (page 117).

Table 4.1 Options for a Primitive Resource

priority
If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active.

target-role
In what state should the cluster attempt to keep this resource? Allowed values: stopped, started.

is-managed
Is the cluster allowed to start and stop the resource? Allowed values: true, false.

resource-stickiness
How much does the resource prefer to stay where it is? Defaults to the value of default-resource-stickiness.

migration-threshold
How many failures should occur for this resource on a node before making the node ineligible to host this resource? Default: none.

multiple-active
What should the cluster do if it ever finds the resource active on more than one node? Allowed values: block (mark the resource as unmanaged), stop_only, stop_start.

failure-timeout
How many seconds to wait before acting as if the failure had not occurred (and potentially allowing the resource back to the node on which it failed)? Default: never.

allow-migrate
Allow resource migration for resources which support migrate_to/migrate_from actions.

4.2.7 Instance Attributes (Parameters)

The scripts of all resource classes can be given parameters which determine how they behave and which instance of a service they control. If your resource agent supports parameters, you can add them with the crm_resource command or with the GUI as described in Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82). Alternatively, use Hawk: Procedure 6.4, Adding Primitive Resources (page 117). In the crm command line utility and in Hawk, instance attributes are called params or Parameter, respectively.

The list of instance attributes supported by an OCF script can be found by executing the following command as root:

crm ra info [class:[provider:]]resource_agent

or (without the optional parts):

crm ra info resource_agent

The output lists all the supported attributes, their purpose and default values. For example, the command

crm ra info IPaddr

returns the following output:

Manages virtual IPv4 addresses (portable version) (ocf:heartbeat:IPaddr)

This script manages IP alias IP addresses
It can add an IP alias, or remove one.

Parameters (* denotes required, [] the default):

ip* (string): IPv4 address

The IPv4 address to be configured in dotted quad notation, for example
" ".

nic (string, [eth0]): Network interface
The base network interface on which the IP address will be brought
online. If left empty, the script will try and determine this from the
routing table.
Do NOT specify an alias interface in the form eth0:1 or anything here;
rather, specify the base interface only.

cidr_netmask (string): Netmask
The netmask for the interface in CIDR format. (ie, 24), or in dotted
quad notation ).
If unspecified, the script will also try to determine this from the
routing table.

broadcast (string): Broadcast address
Broadcast address associated with the IP. If left empty, the script
will determine this from the netmask.

iflabel (string): Interface label
You can specify an additional label for your IP address here.

lvs_support (boolean, [false]): Enable support for LVS DR
Enable support for LVS Direct Routing configurations. In case a IP
address is stopped, only move it to the loopback device to allow the
local node to continue to service requests, but no longer advertise it
on the network.

local_stop_script (string):
Script called when the IP is released

local_start_script (string):
Script called when the IP is added

ARP_INTERVAL_MS (integer, [500]): milliseconds between gratuitous ARPs
milliseconds between ARPs

ARP_REPEAT (integer, [10]): repeat count
How many gratuitous ARPs to send out when bringing up a new address

ARP_BACKGROUND (boolean, [yes]): run in background
run in background (no longer any reason to do this)

ARP_NETMASK (string, [ffffffffffff]): netmask for ARP
netmask for ARP - in nonstandard hexadecimal format.

Operations' defaults (advisory minimum):

start         timeout=90
stop          timeout=100
monitor_0     interval=5s timeout=20s

NOTE: Instance Attributes for Groups, Clones or Masters

Note that groups, clones and masters do not have instance attributes. However, any instance attributes set will be inherited by the group's, clone's or master's children.

4.2.8 Resource Operations

By default, the cluster will not ensure that your resources are still healthy. To instruct the cluster to do this, you need to add a monitor operation to the resource's definition. Monitor operations can be added for all classes of resource agents. For more information, refer to Section 4.3, Resource Monitoring (page 65).

Table 4.2 Resource Operations

id
Your name for the action. Must be unique. (The ID is not shown).

name
The action to perform. Common values: monitor, start, stop.

interval
How frequently to perform the operation. Unit: seconds

timeout
How long to wait before declaring the action has failed.

requires
What conditions need to be satisfied before this action occurs. Allowed values: nothing, quorum, fencing. The default depends on whether fencing is enabled and if the resource's class is stonith. For STONITH resources, the default is nothing.

on-fail
The action to take if this action ever fails. Allowed values:

ignore: Pretend the resource did not fail.

block: Do not perform any further operations on the resource.

stop: Stop the resource and do not start it elsewhere.

restart: Stop the resource and start it again (possibly on a different node).

fence: Bring down the node on which the resource failed (STONITH).

standby: Move all resources away from the node on which the resource failed.

enabled
If false, the operation is treated as if it does not exist. Allowed values: true, false.

role
Run the operation only if the resource has this role.

record-pending
Can be set either globally or for individual resources. Makes the CIB reflect the state of in-flight operations on resources.

description
Description of the operation.

4.2.9 Timeout Values

Timeout values for resources can be influenced by the following parameters:

default-action-timeout (global cluster option),

op_defaults (global defaults for operations),

a specific timeout value defined in a resource template,

a specific timeout value defined for a resource.

Of the default values, op_defaults takes precedence over default-action-timeout. If a specific value is defined for a resource, it always takes precedence over any of the defaults (and over a value defined in a resource template).

For information on how to set the default parameters, refer to the technical information document default action timeout and default op timeout, available from the SUSE/Novell support knowledgebase.

Getting timeout values right is very important. Setting them too low will result in a lot of (unnecessary) fencing operations for the following reasons:

1. If a resource runs into a timeout, it fails and the cluster will try to stop it.

2. If stopping the resource also fails (for example because the timeout for stopping is set too low), the cluster will fence the node (it considers the node where this happens to be out of control).

The best practice for setting timeout values is as follows:

1 Check how long it takes your resources to start and stop (under load).

2 Adjust the (default) timeout values accordingly:

2a For example, set the default-action-timeout to 120 seconds.

2b For resources that need longer periods of time, define individual timeout values.

3 When configuring operations for a resource, add separate start and stop operations. When configuring operations with Hawk or the Pacemaker GUI, both will provide useful timeout proposals for those operations.
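A hedged crm shell sketch of this approach; the resource and all timeout values are only examples and must be adjusted to the start and stop times you measured:

crm configure property default-action-timeout="120s"
crm configure primitive fs1 ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/srv/data" fstype="ext3" \
    op start timeout="60s" op stop timeout="120s" \
    op monitor interval="20s" timeout="40s"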

4.3 Resource Monitoring

If you want to ensure that a resource is running, you must configure resource monitoring for it. If the resource monitor detects a failure, the following takes place:

Log file messages are generated, according to the configuration specified in the logging section of /etc/corosync/corosync.conf. By default, the logs are written to syslog, usually /var/log/messages.

The failure is reflected in the cluster management tools (Pacemaker GUI, Hawk, crm_mon), and in the CIB status section.

The cluster initiates noticeable recovery actions which may include stopping the resource to repair the failed state and restarting the resource locally or on another node. The resource also may not be restarted at all, depending on the configuration and state of the cluster.

If you do not configure resource monitoring, resource failures after a successful start will not be communicated, and the cluster will always show the resource as healthy.

Usually, resources are only monitored by the cluster as long as they are running. However, to detect concurrency violations, also configure monitoring for resources which are stopped. For example:

primitive dummy1 ocf:heartbeat:Dummy \
    op monitor interval="300s" role="Stopped" timeout="10s" \
    op monitor interval="30s" timeout="10s"

This configuration triggers a monitoring operation every 300 seconds for the resource dummy1 as soon as it is in role="Stopped". When running, it will be monitored every 30 seconds.

Learn how to add monitor operations to resources with your preferred cluster management tool:

Hawk: Procedure 6.13, Adding or Modifying Monitor Operations (page 130)

Pacemaker GUI: Procedure 5.12, Adding or Modifying Monitor Operations (page 96)

crm shell: Section 7.3.8, Configuring Resource Monitoring (page 168)

4.4 Resource Constraints

Having all the resources configured is only part of the job. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. Resource constraints let you specify which cluster nodes resources can run on, in what order resources will load, and what other resources a specific resource is dependent on.

4.4.1 Types of Constraints

There are three different kinds of constraints available:

Resource Location
Locational constraints that define on which nodes a resource may be run, may not be run or is preferred to be run.

Resource Colocation
Colocational constraints that tell the cluster which resources may or may not run together on a node.

Resource Order
Ordering constraints to define the sequence of actions.

For more information on configuring constraints and detailed background information about the basic concepts of ordering and colocation, refer to the following documents. They are available from the locations listed in Section 4.5, For More Information (page 73) and (page 74), respectively:

Pacemaker Explained, chapter Resource Constraints

Collocation Explained

Ordering Explained
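A hedged crm shell sketch of the three constraint types, using hypothetical resource and node names (a file system fs1, a database db1, and a node alice):

crm configure location loc-db-on-alice db1 100: alice
crm configure colocation col-db-with-fs inf: db1 fs1
crm configure order ord-fs-before-db inf: fs1 db1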

Learn how to add the various kinds of constraints with the GUI in Section 5.3.4, Configuring Resource Constraints (page 86). If you prefer the command line approach, see Section 7.3.4, Configuring Resource Constraints (page 162).

4.4.2 Scores and Infinity

When defining constraints, you also need to deal with scores. Scores of all kinds are integral to how the cluster works. Practically everything from migrating a resource to deciding which resource to stop in a degraded cluster is achieved by manipulating scores in some way. Scores are calculated on a per-resource basis and any node with a negative score for a resource cannot run that resource. After calculating the scores for a resource, the cluster then chooses the node with the highest score.

INFINITY is currently defined as 1,000,000. Additions or subtractions with it stick to the following three basic rules:

Any value + INFINITY = INFINITY

Any value - INFINITY = -INFINITY

INFINITY - INFINITY = -INFINITY

When defining resource constraints, you specify a score for each constraint. The score indicates the value you are assigning to this resource constraint. Constraints with higher scores are applied before those with lower scores. By creating additional location constraints with different scores for a given resource, you can specify an order for the nodes that a resource will fail over to.

4.4.3 Resource Templates and Constraints

If you have defined a resource template, it can be referenced in the following types of constraints:

order constraints,

colocation constraints,

rsc_ticket constraints (for multi-site clusters).

However, colocation constraints must not contain more than one reference to a template. Resource sets must not contain a reference to a template.

Resource templates referenced in constraints stand for all primitives which are derived from that template. This means the constraint applies to all primitive resources referencing the resource template. Referencing resource templates in constraints is an alternative to resource sets and can simplify the cluster configuration considerably. For details about resource sets, refer to Procedure 6.9, Using Resource Sets for Colocation or Order Constraints (page 125).

4.4.4 Failover Nodes

A resource will be automatically restarted if it fails. If that cannot be achieved on the current node, or it fails N times on the current node, it will try to fail over to another node. Each time the resource fails, its failcount is raised. You can define a number of failures for resources (a migration-threshold), after which they will migrate to a new node. If you have more than two nodes in your cluster, the node a particular resource fails over to is chosen by the High Availability software.

However, you can specify the node a resource will fail over to by configuring one or several location constraints and a migration-threshold for that resource. For detailed instructions on how to achieve this with the GUI, refer to Section 5.3.5, Specifying Resource Failover Nodes (page 90). If you prefer the command line approach, see Section 7.3.5, Specifying Resource Failover Nodes (page 164).
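A hedged crm shell sketch of such a setup, using the resource and node names from the example that follows (the Dummy agent and the scores are only placeholders):

crm configure primitive r1 ocf:heartbeat:Dummy \
    meta migration-threshold="2" failure-timeout="60s"
crm configure location loc-r1-node1 r1 100: node1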

Example 4.2 Migration Threshold Process Flow

For example, let us assume you have configured a location constraint for resource r1 to preferably run on node1. If it fails there, migration-threshold is checked and compared to the failcount. If failcount >= migration-threshold then the resource is migrated to the node with the next best preference.

By default, once the threshold has been reached, the node will no longer be allowed to run the failed resource until the resource's failcount is reset. This can be done manually by the cluster administrator or by setting a failure-timeout option for the resource.

For example, a setting of migration-threshold=2 and failure-timeout=60s would cause the resource to migrate to a new node after two failures and potentially allow it to move back (depending on the stickiness and constraint scores) after one minute.

There are two exceptions to the migration threshold concept, occurring when a resource either fails to start or fails to stop:

Start failures set the failcount to INFINITY and thus always cause an immediate migration.

Stop failures cause fencing (when stonith-enabled is set to true, which is the default). In case there is no STONITH resource defined (or stonith-enabled is set to false), the resource will not migrate at all.

For details on using migration thresholds and resetting failcounts, refer to Section 5.3.5, Specifying Resource Failover Nodes (page 90). If you prefer the command line approach, see Section 7.3.5, Specifying Resource Failover Nodes (page 164).

4.4.5 Failback Nodes

A resource might fail back to its original node when that node is back online and in the cluster. If you want to prevent a resource from failing back to the node it was running on prior to failover, or if you want to specify a different node for the resource to fail back to, you must change its resource stickiness value. You can either specify resource stickiness when you are creating a resource, or afterwards.

Consider the following implications when specifying resource stickiness values:

Value is 0: This is the default. The resource will be placed optimally in the system. This may mean that it is moved when a better or less loaded node becomes available. This option is almost equivalent to automatic failback, except that the resource may be moved to a node that is not the one it was previously active on.

Value is greater than 0: The resource will prefer to remain in its current location, but may be moved if a more suitable node is available. Higher values indicate a stronger preference for a resource to stay where it is.

Value is less than 0: The resource prefers to move away from its current location. Higher absolute values indicate a stronger preference for a resource to be moved.

Value is INFINITY: The resource will always remain in its current location unless forced off because the node is no longer eligible to run the resource (node shutdown, node standby, reaching the migration-threshold, or configuration change). This option is almost equivalent to completely disabling automatic failback.

Value is -INFINITY: The resource will always move away from its current location.

4.4.6 Placing Resources Based on Their Load Impact

Not all resources are equal. Some, such as Xen guests, require that the node hosting them meets their capacity requirements. If resources are placed such that their combined needs exceed the provided capacity, the resources diminish in performance (or even fail).

To take this into account, the High Availability Extension allows you to specify the following parameters:

1. The capacity a certain node provides.

2. The capacity a certain resource requires.

3. An overall strategy for placement of resources.

Learn how to configure these settings with your preferred cluster management tool:

Pacemaker GUI: Section 5.3.7, Configuring Placement of Resources Based on Load Impact (page 92)

crm shell: Section 7.3.7, Configuring Placement of Resources Based on Load Impact (page 165)

A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements. The nature of the required or provided capacities is completely irrelevant for the High Availability Extension; it just makes sure that all capacity requirements of a resource are satisfied before moving a resource to a node.

To manually configure the resource's requirements and the capacity a node provides, use utilization attributes. You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs. However, the attribute's values must be integers.

The High Availability Extension now also provides means to detect and configure both node capacity and resource requirements automatically:

The NodeUtilization resource agent checks the capacity of a node (regarding CPU and RAM). To configure automatic detection, create a clone resource of the following class, provider, and type: ocf:pacemaker:NodeUtilization. One instance of the clone should be running on each node. After the instance has started, a utilization section will be added to the node's configuration in the CIB.

For automatic detection of a resource's minimal requirements (regarding RAM and CPU) the Xen resource agent has been improved. Upon start of a Xen resource, it will reflect the consumption of RAM and CPU. Utilization attributes will automatically be added to the resource configuration.

Apart from detecting the minimal requirements, the High Availability Extension also allows you to monitor the current utilization via the VirtualDomain resource agent. It detects CPU and RAM use of the virtual machine. To use this feature, configure a resource of the following class, provider and type: ocf:heartbeat:VirtualDomain. Add the dynamic_utilization instance attribute (parameter) and set its value to 1. This updates the utilization values in the CIB during each monitoring cycle.

Independent of manually or automatically configuring capacity and requirements, the placement strategy must be specified with the placement-strategy property (in the global cluster options). The following values are available:

default (default value)
Utilization values are not considered at all. Resources are allocated according to location scoring. If scores are equal, resources are evenly distributed across nodes.

utilization
Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. However, load-balancing is still done based on the number of resources allocated to a node.

minimal
Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. An attempt is made to concentrate the resources on as few nodes as possible (in order to achieve power savings on the remaining nodes).

balanced
Utilization values are considered when deciding if a node has enough free capacity to satisfy a resource's requirements. An attempt is made to distribute the resources evenly, thus optimizing resource performance.

NOTE: Configuring Resource Priorities

The available placement strategies are best-effort; they do not yet use complex heuristic solvers to always reach optimum allocation results. Thus, set your resource priorities in a way that makes sure that your most important resources are scheduled first.

Example 4.3 Example Configuration for Load-Balanced Placing

The following example demonstrates a three-node cluster of equal nodes, with four virtual machines.

node node1 utilization memory="4000"
node node2 utilization memory="4000"
node node3 utilization memory="4000"
primitive xena ocf:heartbeat:Xen utilization memory="3500" \
    meta priority="10"
primitive xenb ocf:heartbeat:Xen utilization memory="2000" \
    meta priority="1"
primitive xenc ocf:heartbeat:Xen utilization memory="2000" \
    meta priority="1"
primitive xend ocf:heartbeat:Xen utilization memory="1000" \
    meta priority="5"
property placement-strategy="minimal"

With all three nodes up, resource xena will be placed onto a node first, followed by xend. xenb and xenc would either be allocated together or one of them with xend.

If one node failed, too little total memory would be available to host them all. xena would be ensured to be allocated, as would xend. However, only one of the remaining resources xenb or xenc could still be placed. Since their priority is equal, the result would still be open. To resolve this ambiguity as well, you would need to set a higher priority for either one.

4.5 For More Information

Home page of Pacemaker, the cluster resource manager shipped with the High Availability Extension.

Home page of the High Availability Linux Project.

Holds a number of comprehensive manuals, for example:

Pacemaker Explained: Explains the concepts used to configure Pacemaker. Contains comprehensive and very detailed information for reference.

Fencing and Stonith: How to configure and use STONITH devices.

CRM Command Line Interface: Introduction to the crm command line tool.

Features some more useful documentation, such as:

Collocation Explained

Ordering Explained

5 Configuring and Managing Cluster Resources (GUI)

This chapter introduces the Pacemaker GUI and covers basic tasks needed when configuring and managing cluster resources: modifying global cluster options, creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover nodes and failback nodes, configuring resource monitoring, starting, cleaning up or removing resources, and migrating resources manually.

Support for the GUI is provided by two packages: The pacemaker-mgmt package contains the back-end for the GUI (the mgmtd daemon). It must be installed on all cluster nodes you want to connect to with the GUI. On any machine where you want to run the GUI, install the pacemaker-mgmt-client package.

NOTE: User Authentication

To log in to the cluster from the Pacemaker GUI, the respective user must be a member of the haclient group. The installation creates a Linux user named hacluster and adds the user to the haclient group.

Before using the Pacemaker GUI, either set a password for the hacluster user or create a new user which is a member of the haclient group. Do this on every node you will connect to with the Pacemaker GUI.
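For example, on each node you could either set a password for the existing user or create an additional user that belongs to the haclient group (the user name tux is only a placeholder):

passwd hacluster
useradd -m -G haclient tux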

5.1 Pacemaker GUI Overview

To start the Pacemaker GUI, enter crm_gui at the command line. To access the configuration and administration options, you need to log in to a cluster.

5.1.1 Logging in to a Cluster

To connect to the cluster, select Connection > Login. By default, the Server field shows the localhost's IP address and hacluster as User Name. Enter the user's password to continue.

Figure 5.1 Connecting to the Cluster

If you are running the Pacemaker GUI remotely, enter the IP address of a cluster node as Server. As User Name, you can also use any other user belonging to the haclient group to connect to the cluster.

5.1.2 Main Window

After being connected, the main window opens:

Figure 5.2 Pacemaker GUI - Main Window

NOTE: Available Functions in Pacemaker GUI

By default, users logged in as root or hacluster have full read-write access to all cluster configuration tasks. However, Access Control Lists (page 195) can be used to define fine-grained access permissions.

If ACLs are enabled in the CRM, the available functions in the Pacemaker GUI depend on the user role and access permission assigned to you.

To view or modify cluster components like the CRM, resources, nodes or constraints, select the respective subentry of the Configuration category in the left pane and use the options that become available in the right pane.

Additionally, the Pacemaker GUI lets you easily view, edit, import and export XML fragments of the CIB for the following subitems: Resource Defaults, Operation Defaults, Nodes, Resources, and Constraints. Select any of the Configuration subitems and select Show > XML Mode in the upper right corner of the window.

If you have already configured your resources, click the Management category in the left pane to show the status of your cluster and its resources. This view also allows you to set nodes to standby and to modify the management status of nodes (if they are currently managed by the cluster or not). To access the main functions for resources (starting, stopping, cleaning up or migrating resources), select the resource in the right pane and use the icons in the toolbar. Alternatively, right-click the resource and select the respective menu item from the context menu.

The Pacemaker GUI also allows you to switch between different view modes, influencing the behavior of the software and hiding or showing certain aspects:

Simple Mode
Lets you add resources in a wizard-like mode. When creating and modifying resources, shows the frequently-used tabs for sub-objects, allowing you to directly add objects of that type via the tab.

Allows you to view and change all available global cluster options by selecting the CRM Config entry in the left pane. The right pane then shows the values that are currently set. If no specific value is set for an option, it shows the default values instead.

Expert Mode
Lets you add resources in either a wizard-like mode or via dialog windows. When creating and modifying resources, it only shows the corresponding tab if a particular type of sub-object already exists in CIB. When adding a new sub-object, you will be prompted to select the object type, thus allowing you to add all supported types of sub-objects.

When selecting the CRM Config entry in the left pane, it only shows the values of global cluster options that have been actually set. It hides all cluster options that will automatically use the defaults (because no values have been set). In this mode, the global cluster options can only be modified by using the individual configuration dialogs.

Hack Mode
Has the same functions as the expert mode. Allows you to add additional attribute sets that include specific rules to make your configuration more dynamic. For example, you can make a resource have different instance attributes depending on the node it is hosted on. Furthermore, you can add a time-based rule for a meta attribute set to determine when the attributes take effect.

The window's status bar also shows the currently active mode.

The following sections guide you through the main tasks you need to execute when configuring cluster options and resources and show you how to administer the resources with the Pacemaker GUI. Where not stated otherwise, the step-by-step instructions reflect the procedure as executed in the simple mode.

5.2 Configuring Global Cluster Options

Global cluster options control how the cluster behaves when confronted with certain situations. They are grouped into sets and can be viewed and modified with cluster management tools like the Pacemaker GUI and the crm shell. The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup:

Option no-quorum-policy (page 50)

Option stonith-enabled (page 51)

Procedure 5.1 Modifying Global Cluster Options

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).

2 Select View > Simple Mode.

3 In the left pane, select CRM Config to view the global cluster options and their current values.

4 Depending on your cluster requirements, set No Quorum Policy to the appropriate value.

5 If you need to disable fencing for any reason, deselect stonith-enabled.

IMPORTANT: No Support Without STONITH

A cluster without STONITH enabled is not supported.

6 Confirm your changes with Apply.

You can switch back to the default values for all options at any time by selecting CRM Config in the left pane and clicking Default.

5.3 Configuring Cluster Resources

As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times.

For an overview of resource types you can create, refer to Section 4.2.3, Types of Resources (page 54).

5.3.1 Creating Simple Cluster Resources

To create the most basic type of a resource, proceed as follows:

Procedure 5.2 Adding Primitive Resources

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 In the left pane, select Resources and click Add > Primitive.
3 In the next dialog, set the following parameters for the resource:
3a Enter a unique ID for the resource.
3b From the Class list, select the resource agent class you want to use for that resource: heartbeat, lsb, ocf or stonith. For more information, see Section 4.2.2, Supported Resource Agent Classes (page 52).
3c If you selected ocf as class, also specify the Provider of your OCF resource agent. The OCF specification allows multiple vendors to supply the same resource agent.

3d From the Type list, select the resource agent you want to use (for example, IPaddr or Filesystem). A short description for this resource agent is displayed below. The selection you get in the Type list depends on the Class (and for OCF resources also on the Provider) you have chosen.
3e Below Options, set the Initial state of resource.
3f Activate Add monitor operation if you want the cluster to monitor if the resource is still healthy.
4 Click Forward. The next window shows a summary of the parameters that you have already defined for that resource. All required Instance Attributes for that resource are listed. You need to edit them in order to set them to appropriate values. You may also need to add more attributes, depending on your deployment and settings. For details how to do so, refer to Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82).
5 If all parameters are set according to your wishes, click Apply to finish the configuration of that resource. The configuration dialog is closed and the main window shows the newly added resource.
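The same kind of primitive can also be created on the command line. The following is only a rough sketch in crm shell syntax, using a hypothetical resource ID and IP address; adjust class, provider, type, and parameters to your environment:

  # hypothetical example: a virtual IP address resource with a monitor operation
  crm configure primitive my-ip ocf:heartbeat:IPaddr \
       params ip=192.168.1.100 \
       op monitor interval=10s timeout=20s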

94 During or after creation of a resource, you can add or modify the following parameters for resources: Instance attributes they determine which instance of a service the resource controls. For more information, refer to Section 4.2.7, Instance Attributes (Parameters) (page 60). Meta attributes they tell the CRM how to treat a specific resource. For more information, refer to Section 4.2.6, Resource Options (Meta Attributes) (page 59). Operations they are needed for resource monitoring. For more information, refer to Section 4.2.8, Resource Operations (page 62). Procedure 5.3 Adding or Modifying Meta and Instance Attributes 1 In the Pacemaker GUI main window, click Resources in the left pane to see the resources already configured for the cluster. 2 In the right pane, select the resource to modify and click Edit (or double-click the resource). The next window shows the basic resource parameters and the Meta Attributes, Instance Attributes or Operations already defined for that resource. 3 To add a new meta attribute or instance attribute, select the respective tab and click Add. 4 Select the Name of the attribute you want to add. A short Description is displayed. 5 If needed, specify an attribute Value. Otherwise the default value of that attribute will be used. 82 High Availability Guide

95 6 Click OK to confirm your changes. The newly added or modified attribute appears on the tab. 7 If all parameters are set according to your wishes, click OK to finish the configuration of that resource. The configuration dialog is closed and the main window shows the modified resource. TIP: XML Source Code for Resources The Pacemaker GUI allows you to view the XML fragments that are generated from the parameters that you have defined. For individual resources, select Show > XML Mode in the top right corner of the resource configuration dialog. To access the XML representation of all resources that you have configured, click Resources in the left pane and then select Show > XML Mode in the upper right corner of the main window. The editor displaying the XML code allows you to Import or Export the XML elements or to manually edit the XML code Creating STONITH Resources IMPORTANT: No Support Without STONITH A cluster without STONITH running is not supported. By default, the global cluster option stonith-enabled is set to true: If no STONITH resources have been defined, the cluster will refuse to start any resources. To complete STONITH setup, you need to configure one or more STONITH resources. While they are configured similar to other resources, the behavior of STONITH resources is different in some respects. For details refer to Section 9.3, STONITH Configuration (page 184). Procedure 5.4 Adding a STONITH Resource 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the left pane, select Resources and click Add > Primitive. Configuring and Managing Cluster Resources (GUI) 83

96 3 In the next dialog, set the following parameters for the resource: 3a Enter a unique ID for the resource. 3b From the Class list, select the resource agent class stonith. 3c From the Type list, select the STONITH plug-in for controlling your STONITH device. A short description for this plug-in is displayed below. 3d Below Options, set the Initial state of resource. 3e Activate Add monitor operation if you want the cluster to monitor the fencing device. For more information, refer to Section 9.4, Monitoring Fencing Devices (page 189). 4 Click Forward. The next window shows a summary of the parameters that you have already defined for that resource. All required Instance Attributes for the selected STONITH plug-in are listed. You need to edit them in order to set them to appropriate values. You may also need to add more attributes or monitor operations, depending on your deployment and settings. For details how to do so, refer to Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82) and Section 5.3.8, Configuring Resource Monitoring (page 96). 5 If all parameters are set according to your wishes, click Apply to finish the configuration of that resource. The configuration dialog closes and the main window shows the newly added resource. To complete your fencing configuration, add constraints or use clones or both. For more details, refer to Chapter 9, Fencing and STONITH (page 181) Using Resource Templates If you want to create several resources with similar configurations, defining a resource template is the easiest way. Once defined, it can be referenced in primitives or in certain types of constraints. For detailed information about function and use of resource templates, refer to Section 4.4.3, Resource Templates and Constraints (page 67). 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 84 High Availability Guide

97 2 In the left pane, select Resources and click Add > Template. 3 Enter a unique ID for the template. 4 Specify the resource template as you would specify a primitive. Follow Procedure 5.2: Adding Primitive Resources, starting with Step 3b (page 80). 5 If all parameters are set according to your wishes, click Apply to finish the configuration of the resource template. The configuration dialog is closed and the main window shows the newly added resource template. Procedure 5.5 Referencing Resource Templates 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 To reference the newly created resource template in a primitive, follow these steps: 2a In the left pane, select Resources and click Add > Primitive. 2b Enter a unique ID and specify Class, Provider, and Type. 2c Select the Template to reference. 2d If you want to set specific instance attributes, operations or meta attributes that deviate from the template, continue to configure the resource as described in Procedure 5.2, Adding Primitive Resources (page 80). 3 To reference the newly created resource template in colocational or order constraints: 3a Configure the constraints as described in Procedure 5.7, Adding or Modifying Colocational Constraints (page 87) or Procedure 5.8, Adding or Modifying Ordering Constraints (page 88), respectively. 3b For colocation constraints, the Resources drop-down list will show the IDs of all resources and resource templates that have been configured. From there, select the template to reference. Configuring and Managing Cluster Resources (GUI) 85

98 3c Likewise, for ordering constraints, the First and Then drop-down lists will show both resources and resource templates. From there, select the template to reference Configuring Resource Constraints Having all the resources configured is only part of the job. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. Resource constraints let you specify which cluster nodes resources can run on, what order resources will load, and what other resources a specific resource is dependent on. For an overview which types of constraints are available, refer to Section 4.4.1, Types of Constraints (page 66). When defining constraints, you also need to specify scores. For more information about scores and their implications in the cluster, see Section 4.4.2, Scores and Infinity (page 67). Learn how to create the different types of the constraints in the following procedures. Procedure 5.6 Adding or Modifying Locational Constraints 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster. 3 In the left pane, select Constraints and click Add. 4 Select Resource Location and click OK. 5 Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog. 6 Select the Resource for which to define the constraint. The list shows the IDs of all resources that have been configured for the cluster. 7 Set the Score for the constraint. Positive values indicate the resource can run on the Node you specify below. Negative values mean it should not run on this node. 86 High Availability Guide

99 Setting the score to INFINITY forces the resource to run on the node. Setting it to -INFINITY means the resources must not run on the node. 8 Select the Node for the constraint. 9 If you leave the Node and the Score field empty, you can also add rules by clicking Add > Rule. To add a lifetime, just click Add > Lifetime. 10 If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint. Procedure 5.7 Adding or Modifying Colocational Constraints 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster. 3 In the left pane, select Constraints and click Add. 4 Select Resource Colocation and click OK. 5 Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog. Configuring and Managing Cluster Resources (GUI) 87

100 6 Select the Resource which is the colocation source. The list shows the IDs of all resources and resource templates that have been configured for the cluster. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. 7 If you leave both the Resource and the With Resource field empty, you can also add a resource set by clicking Add > Resource Set. To add a lifetime, just click Add > Lifetime. 8 In With Resource, define the colocation target. The cluster will decide where to put this resource first and then decide where to put the resource in the Resource field. 9 Define a Score to determine the location relationship between both resources. Positive values indicate the resources should run on the same node. Negative values indicate the resources should not run on the same node. Setting the score to INFINITY forces the resources to run on the same node. Setting it to -INFINITY means the resources must not run on the same node. The score will be combined with other factors to decide where to put the resource. 10 If needed, specify further parameters, like Resource Role. Depending on the parameters and options you choose, a short Description explains the effect of the colocational constraint you are configuring. 11 If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint. Procedure 5.8 Adding or Modifying Ordering Constraints 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster. 3 In the left pane, select Constraints and click Add. 4 Select Resource Order and click OK. 88 High Availability Guide

101 5 Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog. 6 With First, define the resource that must be started before the resource specified with Then is allowed to. 7 With Then define the resource that will start after the First resource. Depending on the parameters and options you choose, a short Description explains the effect of the ordering constraint you are configuring. 8 If needed, define further parameters, for example: 8a Specify a Score. If greater than zero, the constraint is mandatory, otherwise it is only a suggestion. The default value is INFINITY. 8b Specify a value for Symmetrical. If true, the resources are stopped in the reverse order. The default value is true. 9 If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint. You can access and modify all constraints that you have configured in the Constraints view of the Pacemaker GUI. Configuring and Managing Cluster Resources (GUI) 89
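For reference, the three constraint types described above can also be written in crm shell syntax. The following sketch uses hypothetical resource and node IDs; scores and names must be adapted to your setup:

  # location: prefer to run my-ip on node1
  crm configure location loc-ip-node1 my-ip 100: node1
  # colocation: keep my-web on the same node as my-ip
  crm configure colocation col-web-with-ip inf: my-web my-ip
  # order: start my-ip before my-web
  crm configure order ord-ip-before-web inf: my-ip my-web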

Figure 5.3 Pacemaker GUI - Constraints

5.3.5 Specifying Resource Failover Nodes

A resource will be automatically restarted if it fails. If that cannot be achieved on the current node, or it fails N times on the current node, it will try to fail over to another node. You can define a number of failures for resources (a migration-threshold), after which they will migrate to a new node. If you have more than two nodes in your cluster, the node a particular resource fails over to is chosen by the High Availability software. However, you can specify the node a resource will fail over to by proceeding as follows:

1 Configure a location constraint for that resource as described in Procedure 5.6, Adding or Modifying Locational Constraints (page 86).
2 Add the migration-threshold meta attribute to that resource as described in Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82) and enter a Value for the migration-threshold. The value should be positive and less than INFINITY.
3 If you want to automatically expire the failcount for a resource, add the failure-timeout meta attribute to that resource as described in Procedure 5.3,

Adding or Modifying Meta and Instance Attributes (page 82) and enter a Value for the failure-timeout.
4 If you want to specify additional failover nodes with preferences for a resource, create additional location constraints.

For an example of the process flow in the cluster regarding migration thresholds and failcounts, see Example 4.2, Migration Threshold Process Flow (page 69).

Instead of letting the failcount for a resource expire automatically, you can also clean up failcounts for a resource manually at any time. Refer to Section 5.4.2, Cleaning Up Resources (page 104) for the details.

5.3.6 Specifying Resource Failback Nodes (Resource Stickiness)

A resource might fail back to its original node when that node is back online and in the cluster. If you want to prevent a resource from failing back to the node it was running on prior to failover, or if you want to specify a different node for the resource to fail back to, you must change its resource stickiness value. You can either specify resource stickiness when you are creating a resource, or afterwards.

For the implications of different resource stickiness values, refer to Section 4.4.5, Failback Nodes (page 69).
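To give an idea of how these settings end up in the configuration, the following crm shell sketch sets the failover and failback meta attributes discussed above on a hypothetical, already existing resource named my-ip; the values are illustrative only:

  # migrate after 3 failures, expire the failcount after 60s, prefer to stay put
  crm resource meta my-ip set migration-threshold 3
  crm resource meta my-ip set failure-timeout 60s
  crm resource meta my-ip set resource-stickiness 100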

Procedure 5.9 Specifying Resource Stickiness

1 Add the resource-stickiness meta attribute to the resource as described in Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82).
2 As Value for the resource-stickiness, specify a value between -INFINITY and INFINITY.

5.3.7 Configuring Placement of Resources Based on Load Impact

Not all resources are equal. Some, such as Xen guests, require that the node hosting them meets their capacity requirements. If resources are placed such that their combined needs exceed the provided capacity, the resources diminish in performance (or even fail). To take this into account, the High Availability Extension allows you to specify the following parameters:

1. The capacity a certain node provides.
2. The capacity a certain resource requires.
3. An overall strategy for placement of resources.

Utilization attributes are used to configure both the resource's requirements and the capacity a node provides. The High Availability Extension now also provides means to detect and configure both node capacity and resource requirements automatically. For more details and a configuration example, refer to Section 4.4.6, Placing Resources Based on Their Load Impact (page 70).

To manually configure the resource's requirements and the capacity a node provides, proceed as described in Procedure 5.10, Adding or Modifying Utilization Attributes (page 93). You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs.

Procedure 5.10 Adding or Modifying Utilization Attributes

In the following example, we assume that you already have a basic configuration of cluster nodes and resources and now additionally want to configure the capacities a certain node provides and the capacity a certain resource requires. The procedure for adding utilization attributes is basically the same in both cases and only differs in Step 2 (page 93) and Step 3 (page 93).

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 To specify the capacity a node provides:
2a In the left pane, click Node.
2b In the right pane, select the node whose capacity you want to configure and click Edit.
3 To specify the capacity a resource requires:
3a In the left pane, click Resources.
3b In the right pane, select the resource whose capacity you want to configure and click Edit.
4 Select the Utilization tab and click Add to add a utilization attribute.

106 5 Enter a Name for the new attribute. You can name the utilization attributes according to your preferences. 6 Enter a Value for the attribute and click OK. The attribute value must be an integer. 7 If you need more utilization attributes, repeat Step 5 (page 94) to Step 6 (page 94). The Utilization tab shows a summary of the utilization attributes that you have already defined for that node or resource. 8 If all parameters are set according to your wishes, click OK to close the configuration dialog. Figure 5.4, Example Configuration for Node Capacity (page 94) shows the configuration of a node which would provide 8 CPU units and 16 GB of memory to resources running on that node: Figure 5.4 Example Configuration for Node Capacity An example configuration for a resource requiring 4096 memory units and 4 of the CPU units of a node would look as follows: 94 High Availability Guide

Figure 5.5 Example Configuration for Resource Capacity

After (manual or automatic) configuration of the capacities your nodes provide and the capacities your resources require, you need to set the placement strategy in the global cluster options, otherwise the capacity configurations have no effect. Several strategies are available to schedule the load: for example, you can concentrate it on as few nodes as possible, or balance it evenly over all available nodes. For more information, refer to Section 4.4.6, Placing Resources Based on Their Load Impact (page 70).

Procedure 5.11 Setting the Placement Strategy

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 Select View > Simple Mode.
3 In the left pane, select CRM Config to view the global cluster options and their current values.
4 Depending on your requirements, set Placement Strategy to the appropriate value.
5 If you need to disable fencing for any reason, deselect Stonith Enabled.
6 Confirm your changes with Apply.
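As a rough orientation, node and resource capacities like those shown in Figure 5.4 and Figure 5.5, together with a placement strategy, could be expressed in the crm shell along the following lines; node name, resource name, parameters, and units are illustrative only:

  # a node offering 8 CPU units and 16 GB of memory (given here in MB)
  crm configure node node1 utilization cpu=8 memory=16384
  # a resource requiring 4 CPU units and 4096 memory units
  crm configure primitive xen1 ocf:heartbeat:Xen \
       params xmfile=/etc/xen/xen1 \
       utilization cpu=4 memory=4096
  # activate utilization-based placement
  crm configure property placement-strategy=balanced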

108 5.3.8 Configuring Resource Monitoring Although the High Availability Extension can detect a node failure, it also has the ability to detect when an individual resource on a node has failed. If you want to ensure that a resource is running, you must configure resource monitoring for it. Resource monitoring consists of specifying a timeout and/or start delay value, and an interval. The interval tells the CRM how often it should check the resource status. You can also set particular parameters, such as Timeout for start or stop operations. Procedure 5.12 Adding or Modifying Monitor Operations 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the Pacemaker GUI main window, click Resources in the left pane to see the resources already configured for the cluster. 3 In the right pane, select the resource to modify and click Edit. The next window shows the basic resource parameters and the meta attributes, instance attributes and operations already defined for that resource. 4 To add a new monitor operation, select the respective tab and click Add. To modify an existing operation, select the respective entry and click Edit. 5 In Name, select the action to perform, for example monitor, start, or stop. The parameters shown below depend on the selection you make here. 96 High Availability Guide

109 6 In the Timeout field, enter a value in seconds. After the specified timeout period, the operation will be treated as failed. The PE will decide what to do or execute what you specified in the On Fail field of the monitor operation. 7 If needed, expand the Optional section and add parameters, like On Fail (what to do if this action ever fails?) or Requires (what conditions need to be satisfied before this action occurs?). 8 If all parameters are set according to your wishes, click OK to finish the configuration of that resource. The configuration dialog is closed and the main window shows the modified resource. For the processes which take place if the resource monitor detects a failure, refer to Section 4.3, Resource Monitoring (page 65). To view resource failures in the Pacemaker GUI, click Management in the left pane, then select the resource whose details you want to see in the right pane. For a resource that has failed, the Fail Count and last failure of the resource is shown in the middle of the right pane (below the Migration threshold entry). Figure 5.6 Viewing a Resource's Failcount Configuring and Managing Cluster Resources (GUI) 97
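In crm shell syntax, a monitor operation like the one configured above is attached to the resource definition. A minimal sketch with a hypothetical Apache resource might look like this:

  crm configure primitive my-web ocf:heartbeat:apache \
       params configfile=/etc/apache2/httpd.conf \
       op monitor interval=60s timeout=40s on-fail=restart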

110 5.3.9 Configuring a Cluster Resource Group Some cluster resources are dependent on other components or resources, and require that each component or resource starts in a specific order and runs together on the same server. To simplify this configuration we support the concept of groups. For an example of a resource group and more information about groups and their properties, refer to Section , Groups (page 56). NOTE: Empty Groups Groups must contain at least one resource, otherwise the configuration is not valid. Procedure 5.13 Adding a Resource Group 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the left pane, select Resources and click Add > Group. 3 Enter a unique ID for the group. 4 Below Options, set the Initial state of resource and click Forward. 5 In the next step, you can add primitives as sub-resources for the group. These are created similar as described in Procedure 5.2, Adding Primitive Resources (page 80). 6 If all parameters are set according to your wishes, click Apply to finish the configuration of the primitive. 7 In the next window, you can continue adding sub-resources for the group by choosing Primitive again and clicking OK. When you do not want to add more primitives to the group, click Cancel instead. The next window shows a summary of the parameters that you have already defined for that group. The Meta Attributes and Primitives of the group are listed. The position of the resources in the Primitive tab represents the order in which the resources are started in the cluster. 98 High Availability Guide

111 8 As the order of resources in a group is important, use the Up and Down buttons to sort the Primitives in the group. 9 If all parameters are set according to your wishes, click OK to finish the configuration of that group. The configuration dialog is closed and the main window shows the newly created or modified group. Figure 5.7 Pacemaker GUI - Groups Let us assume you already have created a resource group as explained in Procedure 5.13, Adding a Resource Group (page 98). The following procedure shows you how to modify the group to match Example 4.1, Resource Group for a Web Server (page 56). Configuring and Managing Cluster Resources (GUI) 99
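For comparison, a Web server group like the one in Example 4.1 would look roughly as follows in the crm shell, assuming the three member primitives (IP address, file system, Web server) already exist under hypothetical IDs:

  # members are started in the listed order and stopped in reverse order
  crm configure group g-web my_ipaddress my_filesystem my_webserver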

Procedure 5.14 Adding Resources to an Existing Group

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 In the left pane, switch to the Resources view and in the right pane, select the group to modify and click Edit. The next window shows the basic group parameters and the meta attributes and primitives already defined for that resource.
3 Click the Primitives tab and click Add.
4 In the next dialog, set the following parameters to add an IP address as sub-resource of the group:
4a Enter a unique ID (for example, my_ipaddress).
4b From the Class list, select ocf as resource agent class.
4c As Provider of your OCF resource agent, select heartbeat.
4d From the Type list, select IPaddr as resource agent.
4e Click Forward.
4f In the Instance Attribute tab, select the IP entry and click Edit (or double-click the IP entry).
4g As Value, enter the desired IP address.
4h Click OK and Apply. The group configuration dialog shows the newly added primitive.
5 Add the next sub-resources (file system and Web server) by clicking Add again.
6 Set the respective parameters for each of the sub-resources similar to Step 4a (page 100) to Step 4h (page 100), until you have configured all sub-resources for the group.

As we already configured the sub-resources in the order in which they need to be started in the cluster, the order on the Primitives tab is already correct.

7 In case you need to change the resource order for a group, use the Up and Down buttons to sort the resources on the Primitives tab.
8 To remove a resource from the group, select the resource on the Primitives tab and click Remove.
9 Click OK to finish the configuration of that group. The configuration dialog is closed and the main window shows the modified group.

5.3.10 Configuring a Clone Resource

You may want certain resources to run simultaneously on multiple nodes in your cluster. To do this you must configure a resource as a clone. Examples of resources that might be configured as clones include STONITH and cluster file systems like OCFS2. You can clone any resource, provided this is supported by the resource's Resource Agent. Clone resources may even be configured differently depending on which nodes they are hosted on.

For an overview of which types of resource clones are available, refer to the Clones section (page 57).
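In crm shell terms, cloning an existing resource is a one-liner. The following sketch clones a hypothetical, already defined STONITH primitive so that instances can run on multiple nodes:

  crm configure clone cl-stonith my-stonith-device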

114 Procedure 5.15 Adding or Modifying Clones 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 In the left pane, select Resources and click Add > Clone. 3 Enter a unique ID for the clone. 4 Below Options, set the Initial state of resource. 5 Activate the respective options you want to set for your clone and click Forward. 6 In the next step, you can either add a Primitive or a Group as sub-resources for the clone. These are created similar as described in Procedure 5.2, Adding Primitive Resources (page 80) or Procedure 5.13, Adding a Resource Group (page 98). 7 If all parameters in the clone configuration dialog are set according to your wishes, click Apply to finish the configuration of the clone. 5.4 Managing Cluster Resources Apart from the possibility to configure your cluster resources, the Pacemaker GUI also allows you to manage existing resources. To switch to a management view and to access the available options, click Management in the left pane. 102 High Availability Guide

115 Figure 5.8 Pacemaker GUI - Management Starting Resources Before you start a cluster resource, make sure it is set up correctly. For example, if you want to use an Apache server as a cluster resource, set up the Apache server first and complete the Apache configuration before starting the respective resource in your cluster. NOTE: Do Not Touch Services Managed by the Cluster When managing a resource with the High Availability Extension, the same resource must not be started or stopped otherwise (outside of the cluster, for example manually or on boot or reboot). The High Availability Extension software is responsible for all service start or stop actions. However, if you want to check if the service is configured properly, start it manually, but make sure that it is stopped again before High Availability takes over. Configuring and Managing Cluster Resources (GUI) 103

116 For interventions in resources that are currently managed by the cluster, set the resource to unmanaged mode first as described in Section 5.4.5, Changing Management Mode of Resources (page 108). During creation of a resource with the Pacemaker GUI, you can set the resource's initial state with the target-role meta attribute. If its value has been set to stopped, the resource does not start automatically after being created. Procedure 5.16 Starting A New Resource 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 Click Management in the left pane. 3 In the right pane, right-click the resource and select Start from the context menu (or use the Start Resource icon in the toolbar) Cleaning Up Resources A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. View a resource's failcount with the Pacemaker GUI by clicking Management in the left pane, then selecting the resource in the right pane. If a resource has failed, its Fail Count is shown in the middle of the right pane (below the Migration Threshold entry). If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource as soon as the number of failures has reached the migration threshold. A resource's failcount can either be reset automatically (by setting a failure-timeout option for the resource) or you can reset it manually as described below. Procedure 5.17 Cleaning Up A Resource 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 104 High Availability Guide

117 2 Click Management in the left pane. 3 In the right pane, right-click the respective resource and select Cleanup Resource from the context menu (or use the Cleanup Resource icon in the toolbar). This executes the commands crm_resource -C and crm_failcount -D for the specified resource on the specified node. For more information, see the man pages of crm_resource and crm_failcount Removing Cluster Resources If you need to remove a resource from the cluster, follow the procedure below to avoid configuration errors: NOTE: Removing Referenced Resources Cluster resources cannot be removed if their ID is referenced by any constraint. If you cannot delete a resource, check where the resource ID is referenced and remove the resource from the constraint first. Procedure 5.18 Removing a Cluster Resource 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 Click Management in the left pane. 3 Select the respective resource in the right pane. 4 Clean up the resource on all nodes as described in Procedure 5.17, Cleaning Up A Resource (page 104). 5 Stop the resource. 6 Remove all constraints that relate to the resource, otherwise removing the resource will not be possible. Configuring and Managing Cluster Resources (GUI) 105
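The same cleanup can be done directly from the crm shell, which wraps the crm_resource and crm_failcount calls mentioned above; resource and node names here are hypothetical:

  # reset the failcount and clean up the resource state on node1
  crm resource cleanup my-ip node1
  # check the current failcount afterwards
  crm resource failcount my-ip show node1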

5.4.4 Migrating Cluster Resources

As mentioned in Section 5.3.5, Specifying Resource Failover Nodes (page 90), the cluster will fail over (migrate) resources automatically in case of software or hardware failures, according to certain parameters you can define (for example, migration threshold or resource stickiness). Apart from that, you can also manually migrate a resource to another node in the cluster.

Procedure 5.19 Manually Migrating a Resource

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 Click Management in the left pane.
3 Right-click the respective resource in the right pane and select Migrate Resource.
4 In the new window, select the node to which to move the resource in To Node. This creates a location constraint with an INFINITY score for the destination node.
5 If you want to migrate the resource only temporarily, activate Duration and enter the time frame for which the resource should migrate to the new node. After the expiration of the duration, the resource can move back to its original location or it may stay where it is (depending on resource stickiness).
6 In cases where the resource cannot be migrated (if the resource's stickiness and constraint scores total more than INFINITY on the current node), activate the Force option. This forces the resource to move by creating a rule for the current location and a score of -INFINITY.

NOTE

This prevents the resource from running on this node until the constraint is removed with Clear Migrate Constraints or the duration expires.

7 Click OK to confirm the migration.

To allow a resource to move back again, proceed as follows:

Procedure 5.20 Clearing a Migration Constraint

1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76).
2 Click Management in the left pane.
3 Right-click the respective resource in the right pane and select Clear Migrate Constraints. This uses the crm_resource -U command. The resource can move back to its original location or it may stay where it is (depending on resource stickiness).

For more information, see the crm_resource man page or the Resource Migration section of Pacemaker Explained.
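The equivalent operations are also available in the crm shell; with a hypothetical resource and node this looks like:

  # move the resource away to node2 (creates the location constraint)
  crm resource migrate my-ip node2
  # allow it to move back by removing that constraint again
  crm resource unmigrate my-ip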

120 5.4.5 Changing Management Mode of Resources When a resource is being managed by the cluster, it must not be touched otherwise (outside of the cluster). For maintenance of individual resources, you can set the respective resources to an unmanaged mode that allows you to modify the resource outside of the cluster. Procedure 5.21 Changing Management Mode of Resources 1 Start the Pacemaker GUI and log in to the cluster as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 Click Management in the left pane. 3 Right-click the respective resource in the right pane and from the context menu, select Unmanage Resource. 4 After you have finished the maintenance task for that resource, right-click the respective resource again in the right pane and select Manage Resource. From this point on, the resource will be managed by the High Availability Extension software again. 108 High Availability Guide
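The same switch between managed and unmanaged mode is available on the command line; for a hypothetical resource:

  # take the resource out of cluster control for maintenance
  crm resource unmanage my-ip
  # hand it back to the cluster afterwards
  crm resource manage my-ip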

121 Configuring and Managing Cluster Resources (Web Interface) 6 In addition to the crm command line tool and the Pacemaker GUI, the High Availability Extension also comes with the HA Web Konsole (Hawk), a Web-based user interface for management tasks. It allows you to monitor and administer your Linux cluster from non-linux machines as well. Furthermore, it is the ideal solution in case your system only provides a minimal graphical user interface. This chapter introduces Hawk and covers basic tasks for configuring and managing cluster resources: modifying global cluster options, creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover nodes and failback nodes, configuring resource monitoring, starting, cleaning up or removing resources, and migrating resources manually. For detailed analysis of the cluster status, Hawk generates a cluster report (hb_report). You can view the cluster history or explore potential failure scenarios with the simulator. The Web interface is included in the hawk package. It must be installed on all cluster nodes you want to connect to with Hawk. On the machine from which you want to access a cluster node using Hawk, you only need a (graphical) Web browser with JavaScript and cookies enabled to establish the connection. NOTE: User Authentication To log in to the cluster from Hawk, the respective user must be a member of the haclient group. The installation creates a Linux user named hacluster and adds the user to the haclient group. Configuring and Managing Cluster Resources (Web Interface) 109

Before using Hawk, either set a password for the hacluster user or create a new user which is a member of the haclient group. Do this on every node you will connect to with Hawk.

6.1 Hawk Overview

Learn how to start Hawk, how to log in to the cluster, and how to get to know the Hawk basics.

6.1.1 Starting Hawk and Logging In

Procedure 6.1 Starting Hawk

To use Hawk, the respective Web service must be started on the node that you want to connect to via the Web interface. For communication, the standard HTTPS protocol and port 7630 is used.

1 On the node you want to connect to, open a shell and log in as root.
2 Check the status of the service by entering
   rchawk status
3 If the service is not running, start it with
   rchawk start
   If you want Hawk to start automatically at boot time, execute the following command:
   chkconfig hawk on

To log in:

1 On any machine, start a Web browser and make sure that JavaScript and cookies are enabled.
2 Point it at the IP address or hostname of any cluster node running the Hawk Web service, or the address of any IPaddr(2) resource that you may have configured. As the service uses HTTPS on port 7630, the URL has the form https://HOSTNAME-OR-IP-ADDRESS:7630/.

123 NOTE: Certificate Warning Depending on your browser and browser options, you may get a certificate warning when trying to access the URL for the first time. This is because Hawk uses a self-signed certificate that is not considered trustworthy per default. To proceed anyway, you can add an exception in the browser to bypass the warning. To avoid the warning in the first place, the self-signed certificate can also be replaced with a certificate signed by an official Certificate Authority. For information on how to do so, refer to Replacing the Self-Signed Certificate (page 150). 3 On the Hawk login screen, enter the Username and Password of the hacluster user (or of any other user that is a member of the haclient group) and click Log In. The Cluster Status screen appears, displaying the status of your cluster nodes and resources similar to the output of the crm_mon Main Screen: Cluster Status After logging in, Hawk displays the Cluster Status screen. It shows a summary with the most important global cluster parameters and the status of your cluster nodes and resources. The following color code is used for status display of nodes and resources: Hawk Color Code Green: OK. For example, the resource is running or the node is online. Red: Bad, unclean. For example, the resource has failed or the node was not shut down cleanly. Yellow: In transition. For example, the node is currently being shut down. Gray: Not running, but the cluster expects it to be running. For example, nodes that the administrator has stopped or put into standby mode. Also nodes that are offline are displayed in gray (if they have been shut down cleanly). Configuring and Managing Cluster Resources (Web Interface) 111

124 If a resource has failed, a failure message with the details is shown in red at the top of the screen. Figure 6.1 Hawk Cluster Status (Summary View) The Cluster Status screen refreshes itself in near real-time. Choose between the following views: Summary View Shows the most important global cluster parameters and the status of your cluster nodes and resources at the same time. If your setup includes Multi-Site Clusters (Geo Clusters) (page 144), the summary view also shows tickets. To view details about all elements belonging to a certain category (tickets, nodes, or resources), click the category title, which is marked as link. Otherwise click the individual elements for details. Tree View Presents an expandable view of the most important global cluster parameters and the status of your cluster nodes and resources. Click the arrows to expand or collapse the elements belonging to the respective category. In contrast to the Summary View this view not only shows the IDs and status of resources but also the type (for example, primitive, clone, or group). Table View This view is especially useful for larger clusters, because it shows in a concise way which resources are currently running on which node. Inactive nodes or resources are also displayed. 112 High Availability Guide

125 To perform basic operator tasks on nodes and resources (like starting or stopping resources, bringing nodes online, or viewing details), click the wrench icon next to the respective node or resource to access a context menu. For more complex tasks like configuring resources, constraints, or global cluster options, use the navigation bar at the left hand side. From there, you can also generate a cluster report (hb_report), view the cluster history, or explore potential failure scenarios with the simulator. NOTE: Available Functions in Hawk By default, users logged in as root or hacluster have full read-write access to all cluster configuration tasks. However, Access Control Lists (page 195) can be used to define fine-grained access permissions. If ACLs are enabled in the CRM, the available functions in Hawk depend on the user role and access permissions assigned to you. In addition, the following functions in Hawk can only be executed by the user hacluster: Generating a hb_report. Using the history explorer. Viewing recent events for nodes or resources. 6.2 Configuring Global Cluster Options Global cluster options control how the cluster behaves when confronted with certain situations. They are grouped into sets and can be viewed and modified with cluster management tools like Pacemaker GUI, Hawk, and crm shell. The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup: Option no-quorum-policy (page 50) Option stonith-enabled (page 51) Configuring and Managing Cluster Resources (Web Interface) 113

126 Procedure 6.2 Modifying Global Cluster Options 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Properties to view the global cluster options and their current values. 3 Depending on your cluster requirements, set no-quorum-policy to the appropriate value. 4 If you need to disable fencing for any reasons, deselect stonith-enabled. IMPORTANT: No Support Without STONITH A cluster without STONITH enabled is not supported. 5 To remove a cluster property, click the minus icon next to the property. 6 To add a new property, choose one from the drop-down list and click the plus icon. 7 Confirm your changes. Figure 6.2 Hawk Global Cluster Properties 114 High Availability Guide
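If you prefer the command line, the same cluster properties shown in Hawk can also be set with the crm shell; the values below are only examples and must match your cluster's requirements:

  crm configure property no-quorum-policy=stop
  crm configure property stonith-enabled=true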

127 6.3 Configuring Cluster Resources As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, mail servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times. For an overview of resource types you can create, refer to Section 4.2.3, Types of Resources (page 54). Apart from the basic specification of a resource (ID, class, provider, and type), you can add or modify the following parameters during or after creation of a resource: Instance attributes (parameters) determine which instance of a service the resource controls. For more information, refer to Section 4.2.7, Instance Attributes (Parameters) (page 60). When creating a resource, Hawk automatically shows any required parameters. Edit them to get a valid resource configuration. Meta attributes tell the CRM how to treat a specific resource. For more information, refer to Section 4.2.6, Resource Options (Meta Attributes) (page 59). When creating a resource, Hawk automatically lists the important meta attributes for that resource (for example, the target-role attribute that defines the initial state of a resource. By default, it is set to Stopped, so the resource will not start immediately). Operations are needed for resource monitoring. For more information, refer to Section 4.2.8, Resource Operations (page 62). When creating a resource, Hawk displays the most important resource operations (monitor, start, and stop). Configuring and Managing Cluster Resources (Web Interface) 115

6.3.1 Configuring Resources with the Setup Wizard

The High Availability Extension comes with a predefined set of resources for some frequently used cluster scenarios, for example, setting up a highly available Web server. Find the predefined sets in the hawk-templates package. You can also define your own wizard templates. Hawk provides a wizard that guides you through all configuration steps.

Procedure 6.3 Using the Setup Wizard

1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110).
2 In the left navigation bar, select Setup Wizard. The Cluster Setup Wizard shows a list of available resource sets. If you click an entry, Hawk displays a short help text about the resource set.
3 Select the resource set you want to configure and click Next.
4 Follow the instructions on the screen. If you need information about an option, click it to display a short help text in Hawk.

Figure 6.3 Hawk Setup Wizard

129 6.3.2 Creating Simple Cluster Resources To create the most basic type of a resource, proceed as follows: Procedure 6.4 Adding Primitive Resources 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources. It lists any resources that are already defined. 3 Select the Primitive category and click the plus icon. 4 Specify the resource: 4a Enter a unique Resource ID. 4b From the Class list, select the resource agent class you want to use for the resource: heartbeat, lsb, ocf or stonith. For more information, see Section 4.2.2, Supported Resource Agent Classes (page 52). 4c If you selected ocf as class, specify the Provider of your OCF resource agent. The OCF specification allows multiple vendors to supply the same resource agent. 4d From the Type list, select the resource agent you want to use (for example, IPaddr or Filesystem). A short description for this resource agent is displayed. The selection you get in the Type list depends on the Class (and for OCF resources also on the Provider) you have chosen. 5 Hawk automatically shows any required parameters for the resource plus an empty drop-down list that you can use to specify an additional parameter. To define Parameters (instance attributes) for the resource: 5a Enter values for each required parameter. A short help text is displayed as soon as you click the input field next to a parameter. Configuring and Managing Cluster Resources (Web Interface) 117

130 5b To completely remove a parameter, click the minus icon next to the parameter. 5c To add another parameter, click the blank drop-down list, select a parameter and enter a value for it. 6 Hawk automatically shows the most important resource Operations and proposes default values. If you do not modify any settings here, Hawk will add the proposed operations and their default values as soon as you confirm your changes. For details on how to modify, add or remove operations, refer to Procedure 6.13, Adding or Modifying Monitor Operations (page 130). 7 Hawk automatically lists the most important meta attributes for the resource, for example target-role. To modify or add Meta Attributes: 7a To set a (different) value for an attribute, select one from the drop-down list next to the attribute. 7b To completely remove a meta attribute, click the minus icon next to it. 7c To add another meta attribute, click the empty drop-down list and select an attribute. The default value for the attribute is displayed. If needed, change it as described above. 8 Click Create Resource to finish the configuration. A message at the top of the screen shows if the resource was successfully created or not. 118 High Availability Guide

131 Figure 6.4 Hawk Primitive Resource Creating STONITH Resources IMPORTANT: No Support Without STONITH A cluster without STONITH is not supported. By default, the global cluster option stonith-enabled is set to true: If no STONITH resources have been defined, the cluster will refuse to start any resources. Configure one or more STONITH resources to complete the STONITH setup. While they are configured similar to other resources, the behavior of STONITH resources is different in some respects. For details refer to Section 9.3, STONITH Configuration (page 184). Procedure 6.5 Adding a STONITH Resource 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources and lists all defined resources. 3 Select the Primitive category and click the plus icon. Configuring and Managing Cluster Resources (Web Interface) 119

132 4 Specify the resource: 4a Enter a unique Resource ID. 4b From the Class list, select the resource agent class stonith. 4c From the Type list, select the STONITH plug-in for controlling your STONITH device. A short description for this plug-in is displayed. 5 Hawk automatically shows the required Parameters for the resource. Enter values for each parameter. 6 Hawk displays the most important resource Operations and proposes default values. If you do not modify any settings here, Hawk will add the proposed operations and their default values as soon as you confirm. 7 Adopt the default Meta Attributes settings if there is no reason to change them. 8 Confirm your changes to create the STONITH resource. To complete your fencing configuration, add constraints, use clones or both. For more details, refer to Chapter 9, Fencing and STONITH (page 181) Using Resource Templates If you want to create lots of resources with similar configurations, defining a resource template is the easiest way. Once defined, it can be referenced in primitives or in certain types of constraints. For detailed information about function and use of resource templates, refer to Section 4.4.3, Resource Templates and Constraints (page 67). 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources plus a Template category. 3 Select the Template category and click the plus icon. 4 Enter a Template ID. 120 High Availability Guide

133 5 Specify the resource template as you would specify a primitive. Follow Procedure 6.4: Adding Primitive Resources, starting with Step 4b (page 117). 6 Click Create Resource to finish the configuration. A message at the top of the screen shows if the resource template was successfully created. Figure 6.5 Hawk Resource Template Procedure 6.6 Referencing Resource Templates 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 To reference the newly created resource template in a primitive, follow these steps: 2a In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources. It lists all defined resources. 2b Select the Primitive category and click the plus icon. 2c Enter a unique Resource ID. 2d Activate Use Template and, from the drop-down list, select the template to reference. 2e If needed, specify further Parameters, Operations, or Meta Attributes as described in Procedure 6.4, Adding Primitive Resources (page 117). Configuring and Managing Cluster Resources (Web Interface) 121

134 3 To reference the newly created resource template in colocational or order constraints, proceed as described in Procedure 6.8, Adding or Modifying Colocational or Order Constraints (page 123) Configuring Resource Constraints After you have configured all resources, specify how the cluster should handle them correctly. Resource constraints let you specify on which cluster nodes resources can run, in which order resources will be loaded, and what other resources a specific resource depends on. For an overview of available types of constraints, refer to Section 4.4.1, Types of Constraints (page 66). When defining constraints, you also need to specify scores. For more information on scores and their implications in the cluster, see Section 4.4.2, Scores and Infinity (page 67). Learn how to create the different types of constraints in the following procedures. Procedure 6.7 Adding or Modifying Location Constraints For location constraints, specify a constraint ID, resource, score and node: 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Constraints. The Constraints screen shows categories for all types of constraints. It lists all defined constraints. 3 To add a new Location constraint, click the plus icon in the respective category. To modify an existing constraint, click the wrench icon next to the constraint and select Edit Constraint. 4 Enter a unique Constraint ID. When modifying existing constraints, the ID is already defined. 5 Select the Resource for which to define the constraint. The list shows the IDs of all resources that have been configured for the cluster. 122 High Availability Guide

135 6 Set the Score for the constraint. Positive values indicate the resource can run on the Node you specify in the next step. Negative values mean it should not run on that node. Setting the score to INFINITY forces the resource to run on the node. Setting it to -INFINITY means the resources must not run on the node. 7 Select the Node for the constraint. 8 Click Create Constraint to finish the configuration. A message at the top of the screen shows if the constraint was successfully created. Figure 6.6 Hawk Location Constraint Procedure 6.8 Adding or Modifying Colocational or Order Constraints For both types of constraints specify a constraint ID and a score, then add resources to a dependency chain: 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Constraints. The Constraints screen shows categories for all types of constraints and lists all defined constraints. 3 To add a new Colocation or Order constraint, click the plus icon in the respective category. To modify an existing constraint, click the wrench icon next to the constraint and select Edit Constraint. Configuring and Managing Cluster Resources (Web Interface) 123

136 4 Enter a unique Constraint ID. When modifying existing constraints, the ID is already defined. 5 Define a Score. For colocation constraints, the score determines the location relationship between the resources. Positive values indicate the resources should run on the same node. Negative values indicate the resources should not run on the same node. Setting the score to INFINITY forces the resources to run on the same node. Setting it to -INFINITY means the resources must not run on the same node. The score will be combined with other factors to decide where to put the resource. For order constraints, the constraint is mandatory if the score is greater than zero, otherwise it is only a suggestion. The default value is INFINITY. 6 For order constraints, you can usually keep the option Symmetrical enabled. This specifies that resources are stopped in reverse order. 7 To define the resources for the constraint, follow these steps: 7a Select a resource from the list Add resource to constraint. The list shows the IDs of all resources and all resource templates configured for the cluster. 7b To add the selected resource, click the plus icon next to the list. A new list appears beneath. Select the next resource from the list. As both colocation and order constraints define a dependency between resources, you need at least two resources. 7c Select one of the remaining resources from the list Add resource to constraint. Click the plus icon to add the resource. Now you have two resources in a dependency chain. If you have defined an order constraint, the topmost resource will start first, then the second etc. Usually the resources will be stopped in reverse order. However, if you have defined a colocation constraint, the arrow icons between the resources reflect their dependency, but not their start order. As the topmost resource depends on the next resource and so on, the cluster will first decide where to put the last resource, then place the depending ones based on that 124 High Availability Guide

137 decision. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. 7d Add as many resources as needed for your colocation or order constraint. 7e If you want to swap the order of two resources, click the double arrow at the right hand side of the resources to swap the resources in the dependency chain. 8 If needed, specify further parameters for each resource, like the role (Master, Slave, Started, or Stopped). 9 Click Create Constraint to finish the configuration. A message at the top of the screen shows if the constraint was successfully created. Figure 6.7 Hawk Colocation Constraint As an alternative format for defining colocation or ordering constraints, you can use resource sets. They have the same ordering semantics as groups. Procedure 6.9 Using Resource Sets for Colocation or Order Constraints 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 Define colocation or order constraints as described in Procedure 6.8, Adding or Modifying Colocational or Order Constraints (page 123). Configuring and Managing Cluster Resources (Web Interface) 125

138 3 When you have added the resources to the dependency chain, you can put them into a resource set by clicking the chain icon at the right hand side. A resource set is visualized by a frame around the resources belonging to a set. 4 You can also add multiple resources to a resource set or create multiple resource sets. 5 To extract a resource from a resource set, click the scissors icon above the respective resource. The resource will be removed from the set and put back into the dependency chain at its original place. 6 Confirm your changes to finish the constraint configuration. For more information on configuring constraints and detailed background information about the basic concepts of ordering and colocation, refer to the documentation available at (page 73) and (page 74), respectively: Pacemaker Explained, chapter Resource Constraints Collocation Explained Ordering Explained 126 High Availability Guide
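If you prefer to work on the command line, the constraints that Hawk creates correspond to entries like the following in the crm shell. This is only a sketch with placeholder IDs (my-ip, my-web, node1); for the full syntax, refer to Chapter 7, Configuring and Managing Cluster Resources (Command Line) (page 151):

crm(live)configure# location loc-web-on-node1 my-web 100: node1
crm(live)configure# colocation col-web-with-ip inf: my-web my-ip
crm(live)configure# order ord-ip-before-web inf: my-ip my-web

The location constraint prefers node1 for my-web with a score of 100, the colocation constraint keeps my-web and my-ip on the same node, and the order constraint starts my-ip before my-web.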

139 Procedure 6.10 Removing Constraints 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Constraints. The Constraints screen shows categories for all types of constraints and lists all defined constraints. 3 Click the wrench icon next to a constraint and select Remove Constraint Specifying Resource Failover Nodes A resource will be automatically restarted if it fails. If that cannot be achieved on the current node, or it fails N times on the current node, it will try to fail over to another node. You can define a number of failures for resources (a migration-threshold), after which they will migrate to a new node. If you have more than two nodes in your cluster, the node to which a particular resource fails over is chosen by the High Availability software. You can specify a specific node to which a resource will fail over by proceeding as follows: 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 Configure a location constraint for the resource as described in Procedure 6.7, Adding or Modifying Location Constraints (page 122). 3 Add the migration-threshold meta attribute to the resource as described in Procedure 6.4: Adding Primitive Resources, Step 7 (page 118) and enter a Value for the migration-threshold. The value should be positive and less than INFINITY. 4 If you want to automatically expire the failcount for a resource, add the failure-timeout meta attribute to the resource as described in Procedure 6.4: Adding Primitive Resources, Step 7 (page 118) and enter a Value for the failuretimeout. 5 If you want to specify additional failover nodes with preferences for a resource, create additional location constraints. Configuring and Managing Cluster Resources (Web Interface) 127
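Behind the scenes, Hawk stores these settings as meta attributes of the resource. Expressed in crm shell syntax, the configuration could look roughly like the following sketch (the resource ID my-web, the node name node2, and all values are placeholders):

crm(live)configure# primitive my-web ocf:heartbeat:apache \
   meta migration-threshold=3 failure-timeout=120s
crm(live)configure# location loc-my-web-node2 my-web 50: node2

With these example values, the resource would be moved away from a node after three failures there, and its failcount would expire automatically after two minutes without further failures.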

140 The process flow regarding migration thresholds and failcounts is demonstrated in Example 4.2, Migration Threshold Process Flow (page 69). Instead of letting the failcount for a resource expire automatically, you can also clean up failcounts for a resource manually at any time. Refer to Section 6.4.2, Cleaning Up Resources (page 136) for details Specifying Resource Failback Nodes (Resource Stickiness) A resource may fail back to its original node when that node is back online and in the cluster. To prevent this or to specify a different node for the resource to fail back to, change the stickiness value of the resource. You can either specify the resource stickiness when creating it or afterwards. For the implications of different resource stickiness values, refer to Section 4.4.5, Failback Nodes (page 69). Procedure 6.11 Specifying Resource Stickiness 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 Add the resource-stickiness meta attribute to the resource as described in Procedure 6.4: Adding Primitive Resources, Step 7 (page 118). 3 Specify a value between -INFINITY and INFINITY for the resource-stickiness Configuring Placement of Resources Based on Load Impact Not all resources are equal. Some, such as Xen guests, require that the node hosting them meets their capacity requirements. If resources are placed so that their combined needs exceed the provided capacity, the performance of the resources diminishes or they fail. To take this into account, the High Availability Extension allows you to specify the following parameters: 128 High Availability Guide

141 1. The capacity a certain node provides. 2. The capacity a certain resource requires. 3. An overall strategy for placement of resources. Utilization attributes are used to configure both the resource's requirements and the capacity a node provides. The High Availability Extension now also provides means to detect and configure both node capacity and resource requirements automatically. For more details and a configuration example, refer to Section 4.4.6, Placing Resources Based on Their Load Impact (page 70). To display a node's capacity values (defined via utilization attributes) as well as the capacity currently consumed by resources running on the node, switch to the Cluster Status screen in Hawk and select the node you are interested in. Click the wrench icon next to the node and select Show Details. Figure 6.8 Hawk Viewing a Node's Capacity Values After you have configured the capacities your nodes provide and the capacities your resources require, you need to set the placement strategy in the global cluster options, otherwise the capacity configurations have no effect. Several strategies are available to schedule the load: for example, you can concentrate it on as few nodes as possible, or balance it evenly over all available nodes. For more information, refer to Section 4.4.6, Placing Resources Based on Their Load Impact (page 70). Procedure 6.12 Setting the Placement Strategy 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). Configuring and Managing Cluster Resources (Web Interface) 129

142 2 In the left navigation bar, select Cluster Properties to view the global cluster options and their current values. 3 From the Add new property drop-down list, choose placement-strategy. 4 Depending on your requirements, set Placement Strategy to the appropriate value. 5 Click the plus icon to add the new cluster property including its value. 6 Confirm your changes Configuring Resource Monitoring The High Availability Extension can not only detect a node failure, but also when an individual resource on a node has failed. If you want to ensure that a resource is running, configure resource monitoring for it. For resource monitoring, specify a timeout and/or start delay value, and an interval. The interval tells the CRM how often it should check the resource status. You can also set particular parameters, such as Timeout for start or stop operations. Procedure 6.13 Adding or Modifying Monitor Operations 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources and lists all defined resources. 3 Select the resource to modify, click the wrench icon next to it and select Edit Resource. The resource definition is displayed. Hawk automatically shows the most important resource operations (monitor, start, stop) and proposes default values. 4 To change the values for an operation: 4a Click the pen icon next to the operation. 4b In the dialog that opens, specify the following values: 130 High Availability Guide

143 Enter a timeout value in seconds. After the specified timeout period, the operation will be treated as failed. The PE will decide what to do or execute what you specified in the On Fail field of the monitor operation. For monitoring operations, define the monitoring interval in seconds. If needed, use the empty drop-down list at the bottom of the monitor dialog to add more parameters, like On Fail (what to do if this action fails?) or Requires (what conditions need to be fulfilled before this action occurs?). 4c Confirm your changes to close the dialog and to return to the Edit Resource screen. 5 To completely remove an operation, click the minus icon next to it. 6 To add another operation, click the empty drop-down list and select an operation. A default value for the operation is displayed. If needed, change it as described above. 7 Click Apply Changes to finish the configuration. A message at the top of the screen shows if the resource was successfully updated or not. For the processes which take place if the resource monitor detects a failure, refer to Section 4.3, Resource Monitoring (page 65). Configuring and Managing Cluster Resources (Web Interface) 131
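For comparison, a monitor operation with an interval, a timeout, and an on-fail action can also be defined on the command line. The following is only a sketch with placeholder values (the resource ID my-ip and the IP address are examples, not part of the procedure above):

crm(live)configure# primitive my-ip ocf:heartbeat:IPaddr2 \
   params ip="192.168.1.100" \
   op monitor interval=10s timeout=20s on-fail=restart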

144 Configuring a Cluster Resource Group Some cluster resources depend on other components or resources and require that each component or resource starts in a specific order and runs on the same server. To simplify this configuration we support the concept of groups. For an example of a resource group and more information about groups and their properties, refer to Section , Groups (page 56). NOTE: Empty Groups Groups must contain at least one resource, otherwise the configuration is not valid. In Hawk, primitives cannot be created or modified while creating a group. Before adding a group, create primitives and configure them as desired. For details, refer to Procedure 6.4, Adding Primitive Resources (page 117). Procedure 6.14 Adding a Resource Group 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources and lists all defined resources. 3 Select the Group category and click the plus icon. 4 Enter a unique Group ID. 5 To define the group members, select one or multiple entries in the list of Available Primitives and click the < icon to add them to the Group Children list. Any new group members are added to the bottom of the list. To define the order of the group members, you currently need to add and remove them in the order you desire. 6 If needed, modify or add Meta Attributes as described in Adding Primitive Resources, Step 7 (page 118). 7 Click Create Group to finish the configuration. A message at the top of the screen shows if the group was successfully created. 132 High Availability Guide
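The resulting group is equivalent to a group definition in the crm shell. As a sketch only (virtual-ip and web-server are placeholders for primitives created beforehand), where the order of the members also determines their start order:

crm(live)configure# group g-web virtual-ip web-server \
   meta target-role=Started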

Figure 6.9 Hawk Resource Group

Configuring a Clone Resource

If you want certain resources to run simultaneously on multiple nodes in your cluster, configure these resources as clones. For example, cloning makes sense for resources like STONITH and cluster file systems like OCFS2. You can clone any resource, provided cloning is supported by the resource's Resource Agent. Clone resources may be configured differently depending on which nodes they are running on. For an overview of the available types of resource clones, refer to Section , Clones (page 57).

NOTE: Sub-resources for Clones

Clones can either contain a primitive or a group as sub-resources. In Hawk, sub-resources cannot be created or modified while creating a clone. Before adding a clone, create sub-resources and configure them as desired. For details, refer to Procedure 6.4, Adding Primitive Resources (page 117) or Procedure 6.14, Adding a Resource Group (page 132).

Procedure 6.15 Adding or Modifying Clones

1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110).

146 2 In the left navigation bar, select Resources. The Resources screen shows categories for all types of resources and lists all defined resources. 3 Select the Clone category and click the plus icon. 4 Enter a unique Clone ID. 5 From the Child Resource list, select the primitive or group to use as a sub-resource for the clone. 6 If needed, modify or add Meta Attributes as described in Procedure 6.4: Adding Primitive Resources, Step 7 (page 118). 7 Click Create Clone to finish the configuration. A message at the top of the screen shows if the clone was successfully created. Figure 6.10 Hawk Clone Resource 6.4 Managing Cluster Resources In addition to configuring your cluster resources, Hawk allows you to manage existing resources from the Cluster Status screen. For a general overview of the screen, its different views and the color code used for status information, refer to Section 6.1.2, Main Screen: Cluster Status (page 111). Basic resource operations can be executed from any cluster status view. Both Tree View and Table View let you access the individual resources directly. However, in the Sum- 134 High Availability Guide

147 mary View you need to click the links in the resources category first to display the resource details Starting Resources Before you start a cluster resource, make sure it is set up correctly. For example, if you want to use an Apache server as a cluster resource, set up the Apache server first and complete the Apache configuration before starting the respective resource in your cluster. NOTE: Do Not Touch Services Managed by the Cluster When managing a resource via the High Availability Extension, the same resource must not be started or stopped otherwise (outside of the cluster, for example manually or on boot or reboot). The High Availability Extension software is responsible for all service start or stop actions. However, if you want to check if the service is configured properly, start it manually, but make sure that it is stopped again before High Availability takes over. For interventions in resources that are currently managed by the cluster, set the resource to unmanaged mode first as described in Procedure 6.21, Changing Management Mode of Resources (page 139). When creating a resource with Hawk, you can set its initial state with the target-role meta attribute. If you set its value to stopped, the resource does not start automatically after being created. Procedure 6.16 Starting A New Resource 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 In one of the individual resource views, click the wrench icon next to the resource and select Start. To continue, confirm the message that appears. As soon as the re- Configuring and Managing Cluster Resources (Web Interface) 135

148 source has started, Hawk changes the resource's color to green and shows on which node it is running Cleaning Up Resources A resource will be automatically restarted if it fails, but each failure increases the resource's failcount. If a migration-threshold has been set for the resource, the node will no longer run the resource when the number of failures reaches the migration threshold. A resource's failcount can either be reset automatically (by setting a failure-timeout option for the resource) or you can reset it manually as described below. Procedure 6.17 Cleaning Up A Resource 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 In one of the individual resource views, click the wrench icon next to the failed resource and select Clean Up. To continue, confirm the message that appears. This executes the commands crm_resource -C and crm_failcount -D for the specified resource on the specified node. For more information, see the man pages of crm_resource and crm_failcount Removing Cluster Resources If you need to remove a resource from the cluster, follow the procedure below to avoid configuration errors: 136 High Availability Guide

149 NOTE: Removing Referenced Resources A cluster resource cannot be removed if its ID is referenced by any constraint. If you cannot delete a resource, check where the resource ID is referenced and remove the resource from the constraint first. Procedure 6.18 Removing a Cluster Resource 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 Clean up the resource on all nodes as described in Procedure 6.17, Cleaning Up A Resource (page 136). 4 In one of the individual resource views, click the wrench icon next to the resource and select Stop. To continue, confirm the message that appears. 5 Remove all constraints that relate to the resource, otherwise removing the resource will not be possible. For details, refer to Procedure 6.10, Removing Constraints (page 127). 6 If the resource is stopped, click the wrench icon next to it and select Delete Resource Migrating Cluster Resources As mentioned in Section 6.3.6, Specifying Resource Failover Nodes (page 127), the cluster will fail over (migrate) resources automatically in case of software or hardware failures according to certain parameters you can define (for example, migration threshold or resource stickiness). Apart from that, you can also manually migrate a resource to another node in the cluster (or decide to just move it away from the current node and leave the decision where to put it to the cluster). Procedure 6.19 Manually Migrating a Resource 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). Configuring and Managing Cluster Resources (Web Interface) 137

150 2 In the left navigation bar, select Cluster Status. 3 In one of the individual resource views, click the wrench icon next to the resource and select Move. 4 In the new window, select the node to which to move the resource. This creates a location constraint with an INFINITY score for the destination node. 5 Alternatively, select to move the resource Away from current node. This creates a location constraint with a -INFINITY score for the current node. 6 Click OK to confirm the migration. To allow a resource to move back again, proceed as follows: Procedure 6.20 Clearing a Migration Constraint 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 In one of the individual resource views, click the wrench icon next to the resource and select Drop Relocation Rule. To continue, confirm the message that appears. This uses the crm_resource -U command. The resource can move back to its original location or it may stay where it is (depending on resource stickiness). 138 High Availability Guide
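The same operations are available on the command line. As a sketch with a placeholder resource ID and node name, migrate moves the resource (and creates the location constraint), while unmigrate drops the relocation rule again:

# crm resource migrate my-web node2
# crm resource unmigrate my-web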

151 For more information, see the crm_resource man page or Pacemaker Explained, available from Refer to section Resource Migration Changing Management Mode of Resources When a resource is being managed by the cluster, it must not be touched otherwise (outside of the cluster). For maintenance of individual resources, you can set the respective resources to an unmanaged mode that allows you to modify the resource outside of the cluster. Procedure 6.21 Changing Management Mode of Resources 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Resources. Select the resource you want to put in unmanaged mode, click the wrench icon next to the resource and select Edit Resource. 3 Open the Meta Attributes category. 4 From the empty drop-down list, select the is-managed attribute and click the plus icon to add it. 5 Deactivate the checkbox next to is-managed to put the resource in unmanaged mode and confirm your changes. 6 After you have finished the maintenance task for that resource, reactivate the checkbox next to the is-managed attribute for that resource. From this point on, the resource will be managed by the High Availability Extension software again Viewing the Cluster History Hawk provides the following possibilities to view past events on the cluster (on different levels and in varying detail). Configuring and Managing Cluster Resources (Web Interface) 139

152 Procedure 6.22 Viewing Recent Events of Nodes or Resources 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 In the Tree View or Table View, click the wrench icon next to the resource or node you are interested in and select View Recent Events. The dialog that opens shows the events of the last hour. Procedure 6.23 Viewing Transitions with the History Explorer The History Explorer provides transition information for a time frame that you can define: 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select History Explorer. 3 By default, the period to explore is set to the last 24 hours. To modify this, set another Start Time and End Time. 4 Click Display to start collecting transition data. Figure 6.11 Hawk History Report 140 High Availability Guide

153 The following information is displayed: The time line of all past transitions in the cluster. The pe-input files for each transition. For each transition, the cluster saves a copy of the state which is provided to the policy engine as input. The path to this archive is logged. The files are only generated on the Designated Coordinator (DC), but as the DC can change, there may be pe-input files from several nodes listed in the History Explorer. The files show what the Policy Engine (PE) planned to do. Graph, XML representation and details of each transition. If you choose to show the Graph, the PE is reinvoked (using the pe-input files), and generates a graphical visualization of the transition. Alternatively, you can view the XML representation of the graph. Clicking Details shows snippets of var/log/ messages that belong to that particular transition. Figure 6.12 Hawk History Report Transition Graph Exploring Potential Failure Scenarios Hawk provides a Simulator that allows you to explore failure scenarios before they happen. After switching to the simulator mode, change the status of nodes or execute Configuring and Managing Cluster Resources (Web Interface) 141

154 multiple resource operations to see how the cluster would behave should these events occur. Procedure 6.24 Switching to Simulator Mode 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Simulator. Hawk's background changes color to indicate the simulator is active. A simulator dialog opens in the bottom right hand corner of the screen. Its title Simulator (initial state) indicates that Cluster Status screen still reflects the current state of the cluster. 3 To simulate status change of a node: 3a Click +Node in the simulator control dialog. 3b Select the Node you want to manipulate and select its target State. 3c Confirm your changes to add them to the queue of events listed in the controller dialog below Injected State. 4 To simulate a resource operation: 4a Click +Op in the simulator control dialog. 4b Select the Resource to manipulate and the Operation to simulate. If necessary, define an Interval. Select the Node on which to run the operation and the targeted Result. 4c Confirm your changes to add them to the queue of events listed in the controller dialog below Injected State. 5 Repeat the previous steps for any other events you wish to simulate. 6 To remove any of the events listed in Injected State, select the entry and click the minus icon beneath the list. 142 High Availability Guide

155 7 To start the simulation, click Run in the simulator control dialog. The Cluster Status screen displays the simulated events. For example, if you marked a node as unclean, it will now be shown offline, and all its resources will be stopped. The simulator control dialog changes to Simulator (final state). 8 To view more detailed information: 8a To see log snippets of what occurred, click the Details link in the simulator dialog. 8b To see the transition graph, click the Graph link. 8c To display the initial CIB state, click CIB (in). 8d To see what the CIB would look like after the transition, click CIB (out). 9 To start from scratch with a new simulation, use the Reset button. 10 To exit the simulation mode, close the simulator control dialog. The Cluster Status screen switches back to its normal color and displays the current cluster state. Figure 6.13 Hawk Simulator Configuring and Managing Cluster Resources (Web Interface) 143
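A comparable what-if analysis is possible on the command line by combining a shadow CIB with ptest, as described in Section 7.1.3, Testing with Shadow Configuration (page 157) and Section 7.1.4, Debugging Your Configuration Changes (page 158). The following is a rough sketch only; the shadow CIB name sim-test is arbitrary:

# crm
crm(live)# cib new sim-test
crm(sim-test)# configure edit
crm(sim-test)# configure ptest
crm(sim-test)# cib use live
crm(live)# cib delete sim-test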

156 6.4.8 Generating a Cluster Report For analysis and diagnosis of problems occurring on the cluster, Hawk can generate a cluster report that collects information from all nodes in the cluster. Procedure 6.25 Generating a hb_report 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Generate hb_report. 3 By default, the period to examine is the last hour. To modify this, set another Start Time and End Time. 4 Click Generate. 5 After the report has been created, download the *.tar.bz2 file by clicking the respective link. For more information about the log files that the hb_report covers, refer to How can I create a report with an analysis of all my cluster nodes? (page 309). 6.5 Multi-Site Clusters (Geo Clusters) Support for multi-site clusters is available as a separate option to SUSE Linux Enterprise High Availability Extension. For an introduction and detailed information on how to set up multi-site clusters, refer to Chapter 13, Multi-Site Clusters (Geo Clusters) (page 219). Some of the required steps need to be executed within the individual clusters (that are then combined to a multi-site cluster), while other steps take place on an inter-cluster layer. Therefore not all setup steps can alternatively be executed with Hawk. But Hawk offers the following features related to multi-site clusters: Viewing Tickets (page 145) Configuring Additional Cluster Resources and Constraints (page 146) 144 High Availability Guide

157 Testing the Impact of Ticket Failover (page 148) Viewing Tickets Procedure 6.26 Viewing Tickets with Hawk Tickets are visible in Hawk if they have been granted or revoked at least once or if they are referenced in a ticket dependency see Procedure 6.27, Configuring Ticket Dependencies (page 147). In case a ticket is referenced in a ticket dependency, but has not been granted to any site yet, Hawk displays it as revoked. 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Cluster Status. 3 If the Summary View is not already active, click the respective view icon on the upper right-hand side. Along with information about cluster nodes and resources, Hawk also displays a Ticket category. 4 For more details, either click the title of the Ticket category or the individual ticket entries that are marked as links. Hawk displays the ticket's name and, in a tooltip, the last time the ticket has been granted to the current site. Configuring and Managing Cluster Resources (Web Interface) 145

158 Figure 6.14 Hawk Cluster Status (Summary View) Ticket Details NOTE: Managing Tickets To grant or revoke tickets, use the booth client command as described in Section 13.5, Managing Multi-Site Clusters (page 228). As managing tickets takes place on an inter-cluster layer, you cannot do so with Hawk Configuring Additional Cluster Resources and Constraints Apart from the resources and constraints that you need to define for your specific cluster setup, multi-site clusters require additional resources and constraints as outlined in Section , Configuring Cluster Resources and Constraints (page 223). All of those can be configured with the CRM shell, but alternatively also with Hawk. This section focuses on ticket dependencies only as they are specific to multi-site clusters. For general instructions on how to configure resource groups and order constraints with Hawk, refer to Section , Configuring a Cluster Resource Group (page 132) and Procedure 6.8, Adding or Modifying Colocational or Order Constraints (page 123), respectively. 146 High Availability Guide

159 Procedure 6.27 Configuring Ticket Dependencies For multi-site clusters, you can specify which resources depend on a certain ticket. Together with this special type of constraint, you can set a loss-policy that defines what should happen to the respective resources if the ticket is revoked. The attribute loss-policy can have the following values: fence: Fence the nodes that are running the relevant resources. stop: Stop the relevant resources. freeze: Do nothing to the relevant resources. demote: Demote relevant resources that are running in master mode to slave mode. 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Constraints. The Constraints screen shows categories for all types of constraints and lists all defined constraints. 3 To add a new ticket dependency, click the plus icon in the Ticket category. To modify an existing constraint, click the wrench icon next to the constraint and select Edit Constraint. 4 Enter a unique Constraint ID. When modifying existing constraints, the ID is already defined. 5 Set a Loss Policy. 6 Enter the ID of the ticket that the resources should depend on. 7 Select a resource from the list Add resource to constraint. The list shows the IDs of all resources and all resource templates configured for the cluster. 8 To add the selected resource, click the plus icon next to the list. A new list appears beneath, showing the remaining resources. Add as many resources to the constraint as you would like to depend on the ticket. Configuring and Managing Cluster Resources (Web Interface) 147

160 Figure 6.15 Hawk Example Ticket Dependency Figure 6.15, Hawk Example Ticket Dependency (page 148) shows a constraint with the ID rsc1-req-ticketa. It defines that the resource rsc1 depends on ticketa and that the node running the resource should be fenced in case ticketa is revoked. 9 Click Create Constraint to finish the configuration. A message at the top of the screen shows if the constraint was successfully created Testing the Impact of Ticket Failover Hawk's Simulator allows you to explore failure scenarios before they happen. To explore if your resources that depend on a certain ticket behave as expected, you can also test the impact of granting or revoking tickets. Procedure 6.28 Simulating Granting and Revoking Tickets 1 Start a Web browser and log in to the cluster as described in Section 6.1.1, Starting Hawk and Logging In (page 110). 2 In the left navigation bar, select Simulator. Hawk's background changes color to indicate the simulator is active. A simulator dialog opens in the bottom right hand corner of the screen. Its title Simulator (initial state) indicates that Cluster Status screen still reflects the current state of the cluster. 148 High Availability Guide

3 To simulate status change of a ticket:

3a Click +Ticket in the simulator control dialog.

3b Select the Action you want to simulate.

3c Confirm your changes to add them to the queue of events listed in the controller dialog below Injected State.

4 To start the simulation, click Run in the simulator control dialog. The Cluster Status screen displays the impact of the simulated events. The simulator control dialog changes to Simulator (final state).

5 To exit the simulation mode, close the simulator control dialog. The Cluster Status screen switches back to its normal color and displays the current cluster state.

Figure 6.16 Hawk Simulator Tickets

For more information about Hawk's Simulator (and which other scenarios can be explored with it), refer to Section 6.4.7, Exploring Potential Failure Scenarios (page 141).
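If you prefer to define ticket dependencies on the command line, the constraint shown in Figure 6.15 would look roughly like the following in the crm shell (a sketch using the example names rsc1 and ticketa from above):

crm(live)configure# rsc_ticket rsc1-req-ticketa ticketa: rsc1 loss-policy="fence"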

162 6.6 Troubleshooting Hawk Log Files Find the Hawk log files in /srv/www/hawk/log. Check these files in case you cannot access Hawk. If you have trouble starting or stopping a resource with Hawk, check the Pacemaker log messages. By default, Pacemaker logs to /var/log/messages. Authentication Fails If you cannot log in to Hawk with a new user that is a member of the haclient group (or if you experience delays until Hawk accepts logins from this user), stop the nscd daemon with rcnscd stop and try again. Replacing the Self-Signed Certificate To avoid the warning about the self-signed certificate on first Hawk startup, replace the automatically created certificate with your own certificate or a certificate that was signed by an official Certificate Authority (CA). The certificate is stored in /etc/lighttpd/certs/hawk-combined.pem and contains both key and certificate. After you have created or received your new key and certificate, combine them by executing the following command: cat keyfile certificatefile > /etc/lighttpd/certs/hawk-combined.pem Change the permissions to make the file only accessible by root: chown root.root /etc/lighttpd/certs/hawk-combined.pem chmod 600 /etc/lighttpd/certs/hawk-combined.pem Login to Hawk Fails After Using History Explorer/hb_report Depending on the period of time you defined in the History Explorer or hb_report and the events that took place in the cluster during this time, Hawk might collect an extensive amount of information stored in log files in the /tmp directory. This might consume the remaining free disk space on your node. In case Hawk should not respond after using the History Explorer or hb_report, check the hard disk of your cluster node and remove the respective log files. 150 High Availability Guide

163 Configuring and Managing Cluster Resources (Command Line) 7 To configure and manage cluster resources, either use the graphical user interface (the Pacemaker GUI) or the crm command line utility. For the GUI approach, refer to Chapter 5, Configuring and Managing Cluster Resources (GUI) (page 75). This chapter introduces crm, the command line tool and covers an overview of this tool, how to use templates, and mainly configuring and managing cluster resources: creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover nodes and failback nodes, configuring resource monitoring, starting, cleaning up or removing resources, and migrating resources manually. NOTE: User Privileges Sufficient privileges are necessary to manage a cluster. The crm command and its subcommands have to be run either as root user or as the CRM owner user (typically the user hacluster). However, the user option allows you to run crm and its subcommands as a regular (unprivileged) user and to change its ID using sudo whenever necessary. For example, with the following command crm will use hacluster as the privileged user ID: crm options user hacluster Note that you need to set up /etc/sudoers so that sudo does not ask for a password. Configuring and Managing Cluster Resources (Command Line) 151
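As an illustration only, such a sudo rule could look like the following entry, added with visudo. The user name tux is a placeholder; adapt the rule to your security policy:

# /etc/sudoers (edit with visudo): allow tux to run commands as
# hacluster without being prompted for a password
tux ALL=(hacluster) NOPASSWD: ALL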

7.1 crm Shell Overview

After installation you usually need the crm command only. This command has several subcommands which manage resources, CIBs, nodes, resource agents, and others. Run crm help to get an overview of all available commands. It offers a thorough help system with embedded examples. The crm command can be used in the following ways:

Directly
Concatenate all subcommands to crm, press Enter and you see the output immediately. For example, enter crm help ra to get information about the ra subcommand (resource agents).

As crm Shell Script
Use crm and its commands in a script. This can be done in two ways:

crm -f script.cli
crm < script.cli

The script can contain any command from crm. For example:

# A small example
status
node list

Any line starting with the hash symbol (#) is a comment and is ignored. If a line is too long, insert a backslash (\) at the end and continue in the next line.

Interactive as Internal Shell
Type crm to enter the internal shell. The prompt changes to crm(live)#. With help you can get an overview of the available subcommands. As the internal shell has different levels of subcommands, you can enter one by just typing this subcommand and pressing Enter. For example, if you type resource you enter the resource management level. Your prompt changes to crm(live)resource#. If you want to leave the internal shell, use the commands quit, bye, or exit. If you need to go one level back, use up, end, or cd. You can enter the level directly by typing crm and the respective subcommand(s) without any options and pressing Enter.

The internal shell also supports tab completion for subcommands and resources. Type the beginning of a command, press Tab, and crm completes the respective object.

In addition to the previously explained methods, the crm shell also supports synchronous command execution. Use the -w option to activate it. If you have started crm without -w, you can enable it later with the user preference wait set to yes (options wait yes). If this option is enabled, crm waits until the transition is finished. Whenever a transaction is started, dots are printed to indicate progress. Synchronous command execution is only applicable for commands like resource start.

NOTE: Differentiate Between Management and Configuration Subcommands

The crm tool has management capability (the subcommands resource and node) and can be used for configuration (cib, configure).

The following subsections give you an overview about some important aspects of the crm tool.

Displaying Information about OCF Resource Agents

As you have to deal with resource agents in your cluster configuration all the time, the crm tool contains the ra command to get information about resource agents and to manage them (for additional information, see also Section 4.2.2, Supported Resource Agent Classes (page 52)):

# crm ra
crm(live)ra#

The command classes gives you a list of all classes and providers:

crm(live)ra# classes
heartbeat
lsb
ocf / heartbeat linbit lvm2 ocfs2 pacemaker
stonith

To get an overview about all available resource agents for a class (and provider) use the list command:

166 crm(live)ra# list ocf AoEtarget AudibleAlarm CTDB ClusterMon Delay Dummy EvmsSCC Evmsd Filesystem HealthCPU HealthSMART ICP IPaddr IPaddr2 IPsrcaddr IPv6addr LVM LinuxSCSI MailTo ManageRAID ManageVE Pure-FTPd Raid1 Route SAPDatabase SAPInstance SendArp ServeRAID... An overview about a resource agent can be viewed with info: crm(live)ra# info ocf:drbd:linbit This resource agent manages a DRBD* resource as a master/slave resource. DRBD is a shared-nothing replicated storage device. (ocf:linbit:drbd) Master/Slave OCF Resource Agent for DRBD Parameters (* denotes required, [] the default): drbd_resource* (string): drbd resource name The name of the drbd resource from the drbd.conf file. drbdconf (string, [/etc/drbd.conf]): Path to drbd.conf Full path to the drbd.conf file. Operations' defaults (advisory minimum): start timeout=240 promote timeout=90 demote timeout=90 notify timeout=90 stop timeout=100 monitor_slave_0 interval=20 timeout=20 start-delay=1m monitor_master_0 interval=10 timeout=20 start-delay=1m Leave the viewer by pressing Q. Find a configuration example at Appendix A, Example of Setting Up a Simple Testing Resource (page 429). TIP: Use crm Directly In the former example we used the internal shell of the crm command. However, you do not necessarily have to use it. You get the same results, if you add the respective subcommands to crm. For example, you can list all the OCF resource agents by entering crm ra list ocf in your shell. 154 High Availability Guide
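Similarly, you can display the description of a single resource agent without entering the internal shell. For example, for the IPaddr agent shown in the list above:

# crm ra info ocf:heartbeat:IPaddr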

167 7.1.2 Using Configuration Templates Configuration templates are ready-made cluster configurations for the crm shell. Do not confuse them with the resource templates (as described in Section 7.3.2, Creating Resource Templates (page 160)). Those are templates for the cluster and not for the crm shell. Configuration templates require minimum effort to be tailored to the particular user's needs. Whenever a template creates a configuration, warning messages give hints which can be edited later for further customization. The following procedure shows how to create a simple yet functional Apache configuration: 1 Log in as root and start the crm tool: # crm configure 2 Create a new configuration from a configuration template: 2a Switch to the template subcommand: crm(live)configure# template 2b List the available configuration templates: crm(live)configure template# list templates gfs2-base filesystem virtual-ip apache clvm ocfs2 gfs2 2c Decide which configuration template you need. As we need an Apache configuration, we choose the apache template: crm(live)configure template# new intranet apache INFO: pulling in template apache INFO: pulling in template virtual-ip 3 Define your parameters: 3a List the just created configuration: crm(live)configure template# intranet list Configuring and Managing Cluster Resources (Command Line) 155

168 3b Display the minimum of required changes which have to be filled out by you: crm(live)configure template# show ERROR: 23: required parameter ip not set ERROR: 61: required parameter id not set ERROR: 65: required parameter configfile not set 3c Invoke your preferred text editor and fill out all lines that have been displayed as errors in Step 3b (page 156): crm(live)configure template# edit 4 Show the configuration and check whether it is valid (bold text depends on the configuration you have entered in Step 3c (page 156)): crm(live)configure template# show primitive virtual-ip ocf:heartbeat:ipaddr \ params ip=" " primitive apache ocf:heartbeat:apache \ params configfile="/etc/apache2/httpd.conf" monitor apache 120s:60s group intranet \ apache virtual-ip 5 Apply the configuration: crm(live)configure template# crm(live)configure# cd.. crm(live)configure# show apply 6 Submit your changes to the CIB: crm(live)configure# commit It is possible to simplify the commands even more, if you know the details. The above procedure can be summarized with the following command on the shell: crm configure template \ new intranet apache params \ configfile="/etc/apache2/httpd.conf" ip=" " If you are inside your internal crm shell, use the following command: crm(live)configure template# new intranet apache params \ configfile="/etc/apache2/httpd.conf" ip=" " 156 High Availability Guide

169 However, the previous command only creates its configuration from the configuration template. It does not apply nor commit it to the CIB Testing with Shadow Configuration A shadow configuration is used to test different configuration scenarios. If you have created several shadow configurations, you can test them one by one to see the effects of your changes. The usual process looks like this: 1 Log in as root and start the crm tool: # crm configure 2 Create a new shadow configuration: crm(live)configure# cib new mynewconfig INFO: mynewconfig shadow CIB created 3 If you want to copy the current live configuration into your shadow configuration, use the following command, otherwise skip this step: crm(mynewconfig)# cib reset mynewconfig The previous command makes it easier to modify any existing resources later. 4 Make your changes as usual. After you have created the shadow configuration, all changes go there. To save all your changes, use the following command: crm(mynewconfig)# commit 5 If you need the live cluster configuration again, switch back with the following command: crm(mynewconfig)configure# cib use live crm(live)# Configuring and Managing Cluster Resources (Command Line) 157
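To check what a shadow configuration would change and to push it to the cluster once you are satisfied, the cib level offers further subcommands. The following is only a sketch, assuming the shadow CIB mynewconfig from the procedure above:

crm(mynewconfig)# cib diff
crm(mynewconfig)# cib commit mynewconfig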

170 7.1.4 Debugging Your Configuration Changes Before loading your configuration changes back into the cluster, it is recommended to review your changes with ptest. The ptest can show a diagram of actions that will be induced by committing the changes. You need the graphviz package to display the diagrams. The following example is a transcript, adding a monitor operation: # crm configure crm(live)configure# show fence-node2 primitive fence-node2 stonith:apcsmart \ params hostlist="node2" crm(live)configure# monitor fence-node2 120m:60s crm(live)configure# show changed primitive fence-node2 stonith:apcsmart \ params hostlist="node2" \ op monitor interval="120m" timeout="60s" crm(live)configure# ptest crm(live)configure# commit 7.2 Configuring Global Cluster Options Global cluster options control how the cluster behaves when confronted with certain situations. The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup: Procedure 7.1 Modifying Global Cluster Options With crm 1 Log in as root and start the crm tool: # crm configure 2 Use the following commands to set the options for a two-node clusters only: crm(live)configure# property no-quorum-policy=ignore crm(live)configure# property stonith-enabled=false 3 Show your changes: 158 High Availability Guide

171 crm(live)configure# show property $id="cib-bootstrap-options" \ dc-version=" add2a3721a0ecccb24660a97dbfdaa3e68f51" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" 4 Commit your changes and exit: crm(live)configure# commit crm(live)configure# exit 7.3 Configuring Cluster Resources As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times. For an overview of resource types you can create, refer to Section 4.2.3, Types of Resources (page 54) Creating Cluster Resources There are three types of RAs (Resource Agents) available with the cluster (for background information, see Section 4.2.2, Supported Resource Agent Classes (page 52)). To create a cluster resource use the crm tool. To add a new resource to the cluster, proceed as follows: 1 Log in as root and start the crm tool: # crm configure 2 Configure a primitive IP address: crm(live)configure# primitive myip ocf:heartbeat:ipaddr \ params ip= op monitor interval=60s The previous command configures a primitive with the name myip. You need to choose a class (here ocf), provider (heartbeat), and type (IPaddr). Furthermore, Configuring and Managing Cluster Resources (Command Line) 159

this primitive expects other parameters like the IP address. Change the address to your setup.

3 Display and review the changes you have made:

crm(live)configure# show

4 Commit your changes to take effect:

crm(live)configure# commit

Creating Resource Templates

If you want to create several resources with similar configurations, a resource template simplifies the task. See also Section 4.4.3, Resource Templates and Constraints (page 67) for some basic background information. Do not confuse them with the normal templates from Section 7.1.2, Using Configuration Templates (page 155).

Use the rsc_template command to get familiar with the syntax:

# crm configure rsc_template
usage: rsc_template <name> [<class>:[<provider>:]]<type>
        [params <param>=<value> [<param>=<value>...]]
        [meta <attribute>=<value> [<attribute>=<value>...]]
        [utilization <attribute>=<value> [<attribute>=<value>...]]
        [operations id_spec
            [op op_type [<attribute>=<value>...]...]]

For example, the following command creates a new resource template with the name BigVM derived from the ocf:heartbeat:Xen resource and some default values and operations:

crm(live)configure# rsc_template BigVM ocf:heartbeat:Xen \
   params allow_mem_management="true" \
   op monitor timeout=60s interval=15s \
   op stop timeout=10m \
   op start timeout=10m

Once you have defined the new resource template, you can use it in primitives or reference it in order, colocation, or rsc_ticket constraints. To reference the resource template, use the @ sign:

crm(live)configure# primitive MyVM1 @BigVM \
   params xmfile="/etc/xen/shared-vm/myvm1" name="myvm1"

The new primitive MyVM1 is going to inherit everything from the BigVM resource template. For example, the equivalent of the above two would be:

crm(live)configure# primitive MyVM1 ocf:heartbeat:Xen \
   params xmfile="/etc/xen/shared-vm/myvm1" name="myvm1" \
   params allow_mem_management="true" \
   op monitor timeout=60s interval=15s \
   op stop timeout=10m \
   op start timeout=10m

If you want to overwrite some options or operations, add them to your (primitive) definition. For instance, the following new primitive MyVM2 doubles the timeout for monitor operations but leaves others untouched:

crm(live)configure# primitive MyVM2 @BigVM \
   params xmfile="/etc/xen/shared-vm/myvm2" name="myvm2" \
   op monitor timeout=120s interval=30s

A resource template may be referenced in constraints to stand for all primitives which are derived from that template. This helps to produce a more concise and clear cluster configuration. Resource template references are allowed in all constraints except location constraints. Colocation constraints may not contain more than one template reference.

Creating a STONITH Resource

From the crm perspective, a STONITH device is just another resource. To create a STONITH resource, proceed as follows:

1 Log in as root and start the crm tool:

# crm configure

2 Get a list of all STONITH types with the following command:

crm(live)# ra list stonith
apcmaster apcmastersnmp apcsmart baytech bladehpi cyclades drac3
external/drac5 external/dracmc-telnet external/hetzner external/hmchttp
external/ibmrsa external/ibmrsa-telnet external/ipmi external/ippower9258
external/kdumpcheck external/libvirt external/nut external/rackpdu
external/riloe external/sbd external/vcenter external/vmware external/xen0
external/xen0-ha fence_legacy ibmhmc ipmilan meatware nw_rpc100s

174 rcd_serial rps10 suicide wti_mpc wti_nps 3 Choose a STONITH type from the above list and view the list of possible options. Use the following command: crm(live)# ra info stonith:external/ipmi IPMI STONITH external device (stonith:external/ipmi) ipmitool based power management. Apparently, the power off method of ipmitool is intercepted by ACPI which then makes a regular shutdown. If case of a split brain on a two-node it may happen that no node survives. For two-node clusters use only the reset method. Parameters (* denotes required, [] the default): hostname (string): Hostname The name of the host to be managed by this STONITH device Create the STONITH resource with the stonith class, the type you have chosen in Step 3, and the respective parameters if needed, for example: crm(live)# configure crm(live)configure# primitive my-stonith stonith:external/ipmi \ params hostname="node1" ipaddr=" " \ userid="admin" passwd="secret" \ op monitor interval=60m timeout=120s Configuring Resource Constraints Having all the resources configured is only one part of the job. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. For example, try not to mount the file system on the slave node of drbd (in fact, this would fail with drbd). Define constraints to make these kind of information available to the cluster. For more information about constraints, see Section 4.4, Resource Constraints (page 66). 162 High Availability Guide

175 Locational Constraints This type of constraint may be added multiple times for each resource. All location constraints are evaluated for a given resource. A simple example that expresses a preference to run the resource fs1 on the node with the name earth to 100 would be the following: crm(live)configure# location fs1-loc fs1 100: earth Another example is a location with pingd: crm(live)configure# primitive pingd pingd \ params name=pingd dampen=5s multiplier=100 host_list="r1 r2" crm(live)configure# location node_pref internal_www \ rule 50: #uname eq node1 \ rule pingd: defined pingd Colocational Constraints The colocation command is used to define what resources should run on the same or on different hosts. It is only possible to set a score of either +inf or -inf, defining resources that must always or must never run on the same node. It is also possible to use non-infinite scores. In that case the colocation is called advisory and the cluster may decide not to follow them in favor of not stopping other resources if there is a conflict. For example, to run the resources with the IDs filesystem_resource and nfs_group always on the same host, use the following constraint: crm(live)configure# colocation nfs_on_filesystem inf: nfs_group filesystem_resource For a master slave configuration, it is necessary to know if the current node is a master in addition to running the resource locally Ordering Constraints Sometimes it is necessary to provide an order of resource actions or operations. For example, you cannot mount a file system before the device is available to a system. Ordering constraints can be used to start or stop a service right before or after a different Configuring and Managing Cluster Resources (Command Line) 163

176 resource meets a special condition, such as being started, stopped, or promoted to master. Use the following commands in the crm shell to configure an ordering constraint: crm(live)configure# order nfs_after_filesystem mandatory: filesystem_resource nfs_group Constraints for the Example Configuration The example used for this chapter would not work without additional constraints. It is essential that all resources run on the same machine as the master of the drbd resource. The drbd resource must be master before any other resource starts. Trying to mount the DRBD device when it is not the master simply fails. The following constraints must be fulfilled: The file system must always be on the same node as the master of the DRBD resource. crm(live)configure# colocation filesystem_on_master inf: \ filesystem_resource drbd_resource:master The NFS server as well as the IP address must be on the same node as the file system. crm(live)configure# colocation nfs_with_fs inf: \ nfs_group filesystem_resource The NFS server as well as the IP address start after the file system is mounted: crm(live)configure# order nfs_second mandatory: \ filesystem_resource:start nfs_group The file system must be mounted on a node after the DRBD resource is promoted to master on this node. crm(live)configure# order drbd_first inf: \ drbd_resource:promote filesystem_resource:start Specifying Resource Failover Nodes To determine a resource failover, use the meta attribute migration-threshold. In case failcount exceeds migration-threshold on all nodes, the resource will remain stopped. For example: crm(live)configure# location r1-node1 r1 100: node1 164 High Availability Guide

177 Normally, r1 prefers to run on node1. If it fails there, migration-threshold is checked and compared to the failcount. If failcount >= migration-threshold then it is migrated to the node with the next best preference. Start failures set the failcount to inf depend on the start-failure-is-fatal option. Stop failures cause fencing. If there is no STONITH defined, the resource will not migrate at all. For an overview, refer to Section 4.4.4, Failover Nodes (page 68) Specifying Resource Failback Nodes (Resource Stickiness) A resource might fail back to its original node when that node is back online and in the cluster. If you want to prevent a resource from failing back to the node it was running on prior to failover, or if you want to specify a different node for the resource to fail back to, you must change its resource stickiness value. You can either specify resource stickiness when you are creating a resource, or afterwards. For an overview, refer to Section 4.4.5, Failback Nodes (page 69) Configuring Placement of Resources Based on Load Impact Some resources may have specific capacity requirements such as minimum amount of memory. Otherwise, they may fail to start completely or run with degraded performance. To take this into account, the High Availability Extension allows you to specify the following parameters: 1. The capacity a certain node provides. 2. The capacity a certain resource requires. 3. An overall strategy for placement of resources. Configuring and Managing Cluster Resources (Command Line) 165

178 For detailed background information about the parameters and a configuration example, refer to Section 4.4.6, Placing Resources Based on Their Load Impact (page 70). To configure the resource's requirements and the capacity a node provides, use utilization attributes as described in Procedure 5.10, Adding Or Modifying Utilization Attributes (page 93). You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs. In the following example, we assume that you already have a basic configuration of cluster nodes and resources and now additionally want to configure the capacities a certain node provides and the capacity a certain resource requires. Procedure 7.2 Adding Or Modifying Utilization Attributes With crm 1 Log in as root and start the crm tool: # crm configure 2 To specify the capacity a node provides, use the following command and replace the placeholder NODE_1 with the name of your node: crm(live)configure# node NODE_1 utilization memory=16384 cpu=8 With these values, NODE_1 would be assumed to provide 16GB of memory and 8 CPU cores to resources. 3 To specify the capacity a resource requires, use: crm(live)configure# primitive xen1 ocf:heartbeat:xen... \ utilization memory=4096 cpu=4 This would make the resource consume 4096 of those memory units from nodea, and 4 of the cpu units. 4 Configure the placement strategy with the property command: crm(live)configure# property... Four values are available for the placement strategy: 166 High Availability Guide

property placement-strategy=default
  Utilization values are not taken into account at all; this is the default. Resources are allocated according to location scoring. If scores are equal, resources are evenly distributed across nodes.

property placement-strategy=utilization
  Utilization values are taken into account when deciding whether a node is considered eligible: it must have sufficient free capacity to satisfy the resource's requirements. However, load-balancing is still done based on the number of resources allocated to a node.

property placement-strategy=minimal
  Utilization values are taken into account when deciding whether a node is eligible to serve a resource; an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes.

property placement-strategy=balanced
  Utilization values are taken into account when deciding whether a node is eligible to serve a resource; an attempt is made to spread the resources evenly, optimizing resource performance.

The placement strategies are best-effort and do not yet use complex heuristic solvers to always reach an optimum allocation result. Ensure that resource priorities are properly set so that your most important resources are scheduled first.

5 Commit your changes before leaving the crm shell:

crm(live)configure# commit

The following example demonstrates a three-node cluster of equal nodes, with 4 virtual machines:

crm(live)configure# node node1 utilization memory="4000"
crm(live)configure# node node2 utilization memory="4000"
crm(live)configure# node node3 utilization memory="4000"
crm(live)configure# primitive xena ocf:heartbeat:Xen \
    utilization memory="3500" meta priority="10"
crm(live)configure# primitive xenb ocf:heartbeat:Xen \
    utilization memory="2000" meta priority="1"
crm(live)configure# primitive xenc ocf:heartbeat:Xen \
    utilization memory="2000" meta priority="1"
crm(live)configure# primitive xend ocf:heartbeat:Xen \
    utilization memory="1000" meta priority="5"
crm(live)configure# property placement-strategy="minimal"

With all three nodes up, xena will be placed onto a node first, followed by xend. xenb and xenc would either be allocated together or one of them with xend.

If one node failed, too little total memory would be available to host them all. xena would be ensured to be allocated, as would xend. However, only one of xenb or xenc could still be placed, and since their priority is equal, the result is not defined yet. To resolve this ambiguity as well, you would need to set a higher priority for either one.

Configuring Resource Monitoring

To monitor a resource, there are two possibilities: either define a monitor operation with the op keyword or use the monitor command. The following example configures an Apache resource and monitors it every 60 seconds with the op keyword:

crm(live)configure# primitive apache apache \
    params ... \
    op monitor interval=60s timeout=30s

The same can be done with:

crm(live)configure# primitive apache apache \
    params ...
crm(live)configure# monitor apache 60s:30s

For an overview, refer to Section 4.3, Resource Monitoring (page 65).

Configuring a Cluster Resource Group

One of the most common elements of a cluster is a set of resources that need to be located together, started sequentially, and stopped in the reverse order. To simplify this configuration we support the concept of groups. The following example creates two primitives (an IP address and an e-mail resource):

1 Run the crm command as system administrator. The prompt changes to crm(live).

2 Configure the primitives:

crm(live)# configure
crm(live)configure# primitive Public-IP ocf:heartbeat:IPaddr \
    params ip=

crm(live)configure# primitive Email lsb:exim

3 Group the primitives with their relevant identifiers in the correct order:

crm(live)configure# group shortcut Public-IP Email

For an overview, refer to the section on Groups (page 56).

Configuring a Clone Resource

Clones were initially conceived as a convenient way to start N instances of an IP resource and have them distributed throughout the cluster for load balancing. They have turned out to be quite useful for a number of other purposes, including integrating with DLM, the fencing subsystem, and OCFS2. You can clone any resource, provided the resource agent supports it.

Learn more about cloned resources in the section on Clones (page 57).

Creating Anonymous Clone Resources

To create an anonymous clone resource, first create a primitive resource and then refer to it with the clone command. Do the following:

1 Log in as root and start the crm tool:

# crm configure

2 Configure the primitive, for example:

crm(live)configure# primitive Apache lsb:apache

3 Clone the primitive:

crm(live)configure# clone apache-clone Apache
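The behavior of a clone can be tuned further with clone meta attributes such as clone-max and clone-node-max. As a brief sketch (the values are examples only), limiting the clone to two instances with at most one instance per node would look like this:

crm(live)configure# clone apache-clone Apache \
    meta clone-max=2 clone-node-max=1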

Creating Stateful/Multi-State Clone Resources

To create a stateful clone resource, first create a primitive resource and then the master/slave resource. The master/slave resource must support at least promote and demote operations.

1 Log in as root and start the crm tool:

# crm configure

2 Configure the primitive. Change the intervals if needed:

crm(live)configure# primitive myrsc ocf:mycorp:myappl \
    op monitor interval=60 \
    op monitor interval=61 role=Master

3 Create the master slave resource:

crm(live)configure# ms myrsc-clone myrsc

7.4 Managing Cluster Resources

Apart from the possibility to configure your cluster resources, the crm tool also allows you to manage existing resources. The following subsections give you an overview.

Starting a New Cluster Resource

To start a new cluster resource you need the respective identifier. Proceed as follows:

1 Log in as root and start the crm tool:

# crm

2 Switch to the resource level:

crm(live)# resource

3 Start the resource with start, followed by its identifier. Pressing the Tab key after start shows all known resources:

crm(live)resource# start ID

Cleaning Up Resources

A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource as soon as the number of failures has reached the migration threshold.

1 Open a shell and log in as user root.

2 Get a list of all your resources:

# crm resource list
...
 Resource Group: dlm-clvm:1
     dlm:1 (ocf::pacemaker:controld) Started
     clvm:1 (ocf::lvm2:clvmd) Started
     cmirrord:1 (ocf::lvm2:cmirrord) Started

3 Clean up the resource:

# crm resource cleanup dlm-clvm

This example cleans up the dlm-clvm resource group shown above. To clean up only a single resource from the group, such as dlm, specify its identifier instead.

Removing a Cluster Resource

Proceed as follows to remove a cluster resource:

1 Log in as root and start the crm tool:

# crm

2 Run the following command to get a list of your resources:

crm(live)# resource status

For example, the output can look like this (where myip is the relevant identifier of your resource):

myip (ocf::heartbeat:IPaddr) ...

3 Delete the resource with the relevant identifier:

crm(live)# configure delete YOUR_ID

4 Commit the changes:

crm(live)# configure commit

Migrating a Cluster Resource

Although resources are configured to automatically fail over (or migrate) to other nodes of the cluster in the event of a hardware or software failure, you can also manually move a resource to another node in the cluster using either the Pacemaker GUI or the command line.

Use the migrate command for this task. For example, to migrate the resource ipaddress1 to a cluster node named node2, use these commands:

# crm resource
crm(live)resource# migrate ipaddress1 node2

7.5 Setting Passwords Independent of cib.xml

If your cluster configuration contains sensitive information, such as passwords, it should be stored in local files. That way, these parameters will never be logged or leaked in support reports.

Before using secret, run the show command first to get an overview of all your resources:

# crm configure show
primitive mydb ocf:heartbeat:mysql \
    params replication_user=admin ...

If you want to set a password for the above mydb resource, use the following commands:

# crm resource secret mydb set passwd linux
INFO: syncing /var/lib/heartbeat/lrm/secrets/mydb/passwd to [your node list]

You can get the saved password back with:

# crm resource secret mydb show passwd
linux

Note that the parameters need to be synchronized between nodes; the crm resource secret command will take care of that. We highly recommend using only this command to manage secret parameters.

7.6 Retrieving History Information

Investigating the cluster history is a complex task. To simplify it, the crm shell contains the history command with its subcommands. It is assumed that SSH is configured correctly.

Each cluster moves states, migrates resources, or starts important processes. All of these actions can be retrieved by subcommands of history. Alternatively, use Hawk as explained in Procedure 6.23, Viewing Transitions with the History Explorer (page 140).

By default, all history commands look at the events of the last hour. To change this time frame, use the limit subcommand. The syntax is:

# crm history
crm(live)history# limit FROM_TIME [TO_TIME]

Some valid examples include:

limit 4:00pm, limit 16:00
  Both commands mean the same: today at 4pm.

limit 2012/01/12 6pm
  January 12th 2012 at 6pm.

limit "Sun 5 20:46"
  The 5th of the current month of the current year, a Sunday, at 8:46pm.

Find more examples and information on how to create time frames in the dateutil section of the crm documentation.

The info subcommand shows all the parameters which are covered by the hb_report:

crm(live)history# info
Source: live
Period: :10:56 - end
Nodes: earth
Groups:
Resources:

To limit hb_report to certain parameters, view the available options with the subcommand help. To narrow down the level of detail, use the subcommand detail with a level:

crm(live)history# detail 2

The higher the number, the more detailed your report will be. The default is 0 (zero).

After you have set the above parameters, use log to show the log messages.

To display the last transition, use the following command:

crm(live)history# transition -1
INFO: fetching new logs, please wait ...

This command fetches the logs and runs dotty (from the graphviz package) to show the transition graph. The shell opens the log file, which you can browse with the cursor keys. If you do not want to open the transition graph, use the nograph option:

crm(live)history# transition -1 nograph

7.7 For More Information

The crm man page.

See the technical guide Highly Available NFS Storage with DRBD and Pacemaker for an exhaustive example.

187 Adding or Modifying Resource Agents 8 All tasks that need to be managed by a cluster must be available as a resource. There are two major groups here to consider: resource agents and STONITH agents. For both categories, you can add your own agents, extending the abilities of the cluster to your own needs. 8.1 STONITH Agents A cluster sometimes detects that one of the nodes is behaving strangely and needs to remove it. This is called fencing and is commonly done with a STONITH resource. All STONITH resources reside in /usr/lib/stonith/plugins on each node. WARNING: SSH and STONITH Are Not Supported It is impossible to know how SSH might react to other system problems. For this reason, SSH and STONITH agent are not supported for production environments. To get a list of all currently available STONITH devices (from the software side), use the command crm ra list stonith. As of yet there is no documentation about writing STONITH agents. If you want to write new STONITH agents, consult the examples available in the source of the cluster-glue package. Adding or Modifying Resource Agents 175

8.2 Writing OCF Resource Agents

All OCF resource agents (RAs) are available in /usr/lib/ocf/resource.d/; see Section 4.2.2, Supported Resource Agent Classes (page 52) for more information. Each resource agent must support the following operations to control it:

start
  start or enable the resource

stop
  stop or disable the resource

status
  returns the status of the resource

monitor
  similar to status, but also checks for unexpected states

validate
  validate the resource's configuration

meta-data
  returns information about the resource agent in XML

The general procedure of how to create an OCF RA is as follows:

1 Load the file /usr/lib/ocf/resource.d/pacemaker/Dummy as a template.

2 Create a new subdirectory for each new resource agent to avoid naming conflicts. For example, if you have a resource group kitchen with the resource coffee_machine, add this resource to the directory /usr/lib/ocf/resource.d/kitchen/. To access this RA, execute the following in the crm shell:

configure primitive coffee_1 ocf:kitchen:coffee_machine ...

3 Implement the different shell functions and save your file under a different name.
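The shell functions mentioned in step 3 typically map one-to-one to the operations listed above. The following is only a rough, minimal sketch of such an agent, not a production-ready implementation; the coffee_machine name, the pid file, and the exact location of the ocf-shellfuncs helper are assumptions that may differ on your system (compare the Dummy template for the authoritative structure):

#!/bin/sh
# Minimal OCF RA sketch. Source the OCF helper functions (path may vary by version).
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
: ${__OCF_ACTION=$1}

PIDFILE="${HA_RSCTMP}/coffee_machine.pid"   # assumed location, for illustration only

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="coffee_machine">
  <version>0.1</version>
  <shortdesc lang="en">Example resource agent (sketch)</shortdesc>
  <longdesc lang="en">Manages a hypothetical service; for illustration only.</longdesc>
  <parameters/>
  <actions>
    <action name="start" timeout="20"/>
    <action name="stop" timeout="20"/>
    <action name="monitor" timeout="20" interval="10"/>
    <action name="validate-all" timeout="5"/>
    <action name="meta-data" timeout="5"/>
  </actions>
</resource-agent>
END
}

coffee_monitor() {
    # A real agent would check the actual service here.
    [ -f "$PIDFILE" ] && return $OCF_SUCCESS
    return $OCF_NOT_RUNNING
}

coffee_start() {
    coffee_monitor && return $OCF_SUCCESS
    touch "$PIDFILE" || return $OCF_ERR_GENERIC
    return $OCF_SUCCESS
}

coffee_stop() {
    rm -f "$PIDFILE"
    return $OCF_SUCCESS
}

case $__OCF_ACTION in
    meta-data)       meta_data; exit $OCF_SUCCESS;;
    start)           coffee_start;;
    stop)            coffee_stop;;
    monitor|status)  coffee_monitor;;
    validate-all)    exit $OCF_SUCCESS;;
    *)               exit $OCF_ERR_UNIMPLEMENTED;;
esac
exit $?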

More details about writing OCF resource agents can be found in the upstream resource agent developer documentation. Find special information about several concepts in Chapter 1, Product Overview (page 3).

8.3 OCF Return Codes and Failure Recovery

According to the OCF specification, there are strict definitions of the exit codes an action must return. The cluster always checks the return code against the expected result. If the result does not match the expected value, then the operation is considered to have failed and a recovery action is initiated. There are three types of failure recovery:

Table 8.1 Failure Recovery Types

soft
  Description: A transient error occurred.
  Action Taken by the Cluster: Restart the resource or move it to a new location.

hard
  Description: A non-transient error occurred. The error may be specific to the current node.
  Action Taken by the Cluster: Move the resource elsewhere and prevent it from being retried on the current node.

fatal
  Description: A non-transient error occurred that will be common to all cluster nodes. This means a bad configuration was specified.
  Action Taken by the Cluster: Stop the resource and prevent it from being started on any cluster node.

Assuming an action is considered to have failed, the following table outlines the different OCF return codes and the type of recovery the cluster will initiate when the respective error code is received.

Table 8.2 OCF Return Codes

0 (OCF_SUCCESS)
  Success. The command completed successfully. This is the expected result for all start, stop, promote and demote commands.
  Recovery type: soft

1 (OCF_ERR_GENERIC)
  Generic "there was a problem" error code.
  Recovery type: soft

2 (OCF_ERR_ARGS)
  The resource's configuration is not valid on this machine (for example, it refers to a location/tool not found on the node).
  Recovery type: hard

3 (OCF_ERR_UNIMPLEMENTED)
  The requested action is not implemented.
  Recovery type: hard

4 (OCF_ERR_PERM)
  The resource agent does not have sufficient privileges to complete the task.
  Recovery type: hard

5 (OCF_ERR_INSTALLED)
  The tools required by the resource are not installed on this machine.
  Recovery type: hard

6 (OCF_ERR_CONFIGURED)
  The resource's configuration is invalid (for example, required parameters are missing).
  Recovery type: fatal

7 (OCF_NOT_RUNNING)
  The resource is not running. The cluster will not attempt to stop a resource that returns this for any action.
  Recovery type: N/A. This OCF return code may or may not require resource recovery; it depends on the expected resource status. If unexpected, then soft recovery.

8 (OCF_RUNNING_MASTER)
  The resource is running in Master mode.
  Recovery type: soft

9 (OCF_FAILED_MASTER)
  The resource is in Master mode but has failed. The resource will be demoted, stopped and then started (and possibly promoted) again.
  Recovery type: soft

other (N/A)
  Custom error code.
  Recovery type: soft
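When developing an agent, it can be useful to invoke it by hand and compare its exit code against the table above. The following is only a sketch; it assumes the Dummy agent and the ocf-tester utility from the resource-agents package are installed, and paths may differ on your system:

# Run a single action directly; OCF_ROOT is normally exported by the cluster.
export OCF_ROOT=/usr/lib/ocf
/usr/lib/ocf/resource.d/pacemaker/Dummy monitor
echo $?    # 7 = OCF_NOT_RUNNING, 0 = OCF_SUCCESS

# Let ocf-tester exercise the mandatory actions of an agent:
ocf-tester -n test-dummy /usr/lib/ocf/resource.d/pacemaker/Dummy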


193 Fencing and STONITH 9 Fencing is a very important concept in computer clusters for HA (High Availability). A cluster sometimes detects that one of the nodes is behaving strangely and needs to remove it. This is called fencing and is commonly done with a STONITH resource. Fencing may be defined as a method to bring an HA cluster to a known state. Every resource in a cluster has a state attached. For example: resource r1 is started on node1. In an HA cluster, such a state implies that resource r1 is stopped on all nodes except node1, because an HA cluster must make sure that every resource may be started on only one node. Every node must report every change that happens to a resource. The cluster state is thus a collection of resource states and node states. When the state of a node or resource cannot be established with certainty, fencing comes in. Even when the cluster is not aware of what is happening on a given node, fencing can ensure that the node does not run any important resources. 9.1 Classes of Fencing There are two classes of fencing: resource level and node level fencing. The latter is the primary subject of this chapter. Resource Level Fencing Using resource level fencing the cluster can ensure that a node cannot access one or more resources. One typical example is a SAN, where a fencing operation changes rules on a SAN switch to deny access from the node. Fencing and STONITH 181

194 Resource level fencing can be achieved by using normal resources on which the resource you want to protect depends. Such a resource would simply refuse to start on this node and therefore resources which depend on it will not run on the same node. Node Level Fencing Node level fencing ensures that a node does not run any resources at all. This is usually done in a simple if brutal way: reset or power off the node. 9.2 Node Level Fencing In SUSE Linux Enterprise High Availability Extension, the fencing implementation is STONITH (Shoot The Other Node in the Head). It provides node level fencing. The High Availability Extension includes the stonith command line tool, an extensible interface for remotely powering down a node in the cluster. For an overview of the available options, run stonith --help or refer to the man page of stonith for more information STONITH Devices To use node level fencing, you first need to have a fencing device. To get a list of STONITH devices which are supported by the High Availability Extension, run the following command as root on any of the nodes: stonith -L STONITH devices may be classified into the following categories: Power Distribution Units (PDU) Power Distribution Units are an essential element in managing power capacity and functionality for critical network, server and data center equipment. They can provide remote load monitoring of connected equipment and individual outlet power control for remote power recycling. Uninterruptible Power Supplies (UPS) A stable power supply provides emergency power to connected equipment by supplying power from a separate source in the event of utility power failure. 182 High Availability Guide

195 Blade Power Control Devices If you are running a cluster on a set of blades, then the power control device in the blade enclosure is the only candidate for fencing. Of course, this device must be capable of managing single blade computers. Lights-out Devices Lights-out devices (IBM RSA, HP ilo, Dell DRAC) are becoming increasingly popular and may even become standard in off-the-shelf computers. However, they are inferior to UPS devices, because they share a power supply with their host (a cluster node). If a node stays without power, the device supposed to control it would be just as useless. In that case, the CRM would continue its attempts to fence the node indefinitely while all other resource operations would wait for the fencing/stonith operation to complete. Testing Devices Testing devices are used exclusively for testing purposes. They are usually more gentle on the hardware. Once the cluster goes into production, they must be replaced with real fencing devices. The choice of the STONITH device depends mainly on your budget and the kind of hardware you use STONITH Implementation The STONITH implementation of SUSE Linux Enterprise High Availability Extension consists of two components: stonithd stonithd is a daemon which can be accessed by local processes or over the network. It accepts the commands which correspond to fencing operations: reset, power-off, and power-on. It can also check the status of the fencing device. The stonithd daemon runs on every node in the CRM HA cluster. The stonithd instance running on the DC node receives a fencing request from the CRM. It is up to this and other stonithd programs to carry out the desired fencing operation. STONITH Plug-ins For every supported fencing device there is a STONITH plug-in which is capable of controlling said device. A STONITH plug-in is the interface to the fencing device. Fencing and STONITH 183

196 On each node, all STONITH plug-ins reside in /usr/lib/stonith/plugins (or in /usr/lib64/stonith/plugins for 64-bit architectures). All STONITH plug-ins look the same to stonithd, but are quite different on the other side reflecting the nature of the fencing device. Some plug-ins support more than one device. A typical example is ipmilan (or external/ipmi) which implements the IPMI protocol and can control any device which supports this protocol. 9.3 STONITH Configuration To set up fencing, you need to configure one or more STONITH resources the stonithd daemon requires no configuration. All configuration is stored in the CIB. A STONITH resource is a resource of class stonith (see Section 4.2.2, Supported Resource Agent Classes (page 52)). STONITH resources are a representation of STONITH plug-ins in the CIB. Apart from the fencing operations, the STONITH resources can be started, stopped and monitored, just like any other resource. Starting or stopping STONITH resources means loading and unloading the STONITH device driver on a node. Starting and stopping are thus only administrative operations and do not translate to any operation on the fencing device itself. However, monitoring does translate to logging it to the device (to verify that the device will work in case it is needed). When a STONITH resource fails over to another node it enables the current node to talk to the STONITH device by loading the respective driver. STONITH resources can be configured just like any other resource. For more information about configuring resources, see Section 5.3.2, Creating STONITH Resources (page 83), Section 6.3.3, Creating STONITH Resources (page 119), or Section 7.3.3, Creating a STONITH Resource (page 161). The list of parameters (attributes) depends on the respective STONITH type. To view a list of parameters for a specific device, use the stonith command: stonith -t stonith-device-type -n For example, to view the parameters for the ibmhmc device type, enter the following: stonith -t ibmhmc -n To get a short help text for the device, use the -h option: 184 High Availability Guide

197 stonith -t stonith-device-type -h Example STONITH Resource Configurations In the following, find some example configurations written in the syntax of the crm command line tool. To apply them, put the sample in a text file (for example, sample.txt) and run: crm < sample.txt For more information about configuring resources with the crm command line tool, refer to Chapter 7, Configuring and Managing Cluster Resources (Command Line) (page 151). WARNING: Testing Configurations Some of the examples below are for demonstration and testing purposes only. Do not use any of the Testing Configuration examples in real-life cluster scenarios. Example 9.1 Testing Configuration configure primitive st-null stonith:null \ params hostlist="node1 node2" clone fencing st-null commit Fencing and STONITH 185

198 Example 9.2 Testing Configuration An alternative configuration: configure primitive st-node1 stonith:null \ params hostlist="node1" primitive st-node2 stonith:null \ params hostlist="node2" location l-st-node1 st-node1 -inf: node1 location l-st-node2 st-node2 -inf: node2 commit This configuration example is perfectly alright as far as the cluster software is concerned. The only difference to a real world configuration is that no fencing operation takes place. Example 9.3 Testing Configuration A more realistic example (but still only for testing) is the following external/ssh configuration: configure primitive st-ssh stonith:external/ssh \ params hostlist="node1 node2" clone fencing st-ssh commit This one can also reset nodes. The configuration is similar to the first one which features the null STONITH device. In this example, clones are used. They are a CRM/Pacemaker feature. A clone is basically a shortcut: instead of defining n identical, yet differently named resources, a single cloned resource suffices. By far the most common use of clones is with STONITH resources, as long as the STONITH device is accessible from all nodes. 186 High Availability Guide

199 Example 9.4 Configuration of an IBM RSA Lights-out Device The real device configuration is not much different, though some devices may require more attributes. An IBM RSA lights-out device might be configured like this: configure primitive st-ibmrsa-1 stonith:external/ibmrsa-telnet \ params nodename=node1 ipaddr= \ userid=userid passwd=passw0rd primitive st-ibmrsa-2 stonith:external/ibmrsa-telnet \ params nodename=node2 ipaddr= \ userid=userid passwd=passw0rd location l-st-node1 st-ibmrsa-1 -inf: node1 location l-st-node2 st-ibmrsa-2 -inf: node2 commit In this example, location constraints are used for the following reason: There is always a certain probability that the STONITH operation is going to fail. Therefore, a STONITH operation on the node which is the executioner as well is not reliable. If the node is reset, it cannot send the notification about the fencing operation outcome. The only way to do that is to assume that the operation is going to succeed and send the notification beforehand. But if the operation fails, problems could arise. Therefore, by convention, stonithd refuses to kill its host. Fencing and STONITH 187

200 Example 9.5 Configuration of an UPS Fencing Device The configuration of a UPS type fencing device is similar to the examples above. The details are not covered here. All UPS devices employ the same mechanics for fencing. How the device is accessed varies. Old UPS devices only had a serial port, in most cases connected at 1200baud using a special serial cable. Many new ones still have a serial port, but often they also use a USB or Ethernet interface. The kind of connection you can use depends on what the plug-in supports. For example, compare the apcmaster with the apcsmart device by using the stonith -t stonith-device-type -n command: stonith -t apcmaster -h returns the following information: STONITH Device: apcmaster - APC MasterSwitch (via telnet) NOTE: The APC MasterSwitch accepts only one (telnet) connection/session a time. When one session is active, subsequent attempts to connect to the MasterSwitch will fail. For more information see List of valid parameter names for apcmaster STONITH device: ipaddr login password With stonith -t apcsmart -h you get the following output: STONITH Device: apcsmart - APC Smart UPS (via serial port - NOT USB!). Works with higher-end APC UPSes, like Back-UPS Pro, Smart-UPS, Matrix-UPS, etc. (Smart-UPS may have to be >= Smart-UPS 700?). See for protocol compatibility details. For more information see List of valid parameter names for apcsmart STONITH device: ttydev hostlist The first plug-in supports APC UPS with a network port and telnet protocol. The second plug-in uses the APC SMART protocol over the serial line, which is supported by many different APC UPS product lines. 188 High Availability Guide
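Another device commonly used in real setups is the SBD (STONITH Block Device) described in Section 9.5, Special Fencing Devices (page 190). The following is only a sketch: the device path is a placeholder, and the complete setup (including the sbd daemon and watchdog) is covered in Chapter 17, Storage Protection (page 271):

configure
primitive st-sbd stonith:external/sbd \
    params sbd_device="/dev/disk/by-id/DEVICE_ID-part1"
commit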

201 9.3.2 Constraints Versus Clones As explained in Section 9.3.1, Example STONITH Resource Configurations (page 185), there are several ways to configure a STONITH resource: using constraints, clones, or both. The choice of which construct to use for configuration depends on several factors: nature of the fencing device, number of hosts managed by the device, number of cluster nodes, or personal preference. If clones are safe to use with your configuration and they reduce the configuration, then use cloned STONITH resources. 9.4 Monitoring Fencing Devices Just like any other resource, the STONITH class agents also support the monitoring operation for checking status. NOTE: Monitoring STONITH Resources Monitor STONITH resources regularly, yet sparingly. For most devices a monitoring interval of at least 1800 seconds (30 minutes) should suffice. Fencing devices are an indispensable part of an HA cluster, but the less you need to use them, the better. Power management equipment is often affected by too much broadcast traffic. Some devices cannot handle more than ten or so connections per minute. Some get confused if two clients try to connect at the same time. Most cannot handle more than one session at a time. Checking the status of fencing devices once every few hours should be enough in most cases. The probability that a fencing operation needs to be performed and the power switch fails is low. For detailed information on how to configure monitor operations, refer to Procedure 5.3, Adding or Modifying Meta and Instance Attributes (page 82) for the GUI approach or to Section 7.3.8, Configuring Resource Monitoring (page 168) for the command line approach. Fencing and STONITH 189
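Expressed in the crm syntax from Section 7.3.3, Creating a STONITH Resource (page 161), such a conservative monitor operation might look like the following sketch; all parameter values here are placeholders only:

crm(live)configure# primitive st-ipmi stonith:external/ipmi \
    params hostname="node1" ipaddr="192.168.1.100" userid="admin" passwd="secret" \
    op monitor interval=1800s timeout=120s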

202 9.5 Special Fencing Devices In addition to plug-ins which handle real STONITH devices, there are special purpose STONITH plug-ins. WARNING: For Testing Only Some of the STONITH plug-ins mentioned below are for demonstration and testing purposes only. Do not use any of the following devices in real-life scenarios because this may lead to data corruption and unpredictable results: external/ssh ssh null external/kdumpcheck This plug-in checks if a kernel dump is in progress on a node. If so, it returns true, and acts as if the node has been fenced. The node cannot run any resources during the dump anyway. This avoids fencing a node that is already down but doing a dump, which takes some time. The plug-in must be used in concert with another, real STONITH device. For more details, see /usr/share/doc/packages/ cluster-glue/readme_kdumpcheck.txt. external/sbd This is a self-fencing device. It reacts to a so-called poison pill which can be inserted into a shared disk. On shared-storage connection loss, it stops the node from operating. Learn how to use this STONITH agent to implement storage-based fencing in Chapter 17, Storage Protection (page 271). See also for more details. IMPORTANT: external/sbd and DRBD The external/sbd fencing mechanism requires that the SBD partition is readable directly from each node. Thus, a DRBD* device must not be used for an SBD partition. 190 High Availability Guide

203 However, you can use the fencing mechanism for a DRBD cluster, provided the SBD partition is located on a shared disk that is not mirrored or replicated. external/ssh Another software-based fencing mechanism. The nodes must be able to log in to each other as root without passwords. It takes a single parameter, hostlist, specifying the nodes that it will target. As it is not able to reset a truly failed node, it must not be used for real-life clusters for testing and demonstration purposes only. Using it for shared storage would result in data corruption. meatware meatware requires help from the user to operate. Whenever invoked, meatware logs a CRIT severity message which shows up on the node's console. The operator then confirms that the node is down and issues a meatclient(8) command. This tells meatware to inform the cluster that the node should be considered dead. See /usr/share/doc/packages/cluster-glue/readme.meatware for more information. null This is a fake device used in various testing scenarios. It always claims that it has shot a node, but never does anything. Do not use it unless you know what you are doing. suicide This is a software-only device, which can reboot a node it is running on, using the reboot command. This requires action by the node's operating system and can fail under certain circumstances. Therefore avoid using this device whenever possible. However, it is safe to use on one-node clusters. suicide and null are the only exceptions to the I do not shoot my host rule. 9.6 Basic Recommendations Check the following list of recommendations to avoid common mistakes: Do not configure several power switches in parallel. Fencing and STONITH 191

To test your STONITH devices and their configuration, pull the plug once from each node and verify that fencing of the node does take place.

Test your resources under load and verify that the timeout values are appropriate. Setting timeout values too low can trigger (unnecessary) fencing operations. For details, refer to Section 4.2.9, Timeout Values (page 63).

Use appropriate fencing devices for your setup. For details, also refer to Section 9.5, Special Fencing Devices (page 190).

Configure one or more STONITH resources. By default, the global cluster option stonith-enabled is set to true. If no STONITH resources have been defined, the cluster will refuse to start any resources.

Do not set the global cluster option stonith-enabled to false for the following reasons:

Clusters without STONITH enabled are not supported.

DLM/OCFS2 will block forever waiting for a fencing operation that will never happen.

Do not set the global cluster option startup-fencing to false. By default, it is set to true for the following reason: If a node is in an unknown state during cluster startup, the node will be fenced once to clarify its status.

9.7 For More Information

/usr/share/doc/packages/cluster-glue
  In your installed system, this directory contains README files for many STONITH plug-ins and devices.

Information about STONITH on the home page of The High Availability Linux Project.

205 Fencing and Stonith: Information about fencing on the home page of the Pacemaker Project. Pacemaker Explained: Explains the concepts used to configure Pacemaker. Contains comprehensive and very detailed information for reference. 10/split-brain-quo.html Article explaining the concepts of split brain, quorum and fencing in HA clusters. Fencing and STONITH 193


Access Control Lists 10

The various tools for administering clusters, like the crm shell, Hawk, or the Pacemaker GUI, can be used by root or any user in the group haclient. By default, these users have full read-write access. In some cases, you may want to limit access or assign more fine-grained access rights.

Optionally, access control lists (ACLs) allow you to define rules for users in the haclient group to allow or deny access to any part of the cluster configuration. Typically, sets of rules are combined into roles. Then you can assign users to a role that fits their tasks.

10.1 Requirements and Prerequisites

Before you start using ACLs on your cluster, make sure the following conditions are fulfilled:

The same users must be available on all nodes in your cluster. Use NIS to ensure this.

All users must belong to the haclient group.

All users have to run the crm shell by its absolute path /usr/sbin/crm.

Note the following points:

208 ACLs are an optional feature. If the ACL feature is disabled, root and users belonging to the haclient group have full read/write access to the cluster configuration. If you want to enable the ACL feature, use this command: crm configure property enable-acl=true If non-privileged users want to run the crm shell, they have to change the PATH variable and extend it with /usr/sbin. To use ACLs you need some knowledge about XPath. XPath is a language for selecting nodes in an XML document. Refer to XPath or look into the specification at The Basics of ACLs An ACL role is a set of rules which describe access rights to CIB. Rules consist of: access rights to read, write, or deny, and a specification where to apply the rule. This specification can be a tag, an id reference, a combination of both, or an XPath expression. In most cases, it is more convenient to bundle ACLs into roles and assign a role to a user. However, it is possible to give a user certain access rules without defining any roles. There are two methods to manage ACL rules: Via an XPath Expression You need to know the structure of the underlying XML to create ACL rules. Via a Tag and/or Ref Abbreviation Create a shorthand syntax and ACL rules apply to the matched objects. 196 High Availability Guide

209 Setting ACL Rules via XPath To manage ACL rules via XPath, you need to know the structure of the underlying XML. Retrieve the structure with the following command: crm configure show xml The XML structure can also be displayed in the Pacemaker GUI by selecting Configuration > View XML. Regardless of the tool, the output is your cluster configuration in XML (see Example 10.1, Excerpt of a Cluster Configuration in XML (page 197)). Example 10.1 Excerpt of a Cluster Configuration in XML <cib admin_epoch="0" cib-last-written="wed Nov 2 16:42: " crm_feature_set="3.0.5" dc-uuid="stuttgart" epoch="13" have-quorum="1" num_updates="42" update-client="cibadmin" update-origin="nuernberg" update-user="root"validate-with="pacemaker-1.2"> <configuration> <crm_config> <cluster_property_set id="cib-bootstrap-options"> <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/> </cluster_property_set> </crm_config> <nodes> <node id="stuttgart" type="normal" uname="stuttgart"/> <node id="nuernberg" type="normal" uname="nuernberg"/> </nodes> <resources>... </resources> <constraints/> <rsc_defaults>... </rsc_defaults> <op_defaults>... </op_defaults> <configuration> </cib> With the XPath language you can locate nodes in this XML document. For example, to select the root node (cib) use the XPath expression /cib. To locate the global cluster configurations, use the XPath /cib/configuration/crm_config. The following table collects the access type and the XPath expression to create an operator role: Access Control Lists 197

210 Table 10.1 Type Write Types and XPath Expression for an Operator Role XPath/Explanation Turn maintenance mode on or off. Write Choose whether pending operations are recorded. Write Set node in online or standby mode. Write Start, stop, promote or demote any resource. Write Select if a resource should be managed or not. Write //constraints/rsc_location Migrate/move resources from one node to another. Read /cib View the status of the cluster Setting ACL Rules via Tag Abbreviations For users who do not want to deal with the XML structure there is an easier method. It is a combination of a tag specifier and/or a reference. For example, consider the following XPath: /cib/resources/primitive[@id='rsc1'] 198 High Availability Guide

211 primitive is a resource with the reference rsc1. The abbreviated syntax is: tag: "primitive" ref:"rsc1" This also works for constraints. Here is the verbose XPath: /cib/constraint/rsc_location The abbreviated syntax is written like this: tag: "rsc_location" The CIB daemon knows how to apply the ACL rules to the matched objects. The abbreviated syntax can be used in the crm Shell or the Pacemaker GUI Configuring ACLs with the Pacemaker GUI Use the Pacemaker GUI to define your roles and users. The following procedure adds a monitor role which has only read access to the CIB. Proceed as follows: Procedure 10.1 Adding a Monitor Role and Assigning a User with the Pacemaker GUI 1 Start the Pacemaker GUI and log in as described in Section 5.1.1, Logging in to a Cluster (page 76). 2 Click the ACLs entry in the Configuration tree. 3 Click Add. A dialog box appears. Choose between ACL User and ACL Role. 4 To define your ACL role(s): 4a Choose ACL Role. A window opens in which you add your configuration options. 4b Add a unique identifier in the ID textfield, for example monitor. 4c Click Add and choose the rights (Read, Write, or Deny). In our example, select Read and proceed with Ok. Access Control Lists 199

212 4d Enter the XPath expression /cib in the Xpath textfield. Proceed with Ok. Sidenote: If you have resources or constraints, you can also use the abbreviated syntax as explained in Section , Setting ACL Rules via Tag Abbreviations (page 198). In this case, enter your tag in the Tag textfield and the optional reference in the Ref textfield. In our example, there is no abbreviated form possible, so you can only use the XPath notation here. 4e If you have other conditions, repeat the steps (Step 4c (page 199) and Step 4d (page 200)). In our example, this is not the case so your role is finished and you can close the window with Ok. 5 Assign your role to a user: 5a Click the Add button. A dialog box appears to choose between ACL User and ACL Role. 5b Choose ACL User. A window opens in which you add your configuration options. 5c Enter the username in the ID textfield. Make sure this user belongs to the haclient group. 5d Click Add and choose Role Ref. 5e Use the role name specified in Step 4b (page 199) Configuring ACLs with the crm Shell The following procedure adds a monitor role as shown in Section 10.3, Configuring ACLs with the Pacemaker GUI (page 199) and assigns it to a user. Proceed as follows: Procedure 10.2 Adding a Monitor Role and Assigning a User with the crm Shell 1 Log in as root. 200 High Availability Guide

213 2 Start the interactive mode of the crm shell: # crm configure crm(live)configure# 3 Define your ACL role(s): 3a Use the role command to define your new role. To define a monitor role, use the following command: role monitor read xpath:"/cib" The previous command creates a new role with name monitor, sets the read rights and applies it to all elements in the CIB by using the XPath /cib. If necessary, you can add more access rights and XPath arguments. Sidenote: If you have resources or constraints, you can also use the abbreviated syntax as explained in Section , Setting ACL Rules via Tag Abbreviations (page 198). If you have a primitive resource with the ID rsc1, use the following notation to set the access rights: write tag:"primitive" ref:"rsc1". You can also refer to the ID with write ref:"rsc1". This has the advantage that it can match a primitive resource and a local resource manager resource (LRM), which enables you to configure rsc1 and also cleanup its status at the same time. 3b Add additional roles as needed. 4 Assign your roles to users. Make sure this user belongs to the haclient group. crm(live)configure# user tux role:monitor 5 Check your changes: crm(live)configure# show 6 Commit your changes: crm(live)configure# commit Access Control Lists 201
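After committing, crm configure show should list the new ACL entries roughly as follows (assuming the monitor role and the user tux from the procedure above):

role monitor \
    read xpath:"/cib"
user tux \
    role:monitor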

214 10.5 For More Information See High Availability Guide

215 Network Device Bonding 11 For many systems, it is desirable to implement network connections that comply to more than the standard data security or availability requirements of a typical Ethernet device. In these cases, several Ethernet devices can be aggregated to a single bonding device. The configuration of the bonding device is done by means of bonding module options. The behavior is determined through the mode of the bonding device. By default, this is mode=active-backup, which means that a different slave device will become active if the active slave fails. When using OpenAIS, the bonding device is not managed by the cluster software. Therefore, the bonding device must be configured on each cluster node that might possibly need to access the bonding device Configuring Bonding Devices with YaST To configure a bonding device, use the following procedure: 1 Start YaST as root and select Network Devices > Network Settings. 2 Click Add to configure a new network card and change the Device Type to Bond. Proceed with Next. Network Device Bonding 203

216 3 Select how to assign the IP address to the bonding device. Three methods are at your disposal: No IP Address Dynamic Address (with DHCP or Zeroconf) Statically assigned IP Address Use the method that is appropriate for your environment. If OpenAIS manages virtual IP addresses, select Statically assigned IP Address and assign a basic IP address to the interface. 4 Switch to the Bond Slaves tab. 5 To select the Ethernet devices that need to be included into the bond, activate the check box in front of the relevant Bond Slave. 6 Edit the Bond Driver Options. The following modes are available: balance-rr Provides load balancing and fault tolerance. active-backup Provides fault tolerance. 204 High Availability Guide

217 balance-xor Provides load balancing and fault tolerance. broadcast Provides fault tolerance 802.3ad Provides dynamic link aggregation if supported by the connected switch. balance-tlb Provides load balancing for outgoing traffic. balance-alb Provides load balancing for incoming and outgoing traffic, if the network devices used allow the modifying of the network device's hardware address while in use. 7 Make sure to add the parameter miimon=100 to Bond Driver Options. Without this parameter, the data integrity is not checked regularly. 8 Click Next and leave YaST with OK to create the device For More Information All modes, as well as many other options, are explained in detail in the Linux Ethernet Bonding Driver HOWTO, which can be found at /usr/src/linux/ Documentation/networking/bonding.txt once you have installed the package kernel-source. Network Device Bonding 205
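For reference, the YaST procedure in Section 11.1 ultimately writes a standard ifcfg file below /etc/sysconfig/network/. A minimal hand-written equivalent might look like the following sketch; the interface names, the address, and the exact set of variables are assumptions and can differ between releases:

/etc/sysconfig/network/ifcfg-bond0

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.1/24'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'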


219 Load Balancing with Linux Virtual Server 12 The goal of Linux Virtual Server (LVS) is to provide a basic framework that directs network connections to multiple servers that share their workload. Linux Virtual Server is a cluster of servers (one or more load balancers and several real servers for running services) which appears to be one large, fast server to an outside client. This apparent single server is called a virtual server. The Linux Virtual Server can be used to build highly scalable and highly available network services, such as Web, cache, mail, FTP, media and VoIP services. The real servers and the load balancers may be interconnected by either high-speed LAN or by geographically dispersed WAN. The load balancers can dispatch requests to the different servers. They make parallel services of the cluster appear as a virtual service on a single IP address (the virtual IP address or VIP). Request dispatching can use IP load balancing technologies or application-level load balancing technologies. Scalability of the system is achieved by transparently adding or removing nodes in the cluster. High availability is provided by detecting node or daemon failures and reconfiguring the system appropriately Conceptual Overview The following sections give an overview of the main LVS components and concepts. Load Balancing with Linux Virtual Server 207

220 Director The main component of LVS is the ip_vs (or IPVS) kernel code. It implements transportlayer load balancing inside the Linux kernel (layer-4 switching). The node that runs a Linux kernel including the IPVS code is called director. The IPVS code running on the director is the essential feature of LVS. When clients connect to the director, the incoming requests are load-balanced across all cluster nodes: The director forwards packets to the real servers, using a modified set of routing rules that make the LVS work. For example, connections do not originate or terminate on the director, it does not send acknowledgments. The director acts as a specialized router that forwards packets from end-users to real servers (the hosts that run the applications that process the requests). By default, the kernel does not have the IPVS module installed. The IPVS kernel module is included in the cluster-network-kmp-default package User Space Controller and Daemons The ldirectord daemon is a user-space daemon for managing Linux Virtual Server and monitoring the real servers in an LVS cluster of load balanced virtual servers. A configuration file, /etc/ha.d/ldirectord.cf, specifies the virtual services and their associated real servers and tells ldirectord how to configure the server as a LVS redirector. When the daemon is initialized, it creates the virtual services for the cluster. By periodically requesting a known URL and checking the responses, the ldirectord daemon monitors the health of the real servers. If a real server fails, it will be removed from the list of available servers at the load balancer. When the service monitor detects that the dead server has recovered and is working again, it will add the server back to the list of available servers. In case that all real servers should be down, a fall-back server can be specified to which to redirect a Web service. Typically the fall-back server is localhost, presenting an emergency page about the Web service being temporarily unavailable. 208 High Availability Guide

221 Packet Forwarding There are three different methods of how the director can send packets from the client to the real servers: Network Address Translation (NAT) Incoming requests arrive at the virtual IP and are forwarded to the real servers by changing the destination IP address and port to that of the chosen real server. The real server sends the response to the load balancer which in turn changes the destination IP address and forwards the response back to the client, so that the end user receives the replies from the expected source. As all traffic goes through the load balancer, it usually becomes a bottleneck for the cluster. IP Tunneling (IP-IP Encapsulation) IP tunneling enables packets addressed to an IP address to be redirected to another address, possibly on a different network. The LVS sends requests to real servers through an IP tunnel (redirecting to a different IP address) and the real servers reply directly to the client using their own routing tables. Cluster members can be in different subnets. Direct Routing Packets from end users are forwarded directly to the real server. The IP packet is not modified, so the real servers must be configured to accept traffic for the virtual server's IP address. The response from the real server is sent directly to the client. The real servers and load balancers have to be in the same physical network segment Scheduling Algorithms Deciding which real server to use for a new connection requested by a client is implemented using different algorithms. They are available as modules and can be adapted to specific needs. For an overview of available modules, refer to the ipvsadm(8) man page. Upon receiving a connect request from a client, the director assigns a real server to the client based on a schedule. The scheduler is the part of the IPVS kernel code which decides which real server will get the next new connection. Load Balancing with Linux Virtual Server 209
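The forwarding methods and schedulers described above are also visible at the command line level. The following is an illustration only (addresses and ports are placeholders, and on an LVS cluster the IPVS table is normally maintained by ldirectord rather than by hand):

# Add a virtual service on the VIP, using the weighted least-connection scheduler:
ipvsadm -A -t 192.168.0.200:80 -s wlc
# Add real servers (-g = direct routing, -i = IP tunneling, -m = NAT/masquerading):
ipvsadm -a -t 192.168.0.200:80 -r 10.0.0.10:80 -g
ipvsadm -a -t 192.168.0.200:80 -r 10.0.0.11:80 -g
# Show the current IPVS table:
ipvsadm -L -n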

222 12.2 Configuring IP Load Balancing with YaST You can configure kernel-based IP load balancing with the YaST iplb module. It is a front-end for ldirectord. To access the IP Load Balancing dialog, start YaST as root and select High Availability > IP Load Balancing. Alternatively, start the YaST cluster module as root on a command line with yast2 iplb. The YaST module writes its configuration to /etc/ha.d/ldirectord.cf. The tabs available in the YaST module correspond to the structure of the /etc/ha.d/ ldirectord.cf configuration file, defining global options and defining the options for the virtual services. For an example configuration and the resulting processes between load balancers and real servers, refer to Example 12.1, Simple ldirectord Configuration (page 215). NOTE: Global Parameters and Virtual Server Parameters If a certain parameter is specified in both the virtual server section and in the global section, the value defined in the virtual server section overrides the value defined in the global section. Procedure 12.1 Configuring Global Parameters The following procedure describes how to configure the most important global parameters. For more details about the individual parameters (and the parameters not covered here), click Help or refer to the ldirectord man page. 1 With Check Interval, define the interval in which ldirectord will connect to each of the real servers to check if they are still online. 2 With Check Timeout, set the time in which the real server should have responded after the last check. 3 With Check Count you can define how many times ldirectord will attempt to request the real servers until the check is considered failed. 210 High Availability Guide

4 With Negotiate Timeout, define a timeout in seconds for negotiate checks.

5 In Fallback, enter the hostname or IP address of the Web server onto which to redirect a Web service in case all real servers are down.

6 If you want to use an alternative path for logging, specify a path for the logs in Log File. By default, ldirectord writes its logs to /var/log/ldirectord.log.

7 If you want the system to send alerts in case the connection status to any real server changes, enter a valid e-mail address in Email Alert.

8 With Email Alert Frequency, define after how many seconds the alert should be repeated if any of the real servers remains inaccessible.

9 In Email Alert Status, specify the server states for which alerts should be sent. If you want to define more than one state, use a comma-separated list.

10 With Auto Reload, define whether ldirectord should continuously monitor the configuration file for modification. If set to yes, the configuration is automatically reloaded upon changes.

11 With the Quiescent switch, define whether to remove failed real servers from the kernel's LVS table. If set to Yes, failed servers are not removed. Instead, their weight is set to 0, which means that no new connections will be accepted. Already established connections will persist until they time out.
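For reference, the global parameters set in this procedure end up as directives in the global section of /etc/ha.d/ldirectord.cf. The following excerpt is only a sketch with assumed example values (the fallback address and the e-mail address in particular are placeholders); see Example 12.1 (page 215) and the ldirectord man page for the authoritative syntax:

  checkinterval = 10
  checktimeout = 5
  checkcount = 3
  negotiatetimeout = 30
  fallback = 127.0.0.1:80
  logfile = "/var/log/ldirectord.log"
  emailalert = "admin@example.com"
  emailalertfreq = 60
  emailalertstatus = all
  autoreload = yes
  quiescent = yes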

Figure 12.1 YaST IP Load Balancing - Global Parameters

Procedure 12.2 Configuring Virtual Services

You can configure one or more virtual services by defining a few parameters for each. The following procedure describes how to configure the most important parameters for a virtual service. For more details about the individual parameters (and the parameters not covered here), click Help or refer to the ldirectord man page.

1 In the YaST iplb module, switch to the Virtual Server Configuration tab.

2 Add a new virtual server or Edit an existing virtual server. A new dialog shows the available options.

3 In Virtual Server, enter the shared virtual IP address (IPv4 or IPv6) and port under which the load balancers and the real servers are accessible as LVS. Instead of an IP address and port number, you can also specify a hostname and a service. Alternatively, you can use a firewall mark. A firewall mark is a way of aggregating an arbitrary collection of VIP:port services into one virtual service.

4 To specify the Real Servers, enter the IP addresses or hostnames of the servers, the ports (or service names), and the forwarding method.

The forwarding method must be either gate, ipip, or masq; see Packet Forwarding (page 209). Click the Add button and enter the required arguments for each real server.

5 As Check Type, select the type of check that should be performed to test if the real servers are still alive. For example, to send a request and check if the response contains an expected string, select Negotiate.

6 If you have set the Check Type to Negotiate, you also need to define the type of service to monitor. Select it from the Service drop-down list.

7 In Request, enter the URI to the object that is requested on each real server during the check intervals.

8 If you want to check if the response from the real servers contains a certain string (an "I'm alive" message), define a regular expression that needs to be matched. Enter the regular expression into Receive. If the response from a real server contains this expression, the real server is considered to be alive.

9 Depending on the type of Service you have selected in Step 6 (page 213), you also need to specify further parameters like Login, Password, Database, or Secret. For more information, refer to the YaST help text or to the ldirectord man page.

10 Select the Scheduler to be used for load balancing. For information on the available schedulers, refer to the ipvsadm(8) man page.

11 Select the Protocol to be used. If the virtual service is specified as an IP address and port, it must be either tcp or udp. If the virtual service is specified as a firewall mark, the protocol must be fwm.

12 Define further parameters, if needed. Confirm your configuration with OK. YaST writes the configuration to /etc/ha.d/ldirectord.cf.
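To verify the result, you can optionally inspect the kernel's LVS table and the ldirectord log on the load balancer. These commands are not part of the YaST procedure; they are just a quick sanity check:

  # Show the current LVS table with numeric addresses and ports
  ipvsadm -L -n
  # Follow the ldirectord checks in its log file (default path, see Procedure 12.1)
  tail -f /var/log/ldirectord.log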

Figure 12.2 YaST IP Load Balancing - Virtual Services

Example 12.1 Simple ldirectord Configuration

The values shown in Figure 12.1, YaST IP Load Balancing - Global Parameters (page 212), and Figure 12.2, YaST IP Load Balancing - Virtual Services (page 214), would lead to the following configuration, defined in /etc/ha.d/ldirectord.cf:

  autoreload = yes
  checkinterval = 5
  checktimeout = 3
  quiescent = yes
  virtual = :80
      checktype = negotiate
      fallback = :80
      protocol = tcp
      real = :80 gate
      real = :80 gate
      receive = "still alive"
      request = "test.html"
      scheduler = wlc
      service = http

autoreload: Defines that ldirectord should continuously check the configuration file for modification.
checkinterval: Interval at which ldirectord connects to each of the real servers to check if they are still online.
checktimeout: Time within which the real server should have responded after the last check.
quiescent: Defines not to remove failed real servers from the kernel's LVS table, but to set their weight to 0 instead.
virtual: Virtual IP address (VIP) of the LVS. The LVS is available at port 80.
checktype: Type of check that should be performed to test if the real servers are still alive.
fallback: Server onto which to redirect a Web service in case all real servers for this service are down.
protocol: Protocol to be used.
real: Two real servers defined, both available at port 80. The packet forwarding method is gate, meaning that direct routing is used.
receive: Regular expression that needs to be matched in the response string from the real server.
request: URI to the object that is requested on each real server during the check intervals.
scheduler: Selected scheduler to be used for load balancing.
service: Type of service to monitor.

This configuration would lead to the following process flow: ldirectord connects to each real server once every 5 seconds and requests test.html on port 80, as specified in the configuration above. If it does not receive the expected "still alive" string from a real server within 3 seconds of the last check, it removes the real server from the available pool. However, because of the quiescent=yes setting, the real server is not removed from the LVS table. Instead, its weight is set to 0 so that no new connections to this real server are accepted. Already established connections persist until they time out.

12.3 Further Setup

Apart from the configuration of ldirectord with YaST, you need to make sure the following conditions are fulfilled to complete the LVS setup:

- The real servers are set up correctly to provide the needed services.
- The load balancing server (or servers) must be able to route traffic to the real servers using IP forwarding. The network configuration of the real servers depends on which packet forwarding method you have chosen.
- To prevent the load balancing server (or servers) from becoming a single point of failure for the whole system, you need to set up one or several backups of the load balancer. In the cluster configuration, configure a primitive resource for ldirectord, so that ldirectord can fail over to other servers in case of hardware failure (see the sketch after this list).
- As the backup of the load balancer also needs the ldirectord configuration file to fulfill its task, make sure /etc/ha.d/ldirectord.cf is available on all servers that you want to use as backups for the load balancer. You can synchronize the configuration file with Csync2 as described in Section 3.5.5, Transferring the Configuration to All Nodes (page 39).
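The following commands are one possible way to implement the IP forwarding and failover points above. They are only a sketch: the resource and group names, the virtual IP address, and the operation values are assumptions for illustration, not values prescribed by this guide. Adapt them to your setup:

  # Enable IP forwarding on the load balancer nodes
  sysctl -w net.ipv4.ip_forward=1

  # crm shell sketch: group a virtual IP with an ldirectord resource
  crm configure primitive ip-virtual ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.100" cidr_netmask="24" \
      op monitor interval="10s"
  crm configure primitive ldirectord ocf:heartbeat:ldirectord \
      params configfile="/etc/ha.d/ldirectord.cf" \
      op monitor interval="20s" timeout="10s"
  crm configure group g-lvs ip-virtual ldirectord

Grouping the virtual IP with ldirectord ensures that both always run on the same node, so the currently active load balancer also owns the virtual IP.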

12.4 For More Information

To learn more about Linux Virtual Server, refer to the project home page available at http://www.linuxvirtualserver.org/.

For more information about ldirectord, refer to its comprehensive man page.


13 Multi-Site Clusters (Geo Clusters)

Apart from local clusters and metro area clusters, SUSE Linux Enterprise High Availability Extension 11 SP2 also supports multi-site clusters (geo clusters). That means you can have multiple, geographically dispersed sites, each with a local cluster. Failover between these clusters is coordinated by a higher-level entity, the so-called booth. Support for multi-site clusters is available as a separate option to SUSE Linux Enterprise High Availability Extension.

13.1 Challenges for Multi-Site Clusters

Typically, multi-site environments are too far apart to support synchronous communication between the sites and synchronous data replication. That leads to the following challenges:

- How to make sure that a cluster site is up and running?
- How to make sure that resources are only started once?
- How to make sure that quorum can be reached between the different sites and a split brain scenario can be avoided?
- How to manage failover between the sites?
- How to deal with high latency in case of resources that need to be stopped?

In the following sections, learn how to meet these challenges with SUSE Linux Enterprise High Availability Extension.

13.2 Conceptual Overview

Multi-site clusters based on SUSE Linux Enterprise High Availability Extension can be considered as overlay clusters where each cluster site corresponds to a cluster node in a traditional cluster. The overlay cluster is managed by the booth mechanism. It guarantees that the cluster resources will be highly available across different cluster sites. This is achieved by using so-called tickets that are treated as failover domains between cluster sites, in case a site should be down. The following list explains the individual components and mechanisms that were introduced for multi-site clusters in more detail.

Components and Concepts

Ticket
A ticket grants the right to run certain resources on a specific cluster site. A ticket can only be owned by one site at a time. Initially, none of the sites has a ticket; each ticket must be granted once by the cluster administrator. After that, tickets are managed by the booth for automatic failover of resources. But administrators may also intervene and grant or revoke tickets manually.

Resources can be bound to a certain ticket by dependencies. Only if the defined ticket is available at a site are the respective resources started. Vice versa, if the ticket is removed, the resources depending on that ticket are automatically stopped.

The presence or absence of tickets for a site is stored in the CIB as a cluster status. With regard to a certain ticket, there are only two states for a site: true (the site has the ticket) or false (the site does not have the ticket). The absence of a certain ticket (during the initial state of the multi-site cluster) is not treated differently from the situation after the ticket has been revoked: both are reflected by the value false.

A ticket within an overlay cluster is similar to a resource in a traditional cluster. But in contrast to traditional clusters, tickets are the only type of resource in an overlay cluster. They are primitive resources that do not need to be configured or cloned.

Booth
The booth is the instance that manages the ticket distribution and thus the failover process between the sites of a multi-site cluster. Each of the participating clusters

and arbitrators runs a service, the boothd. It connects to the booth daemons running at the other sites and exchanges connectivity details. Once a ticket is granted to a site, the booth mechanism manages the ticket automatically: If the site that holds the ticket is out of service, the booth daemons vote on which of the other sites will get the ticket. To protect against brief connection failures, sites that lose the vote (either explicitly or implicitly by being disconnected from the voting body) need to relinquish the ticket after a time-out. This ensures that a ticket is only re-distributed after it has been relinquished by the previous site. See also Dead Man Dependency (loss-policy="fence") (page 221).

Arbitrator
Each site runs one booth instance that is responsible for communicating with the other sites. If you have a setup with an even number of sites, you need an additional instance to reach consensus about decisions such as failover of resources across sites. In this case, add one or more arbitrators running at additional sites. Arbitrators are single machines that run a booth instance in a special mode. As all booth instances communicate with each other, arbitrators help to make more reliable decisions about granting or revoking tickets.

An arbitrator is especially important for a two-site scenario: For example, if site A can no longer communicate with site B, there are two possible causes for that:

- A network failure between A and B.
- Site B is down.

However, if site C (the arbitrator) can still communicate with site B, site B must still be up and running.

Dead Man Dependency (loss-policy="fence")
After a ticket is revoked, it can take a long time until all resources depending on that ticket are stopped, especially in case of cascaded resources. To cut that process short, the cluster administrator can configure a loss-policy (together with the ticket dependencies) for the case that a ticket gets revoked from a site. If the loss-policy is set to fence, the nodes that are hosting dependent resources are fenced. This considerably speeds up the recovery process of the cluster and makes sure that resources can be migrated more quickly.
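As an illustration of how a resource can be bound to a ticket together with a loss-policy, the following crm shell constraint is only a sketch; the ticket name ticketA, the resource name rsc1, and the constraint ID are assumed example values, not part of this guide:

  # Bind rsc1 to ticketA; if the ticket is revoked, fence the node hosting rsc1
  crm configure rsc_ticket rsc1-req-ticketA ticketA: rsc1 loss-policy="fence"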

Figure 13.1 Example Scenario: A Two-Site Cluster (4 Nodes + Arbitrator)

As usual, the CIB is synchronized within each cluster, but it is not synchronized across the cluster sites of a multi-site cluster. You have to configure the resources that will be highly available across the multi-site cluster for every site accordingly.

13.3 Requirements

Software Requirements

- All clusters that will be part of the multi-site cluster must be based on SUSE Linux Enterprise High Availability Extension 11 SP2.
- SUSE Linux Enterprise Server 11 SP2 must be installed on all arbitrators.
- The booth package must be installed on all cluster nodes and on all arbitrators that will be part of the multi-site cluster.

The most common scenario is probably a multi-site cluster with two sites and a single arbitrator on a third site. However, technically, there are no limitations with regard to the number of sites and the number of arbitrators involved.
