Managing Serviceguard Extension for SAP on Linux (IA64 Integrity and x86_64)


Managing Serviceguard Extension for SAP on Linux (IA64 Integrity and x86_64)

HP Part Number: T
Published: March 2009
Printed in the US

Legal Notices

© Copyright Hewlett-Packard Development Company, L.P.

Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright.

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

SAP and Netweaver are trademarks or registered trademarks of SAP AG. HP-UX is a registered trademark of Hewlett-Packard Development Company, L.P. JAVA(R) is a U.S. trademark of Sun Microsystems, Inc. Intel(R) is a registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. Oracle(R) is a registered trademark of Oracle Corporation. EMC², Symmetrix, and SRDF are trademarks of EMC Corporation.

Table of Contents

Printing History...11
About this Manual...11
1 Understanding Serviceguard Extension for SAP on Linux...13
  Designing Serviceguard Extension for SAP on Linux Cluster Scenarios...13
  Failover Scenarios Using One and Two Package Concepts...14
  Robust Failover Using the One Package Concept...15
  About SAP Client/Server Architecture...16
  About Two-Tier Configurations...17
  About Three-Tier Configurations...18
  About SAP Three-Tier Dual-Stack Client Configurations...19
  Clustering Stand-Alone J2EE Components...20
  Highly Available (HA) NFS...20
  Dialog Instance Clusters as Simple Tool for Adaptive Enterprises...20
  Handling of Redundant Dialog Instances...22
  Enqueue Replication...22
  Enqueue Replication for a Dual-Stack Configuration...23
  Serviceguard Extension for SAP on Linux File Structure...25
  Serviceguard and SGeSAP/LX Specific Package Files...25
  Package Directory Content for the One-Package Model...26
  Package Directory Content for the Two-Package Model...27
2 Planning a File System Layout for SAP in a Serviceguard/LX Cluster Environment...29
  Overview...29
  About SAP Configuration Environments...29
  About SAP Components...30
  About Impacted File Systems...31
  About Storage Options...32
    LOCAL Storage...32
    SHARED Storage...32
    "LOCAL mount" or "SHARED mount"...32
    LOCAL mount...33
    SHARED NFS or SHARED EXCLUSIVE...33
    SHARED NFS...33
    SHARED EXCLUSIVE...33
  Configuration Scenarios...34
    One ABAP CI and one DB...34
      SAP Instance...34
      DB Instance...34
    One ABAP CI, one ABAP DI and one DB...35
      SAP Instance...35
      DB Instance...35
    Two ABAP DI's and one ABAP SCS and one DB...36
      SAP Instance...36
      DB Instance...36
    Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, and one DB...36
      SAP Instance...36
      DB Instance...36
    Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, one JAVA REP, and one DB...37
      SAP Instance...37
      DB Instance...37
  Putting it all Together...37

  Why SAP Provides sapcpe...37
  SHARED NFS: the NFS automounter...38
  Summary of automount file systems...39
  Overview Serviceguard packages...40
  SGeSAP/LX Implementations...41
  Conclusion: Mount Points, Access Types and SGeSAP/LX Package...42
  MaxDB mount points and SGeSAP/LX packages...42
  Oracle Database Instance Storage Considerations
3 Step-by-Step Cluster Conversion...47
  SGeSAP/LX Naming Conventions and Package Types...47
  SGeSAP/LX Package Names...47
  Installation Tasks and Installation Steps...47
  Serviceguard Extension for SAP on Linux: cmcluster.config...48
  SAP Preparation...49
  SAP Installation Considerations...49
  Netweaver 2004s High Availability System...49
  Netweaver 2004 JAVA-only High Availability Option...53
  ASCS installation...56
  SCS Installation...56
  Generic SAP Installation...57
  Creating a Standalone Enqueue Service and a Enqueue Replication Service...57
  Splitting a Central Instance...57
  Creation of Enqueue Replication Server Instance...60
  Linux Configuration...63
  Directory Structure Configuration...64
  Cluster Node Synchronization...67
  Cluster Node Configuration...70
  Cluster Configuration...74
  Configuring SGeSAP/LX on Linux...74
  Configuring SGeSAP/LX on Linux...75
  Serviceguard Control Script Template - xxx.control.script...77
  Installation and Configuration Considerations...80
  Serviceguard NFS Toolkit Configuration...82
  Serviceguard Extension for SAP on Linux Configuration - sap.config...83
  Specification of the Packaged SAP Components...83
  Configuration of Application Server Handling...85
  Optional Parameters and Customizable Functions...90
  Global Defaults...92
  Database Configuration...93
  Additional Steps for Oracle...93
  Additional Steps for MaxDB...96
  SAP WAS System Configuration...97
  SAP ABAP Engine specific configuration steps...97
  SAP J2EE Engine specific installation steps
  IPv6 Specific Installation
4 SAP Supply Chain Management
  Planning the Volume Manager Setup
  LVM Option 1: Simple Clusters with Separate Packages
  LVM Option 2: Full Flexibility - No Constraints
  MaxDB Storage Considerations
  Linux Setup
  Clustered Node Synchronization
  Cluster Node Configuration
  Serviceguard Extension for SAP on Linux Package Configuration
  Service Monitoring
  APO Setup Changes

  General Serviceguard Setup Changes
5 Serviceguard Extension for SAP on Linux Cluster Administration
  Change Management
  System Level Changes
  SAP Software Changes
  Serviceguard Extension for SAP on Linux Administration Aspects
  Upgrading SAP Software
  Switching Serviceguard Extension for SAP on Linux Off and On
  Switching Off Serviceguard Extension for SAP on Linux
  Switching On Serviceguard Extension for SAP on Linux


List of Figures

1-1 Two-Package Failover with Mutual Backup Scenario
1-2 Two-Tier Client Configuration
1-3 Three-Tier Client Configuration
1-4 Dual-Stack Client Configuration
1-5 Failover Node with Application Server package
1-6 Replicated Enqueue Clustering for ABAP or JAVA Instances
1-7 Configuration Files Needed to Build a Cluster
2-1 Comparison between Traditional SAP Architecture and SAP WEB AS Architecture
2-2 Two node cluster with LOCAL and SHARED mounted / connected storage
sapcpe Mechanism for Executables
Necessary components for configuring and running Serviceguard packages in an SAP environment
sapcpe Mechanism for Executables
Serviceguard Extension for SAP on Linux cluster displayed in the HP Serviceguard Manager GUI


List of Tables

1 Editions and Releases
1-1 About the Two and Three Tier Layers
1-2 About the Two and Three Tier Layers
2-1 Understanding the Different SAP Environments and Components
2-2 SAP Web AS Programming Options
2-3 SAP Component Terminology
SAP CI Instance
Other SAP Instance
Implementation Options as Listed in the Previous Sections:
Mapping of SAP components to Serviceguard Extension for SAP on Linux package types
SAP CI mount point and SGeSAP/LX package
Other SAP component mount points and SGeSAP/LX packages:
MaxDB mount points and SGeSAP/LX packages:
NLS Files - Default Location
Oracle DB Instance
Overview of reasonable ASTREAT values
Optional Parameters and Customizable Functions List
Working with the two parts of the file
IS1130 Installation Step
File System Layout for livecache Package running separate from APO (Option 1)
General File System Layout for livecache (Option 3)


Printing History

Table 1 Editions and Releases

Printing Date   Part Number   Serviceguard Extension for SAP on Linux   Operating System Releases
Oct/04          T             x86_32                                    RHEL 3, SLES 8
Dec/05          T             IA64 Integrity                            SLES 9
July/06         T             IA64 Integrity and x86_64                 RHEL 4, SLES 9
Feb/08          T             IA64 Integrity and x86_64                 RHEL 4, RHEL 5 and RHEL 5.1, SLES 9 and SLES 10
March/09        T             IA64 Integrity and x86_64                 RHEL 4, RHEL 5, RHEL 5.1 and RHEL 5.2, SLES 9 and SLES 10, SLES 10 SP1 and SLES 10 SP2

The printing date and part number indicate the current edition. The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number changes when extensive technical changes are incorporated. New editions of this manual will incorporate all material updated since the previous edition.

HP Printing Division:
Business Critical Computing Business Unit
Hewlett-Packard Co
Pruneridge Ave.
Cupertino, CA

About this Manual...

This document describes how to configure and install highly available SAP systems on Linux using Linux Serviceguard. For further information regarding SAP and high availability in general, refer to the SAP documents "SAP Web Application Server in Switchover Environments" and "SAP BC High Availability". To understand this document you need a working knowledge of basic Serviceguard concepts and commands. Experience with the Basis Components of SAP will also be helpful.

This manual consists of five chapters:

Chapter 1 "Understanding Serviceguard Extension for SAP" describes how to design a highly available SAP environment and points out how SAP components can be clustered.

Chapter 2 "Planning a File System Layout for SAP in a Serviceguard Cluster Environment" proposes the recommended highly available file system and shared storage layout for the SAP landscape and database systems.

Chapter 3 "Step-by-Step Cluster Conversion" describes the installation of Serviceguard Extension for SAP on Linux step by step, down to the Linux command level.

Chapter 4 "SAP Supply Chain Management" deals specifically with the SAP SCM and livecache technology, gives a storage layout proposal and leads through the Serviceguard Extension for SAP on Linux cluster conversion step by step.

Chapter 5 "Serviceguard Extension for SAP on Linux Cluster Administration" gives an overview of change management and general Serviceguard Extension for SAP on Linux administration aspects, as well as the use of different Linux platforms in a mixed cluster environment.


1 Understanding Serviceguard Extension for SAP on Linux

HP Serviceguard Extension for SAP on Linux (IA64 Integrity and x86_64), hereafter referred to as HP Serviceguard Extension for SAP on Linux or SGeSAP/LX, extends HP Serviceguard's failover cluster capabilities to SAP application environments. Serviceguard Extension for SAP on Linux continuously monitors the health of each SAP cluster node and automatically responds to failures or threshold violations. It provides a flexible framework of package templates to easily define cluster packages that protect various components of a mission-critical SAP infrastructure.

Serviceguard Extension for SAP on Linux provides a single, uniform interface to cluster SAP R/3, mySAP, SAP Web Application Server (SAP WAS), SAP Netweaver and standalone J2EE based SAP applications in a vast range of supported release versions. Serviceguard Extension for SAP also clusters the underlying databases and SAP livecache. It is possible to combine all clustered components of a single SAP system into one failover package for simplicity and convenience. Alternatively, there is full flexibility to separate the SAP components into several packages. If one of these SAP components fails, only the failed component has to be failed over instead of the complete instance, thereby lowering overall failover times. Multiple SAP applications of different type and release version can be consolidated in a single cluster. Serviceguard Extension for SAP on Linux can quickly adapt to changing load demands or maintenance needs by moving SAP Application Server instances between hosts.

This chapter introduces the basic concepts used by Serviceguard Extension for SAP on Linux and explains several naming conventions. The topics include:

Designing Serviceguard Extension for SAP on Linux Cluster Scenarios
Failover Scenarios Using One and Two Package Concepts
Robust Failover Using the One Package Concept
About SAP Client/Server Architecture
About Two-Tier Client/Server Architecture
About Three-Tier Client/Server Architecture
About SAP Three-Tier Dual-Stack Configuration

Designing Serviceguard Extension for SAP on Linux Cluster Scenarios

SAP applications can be divided into one or more distinct software components. Most of these components share a common technology layer, the SAP Web Application Server (SAP WAS). The SAP Web Application Server is the central building block of the SAP Netweaver technology. Each Web Application Server implementation comes with a characteristic set of software Single Points of Failure. These get installed across the cluster hardware according to high availability considerations and other site-specific constraints, resulting in an individual configuration recommendation.

There are various publications available from SAP and third parties that describe the software components used by SAP applications in more detail. It is recommended to refer to these documents for basic familiarity. It is also recommended to become familiar with Serviceguard clustering by reading the Serviceguard product manual Managing Serviceguard, eleventh edition or higher. The latest version can always be found at docs.hp.com/en/ha.html#serviceguard%20for%20linux.

Each software Single Point of Failure (SPOF) defines a Serviceguard Extension for SAP on Linux package type. Serviceguard Extension for SAP on Linux follows a consistent naming convention for these package types. The naming conventions were created to be independent of SAP software release versions. This allows a similar approach for each SPOF, regardless of whether it appears in the latest SAP Netweaver stack or in SAP software that was released before the first design of the SAP Web Application Server. Older SAP components sometimes only support a subset of the available clustering options. The following sections introduce the different Serviceguard Extension for SAP on Linux package types and provide recommendations and examples for cluster layouts.

Each SAP system uses a <SID>, an acronym for System IDentification, to define which components belong to the same SAP installation. In the following, the pointed brackets < > indicate that the characters enclosed within are to be replaced by an installation-specific value; <SID> is the SAP System ID and <INR> is the instance number of a Dialog Instance. The package types use these naming conventions:

Two Package Concept for a database and ABAP instance: db<sid>, ci<sid>
Two Package Concept for a database and JAVA instance: db<sid>, jci<sid>
One Package Concept for a database and ABAP instance: dbci<sid>
One Package Concept for a database and JAVA instance: dbjci<sid>
Dialog Instance: d<inr><sid>
ABAP System Central Services: ascs<sid>
JAVA System Central Services: scs<sid>
The ABAP and JAVA Replicated Enqueue: ers<inr><sid>
Dedicated NFS Packages: sapnfs

A worked example of these conventions follows below.
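For illustration, here is how these conventions map onto a hypothetical SAP system with the SID SX1 (the SID, the Dialog Instance number 01 and the Replicated Enqueue instance number 12 are placeholders chosen only for this example):

    dbsx1      two-package concept: database package for SX1
    cisx1      two-package concept: ABAP Central Instance package for SX1
    dbcisx1    one-package concept: database and ABAP Central Instance combined
    d01sx1     Dialog Instance package for instance number 01
    ascssx1    ABAP System Central Services package
    scssx1     JAVA System Central Services package
    ers12sx1   Replicated Enqueue package for instance number 12
    sapnfs     dedicated NFS package (not tied to a SID)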

Failover Scenarios Using One and Two Package Concepts

SAP ABAP Engine or JAVA Engine based applications usually rely on two central software services that define the software SPOFs: the SAP Enqueue Service and the SAP Message Service. These services are traditionally combined and run as part of a unique SAP Instance that is called the Central Instance (ci). As with any other SAP instance, the Central Instance has an Instance Name; traditionally it is called DVEBMGS. Each letter represents a service that is delivered by the instance. The "E" and the "M" stand for the Enqueue and Message Service that were identified as SPOFs in the system. Therefore, the most important package type of Serviceguard Extension for SAP on Linux is the Central Instance package (ci). All other SAP services can be installed redundantly within additional Application Server instances, sometimes also called Dialog Instances. As its name suggests, DVEBMGS (Dialog, Verbucher (German for update), Enqueue, Batch, Message Server, Gateway, Spool) includes additional services within the Central Instance beyond those that cause SPOFs.

NOTE: In an SAP system there can be more services/processes that are not reflected in the above naming convention, including: CO: SAP collector daemon, SE: syslog send daemon and IG: Internet Graphics.

An undesirable result of this is that a Central Instance is a complex piece of software with a high resource demand. Shutdown and startup of Central Instances are slower and more error-prone than they could be. Starting with SAP Web Application Server 6.40 it is possible to isolate the SPOFs of the Central Instance in a separate instance that is then called the ABAP System Central Services Instance, in short ASCS. The installer for SAP Web Application Server allows installing ASCS automatically. This installation procedure will then also create a standard Dialog Instance that is called DVEBMGS for compatibility reasons. This kind of DVEBMGS instance does not provide Enqueue or Message Services and is therefore not considered a Central Instance even though its name implies otherwise.

NOTE: A Serviceguard Extension for SAP on Linux (ci) package contains either a full DVEBMGS instance or a lightweight ASCS instance. In either case, the Serviceguard Extension for SAP on Linux (ci) package provides failover capabilities to SAP Enqueue Services and SAP Message Services.

Every SAP instance also requires a database service. With SGeSAP/LX either ORACLE or MaxDB RDBMS systems are supported. The package type (db) clusters either of these databases. It unifies the configuration, so that database package administration for all vendors is treated identically.

A cluster can be configured in a way that two nodes back up each other. The basic layout is depicted in Figure 1-1.
This picture as well as the following drawings are meant to illustrate basic principles in a clear and simple fashion. They omit other aspects and the level of detail that would be required for a reasonable and complete high availability configuration.

Figure 1-1 Two-Package Failover with Mutual Backup Scenario

A Serviceguard package is a description of resources, such as file systems, storage volumes or network addresses, and of commands for starting, stopping or monitoring an application within a Serviceguard cluster. Each SGeSAP/LX package name should include the SAP System Identifier (SID) of the system to which the package belongs. It is strongly recommended to base the package naming on the naming conventions for the SGeSAP/LX package type. This is done by prefixing the package type to the SID to get the package name. It allows easy identification and support.

Robust Failover Using the One Package Concept

In a one-package configuration, both the database (db) and Central Instance (ci) run on the same node at all times and are configured in one Serviceguard Extension for SAP on Linux package. Other nodes in the Serviceguard cluster function as failover nodes for the primary node on which the system runs during normal operation. The naming convention for this package is dbci<sid> for an ABAP configuration and dbjci<sid> for a JAVA configuration.

It is good practice to keep a cluster setup as simple as possible; fewer packages are easier to monitor. Combining database and Central Instance in a single package can save administration overhead. Serviceguard Extension for SAP on Linux allows the secondary node(s) to be utilized with different instances during normal operation, so it is not required to maintain an expensive idle standby node. A common setup installs one or more non-mission-critical SAP Systems on the failover nodes, typically SAP Consolidation, Quality Assurance or Development Systems. They can be gracefully shut down by Serviceguard Extension for SAP on Linux during failover to free up the computing resources for the critical production system. Development environments tend to be less stable than production systems; this should be taken into consideration before mixing these cases in a single cluster. A feasible alternative is to install Dialog Instances of the production system on the failover node.

If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without any manual intervention needed. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can either stay up or

be restarted, triggered by a failover. A sample configuration in Figure 1-5 shows node1 with a failure, which causes the package containing the database and Central Instance to fail over to node2. A Quality Assurance System and additional Dialog Instances get shut down before the database and Central Instance are restarted.

NOTE: A J2EE Engine might also be part of the Central Instance. The JAVA instance will then automatically be moved with the package.

Some Web Application Server installations automatically install more than just a Central Instance at a time. For example, a SAP Exchange Infrastructure 3.0 installation creates a DVEBMGS instance as well as a System Central Services (SCS) instance using different Instance Numbers. This SCS instance is technically identical to an ASCS instance, but it delivers its services to the J2EE Engines instead of the ABAP Engines of the Web Application Server.

NOTE: JAVA SCS Instances are clustered with the package type (jci), ABAP SCS instances are clustered with the package type (ci). The naming reflects that the SCS Instance delivers the critical Central Instance services for the JAVA portion of the Web Application Server.

A one-package installation for the SAP Exchange Infrastructure would need to include three mission-critical components: the database, the Central Instance for ABAP and the SCS Instance for JAVA. The package would then be called dbcijci<sid>. It is also allowed to create cijci<sid> packages that fail over only the SAP software, excluding a database that is secured by different means. It is also possible to split a DVEBMGS instance into an ASCS and a Dialog Instance. This process is described later in this document. As a result, a single SAP System may have an SCS and an ASCS Instance at the same time. SAP Web Application Server 7.0 provides installation options that deliver this combination of SCS and ASCS out of the box.

About SAP Client/Server Architecture

Depending on the configuration of the overall SAP system, SAP can support either a two- or three-tier client/server architecture. The number of tiers depends on how many physical machines are running the software layers: Presentation, (SAP) Application and Database.

Table 1-1 About the Two and Three Tier Layers

Presentation: Consists of a front end for users to connect to the SAP system. Typically the front end is a tool called SAPGUI or a web browser.

(SAP) Application: Runs the typically CPU-intensive application components, such as the SAP DVEBMGS, the Enqueue server, the Message server, and the Dialog processes, to name a few. To increase processing power in a scaled-out approach, several physical servers can be added to the SAP system, each running parts of the Application Layer software components. In the case of Dialog instances, multiple instances can be configured, running with the same SAP SID but with different SAP instance numbers.

Database: Manages all database transactions. There can only be one database instance for each SAP system.

About Two-Tier Configurations

A two-tier client server configuration consists of the database (Database Layer = DB) and all the SAP applications (SAP Application Layer = CI) running on the same physical machine, as one Serviceguard package (= dbci).

Figure 1-2 Two-Tier Client Configuration

NOTE: The SAPGUI does not get clustered.

The benefits of a two-tier configuration include:
1. Fewer physical machines to manage and maintain
2. Only one package to create and administer

About Three-Tier Configurations

A three-tier client server configuration consists of the database (Database Layer) on one machine, and some or all of the SAP applications (SAP Application Layer) running on one or more different machines.

Figure 1-3 Three-Tier Client Configuration

Benefits of a three-tier configuration:
1. The SAP system becomes more efficient by off-loading CPU intensive SAP processes to one or more machines.
2. Failover times are reduced. When a failure occurs, typically the majority of the downtime is needed to recover the database. Although the total recovery time of a failed database cannot be predicted, failover times can be reduced when the Central Instance and the database are on different machines. In the case of a database failure, the database Serviceguard package will fail over to another cluster node, independent of any SAP component. The Central Instance can be configured to automatically reconnect to a previously failed database, once the database has been restarted by Serviceguard Extension for SAP on Linux on the failover node.
3. When the database and Central Instance are on different machines, a failed Central Instance will not cause a failover of the database.
4. If the CI instance requires additional off-loading, the Dual-Stack Configuration example in the next section shows how the CI can be separated further.

19 About SAP Three-Tier Dual-Stack Client Configurations A SAP dual stack configuration combines SAP ABAP stack and SAP J2EE (JAVA) stack into one infrastructure. The following example for a three tier implementation. Figure 1-4 Dual-Stack Client Configuration Table 1-2 About the Two and Three Tier Layers Layer Presentation (SAP) Application Database Description The Presentation Layer consists of two additional components: The Internet - represents process request from the Internet using server/port combinations. The Web Browser - represents HTTP requests, the Internet component. The SAP Application Layer consists of new J2EE instances and adds the Java Dialog instances JD, the Java Central Instance JCI, the JAVA SAP Central System SCS (with the Java Standalone Enqueue Server and Message Server) and the JAVA Enqueue Replication Service ERS. The Internet Communication Manager (ICM) handles the connection to the Internet. It handles incoming requests for protocols such as HTTP, HTTPS, SMTP, SOAP and Java Co. The JCI instance runs the Internet Graphic Service (IGS) to be able to display graphical data in an Web Browser such as a chart. The Software Delivery Manager SDM is the tool to deliver new software components to the Java Application Server. The SAP Application Layer also shows how the ABAP CI instance can be further separated into smaller components with the option of running these components on additional physical machines. Manages all database transactions. There can only be one database instance for each SAP system. In this context, SAP uses new terms to describe such a configuration. In an ABAP environment the instance running the Enqueue and Message Server is called a ASCS (ABAP Central System). In a JAVA environment, it is called SCS (SAP Central System). The Enqueue component is called the Standalone Enqueue Server, since it is independent of an CI instance. A new SAP component is introduced called the Enqueue Replication Server or Replication instance. About SAP Client/Server Architecture 19

20 The Standalone Enqueue Service has the ability to mirror its memory content to an Enqueue Replication Service instance, which must be running on a remote node in the cluster. Both Standalone Enqueue Service and Enqueue Replication Service are configured as Serviceguard packages. In case of a failure of the node running the Standalone Enqueue Service, the package configured for the Standalone Enqueue Service switches over to the node running the Enqueue Replication package. The Standalone Enqueue Service reclaims the locking data stored by the Enqueue Replication Service in shared memory. Clustering Stand-Alone J2EE Components Some SAP applications just contain standalone J2EE components and do not come with an ABAP context. An example is a SAP Enterprise Portal installation. Beginning with SAP Web Application Server 6.40, these components can be treated a fashion similar to ABAP installations. A one-package setup for pure JAVA environments implements a (dbjcisid) package. The (jci) package type is always dedicated to a JAVA System Central Service (SCS) Instance. It uses standalone Enqueue and Message Server implementations similar to the ones delivered with ABAP kernels. Highly Available (HA) NFS Distributed SAP environments require the Network File System (NFS) to be highly available. Serviceguard Extension for SAP on Linux packages can be combined with the Serviceguard NFS toolkit for Linux to achieve this functionality. Typically, the Serviceguard Extension for SAP on Linux database package also provides Highly Available NFS functionality. Serviceguard Extension for SAP on Linux cluster hosts can usually act as Highly Available NFS servers and as NFS clients that access these server's file systems. A cross-mounting approach is taken to achieve this. The logical volumes that contain the NFS file systems are mounted below a specific directory, usually called /export. They are accessed via a different path by the NFS clients that is linked to a location below /import which is monitored by an automounter instance that provides the linkage between corresponding /export and /import directories. This setup allows NFS servers in a Serviceguard cluster to be failed over to NFS client hosts without getting into trouble with busy mount points for the disk devices. It also avoids adding static mounts to Highly Available NFS file systems, which also cause problems if the NFS server is not running during reboot time. NOTE: Database data files need to be treated differently to avoid measurable performance impacts and data inconsistencies. Dialog Instance Clusters as Simple Tool for Adaptive Enterprises Databases and Central Instances are Single Points of Failure. ABAP Dialog Instances can be installed in a redundant fashion. In theory, this allows to avoid additional SPOFs in Dialog Instances. This doesn't mean that it is impossible to configure the systems including SPOFs on Dialog Instances. A simple example for the need of a SAP Application Server package is to protect dedicated batch servers against hardware failures. SAP ABAP Dialog Instances use package type (d). This reflects the SAP abbreviation for Dialog Services. Dialog Instance packages differ from other packages in that there might be several of them for one SAP SID. Therefore the naming convention is slightly altered and the unique Instance Number of the Application Server is added. A Dialog Instance package for Instance Number 01 of SAP System SID would be called (d01<sid>). 
As with any other SAP component, Dialog Instances can now freely be added to packages that also protect other parts of the setup. It is suggested to only apply the Dialog Instance naming convention if there are no mission-critical core components in the package. In other words, adding a Dialog Server to a dbci<sid> package should not change the package name.

NOTE: Multiple Dialog Instances can be configured to start from a single SGeSAP package. All other package types start exactly one component each; for example, an SGeSAP database package can only start one database, and an SCS package can only start one Enqueue and Message Server.

Dialog Instance packages allow an uncomplicated approach to achieve abstraction from the hardware layer. It is possible to shift Dialog Instance packages between servers at any given time. This might be desirable if CPU resource consumption becomes poorly balanced due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this. A Dialog Instance can also be moved to a standby host to allow planned hardware maintenance for the node it was running on.

One can simulate this flexibility by installing Dialog Instances on every host and activating them if required. This might be a feasible approach for many purposes and it saves the need to maintain virtual IP addresses for each Dialog Instance. But SAP users can unintentionally create additional short-term SPOFs during operation if they reference a specific instance via its hostname. This could, for example, be done during batch scheduling. With Dialog Instance packages, the system becomes invulnerable to this type of user error. Using Dialog Instance packages for SAP adaptive computing, the instances become highly available, and the system itself becomes more robust and flexible.

Figure 1-5 Failover Node with Application Server package

Figure 1-5 illustrates a common configuration with the adoptive node running as a Dialog Server during normal operation. node1 and node2 have equal computing power and the load is evenly distributed between the combination of database and Central Instance on node1 and the additional Dialog Instance on node2. All software services provided by the Dialog Instance are also configured as part of the Central Instance. If node1 fails, the Dialog Instance package will be shut down during failover of the dbci<sid> package. This is similar to a one-package setup without Dialog Instance packaging. The advantage of this setup is that, after repair of node1, the Dialog Instance package can simply be restarted on node1 instead of node2. This saves the downtime that would otherwise be caused by a fail back of the dbci<sid> package. The two instances can be separated to different machines without impacting the production environment negatively. It should be noted that for this scenario with just two hosts there is no requirement to enable automatic failover for the Dialog Instance package.
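The kind of planned move described above is performed with standard Serviceguard commands. The following is a minimal sketch, assuming a Dialog Instance package named d01sx1 and cluster nodes named node1 and node2 (all placeholder names):

    # Halt the Dialog Instance package on its current node (halting also
    # disables package switching) and restart it on the other node,
    # e.g. to free node1 for planned hardware maintenance.
    cmhaltpkg d01sx1
    cmrunpkg -n node2 d01sx1
    # Re-enable package switching so the package may fail over again.
    cmmodpkg -e d01sx1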

22 Handling of Redundant Dialog Instances Non-critical SAP WAS ABAP Application Servers can be run on HP-UX, SUSE or RedHat Linux application server hosts. These hosts do not need to be part of the Serviceguard cluster. Even if the additional SAP WAS services are run on nodes in the Serviceguard cluster, they are not necessarily protected by Serviceguard packages. A combination of Windows/Linux application servers is technically possible but additional software is required to access Linux file systems or Unix-like remote shells from the Windows system. All non-packaged instance are called Additional Dialog Instances or sometimes synonymously Additional SAP Application Servers to distinguish them from mission-critical Dialog Instances. An additional Dialog Instance that runs on a cluster node is called an Internal Dialog Instance. External Dialog Instances run on HP-UX or Linux hosts that are not part of the cluster. Even if Dialog Instances are external to the cluster, they may be affected by package startup and shutdown. For convenience, non-critical Dialog Instances can be started and stopped with any Serviceguard Extension for SAP on Linux package that secures critical components. Some SAP applications require the whole set of Dialog Instances to be restarted during failover of the Central Instance package. This can also be done with Serviceguard Extension for SAP on Linux means. Dialog Instances can be marked to be of minor importance. They will then be shut down, if a critical component fails over to the host they run on in order to free up resources for the non-redundant packaged components. Additional Dialog Instances never get reflected in package names. NOTE: Declaring non-critical Dialog Instances in a package configuration doesn't add them to the components that are secured by the package. The package won't react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages that was explained in the previous section. Standard Serviceguard Extension for SAP on Linux scenarios will protect database, NFS, Message Server and Enqueue Server. It is recommended that you also protect at least one instance of each additional SAP WAS ABAP Application Service in the Serviceguard cluster. If non-packaged Additional Dialog Instances are used, certain rules should be followed: Use saplogon with Application Server logon groups. When logging on to an application server group with two or more Dialog Instances, you don't need a different login procedure even if one of the Application Servers of the group fails. Also, using login groups provides workload balancing between Application Servers. Avoid specifying a destination host when defining a batch job. This allows the batch scheduler to choose a batch server that is available at the start time of the batch job. If you must specify a destination host, specify the batch server running on the Central Instance or a packaged Application Server Instance. Print requests stay in the system until a node is available again and the Spool Server has been restarted. These requests could be moved manually to other spool servers if one spool server is unavailable for a long period of time. An alternative is to print all time critical documents through the highly available spool server of the central instance. Configuring the Update Service as part of the packaged Central Instance is recommended. Consider using local update servers only if performance issues require it. 
In this case, configure Update Services for the application services running on the same node. This ensures that the remaining SAP Instances on different nodes are not affected if an outage occurs on the Update Server. Otherwise a failure of the Update Service will lead to subsequent outages at different Dialog Instance nodes.

Enqueue Replication

The SAP Enqueue Service maintains a table of exclusive locks that are temporarily granted to ongoing SAP dialog transactions. This lock mechanism prevents simultaneous access to the same data in the database. However, in case of a failure of the Enqueue Server, the table with all locks that have been granted is lost. After failover and restart of the Enqueue Server on another cluster node, the Dialog Instances are notified that the Enqueue locks are no longer valid. All SAP dialog transactions are canceled and must be restarted manually.

The newer SAP Enqueue Replication consists of two components, the Enqueue Replication Service and the Standalone Enqueue Service, both of which must be configured as Serviceguard packages. The Standalone Enqueue Service mirrors its lock contents to an Enqueue Replication Service instance running on a remote node in the cluster.

The Serviceguard package running the Standalone Enqueue Service should be configured to fail over to the node running the Enqueue Replication Service. In the event of a failure, the Standalone Enqueue package will then fail over to that node, where it can reclaim the replicated SAP locks that the Enqueue Replication Service had collected.

Enqueue Replication for a Dual-Stack Configuration

In a SAP system making use of a dual-stack ABAP/JAVA configuration, each stack has its own Standalone Enqueue Service and its own Enqueue Replication Service.

For ABAP (ASCS) and ERS: The Standalone Enqueue Service is part of the ASCS instance. The ASCS instance also runs a SAP Message Server. One Enqueue Replication Service (ERS) runs for the ABAP instance.

For JAVA (SCS) and ERS: The Standalone Enqueue Service is part of the SCS instance. The SCS instance also runs a SAP Message Server. One Enqueue Replication Service (ERS) runs for the JAVA instance.

NOTE: If the SAP system only has ABAP components installed, SAP runs the following instances: ABAP ASCS and ERS. If the SAP system is configured to only run JAVA components, SAP runs the following instances: JAVA SCS and ERS. If both ABAP and JAVA are configured, SAP runs ABAP ASCS and ERS as well as JAVA SCS and ERS instances.

Each instance must be configured to run as its own Serviceguard package. Furthermore, each SAP instance of a <SID> must have a unique instance number <INR>. The following configuration is used as an example:

SAP Instance  Description
ASCS02        Runs the ABAP Standalone Enqueue Service and the ABAP Message Service for SAP system SX1. SAP instance number is 02.
ERS12         Runs the ABAP Enqueue Replication Service for SAP system SX1. SAP instance number is 12. It contains the SAP mirror lock data from instance ASCS02.
SCS03         Runs the JAVA Standalone Enqueue Service and the JAVA Message Service for SAP system SX1. SAP instance number is 03.
ERS13         Runs the JAVA Enqueue Replication Service for SAP system SX1. SAP instance number is 13. It contains the SAP mirror lock data from instance SCS03.

The following SGeSAP naming conventions are used for the Serviceguard packages:

SAP Instance     SGeSAP Package Name (example)
ascs<sid>        ascssx1
ers<inr><sid>    ers12sx1
scs<sid>         scssx1
ers<inr><sid>    ers13sx1

Figure 1-6 Replicated Enqueue Clustering for ABAP or JAVA Instances

The integrated version of the Enqueue Service is not able to utilize the replication features. To be able to run the Replicated Enqueue feature, the DVEBMGS Instance needs to be manually split into a standard Central Instance (CI) and an ABAP System Central Services Instance (ASCS).

NOTE: There are two SAP Enqueue variants possible with an SAP installation: a standalone version of the Enqueue Server, which is included with the binaries of SAP kernel 6.40 or higher kernel versions, and the classical Enqueue Service, which is an integral part of each ABAP DVEBMGS Central Instance.

Serviceguard Extension for SAP on Linux File Structure

The Linux distribution of Serviceguard uses a special file, /etc/cmcluster.config, to define the path locations for configuration and log files within the Linux file system. These paths differ depending on the Linux distribution.

NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in the file /etc/cmcluster.config. The default values are:
SGCONF=/opt/cmcluster/conf for SUSE Linux
SGCONF=/usr/local/cmcluster/conf for Redhat Linux

The contents of this directory and any sub-directories must be synchronized manually on all nodes of the cluster.

Serviceguard and SGeSAP/LX Specific Package Files

Serviceguard package files are of two different file types, package configuration files and package control script files:

xxx.config - Serviceguard package configuration file. This file contains configuration data for a specific Serviceguard package, such as the package name and the Serviceguard xxx.sh script to execute in the start or stop case.

xxx.control.script - Serviceguard package control script. This file contains further configuration data, such as the storage volumes to mount or unmount for this package, and the SGeSAP/LX sapwas.sh script to execute in the start or stop case.

The cmapplyconf(1m) command together with the above two files (xxx.config and xxx.control.script) is used to create a Serviceguard package "xxx". The cmapplyconf command generates or updates the data that is stored in the binary configuration file ${SGCONF}/cmclconf and distributes it to all cluster nodes. The following are examples of a one-package and a two-package SGeSAP configuration:

SAP in one package: dbci<sid>.config, dbci<sid>.control.script
SAP in two packages: db<sid>.config, db<sid>.control.script, ci<sid>.config, ci<sid>.control.script

NOTE: A package control script file is a Linux bash script.

The files xxx.config and xxx.control.script mentioned above are specific to Serviceguard and the Serviceguard packages and are not related to SGeSAP/LX; they basically just define a Serviceguard package. The SGeSAP/LX specific scripts are executed by adding them to the functions customer_defined_run_cmds and customer_defined_halt_cmds in the file xxx.control.script. Some SGeSAP/LX specific files are sap.config, sap.functions, customer.functions and sapwas.sh. The entry into the SGeSAP/LX specific part begins with the execution of the sapwas.sh script as mentioned in the "xxx.control.script" section above. The script sapwas.sh provides a set of basic function calls to start or stop a SAP instance or a database instance. The actual logic of these basic functions resides in the file sap.functions, a bash script. The script sap.functions itself requires a configuration file, sap.config, which defines settings such as the SAP instance number, the SAP instance type and the database used for the SAP instance. Finally, the SGeSAP/LX script file customer.functions can be used to implement customer or site-specific start and stop commands.
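The following sketch illustrates how these pieces typically fit together, using the hypothetical SID SX1 and a one-package configuration. The exact arguments passed to sapwas.sh vary with the SGeSAP/LX template version, so the start/stop invocation shown here is only illustrative; follow the comments in the shipped control script template.

    # Excerpt from a package control script such as dbcisx1.control.script
    # (a sketch only; the exact sapwas.sh arguments depend on the template).
    . /etc/cmcluster.config     # defines ${SGCONF} for this distribution

    function customer_defined_run_cmds
    {
        # start database and SAP components as defined in ${SGCONF}/SX1/sap.config
        . ${SGCONF}/SX1/sapwas.sh start     # "start" argument is illustrative
    }

    function customer_defined_halt_cmds
    {
        # stop SAP components and database again
        . ${SGCONF}/SX1/sapwas.sh stop      # "stop" argument is illustrative
    }

    # Create/update the package and distribute the binary cluster
    # configuration to all cluster nodes:
    #   cmapplyconf -P ${SGCONF}/SX1/dbcisx1.config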

Package Directory Content for the One-Package Model

NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in the file /etc/cmcluster.config. The default values are:
SGCONF=/opt/cmcluster/conf - for SUSE Linux
SGCONF=/usr/local/cmcluster/conf - for Redhat Linux

In the one-package model, the SAP functionality - database and central instance, along with the highly available NFS server - is placed in one Serviceguard package. The following convention is used for the postfix name:
.config for the Serviceguard / SGeSAP/LX package configuration file
.control.script for the Serviceguard / SGeSAP/LX package control script

The Serviceguard Extension for SAP on Linux scripts of the one-package concept (dbci<sid>) and one additional SAP dialog instance (d<instnr><sid>) are organized as follows:

NOTE: All the following files reside in ${SGCONF}/<SID>/ except for ${SGCONF}/sap.functions.

${SGCONF}/<SID>/dbci<SID>.config - Serviceguard generic package configuration file as created with cmmakepkg -p. It needs to be customized.

${SGCONF}/<SID>/dbci<SID>.control.script - Serviceguard package control file as created with cmmakepkg -s. It needs to be customized.

${SGCONF}/<SID>/d<INSTNR><SID>.config - Serviceguard generic package configuration file as created with cmmakepkg -p. This file configures an optional application server package. It needs to be customized. This file is optional.

${SGCONF}/<SID>/d<INSTNR><SID>.control.script - Serviceguard SAP application server package control file as created with cmmakepkg -s. It needs to be customized. This file is optional.

${SGCONF}/<SID>/sap.config - SAP specific package configuration file. It needs to be customized.

${SGCONF}/<SID>/customer.functions - contains all functions which can be modified for special setups. This file may also contain your own additional functions. If there is a need to hotfix functions in ${SGCONF}/sap.functions, copy them over to ${SGCONF}/<SID>/customer.functions and modify them there. Never modify ${SGCONF}/sap.functions itself. Any function provided in customer.functions overrides its equivalent in sap.functions.

${SGCONF}/<SID>/sapms.mon - SAP message service monitoring functions. This file is optional.

${SGCONF}/<SID>/sapwas.sh - provides the basic control flow for starting or stopping a SAP instance through a set of function calls. These function calls themselves are defined in the file ${SGCONF}/sap.functions.

${SGCONF}/sap.functions - contains all standard runtime logic used by one-package and two-package configurations. Do not modify this file.

Use the one-package concept for all configurations where you can put the database and central instance on one node and have an equally sized node as a backup. During normal operation, backup nodes can be used as an idle standby, application server host, or test system. If you are planning to distribute the database and central instance to two nodes in the near future, apply the two-package concept.
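The bullets above reference cmmakepkg for generating the two package file templates. A minimal sketch of creating the skeleton files for a one-package configuration of the hypothetical system SX1 might look as follows (the file names simply follow the naming convention described earlier):

    # Pick up the SGCONF path for this distribution.
    . /etc/cmcluster.config
    # Create the per-SID package directory and generate the templates,
    # which are then edited as described in the list above.
    mkdir -p ${SGCONF}/SX1
    cd ${SGCONF}/SX1
    cmmakepkg -p dbcisx1.config          # generic package configuration template
    cmmakepkg -s dbcisx1.control.script  # package control script template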

Package Directory Content for the Two-Package Model

NOTE: In this document, references to ${SGCONF} can be replaced by the definition of the variable that is found in the file /etc/cmcluster.config. The default values are:
SGCONF=/opt/cmcluster/conf - for SUSE Linux
SGCONF=/usr/local/cmcluster/conf - for Redhat Linux

In the two-package model, the SAP functionality is separated into two Serviceguard packages: one for the database (DB) and the other for the SAP central instance (CI). The database package contains the file systems for the NFS mount points. The Serviceguard Extension for SAP on Linux scripts of the two-package concept (db<sid>, ci<sid>) and one additional SAP dialog instance (d<instnr><sid>) are organized as follows:

NOTE: All the following files reside in ${SGCONF}/<SID>/ except for ${SGCONF}/sap.functions.

${SGCONF}/<SID>/db<SID>.config - Serviceguard generic package configuration file as created with cmmakepkg -p. This file configures the database package. It needs to be customized.

${SGCONF}/<SID>/db<SID>.control.script - Serviceguard database package control file as created with cmmakepkg -s. It needs to be customized.

${SGCONF}/<SID>/ci<SID>.config - Serviceguard generic package configuration file as created with cmmakepkg -p. This file configures the central instance package. It needs to be customized.

${SGCONF}/<SID>/ci<SID>.control.script - Serviceguard SAP central instance package control file as created with cmmakepkg -s. It needs to be customized.

${SGCONF}/<SID>/d<INSTNR><SID>.config - Serviceguard generic package configuration file as created with cmmakepkg -p. This file configures an optional application server package. It needs to be customized. This file is optional.

${SGCONF}/<SID>/d<INSTNR><SID>.control.script - Serviceguard SAP application server package control file as created with cmmakepkg -s. It needs to be customized. This file is optional.

${SGCONF}/<SID>/sap.config - SAP specific package configuration file. It needs to be customized.

${SGCONF}/<SID>/customer.functions - contains all functions which can be modified for special setups. This file may also contain your own additional functions. If there is a need to hotfix functions in ${SGCONF}/sap.functions, copy them over to ${SGCONF}/<SID>/customer.functions and modify them there. Never modify ${SGCONF}/sap.functions itself. Any function provided in customer.functions overrides its equivalent in sap.functions.

${SGCONF}/<SID>/nfs.mon - NFS service monitoring functions. This file is optional.

${SGCONF}/<SID>/sapms.mon - SAP message service monitoring functions. This file is optional.

${SGCONF}/<SID>/sapdisp.mon - SAP dispatcher monitoring functions. This file is optional.

${SGCONF}/<SID>/sapenq.mon - SAP enqueue monitoring functions. This file is optional.

${SGCONF}/<SID>/sapwas.sh - provides the basic control flow for starting or stopping a SAP instance through a set of function calls. These function calls themselves are defined in the file ${SGCONF}/sap.functions.

${SGCONF}/sap.functions - contains all standard runtime logic used by one-package and two-package configurations. Do not modify this file.

Figure 1-7 illustrates the required configuration files, and the questions they answer, for a Serviceguard Extension for SAP on Linux application package in Serviceguard.
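As noted earlier in this chapter, the contents of ${SGCONF}/<SID>/ are not distributed automatically; cmapplyconf only distributes the binary cluster configuration. A minimal sketch of keeping the SGeSAP/LX files in sync, assuming two cluster nodes named node1 and node2 and the hypothetical SID SX1:

    # Run on node1 after editing sap.config, customer.functions or the
    # package files, so that node2 has identical copies.
    . /etc/cmcluster.config
    scp -pr ${SGCONF}/SX1 node2:${SGCONF}/
    # Repeat for every additional cluster node.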

Figure 1-7 Configuration Files Needed to Build a Cluster

At the top of this structure, SGeSAP/LX provides a generic function pool and a generic package runtime logic file for SAP applications. livecache packages have a unique runtime logic file. All other package types are covered with the generic runtime logic.
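For site-specific behavior, the override mechanism described above for customer.functions can be used. The sketch below is purely illustrative: the function name is hypothetical, and in practice you would copy the real function body from ${SGCONF}/sap.functions into ${SGCONF}/<SID>/customer.functions and modify the copy there.

    # ${SGCONF}/<SID>/customer.functions (illustrative excerpt only)
    # A function defined here overrides its equivalent in sap.functions;
    # "start_custom_prereqs" is a made-up name used for this sketch and
    # would only take effect if it replaces or is called by a real
    # function from sap.functions.
    function start_custom_prereqs
    {
        # example of a site-specific addition: write an audit trail entry
        echo "$(date): SGeSAP package start on $(hostname)" >> /var/tmp/sgesap_site.log
    }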

2 Planning a File System Layout for SAP in a Serviceguard/LX Cluster Environment

Overview

The goal of this chapter is to determine the correct storage layout for SAP file systems in a Serviceguard cluster configuration. The following table outlines important concepts that you should understand before using Serviceguard with SAP. The steps are cumulative, that is, each step builds upon the previous one.

Table 2-1 Understanding the Different SAP Environments and Components

Step 1: Understanding the difference between the ABAP and JAVA SAP configuration environments. See "About SAP Configuration Environments".

Step 2: Which SAP components are used with either the ABAP or JAVA configuration, for example ABAP Central Instance, JAVA Central Instance, and so on. See "About SAP Components".

Step 3: Depending on the SAP configuration environment and components, which file systems are impacted; for example, for an ABAP Dialog Instance the file system /usr/sap/... is impacted. See "About Impacted File Systems".

Step 4: Depending on the SAP configuration environment and components, which file systems can be separated into LOCAL, SHARED NFS or SHARED EXCLUSIVE storage in an SGeSAP/LX cluster environment. See "Configuration Scenarios".

Step 5: Understanding how the results of the analysis of steps 1-4 can be implemented in a Serviceguard package. See "Putting it all Together".

The following sections describe each step in greater detail.

About SAP Configuration Environments

The SAP Web AS (Application Server) options offer migration paths between the older ABAP engines and the newer J2EE JAVA engines. In general, you can choose to program your SAP applications using the following options:

ABAP (Advanced Business Application Programming) is the original programming language for developing applications for an SAP R/3 system.

J2EE (JAVA 2 Enterprise Edition) and other JAVA components are part of the JAVA platform for developing and running distributed multi-tier architecture JAVA applications. JAVA is used for implementing web based applications and web based services for SAP R/3.

The following configuration environments are possible:

Table 2-2 SAP Web AS Programming Options

SAP Web AS ABAP: This system option consists of the ABAP components only. There is no J2EE JAVA Engine. The ABAP engine requires its own database schema, therefore a database instance installation is mandatory.

SAP ABAP+JAVA: ABAP components and J2EE JAVA engine components are both installed and can be operated in parallel. This is also called SAP Web AS - J2EE Add-in. The J2EE JAVA engine requires its own database schema; the ABAP and J2EE schemas can, however, both be installed in the same database. This is a combination of the SAP Web AS ABAP and the SAP Web AS JAVA variant.

SAP Web AS JAVA: This system option consists of the J2EE engine only and auxiliary services. There is no ABAP engine installed. The J2EE JAVA engine requires its own database schema; therefore a database instance installation is mandatory.

About SAP Components

The following terms are used to describe the components of an SAP system.

NOTE: Refer to the SAP Web Application Server in Switchover Environments manual and other SAP documentation for additional information.

Table 2-3 SAP Component Terminology

SAP System: The whole SAP system consisting of one or more SAP instances and a database instance. These can either be the SAP Web AS ABAP, SAP Web AS JAVA Add-In, or the SAP Web AS JAVA variant.

SAP instance: An SAP instance can be, for example, an ABAP CI instance, a JAVA CI instance, an SCS instance, or a DI instance, to name a few.

DB instance: A Database Instance is a database that supports the SAP Web AS ABAP, SAP Web AS JAVA, or SAP Web AS JAVA Add-In scenario. Supported databases with SGeSAP/LX are MaxDB and Oracle.

ABAP CI: An ABAP Central Instance provides SAP services for the SAP Web AS ABAP or the SAP Web AS JAVA Add-In variant. These services consist of SAP work processes such as dialog, update, batch, spool, gateway, the Message Server, and the Enqueue Server. It does not include the database processes. NOTE: With newer versions of SAP the Message Server and the Enqueue Server are separated from the Central Instance. These two services are now grouped within the ABAP SAP Central Services Instance (ASCS).

JAVA CI: A JAVA Central Instance provides server processes for the SAP Web AS JAVA and SAP Web AS JAVA Add-In variant. These processes include the JAVA dispatcher, JAVA server processes, and the Software Delivery Manager (SDM). It does not include the database processes. NOTE: The JAVA CI does not include the Message Server or the Enqueue Server processes. These two services are grouped within the SAP Central Services Instance (SCS).

ASCS: ABAP SAP Central Services. With newer versions of SAP the Message Server and the Enqueue Server have been separated from the ABAP CI and grouped into a standalone service described as the ABAP SAP Central Services instance (ASCS).

SCS: JAVA SAP Central Services. In newer SAP Web AS JAVA variants the Message Server and the Enqueue Server are implemented as services within the SAP System Central Services Instance (SCS) and are separated from the JAVA Central Instance (CI) in this way. NOTE: The name SCS is slightly misleading, as it is used for the JAVA SAP Central Services.

D: A Dialog Instance (sometimes called Application Server) is an SAP instance, without the DB instance, consisting of a dispatcher process and dialog work processes. A dialog instance does not contain the system-critical enqueue and message service components.

JD: A JAVA Dialog Instance provides server processes for the SAP Web AS JAVA variant. These processes include the JAVA dispatcher, JAVA server processes, and, in newer SAP versions, optionally the Software Delivery Manager (SDM).

AREP: ABAP Enqueue Replication Server. In a high-availability environment the Enqueue Server components consist of the Standalone Enqueue Server and an Enqueue Replication Server (ERS). The Enqueue Replication Server (AREP) runs on another host and contains a replica of the lock table (replication table). If the standalone Enqueue Server fails, the Enqueue Server must be restarted on the host on which the replication server is running, since this host contains the replication table in a shared memory segment. The restarted Enqueue Server needs this shared memory segment to generate the new lock table. Afterwards this shared memory segment is deleted. NOTE: The modern SAP terminology for both REP and AREP is Enqueue Replication Service (ERS).

REP: JAVA Enqueue Replication Server. Provides the same functionality as the ABAP Enqueue Replication Server, but for the replication of the JAVA Enqueue Server.
livecache: A livecache instance. SAP livecache technology is an enhancement of the MaxDB database system and was developed to manage logistical solutions such as SAP SCM/APO (Supply Chain Management / Advanced Planning Optimizer). In contrast to MaxDB, and for performance reasons, the database system is located in main memory instead of being file system based.

Figure 2-1 Comparison between Traditional SAP Architecture and SAP Web AS Architecture

NOTE: With SAP NetWeaver 04 JAVA, the Message Server and the Enqueue Server are separated from the central instance. These two services are grouped as services within the SAP SCS (SAP Central Services) instance. From NW04s on, the ABAP SCS can also be separated from the central instance. Each stack, ABAP and JAVA, has its own Message Service and Enqueue Service. For ABAP systems the SAP Central Services are referred to as ASCS; for JAVA systems the SAP Central Services are referred to as SCS. The ASCS and the SCS are regarded as SPOFs (Single Points of Failure) and therefore require a high availability setup. If the ASCS is integrated within the ABAP central instance (the standard in NetWeaver 04), the central instance of the ABAP system also needs a high availability setup.

About Impacted File Systems

In an SAP environment, the storage, the file system layout, and the mount points of the SAP system and the database instance are key for configuring a Serviceguard cluster. The file systems are key because they have to be separated into those that can be shared between cluster nodes and those that cannot. This requires some understanding of how the file systems are used within an SAP configuration. It is important to describe the application itself, the environment it runs in, and its components. These can be: processes, IP addresses, storage, file systems, mount points, and configuration profiles.

About Storage Options

For each of the file systems listed above the following questions need to be answered:

1. Does it need to be kept as a LOCAL copy on internal disks of each node of the cluster? Such a file system requires a LOCAL mount point.
2. Which of the file systems have to be SHARED by all cluster nodes on SAN storage, but have to be mounted in SHARED EXCLUSIVE mode by the cluster node that the SAP instance fails over to and that will run that instance?
3. Which of the file systems can be SHARED in SHARED NFS mode by all cluster nodes (that is, access to the file system is required from more than one cluster node at the same time)?

Figure 2-2 Two node cluster with LOCAL and SHARED mounted / connected storage

LOCAL Storage

The LOCAL connected storage device "Storage 1" can only be mounted by cluster node "Node 1"; the storage device "Storage 4" can only be mounted by cluster node "Node 2". Access to "Storage 1" and "Storage 4" is therefore LOCAL to these cluster nodes. Sharing of data residing on Storage 1 or Storage 4 with the other cluster nodes is not required, and the other cluster nodes do not require any direct access to these storage devices.

SHARED Storage

The SHARED connected storage devices "Storage 2" and "Storage 3" can be mounted from either of the cluster nodes "Node 1" or "Node 2". The storage is SHARED between the cluster nodes: at some point in time one or more of the cluster nodes will require access to it. Depending on the type of sharing this can be further classified as SHARED NFS or SHARED EXCLUSIVE. In the SHARED EXCLUSIVE case the storage device is accessible from any of the cluster nodes, but at any point in time only one cluster node mounts the storage exclusively in the cluster. That is the cluster node that requires the file system to run the SAP instance. In the SHARED NFS case the file systems are mounted via NFS and are available to all cluster nodes (via NFS export / import) at all times.

"LOCAL mount" or "SHARED mount"

The first step for each of the file systems used in an SAP configuration is to classify whether the mount point will be of type SHARED or of type LOCAL. The disadvantage of a LOCAL mount point is that any change to the contents of these file systems has to be copied to all cluster nodes to keep the contents synchronized. The example diagram above shows a two-node cluster configuration; up to 16-node configurations are supported with Linux. For a 16-node cluster, 16 copies of the local file systems have to be maintained, requiring considerable administration effort. If possible, therefore, the preferred option is to use type SHARED for all mount points.
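As an illustration of the three categories, the following hypothetical excerpt shows how the mounts on a node that currently runs the database and central instance packages of an SAP system C11 might look. The device and volume group names are examples only and are not prescribed by SGeSAP/LX.

# LOCAL: internal disk, one copy per cluster node
/dev/sda5                        on /usr/sap/tmp
# SHARED EXCLUSIVE: SAN volume, mounted only by the node running the package
/dev/vgC11ci/lv_dvebmgs00        on /usr/sap/C11/DVEBMGS00
/dev/vgC11db/lv_sapdata1         on /oracle/C11/sapdata1
# SHARED NFS: exported by the NFS package, mounted by every cluster node
nfsreloc:/export/sapmnt/C11      on /sapmnt/C11
nfsreloc:/export/usr/sap/trans   on /usr/sap/trans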

LOCAL mount

There are several reasons a LOCAL mount point is required in an SAP configuration:

The first reason is that some files used in an SAP configuration are not cluster aware. As an example, consider the SAP file /usr/sap/tmp/coll.put. This file contains system performance data collected by the SAP performance collector. If the file system /usr/sap/tmp were shared with other cluster nodes, the SAP performance collectors running on those systems would write into the same file system and file coll.put, causing this shared file to become corrupted. Therefore this file system has to be mounted with a LOCAL mount point.

A second reason can be performance: a locally mounted file system achieves higher I/O throughput than an NFS mounted file system.

A third reason can be the wish to have the SAP binaries available locally. In case of a network (and therefore NFS) failure the binaries can still be executed. On the other hand, if there is an NFS failure, other SAP components will probably also fail.

As mentioned above, local copies of the file system contents require administration overhead to keep them synchronized. Serviceguard provides a tool, cmcp(1m), that allows easy replication of a single file to all cluster nodes.

NOTE: In prior SGeSAP/LX documentation this category (LOCAL) was called "Environment specific".

SHARED NFS or SHARED EXCLUSIVE

SHARED NFS

In this category file system data is shared among the SAP instances via NFS and is made available / exported to all cluster nodes. An example is the /sapmnt/<SID>/profile directory, which contains the profiles for all SAP instances in the cluster. Each cluster node mounts this directory as an NFS client. One of the cluster nodes is also the NFS server of these directories. This node has both the NFS client and the NFS server mounts active at the same time; this is sometimes called an NFS loopback mount. Should this cluster node fail, the NFS exported directories are relocated to another cluster node and are NFS served from that node.

An advantage of an NFS shared mount is that any change or update to the NFS file system contents is available instantaneously to all cluster nodes. Compared to a LOCAL mount point, where the file systems have to be kept in sync cluster wide with local copies, this option does not have that administration overhead. A limiting factor can be that NFS mounted file systems do not provide the same level of I/O performance as a LOCAL mounted file system. Many of the SAP file systems listed above, though, are fairly static in their I/O behavior and therefore should not cause an I/O limitation. If high I/O is required, the SHARED EXCLUSIVE mount should be used.

NOTE: In prior SGeSAP/LX documentation this category was called "System specific."

NFS file systems can be mounted using either TCP or the default UDP protocol. If mounted using TCP, when a failover occurs, the TCP connection is held for approximately 15 minutes, thereby preventing NFS directory access on the first node. The timeout value can be significantly reduced by setting net.ipv4.tcp_retries2 = 3 in /etc/sysctl.conf, which also changes the timeout system-wide.
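A minimal sketch of how this kernel parameter could be set persistently on each cluster node follows standard Linux procedure; verify the value against the current HP and distribution recommendations before applying it.

# /etc/sysctl.conf
net.ipv4.tcp_retries2 = 3

# load the new setting without a reboot
sysctl -p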
SHARED EXCLUSIVE

This type of storage mount relocates the storage volume together with the application and the Serviceguard package from one cluster node to another. The file system is exclusively mounted on the node the application is running on; it is not mounted or accessed by any of the other cluster nodes. Compared to the SHARED NFS category this solution provides the highest I/O performance. However, there is the disadvantage and the risk of a failure during the relocation of the file systems to another cluster node. Any failure in the sequence of stopping the application, unmounting the file systems, deactivating
the access to the storage devices on the first cluster node, then activating the access to the storage devices on the second node and mounting the file systems there, will cause the startup of the application to fail.

NOTE: In prior SGeSAP/LX documentation this category was called "Instance specific."

Configuration Scenarios

In the following sections the file systems used for several different SAP configuration scenarios are analyzed. Other combinations of scenarios are possible.

One ABAP CI and one DB
One ABAP CI, one ABAP DI, and one DB
Two ABAP DI's and one ABAP SCS and one DB
Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, and one DB
Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, one JAVA REP, and one DB

One ABAP CI and one DB

The first configuration scenario is the classical SAP configuration with one SAP Central Instance (CI) and one Database Instance (DB). The <SID> (SAP System ID) is contained in some of the path names, as is the type of SAP instance. The CI in this case can be identified by the name "DVEBMGS00". The <SID> used in these examples is C11.

SAP Instance

For this scenario the typical SAP file system mount points to be analyzed in a cluster environment are:

/usr/sap/tmp/
/usr/sap/trans/
/usr/sap/C11/DVEBMGS00/
/usr/sap/C11/SYS/exe/run
/sapmnt/C11/
/home/c11adm

DB Instance

One of the following two databases can be used:

MaxDB:
/etc/opt/sdb
/sapdb/C11
/sapdb/data
/sapdb/programs
/var/spool/sql

Oracle:
$ORACLE_HOME
/oracle/client
/oracle/oraInventory
/oracle/stage
/oracle/C11/saparch
/oracle/C11/sapreorg
/oracle/C11/sapdata1
...
/oracle/C11/sapdatan
/oracle/C11/origlogA
/oracle/C11/origlogB
/oracle/C11/mirrlogA
/oracle/C11/mirrlogB

One ABAP CI, one ABAP DI and one DB

This configuration scenario consists of one SAP Central Instance (CI), one SAP Dialog Instance (DI) and one Database Instance (DB). One additional dialog instance (D01) is added compared to the previous example. The CI again is identified by the name "DVEBMGS00"; the dialog instance is identified by the name "D01" in the file system path. To add an SAP dialog instance to an existing configuration only one new file system is required: /usr/sap/C11/D01.

SAP Instance

The mount points to be analyzed in a cluster environment are:

/usr/sap/tmp/
/usr/sap/trans/
/usr/sap/C11/DVEBMGS00/
/usr/sap/C11/D01/ <--- new
/usr/sap/C11/SYS/exe/run
/sapmnt/C11/
/home/c11adm

DB Instance

Compared with the previous example the file system layout has not changed for the databases (MaxDB and Oracle). The file systems are listed here only for completeness.

The file system layout in the case of a MaxDB or livecache database:

/etc/opt/sdb
/sapdb/C11
/sapdb/data
/sapdb/programs
/var/spool/sql

The file system layout in the case of an Oracle database:

$ORACLE_HOME
/oracle/client
/oracle/oraInventory
/oracle/stage
/oracle/C11/saparch
/oracle/C11/sapreorg
/oracle/C11/sapdata1
...
/oracle/C11/sapdatan
/oracle/C11/origlogA
/oracle/C11/origlogB
/oracle/C11/mirrlogA
/oracle/C11/mirrlogB
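The new instance directory /usr/sap/C11/D01 is a typical candidate for SHARED EXCLUSIVE storage (see "Putting it all Together" below). As a rough sketch, provisioning such a file system on SAN storage with LVM could look like the following; the device, volume group and logical volume names, the size, and the file system type are examples only and must be adapted to the actual environment.

# create a volume group and logical volume on a SAN LUN that is visible to all cluster nodes
pvcreate /dev/sdc1
vgcreate vgC11d01 /dev/sdc1
lvcreate -L 5G -n lv_d01 vgC11d01
mkfs.ext3 /dev/vgC11d01/lv_d01

# the mount point has to exist on every node; the file system itself is later
# mounted only by the Serviceguard package that runs the D01 instance
mkdir -p /usr/sap/C11/D01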

Two ABAP DI's and one ABAP SCS and one DB

Using the terminology of the previous example, the next configuration scenario consists of one ABAP Central Instance (ABAP CI), one ABAP Dialog Instance (ABAP DI), one ABAP SAP Central Services instance (ABAP SCS) and one Database Instance (DB). Compared to the previous example one instance (ABAP SCS) is added. This instance is identified by the name "ASCS02".

By adding the ABAP SCS, the naming conventions of the SAP components in this scenario have changed: the original ABAP Central Instance (CI) becomes an ABAP Dialog Instance, because the Message Server and the Enqueue Server are now running within the ASCS and no longer within the ABAP CI. To add an SAP ASCS instance to an existing configuration only one new file system is required: /usr/sap/C11/ASCS02.

SAP Instance

The mount points to be analyzed in a cluster environment are:

/usr/sap/tmp/
/usr/sap/trans/
/usr/sap/C11/DVEBMGS00/
/usr/sap/C11/D01/
/usr/sap/C11/ASCS02/ <--- new
/usr/sap/C11/SYS/exe/run
/sapmnt/C11/
/home/c11adm

DB Instance

The file systems for the database have not changed and are not listed any more.

Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, and one DB

Compared with the previous scenario, JAVA components are now added. The JAVA SCS instance is identified by the name "SCS03"; the JAVA CI instance is identified by "JC04". The JAVA CI naming convention remains, since the SDM services are part of the JAVA CI as described above. To add these instances to an existing configuration two new file systems are required: /usr/sap/C11/SCS03 and /usr/sap/C11/JC04.

SAP Instance

The mount points to be analyzed in a cluster environment are:

/usr/sap/tmp/
/usr/sap/trans/
/usr/sap/C11/DVEBMGS00/
/usr/sap/C11/D01/
/usr/sap/C11/ASCS02/
/usr/sap/C11/SCS03/ <--- new
/usr/sap/C11/JC04/ <--- new
/usr/sap/C11/SYS/exe/run
/sapmnt/C11/
/home/c11adm

DB Instance

The file systems for the database have not changed and are not listed any more.

Two ABAP DI's, one ABAP SCS, one JAVA CI, one JAVA SCS, one JAVA REP, and one DB

Compared to the previous example one JAVA Enqueue Replication Server instance (JAVA REP) is added. The JAVA REP instance is identified by the name "ERS13". For this instance one new file system is required: /usr/sap/C11/ERS13.

NOTE: In previous versions of SAP the instance number of the replication instance had to be the same as that of the SCS instance (for example ERS03 for SCS03). This has changed, and any unique instance number can be chosen. This example uses ERS13.

SAP Instance

The mount points to be analyzed in a cluster environment are:

/usr/sap/tmp/
/usr/sap/trans/
/usr/sap/C11/DVEBMGS00/
/usr/sap/C11/D01/
/usr/sap/C11/ASCS02/
/usr/sap/C11/SCS03/
/usr/sap/C11/JC04/
/usr/sap/C11/ERS13/ <--- new
/usr/sap/C11/SYS/exe/run
/sapmnt/C11/
/home/c11adm

DB Instance

The file systems for the database have not changed, so they are not listed.

Putting it all Together

Considering the above discussion the following file system layout was chosen:

Table 2-4 SAP CI Instance

Path                            Type
/usr/sap/trans/                 NFS
/usr/sap/tmp/                   LOCAL
/usr/sap/<SID>/DVEBMGS00/       EXCLUSIVE to this ABAP CI instance
/usr/sap/<SID>/SYS/             LOCAL (SAP provides sapcpe)
/sapmnt/<SID>/                  NFS
/home/<sid>adm                  LOCAL

Technically the file system /usr/sap/<SID>/SYS/exe/run could be configured as an NFS mounted file system. The advantage of configuring it as a LOCAL mounted file system instead of NFS is that the SAP binaries in this directory are still available in the case of a network failure. On the other hand, even with a LOCAL mount, if there is an NFS failure there is a high probability that other SAP components depending on the network will also have failed. To circumvent the disadvantage of LOCAL mounted file systems (the overhead of cluster-wide copies of the file system contents to all cluster nodes), SAP provides "sapcpe", a tool that automatically updates the contents of /usr/sap/<SID>/SYS/exe/run.
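In SAP releases of this generation, sapcpe is typically triggered from the instance start profile. The following lines sketch what such an entry can look like; the list file name (scs.lst here) and the exact parameters vary with the instance type and the SAP kernel release, so treat this as an illustration rather than a template.

#---------------------------------------------------------------
# Copy SAP executables from the central to the local directory
#---------------------------------------------------------------
_CPARG0 = list:$(DIR_CT_RUN)/scs.lst
Execute_00 = immediate $(DIR_CT_RUN)/sapcpe$(FT_EXE) pf=$(_PF) $(_CPARG0)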

In clustered SAP environments it is recommended to use local copies of the SAP executables. Local executables limit the dependencies between the SAP package and the database package of a two-package installation to a minimum. They allow SAP packages to shut down even if HA NFS is not available. To automatically keep the local executables of any instance in sync with the latest installed patches, SAP developed the sapcpe mechanism. With every startup of an SAP instance, sapcpe matches the latest executables that are stored centrally with those stored locally and copies all required updates. Figure 2-3 shows the file system layout with sapcpe.

Figure 2-3 sapcpe Mechanism for Executables

Table 2-5 Other SAP Instance Implementation Options as Listed in the Previous Sections

Path                       Description
/usr/sap/<SID>/D01/        EXCLUSIVE to this ABAP DI instance
/usr/sap/<SID>/ASCS02/     EXCLUSIVE to this ABAP ASCS instance
/usr/sap/<SID>/SCS03/      EXCLUSIVE to this JAVA SCS instance
/usr/sap/<SID>/JC04/       LOCAL to this JAVA CI instance
/usr/sap/<SID>/ERS13/      EXCLUSIVE to this JAVA REP instance

NOTE: According to the SAP documentation, it is recommended to configure the JAVA CI and JAVA DI on LOCAL file systems and not on EXCLUSIVE file systems.

SHARED NFS: the NFS automounter

The option chosen for SHARED NFS is to use the NFS automounter as described below. The automount scheme means that when an automounter mount point is accessed, the NFS file system is mounted into the local file system tree for as long as it is needed. When there is no activity on that file system, the automounter automatically unmounts it. A static NFS mount, in contrast, is usually mounted at boot time, or manually after boot, and is only unmounted during shutdown, or by hand using the umount command.

To configure the NFS automounter, the file system /usr/sap/trans is again used as an example. The goal is to access /usr/sap/trans transparently from all cluster nodes. The approach for configuring the automounter on Linux:

1. As in the static NFS case above: on the cluster node acting as the NFS server, the /usr/sap/trans volumes are mounted under an /export directory and exported to the NFS clients. Enable a virtual host nfsreloc (relocatable IP address) from which the NFS clients can mount the file systems. The virtual host has the advantage that it is always enabled on the cluster node from which the NFS directories are being exported, and the NFS clients only have to mount from this name.

2. On the NFS clients (regardless of whether they are cluster nodes or remote systems), automount the NFS directories under an /import directory. The /import directory is used as a pseudo mount point because Linux does not yet support direct map mount points when a parent mount point exists. For example, the automounter map entry /usr/sap/trans nfsreloc:/export/usr/sap/trans will fail, because the target directory /usr of /usr/sap/trans already exists on a Linux system and the Linux automounter wants to remove any files in the target directory /usr before executing. Since we are using the automounter on Linux, the only way it will work is to use a pseudo mount point and create symbolic links from the SAP directories pointing to the pseudo mount points. The /import directory just serves as a dummy place holder.

3. The configuration of the automounter is based on two files: /etc/auto.master and a mount-specific file /etc/auto.import (also called an automounter map file). An example entry in /etc/auto.master:

/import /etc/auto.import nfsvers=3

The automounter map file /etc/auto.import itself contains the following line:

trans -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/usr/sap/trans

For each line in /etc/auto.master a separate automount process is started. If a user or a process tries to access any files or directories in /import (for example /import/trans), the automounter reads the automounter map file /etc/auto.import and finds a description of how file systems below the /import mount point of the map are to be mounted. The basic format of a line in such maps is: key [-options] location. The automounter then tries to generate a unique key from the path name /import/trans that maps to a line in /etc/auto.import. In this example trans is a unique key.

NOTE: The names of the sub-directories in /import have to be chosen so that they map to a unique key / name pair. In the case of the example file systems /usr/sap/trans and /usr/sap/tmp, the key sap cannot be chosen as it would not create a unique mapping. Therefore the keys trans and tmp have to be chosen.

4. Create a symbolic link that points from the original SAP directory name to the automount NFS place holder in /import:

ln -sf /import/trans /usr/sap/trans

Summary of automount file systems

The following is a summary of the file systems that are mounted with the automounter:

For the SAP instance with SID <SID>:
nfsreloc:/export/sapmnt/<SID>       /import/<SID>
nfsreloc:/export/usr/sap/trans      /import/trans

For the MaxDB / livecache database:
nfsreloc:/export/var/spool/sql/ini  /import/ini
nfsreloc:/export/sapdb/programs     /import/programs
nfsreloc:/export/sapdb/data         /import/data

For the Oracle database:
nfsreloc:/export/oracle/stage       /import/stage
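Whenever the automounter maps are changed, the automounter has to re-read them before the new entries take effect. A quick check could look like the following; the exact service command differs between distributions (autofs init script, rcautofs, and so on), so verify it for the distribution in use.

# make the automounter re-read /etc/auto.master and the map files
/etc/init.d/autofs reload

# accessing the pseudo mount point should trigger the NFS mount on demand
ls /import/trans
mount | grep /import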

Using the data from the SHARED NFS sections above, the following lines are added to the automounter map file /etc/auto.import:

<SID> -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/sapmnt/<SID>
trans -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/usr/sap/trans
ini -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/var/spool/sql/ini
programs -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/sapdb/programs
data -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/sapdb/data
stage -fstype=nfs,rw,udp,nosymlink nfsreloc:/export/oracle/stage

Lastly, create symbolic links that point from the original SAP and database directory names to the NFS place holders in /import:

ln -sf /import/<SID> /sapmnt/<SID>
ln -sf /import/trans /usr/sap/trans
ln -sf /import/ini /var/spool/sql/ini
ln -sf /import/programs /sapdb/programs
ln -sf /import/data /sapdb/data
ln -sf /import/stage /oracle/stage

NOTE: The 2.6 Linux kernel uses LVM2 with the device-mapper and virtual devices. In a failover situation it can happen that the device minor number changes when a volume group fails over to another node, resulting in 'stale NFS handle' errors. One solution to avoid this issue is to create logical volumes with persistent minor numbers. See the README file in /opt/cmcluster/nfstoolkit or /usr/local/cmcluster/conf/nfstoolkit for a detailed description.

Overview Serviceguard packages

A package groups application services together. A Serviceguard package consists of the following:

1. Package name
2. Virtual IP address / relocatable address
3. Application processes that run on one cluster node and "belong" to that package
4. Storage that "belongs" to the application that is started with this Serviceguard package

The typical high availability package is a failover package. It is usually configured to be able to run on several nodes in the cluster, but runs on one node at a time. If a service, node, network, or other package resource fails on the node where it is running, Serviceguard can automatically transfer control of the package to another cluster node, allowing services to remain available with minimal interruption.

The following describes how these components are used to implement an SGeSAP/LX package. The SGeSAP/LX package name is derived from the SAP component type and / or functionality.

Table 2-6 Mapping of SAP components to Serviceguard Extension for SAP on Linux package types

Component   Description
DB          Database Instance for ABAP and JAVA components (db)
ABAP CI     ABAP Central Instance (ci)
JAVA CI     JAVA Central Instance. Note: not recommended by SAP for switchover.
ASCS        ABAP SAP Central Services (ci)
SCS         JAVA SAP Central Services (jci)
DI          Dialog Instance (d)
AREP        ABAP Replicated Enqueue Server (arep)
REP         JAVA Replicated Enqueue Server (rep)
livecache   livecache instance (lc)
SAPNFS      NFS package (sapnfs) that provides NFS file systems to any of the above SAP instances. As described earlier, this Serviceguard package provides the cluster-wide SHARED NFS mount points required by any of the above SAP instance types.

Some of the above listed Serviceguard packages can also be configured as a combination (dbci, dbjci, dbcijci, ...). For example, a dbci package is a combination of the db instance and the ci instance and is considered one entity. The advantage of such a combination is that the db instance and the ci instance always start, stop, and relocate together as one single package instead of two. Another advantage is that such a Serviceguard package only requires one virtual IP address instead of two. The disadvantage is that any time one of the package components (db or ci) has to be stopped, the other package component also has to be stopped.

SGeSAP/LX Implementations

The following diagram gives an overview of a Serviceguard cluster and its packages.

Figure 2-4 Necessary components for configuring and running Serviceguard packages in an SAP environment

The diagram demonstrates which components are necessary for configuring and running Serviceguard packages in an SAP environment. The components for this SAP instance with SID = "RIS" are:

Four Serviceguard packages with the names dbris, ciris, d01ris and sapnfs:
The package dbris is responsible for starting and stopping the database application for SAP instance RIS.
The package ciris is responsible for starting and stopping the SAP central instance for SID RIS.
The package d01ris is responsible for starting and stopping the SAP dialog instance D01 for SID RIS.
The package sapnfs mounts all NFS related storage volumes and exports them to all cluster nodes.

The naming convention is the following:
Lowercase instance type (ci, db, sap...)

Uppercase SAP instance name (RIS, NFS)

Communication with a Serviceguard package is based on the virtual IP addresses dbris, ciris, d01ris and nfsris. Each package has its own IP address. Each package has one or more file systems (storage volumes) that get mounted when the package is started and unmounted when the package is stopped.

Normally any of the packages can run on, or be relocated to, any of the cluster nodes in any combination. So, for example, packages dbris and d01ris can run on the first node while sapnfs and ciris run on the second node. Or package dbris can run on the first node and d01ris, sapnfs and ciris on the second or third or... node. If the first cluster node fails, all of its packages are relocated to the second cluster node.

As described in the section above, two file system categories (SHARED NFS and LOCAL) are made available on all cluster nodes. In this example these are:

The NFS mount points /sapdb/data, /sapdb/programs...
The LOCAL copy mount points /usr/sap/tmp, /usr/sap/RIS/SYS... The contents of these directories have to be copied to all cluster nodes.

File systems of type EXCLUSIVE get relocated together with their Serviceguard package. The combination of Serviceguard package name, virtual IP address, the EXCLUSIVE file systems, and the application to start and stop is considered to be one entity.

Conclusion: Mount Points, Access Types and SGeSAP/LX Package

The following gives a final overview of some mount points, the access type, and the SGeSAP/LX package that will use the mount point.

Table 2-7 SAP CI mount point and SGeSAP/LX package

Mount Point                                      Access Type   SGeSAP/LX Package
/usr/sap/<SID>/DVEBMGS<INSTNR>                   EXCLUSIVE     ci<SID> or dbci<SID>
/usr/sap/<SID>/D<INSTNR>                         EXCLUSIVE     d<INSTNR><SID>
link /sapmnt/<SID> -> /import/sapmnt/<SID>       NFS           sapnfs or db<SID>, dbci<SID>
  (nfsreloc:/export/sapmnt/<SID> on /import/sapmnt/<SID>)
link /usr/sap/trans -> /import/trans             NFS           sapnfs or db<SID>, dbci<SID>
  (nfsreloc:/export/usr/sap/trans on /import/trans)
/usr/sap/tmp, /usr/sap/<SID>/SYS, /home/<sid>adm LOCAL         none

Table 2-8 Other SAP component mount points and SGeSAP/LX packages

Mount Point                    Access Type   SGeSAP/LX Package
/usr/sap/<SID>/SCS<INSTNR>     EXCLUSIVE     jci<SID>
/usr/sap/<SID>/ASCS<INSTNR>    EXCLUSIVE     ci<SID>
/usr/sap/<SID>/ERS<INSTNR>     EXCLUSIVE     rep<SID> (1)
/usr/sap/<SID>/ERS<INSTNR>     EXCLUSIVE     arep<SID> (1)

NOTE: The (1) indicates that the ABAP and the JAVA Enqueue Replication Services have different instance numbers.

MaxDB mount points and SGeSAP/LX packages

The considerations given below for MaxDB also apply to livecache clusters unless otherwise noted.

/sapdb/programs: This can be viewed as a central directory with all MaxDB executables. The directory is shared between all MaxDB instances that reside on the same host. It is also possible to share the directory across hosts, but it is not possible to use different executable directories for two MaxDB instances on the same host. Furthermore, it might happen that different MaxDB versions get
installed on the same host. The files in /sapdb/programs have to be of the newest version that any MaxDB on the cluster nodes uses. Files in /sapdb/programs are downwards compatible.

/sapdb/data/config: This directory is also shared between instances, although it contains many files that are instance specific, for example /sapdb/data/config/<SID>.*. According to SAP this path setting is static.

/sapdb/data/wrk: The working directory for MaxDB. If a MaxDB restarts after a crash, it copies important files from this directory to a backup location. This information is then used to determine the reason for the crash. For MaxDB versions lower than 7.5, this directory should move with the package. Therefore, SAP provides a way to redefine this path for each MaxDB individually. Serviceguard Extension for SAP on Linux expects the work directory to be part of the database package. The mount point moves from /sapdb/data/wrk to /sapdb/data/<SID>/wrk for the clustered setup. This directory should not be mixed up with the directory /sapdb/data/<DBSID>/db/wrk that might also exist. Core files of the kernel processes are written into the working directory. These core files can have sizes of several gigabytes; sufficient free space needs to be configured for the shared logical volume to allow core dumps.

NOTE: For MaxDB starting with version 7.6 these limitations no longer exist. The working directory is used by all instances and can be globally shared.

/var/spool/sql: This directory hosts local runtime data of all locally running MaxDB instances. Most of the data in this directory would become meaningless in the context of a different host after failover. The only critical portion that still has to be accessible after failover is the initialization data in /var/spool/sql/ini. This directory is usually very small (< 1 Megabyte). With MaxDB and livecache 7.5 the only local files are contained in /var/spool/sql/ini; the other paths are just links to local runtime data in the IndepData path:

dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid

The links need to exist on every possible failover node where the MaxDB or livecache instance is able to run.

/etc/opt/sdb: Only exists when using MaxDB or livecache 7.5 or higher. Needs to be local on each node.

NOTE: In high availability scenarios, valid for MaxDB versions up to 7.6, the runtime directory /sapdb/data/wrk is configured to be located at /sapdb/<DBSID>/wrk to support consolidated failover environments with several MaxDB instances. The local directory /sapdb/data/wrk, though, is still referred to by the MaxDB vserver processes (vserver, niserver), which means vserver core dump and log files will be located there.

Table 2-9 MaxDB mount points and SGeSAP/LX packages

Mount Point       Access Type   SGeSAP/LX Package
/etc/opt/sdb      LOCAL         none
/sapdb/<SID>      EXCLUSIVE     db<SID>
link /sapdb/data -> /import/data               NFS    sapnfs or db<SID> or dbci<SID>
  (nfsreloc:/export/sapdb/data on /import/data)
link /sapdb/programs -> /import/programs       NFS    sapnfs or db<SID> or dbci<SID>
  (nfsreloc:/export/sapdb/programs on /import/programs)
link /var/spool/sql/ini -> /import/ini         NFS    sapnfs or db<SID> or dbci<SID>
  (nfsreloc:/export/var/spool/sql/ini on /import/ini)

Oracle Database Instance Storage Considerations

Oracle server directories reside below /oracle/<DBSID>. These directories get shared via the database package. In addition, any SAP Application Server needs access to the Oracle client libraries, including the Oracle National Language Support (NLS) files shown in Table 2-10, NLS Files - Default Location. The default location to which the client NLS files get installed differs with the SAP kernel release used:

Table 2-10 NLS Files - Default Location

Kernel Version            Client NLS Location
releases before 7.0       $ORACLE_HOME/ocommon/NLS[_<nls_version>]/admin/data
                          /oracle/<rdbms_version>/ocommon/nls/admin/data
                          /oracle/client/<rdbms_version>/ocommon/nls/admin/data
7.0                       /oracle/client/<rdbms_version>/instantclient

Note that there is always a second type of NLS directory, called the "server" NLS directory. It gets created during database or SAP Central System installations. The location of the server NLS files is identical for all SAP kernel versions:

$ORACLE_HOME/ocommon/nls/admin/data

The setting of the ORA_NLS[<nls_version>] variable in the environments of <sid>adm and ora<sid> determines whether the client or the server path to NLS is used. The variable gets defined in the .dbenv_<hostname>.[c]sh files in the home directories of these users. During Serviceguard Extension for SAP on Linux installation it is necessary to create local copies of the client NLS files on each host to which a failover could take place.

SAP Central Instances use the server path to the NLS files, while Application Server instances use the client path. Sometimes a single host may have an installation of both a Central Instance and an additional Application Server of the same SAP System. These instances need to share the same environment settings. SAP recommends using the server path to the NLS files for both instances in this case. This does not work with Serviceguard Extension for SAP on Linux, since switching the database would leave the Application Server without NLS file access.

Oracle 9.x releases no longer maintain NLS compatibility with Oracle 8.x. Also, Oracle 9.x patches introduce incompatibilities with older Oracle 9.x NLS files. The following constraints need to be met:

1. The Oracle RDBMS and database tools rely on an ORA_NLS[<nls_version>] setting that refers to NLS files that are compatible with the version of the RDBMS. Oracle 9.x needs NLS files as delivered with Oracle 9.x.
2. The SAP executables rely on an ORA_NLS[<nls_version>] setting that refers to NLS files of the same version as those that were used at kernel link time by SAP development. This is not necessarily identical to the installed database release.

The Oracle database server and the SAP server might need different types of NLS files. The server NLS files are part of the database Serviceguard package. The client NLS files are installed locally on all hosts. Special care has to be taken not to mix the access paths for Oracle server and client processes. The discussion of NLS files has no impact on the treatment of other parts of the Oracle client files.

The following directories need to exist locally on all hosts on which an Application Server might run. They cannot be relocated to different paths. The content needs to be identical to the content of the corresponding directories that are shared as part of the database Serviceguard Extension for SAP on Linux package (db
or dbci). The setup for these directories follows the "on top" mount approach, i.e., the directories might become hidden beneath identical copies that are part of the db/dbci package:

$ORACLE_HOME/rdbms/mesg
$ORACLE_HOME/oracore/zoneinfo
$ORACLE_HOME/network/admin

Table 2-11 Oracle DB Instance

Mount Point                Access Type   SGeSAP/LX Package
/oracle/<SID>/saparch      EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/sapreorg     EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/sapdata1     EXCLUSIVE     db<SID> or dbci<SID>
...
/oracle/<SID>/sapdatan     EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/origlogA     EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/origlogB     EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/mirrlogA     EXCLUSIVE     db<SID> or dbci<SID>
/oracle/<SID>/mirrlogB     EXCLUSIVE     db<SID> or dbci<SID>
$ORACLE_HOME               EXCLUSIVE     db<SID> or dbci<SID>
/oracle/stage              NFS           sapnfs
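To verify which NLS path an installation actually uses, the ORA_NLS setting can be checked in the environments of the two database-related users discussed above. The user names below assume the example SID C11; the exact variable name includes the NLS version suffix and therefore varies with the Oracle release.

# check the NLS path in the <sid>adm and ora<sid> environments
su - c11adm -c 'env | grep ORA_NLS'
su - orac11 -c 'env | grep ORA_NLS'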


3 Step-by-Step Cluster Conversion

This chapter describes in detail how to implement an SAP cluster using Serviceguard and Serviceguard Extension for SAP on Linux. It is written in the format of a step-by-step guide and gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually. If these tasks are already covered by different means, it might be sufficient to quickly check that the requested result is already achieved. For example, if the SAP application component was already installed using a virtual IP address, many steps from the SAP WAS configuration section can be omitted.

SGeSAP/LX Naming Conventions and Package Types

Various Serviceguard Extension for SAP on Linux package types are supported, including:

SAP Central Instances and ABAP System Central Services (ci)
JAVA System Central Services (jci)
Database Instances for ABAP components (db)
Database Instances for J2EE components (db)
Replicated Enqueue for the ABAP System Central Services (arep)
Replicated Enqueue for the JAVA System Central Services (rep)
ABAP Dialog Instances and Application Servers (d)
Highly Available NFS (sapnfs)
Combinations of these package types (dbci, dbjci, dbcijci, ...)

These package types are used to provide failover capabilities to the following SAP components: SAP WAS, SAP BW, SAP APO, SAP R/3, SAP R/3 Enterprise, SAP SRM/EBP, SAP CRM, SAP EP, SAP KW, SAP SEM, SAP WPS and SAP XI. Refer to the support matrix published in the release notes of Serviceguard Extension for SAP on Linux for details on whether your SAP component is supported in Serviceguard clusters on Linux. livecache packages require package type (lc); this package type is covered in Chapter 4.

SGeSAP/LX Package Names

In order to ensure a consistent naming scheme in the following installation and configuration steps, the Serviceguard Extension for SAP on Linux package names are assembled from the package type and the SAP System ID:

<pkg_name> = <pkg_type><SID>

or, in case of ambiguous results:

<pkg_name> = <pkg_type><INSTNR><SID>

Examples: dbc11, cic11, dbcic11, dbcijcic11, jcic11, dbjcic11, d01c11, d02c11, arepc11

There is one exception in the naming convention, which concerns the Serviceguard package used for NFS file systems in an SAP environment. Since this package can provide NFS support for all the SAP packages in a cluster, the name (sapnfs) without any SID was chosen. For a detailed description and combination restrictions on those packages, refer to Chapter 1, Designing Serviceguard Extension for SAP on Linux Cluster Scenarios.

Installation Tasks and Installation Steps

The installation steps cover SUSE SLES and Red Hat RHEL using Serviceguard for Linux; see the SGeSAP/LX release notes for the minimum supported Serviceguard version. The process is split into the following logical tasks:

SAP Preparation
Linux Configuration
Cluster Configuration

Database Configuration
SAP WAS System Configuration

The tasks are presented as a sequence of steps. Each installation step is accompanied by a unique number of the format XXnnn, where nnn is an incrementing value and XX indicates the step relationship, as follows:

ISnnn - Installation Steps mandatory for all SAP package types
OSnnn - Optional Steps
ORnnn - ORacle database only steps
MDnnn - SAPDB/MaxDB or livecache database only steps
REnnn - Replicated Enqueue only steps
NW04Jnnn - Installation Steps for a Netweaver 2004 JAVA-only installation
NW04Snnn - Installation Steps for all Netweaver 2004s installations

If you need assistance during the installation process and need to contact HP support, you can refer to the installation step numbers to specify your issue. Also include the version of this document. The installation step numbers were reorganized for this document version and do not correspond to those of earlier releases.

Whenever appropriate, Linux sample commands are given to guide you through the integration process. It is assumed that the hardware as well as the operating system and Serviceguard are already installed properly on all cluster hosts. Sometimes a condition is specified with an installation step. Follow the information presented only if the condition is true for your situation.

NOTE: For installation steps in this chapter that require the adjustment of SAP specific parameters in order to run the SAP WAS system in a switchover environment, example values are given. These values are for reference ONLY, and it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible the SAP OSS note number is given.

Serviceguard Extension for SAP on Linux: cmcluster.config

The Linux distribution of Serviceguard uses a special file, /etc/cmcluster.config, to define the locations for configuration and log files within the Linux file system. These paths differ depending on the Linux distribution used. In this document, path references can be expressed through the variables defined in /etc/cmcluster.config. The default values are:

For SUSE Linux - SGCONF=/opt/cmcluster/conf
For Redhat Linux - SGCONF=/usr/local/cmcluster/conf

To initialize the variables in /etc/cmcluster.config, execute the following command for the bash shell (the leading dot is important):

. /etc/cmcluster.config

When the following command is executed, the variable ${SGCONF} is expanded to the path specified above:

cd ${SGCONF}

This changes into the directory:

For SUSE Linux - /opt/cmcluster/conf
For Redhat Linux - /usr/local/cmcluster/conf

NOTE: The above variable expansion ${SGCONF} does not work in the Serviceguard package configuration file. In this file you must use the actual path, either /opt/cmcluster/conf or /usr/local/cmcluster/conf. This is indicated by the pointed brackets <SGCONF>, where <SGCONF> must be substituted with the actual path.

For example, for a Serviceguard package configuration file dbC11.config, when modifying the RUN_SCRIPT variable, you would substitute:

RUN_SCRIPT <SGCONF>/dbC11.name

with the following:

For SUSE - RUN_SCRIPT /opt/cmcluster/conf/dbC11.name
For Redhat - RUN_SCRIPT /usr/local/cmcluster/conf/dbC11.name

You can use the ${SGCONF} variable when modifying the Serviceguard package control script file. For example, for a Serviceguard package control script file dbC11.control.script, when modifying the SERVICE_CMD variable you would use ${SGCONF} and the system automatically uses the following:

SERVICE_CMD="${SGCONF}/C11/dbC11.mon"

In principle all SAP cluster installations look very similar. Older SAP systems get installed in the same way as they would without a cluster. The cluster conversion takes place afterwards and includes a set of manual steps. Some of these steps can be omitted since the introduction of high availability installation options to the SAP installer SAPINST. In this case, a part of the cluster configuration is done prior to the SAP installation as such. The SAP instances can then be installed into a virtualized environment, which makes the SAP Web AS System Configuration steps that usually conclude a manual cluster conversion unnecessary. Therefore, it is important to first decide which kind of SAP installation is intended.

The installation of an SAP High Availability System was introduced with Netweaver 2004s. For Netweaver 2004 JAVA-only installations there is a similar High Availability Option for SAPINST. All older SAP systems need to be clustered manually. The SAP Preparation section covers all three cases. It also describes how Enqueue Replication can be activated for use with Serviceguard Extension for SAP on Linux; the exact steps for that also depend on the SAP release that is used. Alternative setups use Highly Available NFS packages and automounter technology to ensure cluster-wide access to file systems. Finally, the underlying database also causes slightly different installation steps. The SAP Web Application Server installation types are ABAP-only, JAVA-only and Add-In; the latter includes both the ABAP and the JAVA stack.

SAP Preparation

This section covers the SAP specific preparation, installation and configuration before creating a highly available SAP system landscape. This includes the following logical tasks:

SAP Installation Considerations
Replicated Enqueue Conversion

SAP Installation Considerations

This section gives additional information that helps with the task of performing SAP installations in HP Serviceguard clusters. It is not intended to replace any SAP installation manual. SAP installation instructions provide complementary information and should be consulted in addition to this. The following paragraphs are divided into these installation options:

Netweaver 2004s High Availability System
Netweaver 2004 JAVA-only High Availability Option
Generic SAP Installation

Netweaver 2004s High Availability System

The Netweaver 2004s technology is based on the SAP Web Application Server 7.0. The SAPINST installer for this release offers the following installation options out of the box:

Central System
Distributed System
High Availability System

The Central System and Distributed System installations build a traditional SAP landscape; they install a database and a monolithic Central Instance. Exceptions are JAVA-only based installations.

NOTE: For JAVA-only based installations the only possible installation option is a High Availability System installation.

A Netweaver 2004s system may consist of any combination of the following components:

JAVA System Central Services Instance (SCS) [JAVA Message and Enqueue Server]
ABAP System Central Services Instance (ASCS) [ABAP Message and Enqueue Server]
Central Instance (D) [ABAP and/or JAVA stack]
Dialog Instance(s) (D) [ABAP and/or JAVA stack]

The potential SPOFs (Single Points of Failure) are the SCS, ASCS and DB instances. Dialog instances can be installed redundantly on nodes inside or outside the cluster. The ABAP Central Instance for Netweaver 2004s is similar to a simple Dialog Instance, except that it is pre-configured to contain the Batch, Update, Spool and Gateway services. For JAVA, a Central Instance runs the JAVA Software Deployment Manager (SDM). These services, though, can also be configured redundantly with other Dialog Instances.

The SAP Netweaver 2004s CDs/DVDs must be available either as physical copies or as images on a local or shared file system for the duration of the installation.

As preparation, simple Serviceguard packages for the clustered instances have to be created. They provide the virtual IP addresses that are required during installation. The package(s) will later be altered to become Serviceguard Extension for SAP on Linux packages; it is more convenient to do this once the SAP installation has taken place. The following steps are performed as root user to prepare the cluster for the SAP installation.

Preparation Step: NW04S400

Create a Serviceguard package directory on each node of the cluster. This directory will be used by all packages that belong to the SAP System with a specific SAP System ID <SID>.

mkdir -p ${SGCONF}/<SID>

Preparation Step: NW04S420

Create standard Serviceguard package configuration and control files for each package. The SAP Netweaver 2004s instances are mapped to Serviceguard Extension for SAP on Linux package types as follows:

The ASCS instance requires a ci package type.
The SCS instance requires a jci package type.
The database requires a db package type.

The package files are recommended to be named <pkg_name>.control.script for a control file and <pkg_name>.config for a configuration file. The packages are created with the following commands:

cmmakepkg -s ${SGCONF}/<SID>/<pkg_name>.control.script
cmmakepkg -p ${SGCONF}/<SID>/<pkg_name>.config

An example for SAP System C11 and a combined single Serviceguard package using the standard naming convention would be:

cmmakepkg -s ${SGCONF}/<SID>/dbcijciC11.control.script
cmmakepkg -p ${SGCONF}/<SID>/dbcijciC11.config

Preparation Step: NW04S430

The created configuration file(s) need to be edited. Refer to the Managing Serviceguard user's guide for general information about the file content. A minimum configuration will do for the purpose of supporting the SAP installation. At least the following parameters should be edited in ${SGCONF}/<SID>/<pkg_name>.config:

PACKAGE_NAME, NODE_NAME, RUN_SCRIPT, HALT_SCRIPT, SUBNET

Specify NODE_NAME entries for all hosts on which the package should be able to run. Specify the control scripts that were created earlier as run and halt scripts:

RUN_SCRIPT <SGCONF>/<SID>/<pkg_name>.control.script
HALT_SCRIPT <SGCONF>/<SID>/<pkg_name>.control.script

Specify the subnets to be monitored in the SUBNET section.

In the ${SGCONF}/<SID>/<pkg_name>.control.script file(s), there is a section that defines a virtual IP address array. All virtual IP addresses specified here will become associated with the SAP and database instances that are going to be installed. Edit the array and create at least one virtual IP:

IP[0]=

Distribute the edited package templates to all cluster nodes with a remote copy command such as ftp, scp or cmcp.

Preparation Step: NW04S440

Create a debug file on the system on which the installation takes place. The debug file allows manual SAP instance shutdown and startup operations during installation. Basically, all required file systems for the package get mounted, the virtual IP address is enabled, and the SAP instance can then be started or stopped manually.

touch ${SGCONF}/<SID>/debug

It does not matter if the system is meant to be run in a multi-tier fashion that separates the database from the ASCS instance by running them on different cluster nodes during normal operation. For convenience, all installation steps should be done on a single machine. Due to the virtualization, it is easy to separate the instances later on.

Preparation Step: NW04S445

The package configuration needs to be applied and the package started. This step assumes that the cluster as such is already configured and started. Refer to the Managing Serviceguard user's guide if more details are required.

cmapplyconf -P ${SGCONF}/C11/dbcijciC11.config
cmrunpkg -n <installation_host> dbcijciC11

All virtual IP address(es) should now be configured. A ping command should reveal that they respond to communication requests.

Preparation Step: NW04S1300

Before installing SAP WAS 7.0, some OS-specific parameters have to be adjusted. Verify or modify the Linux kernel parameters as recommended by the NW04s Master Guide Part 1. Be sure to propagate changes to all nodes in the cluster. The SAP installer checks the OS parameter settings with a tool called "Prerequisite Checker" and stops when the requirements are not met.

Preparation Step: NW04S1310

The SAP J2EE engine needs a supported and tested JAVA SDK on all cluster nodes. Check that the installed JAVA SDK meets the requirements in the NW04s Master Guide Part 1. Be sure to install the required SDK on all nodes in the cluster. In case the J2EE engine uses security features that require the JAVA Cryptographic Toolkit, it has to be downloaded both from the Sun website and from the SAP Service Marketplace.

Preparation Step: NW04S1320

Before invoking the SAP installer, some additional requirements have to be met that are described in the SAP installation documentation for SAP WAS 7.0. These are not specific to cluster implementations and apply to any SAP installation. They usually involve the setting of environment variables for SAPINST, like the DISPLAY variable, the temporary installation directory TMP, and so on.
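As a minimal illustration of this step, the environment of the installing root shell could be prepared as follows. The host name and directory used here are hypothetical, and the full set of required variables and values is defined by the SAP installation guide for the release being installed.

# make the SAPINST GUI appear on an administrator workstation (example host name)
export DISPLAY=adminws:0.0

# use a dedicated temporary installation directory with sufficient free space
export TMP=/tmp/sapinst_C11
mkdir -p ${TMP}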

SAP Netweaver 2004s High Availability System Installer

Installation Step: NW04S1330

The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP WAS 7.0 happens in several steps, depending on the installation type: first, the SAP System Central Services component (SCS and/or ASCS) is installed using the virtual IP contained in the corresponding Serviceguard package. Then the database instance is set up and installed on the relocatable IP address of the DB package. After that, the Central Instance and all required Dialog Instances are established.

The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument.

(A)SCS installation:

cd <COMPONENT_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    SAPINST_USE_HOSTNAME=<reloc_jci>

The SAP installer should now start. Follow the instructions of the (A)SCS installation process and provide the details when asked by SAPINST. It is necessary to provide the virtual hostname to any (A)SCS instance related query.

Database installation:

cd <DB_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    SAPINST_USE_HOSTNAME=<reloc_db>

The SAP installer should now start. Follow the instructions of the database installation process and provide the details when asked by SAPINST. It is necessary to provide the virtual hostname to any DB instance related query. After installing the RDBMS software, SAPINST needs to be resumed; after finalizing the installation the SAP database setup is complete. Install the latest database patch.

Installation Step: NW04S1340

The same procedure that uses virtual IPs provided by Serviceguard packages can be used for ABAP Central Instance and Dialog Instance installations. Both map to Serviceguard Extension for SAP on Linux package type (d).

cd <COMPONENT_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    SAPINST_USE_HOSTNAME=<reloc_d>

Follow the instructions of the installation process and provide the required details when asked by SAPINST. Provide the virtual hostname to any instance related query.

The instances are now able to run on the installation host, once the corresponding Serviceguard packages have been started. It is not yet possible to move instances to other nodes, monitor the instances or trigger automated failovers. Do not shut down the Serviceguard packages while the instances are running.

Netweaver 2004 JAVA-only High Availability Option

This section describes the installation of a JAVA-only based Web Application Server 6.40 as provided as part of Netweaver 2004SR1. An SAP application relying on a JAVA-only Web Application Server 6.40 is SAP Enterprise Portal 6.0. This component was the first to use the virtualized installation with SAPINST that is generally used for Netweaver 2004s, but this early procedure differs in many details from the final process.

In a landscape for an SAP WAS 6.40 JAVA the following components exist:

JAVA System Central Services (SCS) [JAVA Message and Enqueue Server]
JAVA Central Instance (JC) [JAVA Dispatcher(s), Server(s), Software Deployment Manager (SDM)]
JAVA Dialog Instance(s) (JD) [JAVA Dispatcher(s), Server(s)]
Database Instance (DB)

The potential SPOFs (Single Points of Failure) are the SCS and DB instances. The JAVA Central Instance needs to be installed locally on one server. Dialog instances can be installed redundantly on nodes inside or outside the cluster.

As preparation, simple Serviceguard packages for the clustered instances have to be created. They provide the virtual IP addresses that are required during installation. The package(s) will later be altered to become Serviceguard Extension for SAP on Linux packages; it is more convenient to do this once the SAP installation has completed. The following steps are performed as root user to prepare the cluster for the SAP installation.
Preparation Step: NW04J400

Create a Serviceguard package directory on each node of the cluster. This directory will be used by all packages that belong to the SAP System with a specific SAP System ID <SID>.

mkdir -p ${SGCONF}/<SID>

Preparation Step: NW04J420

Create standard package configuration and control files for each package. The SAP JAVA instances are mapped to Serviceguard Extension for SAP on Linux package types as follows: the SCS Instance requires a (jci) package type; the database requires a (db) package type. The package files are recommended to be named <pkg_name>.control.script for a control file and <pkg_name>.config for a configuration file. The packages are created with the following commands:

cmmakepkg -s ${SGCONF}/<SID>/<pkg_name>.control.script
cmmakepkg -p ${SGCONF}/<SID>/<pkg_name>.config

An example for SAP System C11 and a single package using the standard naming convention would be:

cmmakepkg -s ${SGCONF}/<SID>/dbjciC11.control.script
cmmakepkg -p ${SGCONF}/<SID>/dbjciC11.config

Preparation Step: NW04J430

The created configuration file(s) need to be edited. Refer to the Managing Serviceguard user's guide for general information about the file content. A minimum configuration will do for the purpose of supporting the SAP installation. At least the following parameters should be edited in ${SGCONF}/<SID>/<pkg_name>.config: PACKAGE_NAME, NODE_NAME, RUN_SCRIPT, HALT_SCRIPT, SUBNET. An example excerpt of such a minimum configuration is shown after step NW04J445 below.

Specify NODE_NAME entries for all hosts on which the package should be able to run. Specify the control scripts that were created earlier on as run and halt scripts:

RUN_SCRIPT <SGCONF>/<SID>/<pkg_name>.control.script
HALT_SCRIPT <SGCONF>/<SID>/<pkg_name>.control.script

In the ${SGCONF}/<SID>/<pkg_name>.control.script file(s), there is a section that defines a virtual IP address array. All virtual IP addresses specified here will become associated with the SAP instances that are going to be installed. Edit the array and create at least one virtual IP:

IP[0]=

Distribute the edited package templates to all cluster nodes. Only if the SAP components are meant to be able to run on different nodes, several packages have to be created using different virtual IPs.

Preparation Step: NW04J440

Create a debug file on the system on which the installation will run. The debug file allows manual SAP instance shutdown and startup operations during installation.

touch ${SGCONF}/<SID>/debug

It does not matter if the system is meant to be run in a multi-tier fashion that separates the database from the SCS instance by running them on different cluster nodes during normal operation. For convenience, all SCS and database installation steps should be done on a single machine. Due to the use of virtualized IP addresses, it is easy to separate the instances later on. Make sure that the JAVA Central Instance JC00 gets installed on the host where it is meant to be run afterwards.

Preparation Step: NW04J445

The package configuration needs to be applied and the package started. This step assumes that the cluster as such is already configured and started. Refer to the Managing Serviceguard user's guide if more details are required.

cmapplyconf -p ${SGCONF}/C11/dbjciC11.config
cmrunpkg -n <installation_host> dbjciC11

All virtual IP address(es) should now be configured. A ping command should reveal that they respond to communication requests.
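The following excerpt illustrates the minimum edits described in step NW04J430. All values are placeholders chosen for this example and must be adapted to the node names, paths and subnet of the local cluster; they are not values prescribed by SGeSAP/LX.

In <pkg_name>.config (here dbjciC11.config):

PACKAGE_NAME    dbjciC11
NODE_NAME       <node1>
NODE_NAME       <node2>
RUN_SCRIPT      <SGCONF>/C11/dbjciC11.control.script
HALT_SCRIPT     <SGCONF>/C11/dbjciC11.control.script
SUBNET          <subnet_of_virtual_ip>

In dbjciC11.control.script:

IP[0]="<virtual_ip_for_installation>"
SUBNET[0]="<subnet_of_virtual_ip>"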

Preparation Step: NW04J1300

Before installing the J2EE Engine, some OS-specific parameters have to be adjusted. Verify or modify the Linux kernel parameters as recommended by the NW04 Master Guide Part 1. Be sure to propagate changes to all nodes in the cluster.

Preparation Step: NW04J1310

The SAP J2EE Engine needs a supported and tested JAVA SDK on all cluster nodes. Check the installed JAVA SDK against the requirements of the NW04 Master Guide Part 1. Be sure to install the required SDK on all nodes in the cluster. In case the J2EE Engine uses security features that require the JAVA Cryptographic Toolkit, it has to be downloaded both from the Sun website and from the SAP Service Marketplace.

Preparation Step: NW04J1320

Before invoking the SAP installer, some additional requirements have to be met that are described in the SAP installation documentation for SAP WAS 6.40. These are not specific to cluster implementations and apply to any SAP installation. They usually involve the setting of environment variables for SAPINST, like the DISPLAY variable, the temporary installation directory TMP, etc.

Installation Step: NW04J1330

The installation is done using the virtual IP provided in the Serviceguard package. The SAPINST installer is invoked with the parameter SAPINST_USE_HOSTNAME to enable the installation on a virtual IP address of Serviceguard. This parameter is not mentioned in SAP installation documents, but it is officially supported. The installer will show a warning message that has to be confirmed.

The installation of the SAP J2EE Engine 6.40 is done in several steps, depending on the installation type: First, the SAP System Central Services component (SCS) will be installed using the virtual IP contained in the corresponding Serviceguard package. Then, the database instance will be set up and installed on the virtual IP address of the DB package. After that, the Central Instance and all required Dialog Instances are established.

When starting the SAPINST installer, the first screen shows installation options that are generated from an XML file called product.catalog located at <SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>. The standard catalog file product.catalog has to be either replaced by product_ha.catalog in the same directory on a local copy of the DVD, or the file product_ha.catalog can be passed as an argument to the SAPINST installer. It is recommended to pass the catalog as an argument to SAPINST. The XML file that is meant to be used with Serviceguard Extension for SAP on Linux clusters is included on the installation DVDs/CDs distributed by SAP.

The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument.

SCS installation:

cd <COMPONENT_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    <SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/product_ha.catalog \
    SAPINST_USE_HOSTNAME=<reloc_jci>

The SAP installer should now start. Follow the instructions of the SCS installation process and provide the required details when asked by SAPINST. If necessary, provide the virtual hostname to any SCS instance related query.

NOTE: A known issue may occur as the SCS tries to start the database during the final installation phase. Given that the database is not yet installed, the start is not possible. For a solution refer to the corresponding SAP Note.

Database installation:

cd <DB_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    <SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/product_ha.catalog \
    SAPINST_USE_HOSTNAME=<reloc_JDB>

The SAP installer should now start. Follow the instructions of the database installation process and provide the required details when asked by SAPINST. It is necessary to provide the virtual hostname to any DB instance related query. After installing the RDBMS software, SAPINST needs to be resumed; after finalizing the installation the SAP database setup is complete. Install the latest database patches.

JAVA Central Instance installation: Make sure that the JAVA Central Instance JC00 gets installed on the host where it is meant to be run afterwards. No SAPINST_USE_HOSTNAME parameter needs to be passed to SAPINST. The SAPINST_USE_HOSTNAME environment variable needs to be unset in case it was defined before.

cd <COMPONENT_TMP_DIR>
<SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/sapinst \
    <SAPINST_MASTER_DVD>/IM<X>_OS/SAPINST/UNIX/<OS>/product_ha.catalog

57 The SAP installer should now start. Follow the instructions of the JAVA Central Instance installation process. JAVA dialog instances get installed in the same fashion. Afterwards, the virtualized installation for SAP J2EE Engine 6.40 should have completed, but the cluster still needs to be configured. The instances are now able to run on the installation host, provided the corresponding Serviceguard packages got started up front. It is not yet possible to move instances to other nodes, monitor the instances or trigger automated failovers. Do not shut down the Serviceguard packages while the instances are running. It is possible to continue installing content for the SAP J2EE Engine, before the cluster conversion as described in the sections below gets performed. Generic SAP Installation Netweaver 2004s introduced High Availability installation options. Similar options were used for Netweaver 2004SR1 JAVA-only installations. All SAP components that are based on pre-netweaver 2004s technology get installed in a standard fashion. No special case applies due to the fact that the system is going to be clustered afterwards. The required file systems should already be set up as documented in Chapter 2 Planning a File System Layout for SAP in a Serviceguard Cluster Environment to prevent avoidable conversion activities in a later stage. If the SAP application is not installed yet, consult the appropriate SAP Installation Guides and perform the installation according to the guidance given there. Creating a Standalone Enqueue Service and a Enqueue Replication Service This section describes how a SAP ABAP Central Instance DVEBMGS<INSTNR>can be converted to use the Enqueue Replication feature for seamless failover of the Enqueue Service. The whole section can be skipped if Enqueue Replication is not going to be used for the ABAP stack. It can also be skipped in case Replicated Enqueue is already installed. The manual conversion steps can be done for SAP applications that are based on ABAP kernel 4.6D, 6.20, 6.40 or 7.0. These kernels are supported on Serviceguard Extension for SAP on Linux with Replicated Enqueue. As SAP does not deliver installation routines that install Replicated Enqueue configurations for these releases, a manual conversion becomes necessary. If the SAP installation was done for Netweaver 2004 JAVA-only or Netweaver 2004s as documented in section 'SAP Installation Considerations', only the second part 'Creation of Replication Instance' is required. The split of the Central Instance is then already effective and a ASCS instance was created during installation. Using Replicated Enqueue heavily changes the SAP instance landscape and increases the resource demand: Two additional SAP instances will be generated during the splitting procedure. There is a requirement for at least one additional unique SAP System ID. Unique means, that the ID is not in use by any other SAP instance of the cluster. There is also a requirement for one or two additional shared LUNs on the SAN and one or two additional virtual IP addresses for each subnet. The LUNs need to have the size that is required for a SAP Instance directory of the targeted kernel release. Splitting a Central Instance The SPOFs (Single Point of Failure) of the DVEBMGS<INSTNR> instance will be isolated in a new instance called ABAP System Central Services Instance ASCS<INSTNR>. This instance will replace DVEBMGS<INSTNR> for the (ci) package type. 
The remaining parts of the Central Instance can then be configured as Dialog Instance D<INSTNR_2>. The ASCS Instance should then only be started and stopped with the cluster package startup and halt commands instead of using manual shell operations.

NOTE: The Dialog Instance D<INSTNR_2> that results from the conversion also represents one or more Single Points of Failure for many scenarios. In these cases, D<INSTNR_2> should also be clustered with Serviceguard Extension for SAP on Linux. It is not even unusual to combine ASCS<INSTNR> and D<INSTNR_2> in a single Serviceguard Extension for SAP on Linux package. It makes sense, even though the resulting package contains the same components as a traditional package for DVEBMGS<INSTNR> would. Seamless failover with Replicated Enqueue cannot be achieved without splitting up DVEBMGS<INSTNR> into two instances.

Replicated Enqueue Conversion: RE010

Logon as root to the server on which the Central Instance DVEBMGS<INSTNR> was installed. Create a new mountpoint:

su - <sid>adm
mkdir /usr/sap/<SID>/ASCS<INSTNR>

Replicated Enqueue Conversion: RE020

A volume group needs to be created for the ASCS instance. The physical device(s) should be created as LUN(s) on shared storage. Storage connectivity is required from all nodes of the cluster that should run the ASCS. For the volume group, one logical volume should get configured. For the required size, refer to the capacity consumption of /usr/sap/<SID>/DVEBMGS<INSTNR>. This should provide a conservative upper limit that leaves reasonable headroom. Mount the new logical volume to the mountpoint created in RE010. A command sketch is shown after step RE040 below.

Replicated Enqueue Conversion: RE030

Create instance subdirectories in the mounted logical volume. They will be switched between the cluster nodes later.

su - <sid>adm
cd /usr/sap/<SID>/ASCS<INSTNR>
mkdir data log sec work

Replicated Enqueue Conversion: RE040

If the used SAP kernel has a release smaller than 6.40: Download the executables for the Standalone Enqueue Server from the SAP Service Marketplace and copy them to /sapmnt/<SID>/exe. There should be at least three files that are added/replaced: enserver, enrepserver and ensmon. Make sure these files are part of the sapcpe mechanism for local executable creation (see Chapter 2). This step will create a version mix for the executables. The Standalone Enqueue executables get taken from the 6.40 kernel. Special caution has to be taken not to replace them with older releases later on. This might happen by accident when kernel patches get applied.
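Referring back to step RE020, the following is a minimal sketch of how the shared volume group and logical volume for the ASCS instance could be created with Linux LVM. The device name /dev/sdX, the volume group name vgascs<SID>, the logical volume size and the ext3 file system type are assumptions for illustration only; adapt them to the local storage layout and to the file system types supported in your cluster:

pvcreate /dev/sdX                              # shared LUN visible on all ASCS failover nodes
vgcreate vgascs<SID> /dev/sdX                  # volume group dedicated to the ASCS instance
lvcreate -L 2G -n lvascs vgascs<SID>           # one logical volume; the size is an example value
mkfs -t ext3 /dev/vgascs<SID>/lvascs           # create the file system
mount /dev/vgascs<SID>/lvascs /usr/sap/<SID>/ASCS<INSTNR>   # mountpoint from step RE010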

Replicated Enqueue Conversion: RE050

Create an instance profile and a startup profile for the ASCS Instance. These profiles get created as <sid>adm in the NFS-shared /sapmnt/<SID>/profile directory. Here is an example template for the instance profile <SID>_ASCS<INSTNR>_<CIRELOC>:

#
# general settings
#
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ASCS<INSTNR>
SAPSYSTEM = <INSTNR>
SAPLOCALHOST = <CIRELOC>
SAPLOCALHOSTFULL = <CIRELOC>.<domain>
#
# enqueueserver settings
#
enque/server/internal_replication = true
enque/server/replication = true
enque/server/threadcount = 1
enque/enrep/keepalive_count = 0
enque/process_location = local
enque/table_size = 4096
ipc/shm_psize_34 = 0
#
# messageserver settings
#
rdisp/mshost = <CIRELOC>
#
# prevent shmem pool creation
#
ipc/shm_psize_16 = 0
ipc/shm_psize_24 = 0
ipc/shm_psize_34 = 0
ipc/shm_psize_66 = 0

This template shows the minimum settings. Scan the old <SID>_DVEBMGS<INSTNR>_<local_ip> profile to see whether there are additional parameters that apply to either the Enqueue Service or the Message Service. Individual decisions need to be made whether they should be moved to the new profile.

Here is an example template for the startup profile START_ASCS<INSTNR>_<CIRELOC>:

#
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ASCS<INSTNR>
#
# start SCSA handling
#
Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<CIRELOC>
#
# start Message Server
#
_MS = ms.sap<SID>_ASCS<INSTNR>
Execute_01 = local rm -f $(_MS)
Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/msg_server $(_MS)
Start_Program_01 = local $(_MS) pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<CIRELOC>
#
# start syslog collector daemon
#
_CO = co.sap<SID>_ASCS<INSTNR>
Execute_03 = local rm -f $(_CO)
Execute_04 = local ln -s -f $(DIR_EXECUTABLE)/rslgcoll $(_CO)
Start_Program_02 = local $(_CO) -F pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<CIRELOC>
#
# start Enqueue Server
#
_EN = en.sap<SID>_ASCS<INSTNR>
Execute_05 = local rm -f $(_EN)
Execute_06 = local ln -s -f $(DIR_EXECUTABLE)/enserver $(_EN)
Start_Program_03 = local $(_EN) pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<CIRELOC>
#
# start syslog send daemon
#
_SE = se.sap<SID>_ASCS<INSTNR>
Execute_07 = local rm -f $(_SE)
Execute_08 = local ln -s -f $(DIR_EXECUTABLE)/rslgsend $(_SE)
Start_Program_04 = local $(_SE) -F pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<CIRELOC>
#

Replicated Enqueue Conversion: RE060

Adapt the instance profile and startup profile of the DVEBMGS<INSTNR> instance. The goal of this step is to strip the Enqueue and Message Server entries away and create a standard Dialog Instance. A second alteration is the replacement of the Instance Number of the Central Instance, which now belongs to ASCS and AREP. The new Dialog Instance profile <SID>_D<INSTNR_2>_<DRELOC> differs from the original <SID>_DVEBMGS<INSTNR>_<ip_address> profile in several ways: All configuration entries for Message and Enqueue Service need to be deleted, e.g. rdisp/wp_no_enq=1 must be removed. Several logical names and address references need to reflect a different relocatable address, a different Instance Name and a different Instance Number. For example:

INSTANCE_NAME = D<INSTNR_2>
SAPSYSTEM = <INSTNR_2>
rdisp/vbname = <DRELOC>_<SID>_<INSTNR_2>
SAPLOCALHOST = <DRELOC>
SAPLOCALHOSTFULL = <DRELOC>.<domain>

The exact changes depend on the individual appearance of the file for each installation. The startup profile is also individual, but usually can be created similar to the default startup profile of any Dialog Instance. Here is an example template for a startup profile START_D<INSTNR_2>_<DRELOC>:

#
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = D<INSTNR_2>
#
# start SCSA
#
Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/<SID>_D<INSTNR_2>_<DRELOC>
#
# start application server
#
_DW = dw.sap<SID>_D<INSTNR_2>
Execute_01 = local rm -f $(_DW)
Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/disp+work $(_DW)
Start_Program_01 = local $(_DW) pf=$(DIR_PROFILE)/<SID>_D<INSTNR_2>_<DRELOC>
#
# start syslog send daemon
#
_SE = se.sap<SID>_D<INSTNR_2>
Execute_03 = local rm -f $(_SE)
Execute_04 = local ln -s -f $(DIR_EXECUTABLE)/rslgsend $(_SE)
Start_Program_02 = local $(_SE) pf=$(DIR_PROFILE)/<SID>_D<INSTNR_2>_<DRELOC>
#

Creation of Enqueue Replication Server Instance

This section describes how to add Enqueue Replication Services (ERS) to a system that has ASCS and/or SCS instances. The ASCS (ABAP) or SCS (JAVA) instance will be accompanied by an AREP (ABAP) or REP (JAVA) instance that keeps a mirror of the ASCS or SCS internal state in shared memory. As a result, no information is lost when the SAP System Central Services fail and the Serviceguard Extension for SAP on Linux package that includes the [A]SCS fails over to the node that runs the [A]REP instance.

Replicated Enqueue Conversion: RE070

Create a new mountpoint:

su - <sid>adm
mkdir /usr/sap/<SID>/ERS<INSTNR>

Replicated Enqueue Conversion: RE080

A volume group needs to be created for the ERS instance. The physical device(s) should be created as LUN(s) on shared storage. Storage connectivity is required from all nodes of the cluster that should run the Replicated Enqueue. For the volume group, one logical volume should get configured. Mount the new logical volume to the mountpoint created in RE070.

Replicated Enqueue Conversion: RE090

Create instance subdirectories in the mounted logical volume. They will be switched between the cluster nodes later.

su - <sid>adm
cd /usr/sap/<SID>/ERS<INSTNR>
mkdir data log exe work profile

Replicated Enqueue Conversion: RE095

Copy SAP executables into the ERS exe directory. This is described in detail in the SAP NetWeaver Library. For convenience, the following example script is provided which copies the required files:

#!/bin/sh
###################################################################
echo "usage for ABAP ASCS - example:"
echo "  setup_ers_files.sh SX1 ERS12 ers12_reloc ASCS02 ascs_reloc"
echo "or"
echo "usage for JAVA SCS - example:"
echo "  setup_ers_files.sh SX1 ERS13 ers13_reloc SCS03 scs_reloc"
###################################################################
# copy/create the SAP Enqueue Replication files out of an
# ASCS or SCS instance
#
# setup_ers_files.sh = program name
# SX1 = SAP SID
# ERS12 = instance name of the Enqueue Replication Service
#         ASCS02 -> ERS12
#         SCS03  -> ERS13
# ers12_reloc = virtual hostname for the ERS12 instance
# ers13_reloc = virtual hostname for the ERS13 instance
#
# ASCS02 = instance name of the enqueue instance
# ascs_reloc = virtual hostname for the ASCS02 instance
# scs_reloc = virtual hostname for the SCS03 instance
#
# Arguments:
# $1=SX1
# $2=ERS12
# $3=saplx-0-35
#
# S = Source directory of ASCS/SCS instance
# D = Destination directory of ERS instance
###################################################################
S="/sapmnt/$1/exe/"
D="/usr/sap/$1/$2/"
ERSLST="${D}/exe/ers.lst"
echo "SAPSYSTEM=[$1]"
echo "ERS Instance=[$2]"
echo "ERS virtual IP=[$3]"
echo "Source_dir=[$S]"
echo "Dest_dir=[$D]"
echo "ers.lst=[${ERSLST}]"
echo "mkdir -p ${D}"
mkdir -p ${D}
cd ${D}
echo "mkdir data log exe work profile"
mkdir data log exe work profile
echo "mkdir -p ${D}/exe/servicehttp/sapmc"
mkdir -p ${D}/exe/servicehttp/sapmc
cp /dev/null ${ERSLST}
###################################################################

# NOTE: the libxxx.so.30 libraries are SAP version dependent -
# change as required
###################################################################
for i in enqt \
         enrepserver \
         ensmon \
         libicudata.so.30 \
         libicui18n.so.30 \
         libicuuc.so.30 \
         libsapu16_mt.so \
         libsapu16.so \
         librfcum.so \
         sapcpe \
         sapstart \
         sapstartsrv \
         sapcontrol
do
    echo "cp $S/$i ${D}/exe"
    cp $S/$i ${D}/exe
    echo $i >> ${ERSLST}
done
echo "servicehttp >> ${ERSLST}"
echo servicehttp >> ${ERSLST}
echo "ers.lst >> ${ERSLST}"
echo ers.lst >> ${ERSLST}
for i in sapmc.jar sapmc.html frog.jar soapclient.jar
do
    echo "cp $S/servicehttp/sapmc/$i ${D}/exe/servicehttp/sapmc/$i"
    cp $S/servicehttp/sapmc/$i ${D}/exe/servicehttp/sapmc/$i
done
echo "cp /sapmnt/$1/profile/${1}_${4}_${5} /sapmnt/$1/profile/${1}_${2}_${3}"
cp /sapmnt/$1/profile/${1}_${4}_${5} /sapmnt/$1/profile/${1}_${2}_${3}
echo "cp /sapmnt/$1/profile/START_${4}_${5} /sapmnt/$1/profile/START_${2}_${3}"
cp /sapmnt/$1/profile/START_${4}_${5} /sapmnt/$1/profile/START_${2}_${3}
###################################################################
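The script above is a convenience sketch, not part of the SGeSAP/LX product scripts. Assuming it has been saved as setup_ers_files.sh, it could be invoked as follows for the ABAP case from the usage text (system ID SX1, ERS instance ERS12 with virtual hostname ers12_reloc, replicating ASCS02 with virtual hostname ascs_reloc); the path /path/to is a placeholder for wherever the script was saved:

su - sx1adm                                     # run as the <sid>adm user of system SX1
chmod +x /path/to/setup_ers_files.sh
/path/to/setup_ers_files.sh SX1 ERS12 ers12_reloc ASCS02 ascs_reloc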

Replicated Enqueue Conversion: RE100

Create an instance profile and a startup profile for the ERS Instance. These profiles get created as <sid>adm in the NFS-shared /sapmnt/<SID>/profile directory. Here is an example template for the instance profile <SID>_ERS<INSTNR>_<[A]REPRELOC>:

#
# general settings
#
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = [A]REP<INSTNR>
SAPSYSTEM = <INSTNR>
#
# standalone enqueue details from (A)SCS instance
#
enque/process_location = REMOTESA
rdisp/enqname = $(rdisp/myname)
enque/serverinst = $(SCSID)
enque/serverhost = <[J]CIRELOC>

Here is an example template for the startup profile START_ERS<INSTNR>_<[A]REPRELOC>:

#
SAPSYSTEM = <INSTNR>
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ERS<INSTNR>
#
# Special settings for this manually set up instance
#
SCSID = <INR>
DIR_PROFILE = $(DIR_INSTALL)/profile
_PF = $(DIR_PROFILE)/<SID>_ERS<INSTNR>_<[A]REPRELOC>
SETENV_00 = LD_LIBRARY_PATH=$(DIR_LIBRARY):%(LD_LIBRARY_PATH)
SETENV_01 = SHLIB_PATH=$(DIR_LIBRARY):%(SHLIB_PATH)
SETENV_02 = LIBPATH=$(DIR_LIBRARY):%(LIBPATH)
_ER = er.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
Execute_01 = local rm -f $(_ER)
Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/enrepserver$(FT_EXE) $(_ER)
Start_Program_00 = local $(_ER) pf=$(_PF) NR=$(SCSID)

Linux Configuration

The Linux kernel deployed to the servers needs to be certified by the SAP LinuxLab and at the same time needs to be supported with HP Serviceguard Extension for SAP on Linux. The SAP LinuxLab publishes kernels to deploy on its FTP server. The latest kernel to deploy, instructions on where to download it, and information on how to contact the SAP LinuxLab can be obtained from SAP.

All supported Serviceguard Extension for SAP on Linux setups were tested with drivers that are supported by HP Serviceguard for Linux and at the same time are part of an SAP certified Linux distribution. For these setups, it is not required to install any driver in addition to those that come with the Linux distribution. LVM, network bonding and multi-pathing are supported via the software that comes with the Linux distribution. Activities that would endanger SAP support are:

Recompiling/remaking the Linux kernel after installation.
Adding drivers that are not part of the installed Linux distribution.
Adding kernel modules that are not delivered with an open source license.

A correct Linux configuration ensures that all cluster nodes provide the environment and system configuration required to run SAP. Several of the following steps must be repeated on each node. Record the steps completed for each node as you complete them. This helps identify errors in the event of a malfunction later in the integration process. The Linux configuration task is split into the following sections:

1. Directory Structure Configuration - This section describes the changes that need to be done in order to distribute the SAP file systems across logical volumes on local and shared disk systems. The primary host is the host where the SAP Central Instance was installed. If your database is currently running on a machine different from this, the database vendor dependent steps have to be done on the host on which the database was installed.
NOTE: The above step should only be performed once on the primary host.

2. Cluster Node Synchronization - This section consists of steps performed on the backup nodes. This ensures that the primary node and the backup nodes have a similar environment.
NOTE: Repeat the above steps for each node of the cluster that is different from the primary host.
3. Cluster Node Configuration - This section consists of steps performed on all the cluster nodes, regardless of whether the node is a primary node or a backup node.
NOTE: Repeat the above steps for each node of the cluster including the primary host.
4. External Application Server Host Configuration - This section consists of steps performed on any host outside of the cluster that runs another instance of the SAP component.
NOTE: Repeat the above steps for each host that has an external application server installed.

Performing some of the iterations in parallel is fine, just use caution in any complex setup situation.

Directory Structure Configuration

NOTE: The steps in this section are only performed once on the primary host.

The main purpose of this section is to ensure the proper LVM layout and the right distribution of the different file systems that reside on shared disks. Logon as root to the system where the SAP Central Instance is installed. If the database is installed on a different host, also open a shell as root on the database machine.

Installation Step: IS009

All SAP Instances (CI, SCS, ASCS, D, ...) must have instance numbers that are unique for the whole cluster. For a clustered instance with instance number <INSTNR>, execute the command:

ls -d /usr/sap/???/*<INSTNR>

It should not return any match on any node. Otherwise refer to the SAP documentation on how to change instance IDs for the relevant instance type. It might be simpler to reinstall one of the conflicting instances.

Installation Step: IS010

Make sure that the volume group layout is compliant with the needs of the Serviceguard package(s) as specified in the tables of the section "Planning a File System Layout for SAP in a Serviceguard/LX Cluster Environment." All directories that are labeled shared in these tables need to reside on logical volumes on a shared external storage system. The recommended separation into different volume groups can also be taken from these tables.

Installation Step: IS030

Comment out the references to any file system that is classified as SHARED in Chapter 2 of this manual from /etc/fstab.

Installation Step: IS031

If /usr/sap/<SID> is a mountpoint that contains the Central Instance directory DVEBMGS<INSTNR>, you have to move some files to a local logical volume and change the mountpoint. For example:

mkdir /usr/sap/<SID>.new
cd /usr/sap/<SID>
df .     # Remember the file system column. It will be referred to as <dev_path> later.
find . -depth -print | cpio -pd /usr/sap/<SID>.new
cd /
umount /usr/sap/<SID>

MaxDB Database Step: MD040

This step can be skipped for MaxDB instances starting with version 7.6 and higher.

NOTE: Make sure you have mounted a sharable logical volume on /sapdb/<DBSID>/wrk as discussed in section MaxDB Storage Considerations in Chapter 2.

Change the path of the runtime directory of the MaxDB and move the files to the new logical volume accordingly:

cd /sapdb/data/wrk/<DBSID>
find . -depth -print | cpio -pd /sapdb/<DBSID>/wrk
cd ..
rmdir /sapdb/data/wrk/<DBSID>
dbmcli -d <DBSID> -u control,control
dbmcli on <DBSID>> param_directget RUNDIRECTORY
OK
RUNDIRECTORY /sapdb/data/wrk/<DBSID>
dbmcli on <DBSID>> param_directput RUNDIRECTORY /sapdb/<DBSID>/wrk
OK
dbmcli on <DBSID>>

Installation Step: IS050

The usage of local executables with Serviceguard Extension for SAP on Linux is required. Check if the Central Instance host and all application servers have a directory named /usr/sap/<SID>/SYS/exe/ctrun. If the directory exists, this step can be skipped; the system is already using local executables through sapcpe.

To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally. Figure 3-1 shows a SAP file system layout with sapcpe activated for two Application Servers.

Figure 3-1 sapcpe Mechanism for Executables

To create local executables, the SAP file system layout needs to be changed. The original link /usr/sap/<SID>/SYS/exe/run needs to be renamed to /usr/sap/<SID>/SYS/exe/ctrun. A new local directory /usr/sap/<SID>/SYS/exe/run will then be required to store the local copy. It needs to be initialized by copying the files sapstart and saposcol from the central executable directory /sapmnt/<SID>/exe. Make sure to match owner, group and permission settings. Do not forget to set the s-bit of saposcol.

To complete the process, a plain text configuration file is needed that tells sapcpe which files need to be included in the copying process. A file of this type is usually called sapcpeft. If sapcpeft exists, sapcpe will automatically be executed as the first step of the next instance restart. It needs to be created in the central executable directory. For each additional file in the central executable directory it needs to contain a line with the following format:

<filename> local_copy check_exist

SAP also delivers template file lists within the central executable directory. These files have a filename of the format *.lst. The following files override sapcpeft: frontend.lst, instance.lst, instancedb.lst, tools.lst and inhouse.lst. Either the *.lst files should be removed, or they should be used instead of sapcpeft. In this case, they need to be changed to include all executables and libraries.

If the local executable directory only holds links, sapcpe is not configured correctly. It is not an option to manually copy the local executables in this case; the next instance restart would replace the local copies with symbolic links. For the latest information on how to utilize sapcpe, refer to the SAP online documentation.
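The following shell sketch summarizes the conversion described above for one instance host. It is not an official SGeSAP/LX procedure; ownership, permissions and the sapcpeft file list (disp+work is only an example entry) must be adapted to the installation:

cd /usr/sap/<SID>/SYS/exe
mv run ctrun                          # the original link to the central executables becomes ctrun
mkdir run                             # new local directory for the executable copies
cp /sapmnt/<SID>/exe/sapstart run/
cp /sapmnt/<SID>/exe/saposcol run/
# adjust owner, group and permissions to match the central files,
# and set the s-bit of saposcol:
chmod 4755 run/saposcol
# sapcpeft: one line per additional file that sapcpe should copy locally
echo "disp+work local_copy check_exist" >> /sapmnt/<SID>/exe/sapcpeft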

Cluster Node Synchronization

NOTE: Repeat the steps in this section for each node of the cluster that is different from the primary host.

Logon as root to the system where the SAP Central Instance is installed (primary host) and prepare a logon for each of its backup hosts, if not already available.

Installation Step: IS070

Open the group file, /etc/group, on the primary side. If any of the SAP and database groups listed here (sapsys, dba, oper, sdba) exist on the primary node and do not exist on the backup node, copy them from the primary node to the backup node. If a group exists on both nodes, verify that it has the same GID on the primary and backup nodes. Merge the group member lists.

Installation Step: IS080

Open the password file, /etc/passwd, on the primary side. If any of the SAP and database users listed here (<sid>adm, ora<sid>, sqd<sid>, sdb, ...) exist on the primary node, recreate them on the backup node. Assign the users on the backup nodes the same user and group IDs as on the primary node.

Installation Step: IS090

Look at the services file, /etc/services, on the primary side. Replicate all SAP services listed here (sapdp<INR>, sapdp<INR>s, sapms<SID>, sapgw<INR>, sapgw<INR>s, orasrv, tlissrv, sql16, ...) that exist on the primary node onto the backup node.

Installation Step: IS100

Change the Linux kernel parameters on the backup node to meet the SAP requirements. The entries on the primary node may act as a reference. Make sure the appropriate saplocales rpm packages are installed on all nodes of the cluster. Kernel parameters are set in the file /etc/sysctl.conf. Install all Linux kernel patches that are recommended for Serviceguard and the patches recommended for SAP R/3.

Installation Step: IS110

If the primary node has the Central Instance installed: Copy the <sid>adm home directory to the backup node(s). This is a local directory on each node. The default home directory path is /home/<sid>adm.

Installation Step: IS120

On the second node, the start, stop and environment scripts in the <sid>adm home directory need to be renamed. If some of the scripts listed in the following do not exist (for certain releases, such as SAP 6.x or higher), these steps can be skipped.

su - <sid>adm
mv startsap_<primary>_<INR> startsap_<secondary>_<INR>
mv stopsap_<primary>_<INR> stopsap_<secondary>_<INR>
mv .sapenv_<primary>.csh .sapenv_<secondary>.csh
mv .sapenv_<primary>.sh .sapenv_<secondary>.sh
mv .dbenv_<primary>.csh .dbenv_<secondary>.csh
mv .dbenv_<primary>.sh .dbenv_<secondary>.sh
mv .sapsrc_<primary>.csh .sapsrc_<secondary>.csh
mv .sapsrc_<primary>.sh .sapsrc_<secondary>.sh
mv .dbsrc_<primary>.csh .dbsrc_<secondary>.csh
mv .dbsrc_<primary>.sh .dbsrc_<secondary>.sh

The following statement should automate this activity for standard directory contents. Do not use a line break within the awk statement:

su - <sid>adm
ls -a | awk '/<primary>/ { system( sprintf( "mv %s %s\n", $0, gensub("<primary>", "<secondary>", 1 ))) }'
exit

Never use the relocatable address in these filenames. If an application server was already installed, do not overwrite any files which will start the application server. If the rc-files have been modified, correct any hard coded references to the primary hostname.

Installation Step: IS130

If the system has a SAP kernel release < 6.0: On each host the files /home/<sid>adm/startsap_<local>_<INR> and /home/<sid>adm/stopsap_<local>_<INR> exist and contain a line that specifies the start profile. After a standard installation of a Central Instance this line is similar to:

START_PROFILE="START_<component><INR>_<primary>"

As <sid>adm, change the line individually on each node. On the primary host keep:

START_PROFILE="START_DVEBMGS<INR>_<primary>"

On the secondary (and any other potential failover) host change the value in both files to:

START_PROFILE="START_DVEBMGS<INR>_<secondary>"

Oracle Database Step: OR150

If the primary node has the ORACLE database installed: Create additional links in /oracle/<SID> on the primary node. For example:

su - ora<sid>
ln .dbenv_<primary>.csh .dbenv_<secondary>.csh
ln .dbenv_<primary>.sh .dbenv_<secondary>.sh
exit

NOTE: If you are implementing an Application Server package, make sure that you install the Oracle client libraries locally on all nodes you run the package on. Refer to the corresponding OSS Note for more details.

Oracle Database Step: OR160

If you are using ORACLE: Create a mount point for the Oracle files on the alternate nodes if it is not already there. For example:

su - ora<sid>
mkdir -p /oracle/<SID>
exit

MaxDB Database Step: MD170

Create a mount point for the MaxDB files on the alternate nodes if it is not already there. For example:

su - sqd<dbsid>
mkdir -p /sapdb/<DBSID>

Installation Step: IS191

If the primary node has the Central Instance installed and the backup node has no internal application server with local executables installed, run for example on the primary node:

cd /usr/sap/<SID>/SYS
find . -depth -print | cpio -o >/tmp/sys.cpio

Use ftp(1) to copy the file over to the secondary node. On the secondary node:

su - <sid>adm
mkdir -p /usr/sap/<SID>/SYS
cd /usr/sap/<SID>/SYS
cpio -id </tmp/sys.cpio

Installation Step: IS192

Create a mountpoint for the Central Instance directory on the backup node so the node can run the Central Instance. For example:

mkdir -p /usr/sap/<SID>/DVEBMGS<INSTNR>

Installation Step: IS210

Create a local directory for the saposcol temporary data on the possible backup nodes. For example:

mkdir -p /usr/sap/tmp

Installation Step: IS220

Create a local mount point for each file system that is specified in the storage layout planning of Chapter 2 to reside on shared disk with an NFS access point. Refer to the tables in Chapter 2 for LOCAL, SHARED or NFS mountpoints and the corresponding tables that list the entries for the required database type. Depending on the past usage of the hosts, some of the directories might already exist.

su - <sid>adm
mkdir -p /sapmnt/<SID>
mkdir -p /usr/sap/trans
mkdir -p /usr/sap/<SID>/<component><INSTNR>
exit

MaxDB Database Step: MD230

MaxDB database specific:

su - sqd<sid>
mkdir -p /sapdb/programs
mkdir -p /sapdb/data
mkdir -p /usr/spool/sql
exit

Ownership and permissions of these directories should be chosen to match the already existing directories on the primary host.

MaxDB Database Step: MD235

For releases starting with MaxDB 7.5: Copy the file /etc/opt/sdb to the alternate cluster nodes. This file contains global path names for the MaxDB instance.

MaxDB Database Step: MD236

For releases starting with MaxDB 7.5: Verify that the symbolic links listed below exist in the directory /var/spool/sql on all cluster nodes running the database:

dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
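Where links from step MD236 are missing, they can be created with a small loop. This is a sketch rather than part of the product scripts; run it as a user with write access to /var/spool/sql and verify the target paths against the tables in Chapter 2:

cd /var/spool/sql
for name in dbspeed diag fifo ipc pid pipe ppid
do
    # create the link only if an entry of that name does not exist yet
    [ -e $name ] || ln -s /sapdb/data/$name $name
done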

Cluster Node Configuration

NOTE: Repeat the steps in this section for each node of the cluster.

Installation Step: IS241

Logon as root. Serviceguard Extension for SAP on Linux needs remote login to be enabled on all cluster hosts. The traditional way to achieve this is via remote shell commands. If security concerns prohibit this, it is also possible to use secure shell access instead.

If you are planning to use the traditional remote shell access, adjust the security settings in /etc/pam.d for the following services:

In the file /etc/pam.d/login comment out the following line:
auth required /lib/security/pam_securetty.so

In the file /etc/pam.d/rsh change the following line:
auth required /lib/security/pam_rhosts_auth.so
to:
auth sufficient /lib/security/pam_rhosts_auth.so

Create an .rhosts file in the home directories of the Linux users root and <sid>adm. Allow login for root as root from all nodes including the node you are logged into. Allow login for root and <sid>adm as <sid>adm from all nodes including the node you are logged into. Be careful with this step; many problems result from an incorrect setup of remote access. Check the setup with remsh commands. If you have to provide a password, the .rhosts setup does not work.

Installation Step: IS242

Use the following steps if you are planning to use the secure shell mechanism:

1. Make sure that the openssh rpm package is installed:
rpm -qa | grep ssh

2. Create a private and public key for the root user:
ssh-keygen -t dsa

Executing this command creates a .ssh directory in the root user's home directory including the following files:
id_dsa
id_dsa.pub

The file id_dsa.pub contains the security information (public key) for the user@host pair, e.g. root@<local>. This information needs to be added to the file $HOME/.ssh/authorized_keys2 of the root and <sid>adm users. Create these files if they are not already there. This will allow the root user on <local> to remotely execute commands via ssh under his own identity and under the identity of <sid>adm on all other relevant nodes.

On each cluster node where a Serviceguard Extension for SAP on Linux package can run, test the remote access to all relevant systems as user root with the following commands:

ssh <hostn> date
ssh -l <sid>adm <hostn> date

Do these tests twice, since the first ssh command between two user/host pairs usually requires a keyboard response to acknowledge the exchange of system level id keys. Ensure that $HOME/.ssh/authorized_keys2 is not writable by group and others. The same is valid for the complete path. Permissions on ~<user> should be 755. Permissions on ~<user>/.ssh/authorized_keys2 must be 600 or 644. Allowing group/other write access to .ssh or authorized_keys2 will disable automatic authentication.
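The distribution of the public key described in step IS242 can be scripted. The following is a minimal sketch under the assumption that the other cluster nodes are reachable as <node1> and <node2> and that interactive (password) logins are still possible at this point; it appends the local root public key to the authorized_keys2 files of root and <sid>adm and tightens the permissions:

for node in <node1> <node2>
do
    for user in root <sid>adm
    do
        # append the local root public key to the remote user's authorized_keys2
        cat ~/.ssh/id_dsa.pub | ssh -l $user $node \
            'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys2 && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys2'
    done
done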

Installation Step: IS275

Make sure that the required rpm software packages are installed on the cluster nodes. Check the status of the rpm kits with:

rpm -q serviceguard
rpm -q nfs-toolkit
rpm -q sgesap-toolkit
rpm -q sgcmon
rpm -q pidentd

If the rpm packages are missing, install them with the following commands:

rpm -Uhv serviceguard-<version>.product.<linux-os>.<platform>.rpm
rpm -Uhv nfs-toolkit-<version>.product.<linux-os>.<platform>.rpm
rpm -Uhv sgesap-toolkit-<version>.product.<linux-os>.<platform>.rpm
rpm -Uhv sgcmon-<version>.product.<linux-os>.<platform>.rpm
rpm -Uhv pidentd-<version>.product.<linux-os>.<platform>.rpm

Installation Step: IS280

Create all central instance directories below /export as specified in Chapter 2. For example:

su - <sid>adm
mkdir -p /export/sapmnt/<SID>
mkdir -p /export/usr/sap/trans

MaxDB Database Step: MD290

Create all MaxDB directories below /export as specified in Chapter 2. For example:

su - sqd<sid>
mkdir -p /export/sapdb/programs
mkdir -p /export/sapdb/data
mkdir -p /export/var/spool/sql/ini

Installation Step: IS300

Each package needs at least one relocatable IP address to be specified. In case there is a distinction between front-end LANs and server LANs, it is likely that at least two addresses are needed per package. Dedicated heartbeat LANs would require additional addresses. Add all relocatable IP address information to /etc/hosts and copy /etc/hosts to the other cluster nodes (see the example excerpt after step IS390 below).

Installation Step: IS380

In /etc/fstab, comment out any references to file systems that are identified as EXCLUSIVE in the section "Planning the LVM layout for Clustered SAP Environments".

Installation Step: IS390

Make sure that the content of /etc/fstab is not recreated during system reboot. The file /etc/rc.d/boot.vginit is executed during reboot by default. It should not perform any action for shared volume groups. If this file exists, remove it from /etc/rc.d and save it in another backup directory.
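Referring back to step IS300, the following /etc/hosts excerpt shows what such entries could look like. The addresses and names are purely illustrative placeholders, not values prescribed by SGeSAP/LX:

# relocatable addresses of the SGeSAP/LX packages (example values only)
192.168.10.101   reloc_db       # virtual IP of the database package
192.168.10.102   reloc_ci       # virtual IP of the central instance package
10.17.1.102      reloc_ci_srv   # additional address of the same package on the server LAN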

Installation Step: IS391

By default, Linux performs an LVM vgscan whenever the system is rebooted. LVM vgscan attempts to detect allocated and non-allocated volumes and creates /dev/xxx entries for the volume groups it finds. This can cause problems for volumes used in a Serviceguard environment where a Serviceguard package has been stopped in the cluster. In this case the stopped volume would not be allocated by any cluster node and would therefore be claimed by the booting node. When that Serviceguard package tries to start, it will fail because the storage volume is already in use.

For SUSE, the vgscan can be prevented by removing /etc/rc.d/boot.lvm and saving it in another backup directory.

For Redhat, use the following procedure to prevent a vgscan command at boot time: After your volume groups have been created on a node, back them up using vgcfgbackup, then comment out the following lines in the /etc/rc.d/rc.sysinit file (there are two places in this file that you need to comment out):

if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
    action $"Setting up LVM:" /sbin/vgscan && /sbin/vgchange -a -y
fi

Installation Step: IS392

Check Chapter 2 for more automounter details. In this environment the automounter (autofs) version 4 is required for mounting NFS file systems. Check the installed version with:

lsmod | grep autofs

If "autofs4" is not listed, install the corresponding rpm. For example:

rpm -Uv autofs-<version>.ia64.rpm

Uncomment the following line in /etc/modprobe.conf:

alias autofs autofs4

Then reload the autofs4 kernel module using:

modprobe autofs4

Restart the automounter process with the command:

/etc/init.d/autofs start

Also check that the automounter is configured as part of the system startup. The command for checking is:

/sbin/chkconfig -l autofs

and for adding autofs to runlevels 3 and 5:

/sbin/chkconfig autofs 35

Installation Step: IS393

Add the following entry to your /etc/auto.master file:

/import /etc/auto.import nfsvers=3

The use of NFS version 3 is required.

Installation Step: IS394

Edit the automounter file /etc/auto.import, the indirect automounter map file. In this file, all NFS client mounted directories will now be configured to be accessed and mounted via the /import directory. This action consists of several steps that belong together. The idea behind the action to be performed here is that each path below /export should be automounted into a placeholder directory that is below /import.

NOTE: There is NOT a straight relationship between the path below /export and the path below /import (e.g. /export/usr/sap/trans gets mapped to /import/trans and NOT to /import/usr/sap/trans).

Refer to the section "Planning the LVM layout for Clustered SAP Environments" to find out which file systems and their specified paths have to be configured for the automounter. For each directory that has a specified path that shows that it will be mounted below /export, the following steps are necessary:

1. Unmount any file systems that will be part of the automounter configuration. Make sure that these file systems are also commented out of /etc/fstab on all cluster nodes. Some typical automount file systems in an SAP environment are:

SAP instance:
umount /usr/sap/trans
umount /sapmnt/<SID>

MaxDB instance:
umount /sapdb/programs
umount /sapdb/data
umount /var/spool/sql/ini

2. Create new directories directly below /import. It should be possible to use the last directory name of the specified path as the new directory name. This "last directory name" will be used by the automounter as the key for mapping the mount point requests onto a directory in /import. For example:

cd /import
mkdir trans
mkdir <SID>
mkdir programs
mkdir data
mkdir ini

Be sure to change the ownership of the directories according to your needs. Normally all SAP directories belong to <sid>adm:sapsys, while database specific file systems belong to the database user, e.g. ora<dbsid>:dba or sqd<dbsid>:sapsys.

3. Remove the original directories and replace them with symbolic links pointing to the /import/ directories:

rmdir /usr/sap/trans
ln -s /import/trans /usr/sap/trans
rmdir /sapmnt/<SID>
ln -s /import/<SID> /sapmnt/<SID>
rmdir /sapdb/programs
ln -s /import/programs /sapdb/programs
rmdir /sapdb/data
ln -s /import/data /sapdb/data
rmdir /var/spool/sql/ini
ln -s /import/ini /var/spool/sql/ini

4. Create an entry in the automounter map /etc/auto.import that maps the client /import directories to the server <nfsreloc>:/export directories. The variable <nfsreloc> is the virtual IP address of the NFS server. Example file entries:

programs -fstype=nfs,nfsvers=3,udp,nosymlink <nfsreloc>:/export/sapdb/programs
data -fstype=nfs,nfsvers=3,udp,nosymlink <nfsreloc>:/export/sapdb/data
<SID> -fstype=nfs,nfsvers=3,udp,nosymlink <nfsreloc>:/export/sapmnt/<SID>
ini -fstype=nfs,nfsvers=3,udp,nosymlink <nfsreloc>:/export/var/spool/sql/ini
trans -fstype=nfs,nfsvers=3,udp,nosymlink <nfsreloc>:/export/usr/sap/trans

It is important that the nosymlink option is specified. The nosymlink option does not create symbolic links to local directories if the NFS server file system is local. Instead it always mounts from the virtual IP address <nfsreloc>. Therefore a cluster node can be an NFS server and client at the same time, and the NFS server file systems can be dynamically relocated to the other cluster nodes.

5. After completing the automounter changes, restart the automounter with:

/etc/rc.d/autofs restart
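A quick way to verify the automounter setup is to trigger a lookup through one of the symbolic links and check that the mount comes from the virtual NFS address. This check is not part of the official procedure; it assumes that the package providing <nfsreloc> is up and serving the /export file systems:

ls /usr/sap/trans              # access through the symlink triggers the automount
mount | grep /import           # the /import entries should show <nfsreloc> as the NFS source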

Cluster Configuration

This section describes the cluster software configuration with the following topics:

Serviceguard Configuration
Serviceguard NFS Toolkit Configuration
Serviceguard Extension for SAP on Linux Configuration

Configuring SGeSAP/LX on Linux

The following sections contain information on configuring SGeSAP/LX (Serviceguard Extension for SAP). It is assumed that a basic Serviceguard cluster is up and running. As mentioned in an earlier section of this document, references to the variable ${SGCONF} can be replaced by the definition of the variable that is found in the file /etc/cmcluster.conf. The default values are:

For SUSE Linux - SGCONF=/opt/cmcluster/conf
For Redhat Linux - SGCONF=/usr/local/cmcluster/conf

The Serviceguard NFS toolkit also uses an installation directory. This directory is different for SUSE Linux and Redhat Linux, so any references to the variable ${SGROOT} can be replaced by the following definition:

For SUSE Linux - SGROOT=/opt/cmcluster
For Redhat Linux - SGROOT=/usr/local/cmcluster

SGeSAP/LX uses a staging directory during the installation. As the staging directory is different for SUSE Linux and Redhat Linux, references to the variable ${SAPSTAGE} can be replaced by the following definition:

For SUSE Linux - SAPSTAGE=/opt/cmcluster/sap
For Redhat Linux - SAPSTAGE=/usr/local/cmcluster/sap

Export the variables so they are available to the user environment:

export SGCONF
export SGROOT
export SAPSTAGE

The variables ${SGCONF} and ${SGROOT} can also be set with the following command:

. /etc/cmcluster.conf

Installation Step: IS400

Create a SGeSAP/LX package directory. Logon as root on the primary host. This directory will be used by all packages that belong to the SAP System with a specific SAP System ID <SID>.

mkdir -p ${SGCONF}/<SID>

Installation Step: IS410

In a previous installation step, the command:

rpm -Uhv sgesap-toolkit-<version>.product.<linux-os>.<platform>.rpm

has copied all relevant SGeSAP/LX files into the ${SAPSTAGE} staging area. The following files need to be copied from the ${SAPSTAGE} staging area to the SGeSAP/LX package directory:

cp ${SAPSTAGE}/sap.functions ${SGCONF}/sap.functions
cp ${SAPSTAGE}/SID/sap.config ${SGCONF}/<SID>/sap.config
cp ${SAPSTAGE}/SID/customer.functions ${SGCONF}/<SID>/customer.functions
cp ${SAPSTAGE}/SID/sapwas.sh ${SGCONF}/<SID>/sapwas.sh

Enter the package directory ${SGCONF}/<SID>. For each Serviceguard package that will be configured, create:

a Serviceguard configuration template (xxx.config)
a Serviceguard control script template (xxx.control.script)

For example, for a three-package SAP configuration with one database instance (db), one central instance (ci) and one dialog instance (d01):

cmmakepkg -p db<SID>.config
cmmakepkg -s db<SID>.control.script
cmmakepkg -p ci<SID>.config
cmmakepkg -s ci<SID>.control.script
cmmakepkg -p d01<SID>.config
cmmakepkg -s d01<SID>.control.script

NOTE: The above creates default Serviceguard packages, which in the first step are not related to any of the SGeSAP/LX components yet. Such a package can be started, stopped and relocated to other cluster nodes without having any impact on the start or stop of any of the above SAP instances (db), (ci) and (d01). The relationship to starting and stopping the SAP instances will be established in a later step, when the customer_defined_run_cmds and the customer_defined_halt_cmds functions in the xxx.control.script scripts are edited and replaced with the appropriate SGeSAP/LX commands for these SAP instances.

Configuring SGeSAP/LX on Linux

The Serviceguard configuration templates (xxx.config) must be edited. They are located in the directory ${SGCONF}/<SID>. Refer to the Managing Serviceguard User's Guide for general information about the file content. The <pkg_name> naming convention should be used for the PACKAGE_NAME parameter. In the case of SAP system C11, the package names for (db), (ci) and (d01) would be:

dbC11.config: PACKAGE_NAME dbC11
ciC11.config: PACKAGE_NAME ciC11
d01C11.config: PACKAGE_NAME d01C11

Optional Step: OS430

Specify NODE_NAME entries for all hosts on which the package should be able to run. If the package can run on any of the cluster nodes, a "*" can be entered:

NODE_NAME *

In the Serviceguard configuration file (xxx.config), specify the script that gets executed when running or halting the Serviceguard package. The script that gets executed will be the Serviceguard control script (xxx.control.script).

dbC11.config:
RUN_SCRIPT <SGCONF>/<SID>/dbC11.control.script
HALT_SCRIPT <SGCONF>/<SID>/dbC11.control.script

ciC11.config:
RUN_SCRIPT <SGCONF>/<SID>/ciC11.control.script
HALT_SCRIPT <SGCONF>/<SID>/ciC11.control.script

d01C11.config:
RUN_SCRIPT <SGCONF>/<SID>/d01C11.control.script
HALT_SCRIPT <SGCONF>/<SID>/d01C11.control.script

Optional Step: OS440

Serviceguard also allows the monitoring of resources and processes that belong to a Serviceguard package. The term used to describe this capability is a "Serviceguard monitoring service". The "Serviceguard monitoring service" name is defined in the Serviceguard configuration file (xxx.config), and the script to execute for monitoring a resource is specified in the Serviceguard control script (xxx.control.script). In the case of SGeSAP/LX, several "Serviceguard monitoring service" scripts are available:

sapms.mon - monitors the SAP Message Server
sapdisp.mon - monitors the SAP Dialog Instance processes
sapenq.mon - monitors the SAP Standalone Enqueue Service and the Enqueue Replication Service
saplc.mon - monitors the SAP liveCache instance

Monitoring scripts periodically check the availability and responsiveness of SAP components. If the monitor script recognizes a SAP component to be unavailable, Serviceguard will switch the package and try to restart the package on a different cluster node.

The monitoring service name gets defined in the xxx.config file. The recommended naming for the monitoring service is <pkg_type><SID>mon. This is an example for the (ci) instance, SID C11 and the ciC11.config file:

SERVICE_NAME ciC11mon
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300

NOTE: Only the Serviceguard "service" (that is, the monitoring service name SERVICE_NAME) gets specified in the Serviceguard configuration file (xxx.config). The script that gets executed for monitoring will be added in the Serviceguard control script (xxx.control.script).

The monitor itself will be paused if a debug file gets created for the package:

touch ${SGCONF}/<SID>/debug

The debug file can be used to allow manual SAP instance shutdowns, startups and testing without having the SGeSAP/LX monitor switch the package to another cluster node should it detect that the SAP instance is down. If the debug file is removed, the SGeSAP/LX monitor will restart its monitoring activities.

NOTE: The manual startup using the common startsap/stopsap scripts may not work correctly with Netweaver 2004s based Web Application Servers. In this case the instances need to be started by directly using the sapstart binary, passing the start profile as a parameter. For example:

sapstart pf=/sapmnt/<SID>/profile/START_<INAME><INR>_<RELOC_IP/hostname>

The service monitor will not be able to detect a stopsap command and will consider a manual instance shutdown to be a failure. The package will then fail over the instance to another node. The existence of the debug file can prevent that. If the debug file gets removed, the service monitor continues to run. Make sure that the packaged SAP instance is already running when removing the debug file, or an immediate failover will occur. As long as the debug file exists locally, a package shutdown will not try to stop the SAP instance or its database. A local restart of the package would only take care of the Serviceguard NFS portion. The instance would not be started with the package, and the monitor would be started in paused mode.

First, the monitoring scripts need to be copied from the staging directory ${SAPSTAGE} to the SGeSAP/LX package directory ${SGCONF}/<SID>:
cp ${SAPSTAGE}/SID/sapms.mon ${SGCONF}/<SID>/sapms.mon

Example entries in the control script of the (ci) package (<pkg_name>.control.script):
SERVICE_NAME[0]="ciC11mon"
SERVICE_CMD[0]="${SGCONF}/C11/sapms.mon"
SERVICE_RESTART[0]=""

Optional Step: OS460
It is recommended to set AUTOSTART_CMCLD=1 in ${SGCONF}/cmcluster.rc. This variable controls the automatic cluster start. If a cluster already exists, the node attempts to join it. If no cluster is running, the node attempts to form a cluster consisting of all configured nodes. Distribute the file to all cluster nodes.

Serviceguard Control Script Template - xxx.control.script

Installation Step: IS470
The xxx.control.script, located in directory ${SGCONF}/<SID>, is executed when "running" or "halting" a Serviceguard package. In the xxx.control.script the volume groups, logical volumes, the virtual IP addresses and subnets and, last but not least, the SGeSAP/LX scripts to run and halt SAP instances are added. Edit this file.
The following is an extract of two (db) db.control.script files: the first one for an Oracle configuration, the second one for a MaxDB configuration. The customer_defined_run_cmds and customer_defined_halt_cmds functions enable the SAP specific scripts.

The following is an extract of a (db) package based on Oracle for SAP system XI7 in directory ${SGCONF}/XI7/dbXI7.control.script:

# VOLUME GROUPS
# Specify which volume groups are used by this package. Uncomment VG[0]=""
# and fill in the name of your first volume group. You must begin with
# VG[0], and increment the list in sequence.
#
# For example, if this package uses your volume groups vg01 and vg02, enter:
#   VG[0]=vg01
#   VG[1]=vg02
#
VG[0]="vgC11_oracleC11"

# FILESYSTEMS
# The filesystems are defined as entries specifying the logical
# volume, the mount point, the file system type, the mount,
# umount and fsck options.
# Each filesystem will be fsck'd prior to being mounted.
# The filesystems will be mounted in the order specified during package
# startup and will be unmounted in reverse order during package
# shutdown. Ensure that volume groups referenced by the logical volume
# definitions below are included in volume group definitions.
# Specify the filesystems which are used by this package. Uncomment
# LV[0]=""; FS[0]=""; FS_TYPE[0]=""; FS_MOUNT_OPT[0]="";
# FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]="" and fill in
# the name of your first logical volume, filesystem, type, mount,
# umount and fsck options for the file system.
#
# Valid types for FS_TYPE are 'ext2' and 'reiserfs'.
#

LV[0]="/dev/vgC11_oracleC11/lvol0"; \
FS[0]="/oracle/XI7"; \
FS_TYPE[0]="ext3"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""

# IP ADDRESSES
# Specify the IP and Subnet address pairs which are used by this package.
# Uncomment IP[0]="" and SUBNET[0]="" and fill in the name of your first
# IP and subnet address. You must begin with IP[0] and SUBNET[0] and
# increment the list in sequence.
#
# For example, if this package uses an IP address and a subnet, enter:
#   IP[0]=...
#   SUBNET[0]=...   (netmask=...)
IP[0]=""
SUBNET[0]=""

# SERVICE NAMES and COMMANDS
# Specify the service name, command and restart parameters which are
# used by this package
SERVICE_NAME[0]="cic11ms"
SERVICE_CMD[0]="${SGCONF}/C11/sapms.mon"
SERVICE_RESTART[0]=""

# START OF CUSTOMER DEFINED FUNCTIONS
# This function is a placeholder for customer defined functions.
# You should define all actions you want to happen here, before the service
# is started. You can create as many functions as you need.
#
function customer_defined_run_cmds
{
    . /usr/local/cmcluster/conf/C11/sapwas.sh start
    test_return 51
}

# This function is a placeholder for customer defined functions.
# You should define all actions you want to happen here, before the service
# is halted.
#
function customer_defined_halt_cmds
{
    . /usr/local/cmcluster/conf/C11/sapwas.sh stop
    test_return 52
}
# END OF CUSTOMER DEFINED FUNCTIONS

The following is an extract of a (db) package based on MaxDB for SAP system LXM in directory ${SGCONF}/LXM/dbLXM.control.script:

# VOLUME GROUPS
VG[0]="vgsapdbLXM"

# FILESYSTEMS
LV[0]="/dev/vgsapdbLXM/lvsapdbLXM"; \
FS[0]="/sapdb/LXM"; \
FS_TYPE[0]="reiserfs"; \
FS_MOUNT_OPT[0]="-o rw"; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""

# IP ADDRESSES
IP[0]=""
SUBNET[0]=""

# START OF CUSTOMER DEFINED FUNCTIONS
function customer_defined_run_cmds
{
    . /usr/local/cmcluster/conf/LXM/sapwas.sh start
    test_return 51
}

function customer_defined_halt_cmds
{
    . /usr/local/cmcluster/conf/LXM/sapwas.sh stop
    test_return 52
}
# END OF CUSTOMER DEFINED FUNCTIONS

Installation Step: IS500
Distribute the Serviceguard package ASCII files to all cluster nodes. Copy all Serviceguard configuration files in directory ${SGCONF}/<SID> from this node into the same directory on all other cluster nodes. Also copy file ${SGCONF}/sap.functions from this host into the same directory on all cluster nodes.

Installation Step: IS510
Use cmapplyconf(1m) to add the newly configured package(s) to the cluster. cmapplyconf -P verifies the Serviceguard package configuration, updates the binary cmclconfig file and distributes the binary file to all cluster nodes.
cmapplyconf -v -P ${SGCONF}/<SID>/db.config

NOTE: If you plan to use the Serviceguard (sapnfs) package, specify this package in the last position of the cmapplyconf command. Later, if you force a shutdown of the whole cluster with cmhaltcl -f, this

package is the last one stopped. This prevents global directories from disappearing before all SAP components in the cluster have completed their shutdown.

Installation Step: IS515
Verify that the setup works correctly to this point. Use the following commands to run or halt a SGeSAP/LX package:
cmrunpkg dbc11
cmhaltpkg dbc11

NOTE: Normally the (sapnfs) Serviceguard package is required to be configured and running before any of the other SGeSAP/LX packages can be started. For testing it can help to create a file called ${SGCONF}/<SID>/debug on all nodes in the cluster. This enables you to start the Serviceguard packages without running the Serviceguard Extension for SAP on Linux specific steps of starting or stopping an SAP instance. Do not forget to remove all debug files after the tests.

Installation and Configuration Considerations

This section details the installation and configuration of the SGeSAP scripts for the Standalone Enqueue Server and Enqueue Replication Server components.

Prerequisites
A Serviceguard cluster with at least two nodes attached to the network. For documentation purposes, clunode1 and clunode2 represent the Serviceguard cluster nodes. Node clunode1 will run the Serviceguard package for the Standalone Enqueue Server. Node clunode2 runs the package for the Enqueue Replication Server. In the event of a failure of the Standalone Enqueue Server, its Serviceguard package will go through the following steps:
1. Follow and restart the Serviceguard package for the Standalone Enqueue Service on the node running the Enqueue Replication Service.
2. Reclaim the replicated SAP locks that the Enqueue Replication Service had collected.
3. Halt the Enqueue Replication Service.

Installation Step: IS516
Create Serviceguard templates and edit the files for both packages. In this example the Serviceguard package name for the Standalone Enqueue instance is ascssx1 and for the Enqueue Replication instance the name is ers12sx1.
cmmakepkg -p ascssx1.config
cmmakepkg -s ascssx1.control.script
cmmakepkg -p ers12sx1.config
cmmakepkg -s ers12sx1.control.script

vi ascssx1.config
PACKAGE_NAME ascssx1
NODE_NAME clunode1
NODE_NAME clunode2
RUN_SCRIPT  /opt/cmcluster/conf/SX1/ascssx1.control.script
HALT_SCRIPT /opt/cmcluster/conf/SX1/ascssx1.control.script
SERVICE_NAME ascsSX1_enqmon
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 60

vi ascssx1.control.script
VG[0]="vgusrsapSX1_02"
LV[0]="/dev/vgusrsapSX1_02/lvusrsapSX1_02"; \

FS[0]="/usr/sap/SX1/ASCS02"; \
FS_TYPE[0]="ext3"; \
FS_MOUNT_OPT[0]="-o rw"
IP[0]=""
SUBNET[0]=""
SERVICE_NAME[0]="ascsSX1_enqmon"
SERVICE_CMD[0]="/opt/cmcluster/conf/SX1/sapenq.mon monitor ascssx1"
SERVICE_RESTART[0]=""

function customer_defined_run_cmds
{
    . /opt/cmcluster/conf/SX1/sapwas.sh start
    test_return 51
}

function customer_defined_halt_cmds
{
    . /opt/cmcluster/conf/SX1/sapwas.sh stop
    test_return 52
}

vi ers12sx1.config
PACKAGE_NAME ers12sx1
NODE_NAME    clunode2
RUN_SCRIPT   /opt/cmcluster/conf/SX1/ers12sx1.control.script
HALT_SCRIPT  /opt/cmcluster/conf/SX1/ers12sx1.control.script

vi ers12sx1.control.script
VG[0]="vgusrsapSX1_12"
LV[0]="/dev/vgusrsapSX1_12/lvusrsapSX1_12"; \
FS[0]="/usr/sap/SX1/ERS12"; \
FS_TYPE[0]="ext3"; \
FS_MOUNT_OPT[0]="-o rw"
IP[0]=""
SUBNET[0]=""

function customer_defined_run_cmds
{
    . /opt/cmcluster/conf/SX1/sapwas.sh start
    test_return 51
}

function customer_defined_halt_cmds
{
    . /opt/cmcluster/conf/SX1/sapwas.sh stop
    test_return 52
}

Distribute the Serviceguard package files to all cluster nodes.
scp -pr <SGCONF>/SX1 clunode2:<SGCONF>/SX1
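Before applying the new packages, the configuration files can optionally be validated; a minimal sketch, assuming the files shown above reside in /opt/cmcluster/conf/SX1:

cd /opt/cmcluster/conf/SX1
# Validate both package configuration files without applying them
cmcheckconf -v -P ascssx1.config -P ers12sx1.config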

Start the packages with the following commands.
cmapplyconf -P ascssx1.config
cmrunpkg -n clunode1 ascssx1
cmmodpkg -e ascssx1
cmapplyconf -P ers12sx1.config
cmrunpkg -n clunode2 ers12sx1

Repeat the above steps for the Java Standalone Enqueue and Java Enqueue Replication packages scssx1 and ers13sx1.

Serviceguard NFS Toolkit Configuration

The cross-mounted file systems need to be added to a package that provides Highly Available NFS services. This is usually the (dbci), the (db), or the standalone (sapnfs) package. The most flexibility is provided with the standalone Serviceguard NFS (sapnfs) package. Logon as root to the primary host to perform the following tasks.

Installation Step: IS520
If it is intended to use a standalone Serviceguard NFS package, create a directory for the sapnfs Serviceguard package. To be somewhat consistent with the Serviceguard SAP SID package names, the uppercase name SAPNFS was chosen.
mkdir -p ${SGCONF}/SAPNFS
cp ${SGROOT}/nfstoolkit/hanfs.sh ${SGCONF}/SAPNFS/hanfs.sh
cp ${SGROOT}/nfstoolkit/toolkit.sh ${SGCONF}/SAPNFS/toolkit.sh
cp ${SGROOT}/nfstoolkit/hanfs.conf ${SGCONF}/SAPNFS/hanfs.conf
cp ${SGROOT}/nfstoolkit/nfs.mon ${SGCONF}/SAPNFS/nfs.mon

Installation Step: IS540
In this step the NFS server directories that will be NFS exported are configured. Edit the hanfs.sh file in the ${SGCONF}/SAPNFS package directory to configure the Serviceguard NFS service. Add all cross-mounted directories that have been identified in section "Planning the LVM layout for Clustered SAP Environments" to be of type "SHARED NFS". Example entries are:
XFS[0]="-o rw *:/export/usr/sap/trans"
XFS[1]="-o rw *:/export/sapmnt/<SID>"
A MaxDB example would also have the following entries added:
XFS[2]="-o rw *:/export/sapdb/programs"
XFS[3]="-o rw *:/export/sapdb/data"
XFS[4]="-o rw *:/export/var/spool/sql/ini"

Installation Step: IS550
A Serviceguard NFS monitor can be configured in section NFS MONITOR in hanfs.sh. The naming convention for the service should be sapnfs. An example:
NFS_SERVICE_NAME[0]="sapnfs"
NFS_SERVICE_CMD[0]="${SGCONF}/SAPNFS/nfs.mon"
NFS_SERVICE_RESTART[0]="-r 0"
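Once the NFS package is running, the exports can be checked from any client node; a small sketch, with <relocnfs> as a placeholder for the relocatable address of the NFS package:

# List the directories exported by the HA NFS package
showmount -e <relocnfs>

# Optionally test-mount one of the exports
mount -t nfs <relocnfs>:/export/sapmnt/<SID> /mnt
umount /mnt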

NOTE: It sometimes can be convenient for testing to allow root write access via NFS. This can be enabled with the export flags -o rw,no_root_squash ... Use with caution.

Serviceguard Extension for SAP on Linux Configuration - sap.config

This section deals with the configuration of the SAP specifics of the Serviceguard packages in the SGeSAP/LX SAP specific configuration file ${SGCONF}/<SID>/sap.config. The file is structured into four main sections that serve different purposes:
Section 1: Specification of the Packaged SAP Components
Section 2: Configuration of Application Server Handling
Section 3: Optional Parameters and Customizable Functions
Section 4: Global Defaults

Specification of the Packaged SAP Components

For each type of potentially mission-critical SAP software component there exists a set of configuration parameters in SECTION 1 of the ${SGCONF}/<SID>/sap.config file. The information delivered here is specified exactly once and it configures all packaged components of a <SID> within the cluster. This is a central place of configuration, even if the intention is to divide the components into several different packages. The mapping of components to packages will be done automatically. There is one subsection per package type:
db: a database utilized by a SAP WAS-based application.
ci: a SAP Instance that has SPOF services defined. This might be a Central Instance with integrated Enqueue and Message Service (DVEBMGS). It might also be a standalone Enqueue and Message Service as part of a separate ABAP System Central Service instance (ASCS). A Replicated Enqueue setup can be implemented by configuring at least one additional package in combination with this CI.
arep: an ASCS Replicated Enqueue instance.
rep: an SCS Replicated Enqueue instance.
d: one or more virtualized SAP Application Instances that are considered mission-critical due to special use cases.
jci: a SAP JAVA System Central Services Instance that provides Enqueue and Message Service to SAP J2EE engines that might or might not be part of SAP Web Application Servers (SCS).

NOTE: It is not allowed to specify a [j]ci and an [a]rep component as part of the same package. Except for Dialog Instance components of type d, each component can be specified once at most. Apart from these exceptions, any subset of the SAP mission critical components can be maintained in sap.config to be part of one or more packages.

It is important to distinguish between the two components: the Standalone Enqueue Service and the Enqueue Replication Service. In SGeSAP/LX context the ABAP Standalone Enqueue Service is configured as part of the (ci) component, the JAVA Standalone Enqueue Service is configured as part of the (jci) component. In SGeSAP/LX context the (rep) component refers to the JAVA Enqueue Replication Service and the (arep) component refers to the ABAP Enqueue Replication Service.

Logon to the primary host as root and open the file ${SGCONF}/<SID>/sap.config with a text editor.

Installation Step: IS600
Specify the relocatable IP address where the Serviceguard NFS shared file systems can be reached. The format is IPv4: nnn.nnn.nnn.nnn. This value specifies the default for commonly shared file systems. An override is possible for the MaxDB ini-directory, the global transport directory and the central executable directory individually by specifying values for SQLRELOC, TRANSRELOC, CTEXERELOC.

84 NOTE: All xxxreloc parameters listed above have to use the same syntax as the IP[]-array in the package control file. When using the automounter the NFSRELOC does not need to be specified. Subsection for the DB component: OS610 Serviceguard Extension for SAP on Linux performs activities specific to the database you use. Specify the underlying database vendor using the DB parameter. Possible options are: ORACLE and MaxDB. Refer to the Serviceguard Extension for SAP on Linux release notes to see whether the required vendor is available for the SAP application component. Example: DB=ORACLE Specify the relocatible IP address of the database instance. Be sure to use exactly the same syntax as configured in the IP[]-array in the package control file. Example: DBRELOC= In the subsection for the DB component there is an optional paragraph for Oracle and MaxDB database parameters. Depending on your need for special Serviceguard setups and configurations have a look at those parameters and their description. The variable db<sid>_sgesap_comp links the Serviceguard package name (=db<sid>) to SGeSAP component type "DB". For example variable dbsx1_sgesap_comp=db links Serviceguard package dbsx1 with SGeSAP component type "DB". Example: db<sid>_sgesap_comp="db" Subsection for the CI component: OS620 Provide information about the Central Instance if it will part of Serviceguard Extension for SAP on Linux package. Set the parameter CINAME to the SAP instance name where ABAP Enqueue and Message Service are located. This usually is the Central Instance or the ABAP System Central Service. The name is either DVEBMGS or ASCS. Note that the instance ID as defined by CINR is not part of CINAME. Set the parameter CINR to the Instance ID of your Central Instance where ABAP Enqueue and Message Service are located. This is most likely a SAP Central Instance or a ABAP System Central Service Instance. The relocatible IP address of the instance should be specified in CIRELOC. The variable ascs<sid>_sgesap_comp links the Serviceguard package name (=ascs<sid>) to SGeSAP component type "CI". For example variable ascssx1_sgesap_comp=ci links Serviceguard package ascssx1 with SGeSAP component type "CI" The parameter CIENQUEUE_PROC_NAME can be used to specify the process name of the Standalone Enqueue Service to be monitored. An example: CIENQUEUE_PROC_NAME = "en.sapsx1_ascs02". Example: CINAME=DVEBMGS CINR=00 CIRELOC= ascs<sid>_sgesap_comp=ci In the subsection for the CI component there is a paragraph for optional parameters. Depending on your need for special Serviceguard setups and configurations have a look at those parameters and their description. Subsection for the AREP component: OS630 Serviceguard Extension for SAP on Linux supports SAP stand-alone Enqueue Service with Enqueue Replication. It's important to distinguish between the two components: the stand-alone Enqueue Service and the replication. The stand-alone Enqueue Service is part of the (ci) or (jci) component. The (arep) component refers to the replication unit for protecting ABAP System Services. 84 Step-by-Step Cluster Conversion

85 Specify the instance name in AREPNAME, the instance ID number in AREPNR and the relocatible IP address of the SAP instance for the replication service in AREPRELOC. The variable ers<nr><sid>_sgesap_comp links the Serviceguard package name (=ers<nr><sid>) to SGeSAP component type "AREP". For example variable ers12sx1_sgesap_comp=arep links Serviceguard package " ers12sx1" with SGeSAP component type "AREP" Example: AREPNAME=AREP AREPNR=00 AREPRELOC= ers<nr><sid>_sgesap_comp=arep Subsection for the D component: OS640 A set of SAP ABAP Application Servers and ABAP Dialog Instance can be configured in each Serviceguard package. Set the relocatible IP address of the dialog instance in DRELOC, the dialog instance name in DNAME and the instance ID number in DNR. The variable d<nr><sid>_sgesap_comp links the Serviceguard package name (=d<nr><sid>) to SGeSAP component type "D". For example variable d00sx1_sgesap_comp=d links Serviceguard package d00sx1 with SGeSAP component type "D" For example: DRELOC[0]= DNAME[0]=D DNR[0]=53 DRELOC[1]= DNAME[1]=D DNR[1]=54 d<nr><sid>_sgesap_comp=d Subsection for the JCI component: OS660 SAP J2EE Engine Enqueue and Message Services are isolated in the System Central Service Instance. Specify the instance name in JCINAME, the instance ID number in JCINR, the relocatible IP address in JCIRELOC and the package name to SGeSAP component mapping scs<sid>_sgesap_comp, as in the following example: JCINAME=SCS JCINR=01 JCIRELOC= scs<sid>_sgesap_comp=jci Subsection for the REP component: OS665 Specify the instance name in REPNAME, the instance ID number in REPNR and the relocatible IP address of the SAP instance for the replication service in REPRELOC and the package name to SGeSAP component mapping ers<nr><sid>_sgesap_comp, for example: REPNAME=REP REPNR=00 REPRELOC= ers<nr><sid>_sgesap_comp=rep Configuration of Application Server Handling In more complicated setups, in directory ${SGCONF}/<SID> there can be several XXX.config and XXX.conf files for each package in addition to the global sap.config file. Serviceguard Extension for SAP on Linux Configuration - sap.config 85

86 The file sap.config contains SGeSAP configuration data for all SGeSAP packages of a <SID> It could for example contain data to start additional remote SAP dialog instances which are not part of the cluster. Now every time any SGeSAP package (DB, CI,D01..) starts it scans through the sap.config file, finds the entry for starting a remote SAP dialog instances and starts this. However, if the goal is for only the CI Instance to start additional remote SAP dialog instances this data can be added to a private sapci<sid>.config and therefore only gets read and started when the CI instance is started. During startup of a package <pkg_name>, Serviceguard Extension for SAP on Linux checks the existence of SGeSAP global and private configuration files in the following order: If a sap.conf file exists, it is effective for compatibility reasons. If a sap<pkg_name>.conf file exists, it will merge and overrule previous files and is effective. If a sap.config file exists, it will merge and overrule previous files and is effective. If a sap<pkg_name>.config file exists, it will merge and overrule previous files and is effective. NOTE: When configuring package specific configuration files sap<pkg_name>.config you will usually isolate section 1 in a global sap.config file. All other sections should then be applied in the package specific configuration files sap<pkg_name>.config, excluding the first section. For example SAP System C11 containing a Central Instance package with Application Server handling would then have the following Serviceguard Extension for SAP on Linux configuration files: sap.config - Section 1,3,4 configured sapcic11.config - Section 2 configured Optional Step: OS670 The following arrays are used to configure special treatment of Application Servers during package start, halt or failover. These Application Servers do not have SAP adoptive computer enabled nor are they part of a Serviceguard package. However, an event can be triggered to start, stop or restart them together with the package. If any triggered attempt fails, it doesn't automatically cause failure of the ongoing package operation. The attempts are considered to be non-critical. In certain setups, it is necessary to free up resources on the failover node to allow the failover to succeed. Often, this includes stopping less important SAP Systems, perhaps consolidation or development environments. If any of these instances is a Central Instance, it might be that there are additional Application Servers belonging to it. Not all of them are necessarily running locally on the failover node. They can optionally be stopped before the Central Instance gets shut down. This mechanism replaces the deprecated SERVER_CONSOLIDATION variable and the RM*arrays of earlier SGeSAP versions. In the ASSID[*]-array specify the SAP System IDs of the instances that should be treated with the package. Making this a configurable option allows to specify instances that belong to the clustered SAP components. It also allows specification of different SIDs for less critical SAP applications. In the ASHOST[*]-array, refer to the hosts on which the instances reside - this can be either inside or outside of the cluster. The ASNAME[*]-array holds the instance names. These names are built by the abbreviations of the services that are offered by the Application Server. The name of a Dialog instance usually is D, the name of a Central Instance often is DVEBMGS. The ASNR[*]-array contains the instance IDs. 
If the corresponding ASHOST-entry specifies a host that is part of the cluster, be sure to provide an ID that is different from the IDs used by any packaged instance. You should also make sure that there is no other SAP instance on the same host is using that ID. The ASTREAT[*]-array defines the way the instance is treated if the status of the package changes. There are five different values that may be configured in any combination: START_WITH_PKG, 86 Step-by-Step Cluster Conversion

87 STOP_WITH_PKG, RESTART_DURING_FAILOVER, STOP_IF_LOCAL_AFTER_FAILOVER, STOP_DEPENDENT_INSTANCES. ASTREAT[*]=0means that the Application Server is not affected by any changes that happens to the package status. This value makes sense to (temporarily) unconfigure the instance. ${START_WITH_PKG}: Add 1 to ASTREAT[*] if the Application Server should automatically be started during startup of the package. ${STOP_WITH_PKG}: Add 2 to ASTREAT[*] if the Application Server should automatically be stopped during halt of the package. ${RESTART_DURING_FAILOVER}: Add 4 to ASTREAT[*] if the Application Server should automatically be restarted if a failover of the package occurs. If the restart option is not used the SAP ABAP Engine has to be configure to use DB-RECONNECT. ${STOP_IF_LOCAL_AFTER_FAILOVER}: Add 8 to ASTREAT[*] if the Application Server should automatically be shut down if the package fails over to its node. This treatment policy will overrule the ${RESTART_DURING_FAILOVER} value if applicable. ${STOP_DEPENDENT_INSTANCES}: Add 16 to ASTREAT[*] if the instance is a Central Instance and all specified Dialog Instances with the same ASSID[] should be stopped prior to it. This treatment policy overrules the treatment policies specified for Dialog Instances. For the ASPLATFORM[*]-array the platform for an Application Server that is controlled by the package is specified. Supported values are: "HP-UX": standard Application server running on HP-UX "LINUX": standard Application server running on LINUX "WINDOWS": Application Server handling is not standardized as there is no standard way to open a remote DOS/Windows shell that starts SAP Application Servers on the Windows platform. sap.functions provides demo functions using the ATAMAN (TM) TCP Remote Logon syntax. They should be replaced by implementations in customer.functions. "SG-PACKAGE": The Application Server runs as Serviceguard package within the same cluster. In this case ASHOST might have different values on the different package nodes. If this value is used, you also have to specify the Dialog Instance packagename in ASPKGNAME[*] The common naming conventions for Dialog Instance packages are: d<asnr><sid> (new) app<sid><asnr> (deprecated) ${START_WITH_PACKAGE}, ${STOP_WITH_PACKAGE} and {RESTART_DURING_FAILOVER} only make sense if ASSID[]=${SAPSYSTEMNAME}, i.e. these instances need to belong to the clustered SAP component. ${STOP_IF_LOCAL_AFTER_FAILOVER} and {STOP_DEPENDENT_INSTANCES} can also be configured for different SAP components. The following table gives an overview of ASTREAT[*] values and their combination. The ${START_WITH_PKG}, ${STOP_WITH_PKG}, ${RESTART_DURING_FAILOVER}, ${STOP_IF_LOCAL_AFTER_FAILOVER}, ${STOP_DEPENDENT_INSTANCES} columns represent binary values and decimal values in parenthesis that are accumulated in the ASTREAT column. Serviceguard Extension for SAP on Linux Configuration - sap.config 87

Table 3-1 Overview of reasonable ASTREAT values
ASTREAT 1-7:  any combination of START (1), STOP (2) and RESTART (4).
              Should only be configured for AS that belong to the same SID.
ASTREAT 8:    STOP_LOCAL (8) alone.
              Can be configured for AS belonging to the same SID or AS that are part of another SID.
ASTREAT 9-15: STOP_LOCAL (8) combined with START (1), STOP (2) and/or RESTART (4).
              Should only be configured for AS that belong to the same SID.
ASTREAT 24:   STOP_DEP (16) plus STOP_LOCAL (8).
              Should only be configured for Instances that have a different SID and have dependencies to AS.

START=${START_WITH_PKG}
STOP=${STOP_WITH_PKG}
RESTART=${RESTART_DURING_FAILOVER}
STOP_LOCAL=${STOP_IF_LOCAL_AFTER_FAILOVER}
STOP_DEP=${STOP_DEPENDENT_INSTANCES}

Example 1: The primary node is also running a non-critical Dialog Instance with instance ID 01. It should be stopped and started with the package. In case of a failover a restart attempt should be made. There is a second instance outside of the cluster that should be treated the same.
ASSID[0]=SG1; ASHOST[0]=node1; ASNAME[0]=D; ASNR[0]=01; ASTREAT[0]=7; ASPLATFORM[0]="HP-UX"
ASSID[1]=SG1; ASHOST[1]=extern1; ASNAME[1]=D; ASNR[1]=02; ASTREAT[1]=7; ASPLATFORM[1]="HP-UX"

Example 2: The failover node is running a Central Consolidation System QAS. It shall be stopped in case of a failover to this node.
ASSID[0]=QAS; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=10; ASTREAT[0]=8; ASPLATFORM[0]="HP-UX"
The failover node is running the Central Instance of Consolidation System QAS. There are two additional Application Servers configured for QAS, one inside of the cluster and one outside of the cluster. The complete QAS shall be stopped in case of a failover. In this case, ${STOP_DEPENDENT_INSTANCES} is specified for the Central Instance of QAS.

89 ASSID[0]=QAS; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=10; ASTREAT[0]=24; ASPLATFORM[0]="HP-UX" ASSID[1]=QAS; ASHOST[1]=node3; ASNAME[1]=D; ASNR[1]=11; ASTREAT[1]=8; ASPLATFORM[1]="HP-UX" ASSID[2]=QAS; ASHOST[2]=extern1; ASNAME[2]=D; ASNR[2]=12; ASTREAT[2]=8; ASPLATFORM[2]="HP-UX" The Central Instance is treated the same way as any of the additional packaged or unpackaged instances. Use the ASTREAT[*]-array to configure the treatment of a Central Instance. For Example, if a Central Instance should be restarted as soon as a DB-package fails over specify the following in sapdb<sid>.config: ASSID[0]=SG1; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=00; ASTREAT[0]=${RESTART_DURING_FAILOVER}; ASPLATFORM[0]="SG-PACKAGE"; ASPKGNAME="ciSG1" If you don't restart the SAP Instances during database failover, make sure to turn on the DB-RECONNECT feature within their profile. NOTE: It is not possible to configure a JAVA-only instance in the AS*array as a dependent component. Installation Step: IS680 The REM_COMM value defines the method to be used to remotely execute commands for Application Server handling. It can be set to ssh to provide secure encrypted communications between untrusted hosts over an insecure network. Information on how to set up ssh for each node refer to Section Cluster Node Configuration. Default value is remsh. Example: REM_COMM=remsh Installation Step: IS690 Use the AS_PSTART variable to specify the startup procedure of the configured Application Servers. The default value here is AS_PSTART=1 for parallel startup and AS_PSTART=0 for sequential startup. Be aware that setting AS_PSTART=0 will slow down your total failover time. Example: AS_PSTART=1 Optional Step: OS700 If your setup consists of application servers that are significantly slower than the Central Instance host, it is possible that the Central Instance shuts down before application server shutdown is completed. This can lead to unsafe shutdowns and Instance crash. To be safe, specify one of the following: WAIT_OWN_AS=1 the shutdown of all application servers takes place in parallel, but the scripts do not continue before all of these shutdown processes have come to an end. WAIT_OWN_AS=2 if the package should also wait for all application servers to come up successfully. You have to use this value if you want to prevent the integration from temporarily opening a new process group for each application server during startup. WAIT_OWN_AS=0 can significantly speed up the package start and stop, especially if Windows NT application servers are used. Use this value only if you have carefully tested and verified that timing issues will not occur. Optional Step: OS710 Specify SAPOSCOL_STOP=1 if saposcol should be stopped together with each instance that is stopped. Serviceguard Extension for SAP on Linux Configuration - sap.config 89

90 The collector will only be stopped if there is no instance of an SAP system running on the host. Specify SAPOSCOL_START=1 to start the saposcol even with packages that don't use the implicit startup mechanism that comes with SAP instance startup, e.g. database-only packages or packages that only have a SCS, ASCS or AREP instance. Optional Step: OS720 If several packages start on a single node after a failover, it is likely that some packages start up faster than others on which they might depend. To allow synchronization in this case, there is a loop implemented that polls the missing resources regularly. After the first poll, the script will wait 5 seconds before initiating the next polling attempt. The polling interval is doubled after each attempt with an upper limit of 30 seconds. Polling continues until the resource becomes available or up to a maximum of DELAY_INTERVALS polling attempts. Optional Step: OS730 On Linux, a shared memory shortage can occur if you install more than one instance on a host. Control the handling of resources. Prior to any instance startup the Serviceguard Extension for SAP on Linux tries to free up unused or unimportant resources to make the startup more likely to succeed. A database package only frees up database related resources, a Central Instance package only removes IPCs belonging to SAP administrators. The following list summarizes how the behavior of Serviceguard Extension for SAP on Linux is affected by changing the CLEANUP_POLICY parameter: lazy - no action, no cleanup of resources normal - removes unused resources belonging to the own system strict - uses Linux commands to free up all semaphores and shared memory segments that belong to any SAP Instance of any SAP system on the host if the Instance is to be started soon. It uses Linux to free up all semaphores and shared memory segments that belong to any database if the SAP database is to be started soon. NOTE: Do not use the strict policy unless it is critical that you do. Be aware that the strict option can crash running instances of different SAP systems on the backup host. Use this value only if you have a productive system that is much more important than any other SAP system you have. In this case a switchover of the productive system is more robust, but additional SAP systems will crash. You can also use strict policy, if your SAP system is the only one running at the site and you are low on memory. A strict policy frees up more of its own shared memory segments than the normal policy does. For the cleanup of resources cleanipc is used which is an SAP tool to free up the IPC resources of specific SAP Instances. It is used to free up resources of an Instance that is to be started soon on Linux. This prevents shared memory issues caused by the remains of a prior crash of the Instance. Accordingly, an obsolete ORACLE SGA is also removed if a database crash occurred. Optional Parameters and Customizable Functions In ${SGCONF}/<SID> there is a file called customer.functions that provides a set of predefined function hooks. They allow the specification of individual startup or runtime steps at certain phases of the package script execution. Any customer specific changes or customer additional scripts must be added in customer.functions and not in sap.functions. Therefore it is not necessary and also not supported to change the sap.functions. 
In the following there is a summary of the function hooks in chronological order:

Table 3-2 Optional Parameters and Customizable Functions List
start_addons_predb: additional startup steps on the database host after all low level resources are made available and the system has been cleaned up for db start, before start of the database instance
start_addons_postdb: additional startup steps on the database host after start of the database instance
start_addons_preci: additional startup steps on the Central Instance host after start of the database instance

start_addons_postci: additional startup steps on the Central Instance host after start of the Central Instance, before start of any Application Server Instance
start_addons_postciapp: additional startup steps that are performed on the Central Instance host after startup of all Application Server Instances

Equally, there are hooks for the stop procedure of packages:
stop_addons_preciapp: usually this function contains actions that relate to what has been added to start_addons_postciapp
stop_addons_preci: relates to start_addons_postci
stop_addons_postci: relates to start_addons_preci
stop_addons_predb: relates to start_addons_postdb
stop_addons_postdb: relates to start_addons_predb

The customer.functions template that is delivered with Serviceguard Extension for SAP on Linux provides several examples within the hook functions. They can be activated via additional parameters as described in the following. Stubs for JAVA based components work in a similar fashion.

Optional Step: OS740
SAPROUTER is an example for an additional program that always starts on the Central Instance host. The array SAPROUTER_START[*] should be set to 1 for each saprouter that should be started with the CI package. Make sure that multiple saprouters do not share the same SAPROUTER_PORT. The SAPROUTER_HOST[*]-array specifies where the saprouter should be started. These parameters are mandatory. An optional SAPROUTER_STRING may be set that will be passed to the saprouter, for example to set the path of the saprouttab file to reside on a shared volume.
Example:
SAPROUTER_START[0]=1
SAPROUTER_PORT[0]="-S3299"
SAPROUTER_HOST[0]=hpcc006
SAPROUTER_STRING[0]="-W -R /sapmnt/SG1/profile/saprouttab3299 -T /sapmnt/SG1/profile/dev_rout3299"
The saprouter(s) get(s) started in the script customer.functions (start_addons_postciapp) and get(s) halted sequentially during package shutdown (stop_addons_preciapp).

Optional Step: OS750
Specify RFCADAPTER_START=1 to automatically start the RFC adapter component, e.g. for some early Exchange Infrastructure versions. Make sure that the JVM executables can be reached via the path of SIDADM.
Example:
RFCADAPTER_START=1
RFCADAPTER_CMD="run_adapter.sh"

Optional Step: OS760
Specify SAPCCMSR_START=1 if there should be a CCMS agent started on the DB host automatically. You should also specify a path to the profile that resides on a shared volume.
Example:
SAPCCMSR_START=1
SAPCCMSR_CMD=sapccmsr
SAPCCMSR_PFL="/usr/sap/ccms/${SAPSYSTEMNAME}_${CINR}/sapccmsr/${SAPSYSTEMNAME}.pfl"
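The hooks themselves are plain shell functions inside ${SGCONF}/<SID>/customer.functions. The following is only a sketch of what an activated hook could look like; the log file and helper script are hypothetical and not part of the SGeSAP/LX delivery, and ${SAPSYSTEMNAME} is assumed to be set in the package environment, as in the SAPCCMSR_PFL example above:

# Sketch of hook functions in customer.functions (illustrative only)
function start_addons_postciapp
{
    # Record that the Central Instance and all configured Application
    # Servers handled by this package have been started
    echo "$(date): CI and AS startup of ${SAPSYSTEMNAME} complete" >> /var/log/sgesap_addons.log

    # Hypothetical site-specific helper, e.g. to notify operators
    [ -x /usr/local/bin/notify_operators.sh ] && \
        /usr/local/bin/notify_operators.sh "${SAPSYSTEMNAME} started"
}

function stop_addons_preciapp
{
    # Counterpart executed before the Central Instance is stopped
    echo "$(date): stopping CI and AS of ${SAPSYSTEMNAME}" >> /var/log/sgesap_addons.log
}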

Global Defaults

The fourth section of sap.config is rarely needed. It mainly provides various variables that allow overriding commonly used default parameters.

Optional Step: OS770
If there is a special demand to use values different from the default, it is possible to redefine some global parameters. Depending on your need for special high availability setups and configurations, have a look at those parameters and their description in the SAP specific configuration file sap.config.

Optional Step for Netweaver 2004s: NW04S780
This installation step only applies if Netweaver 2004s with SAP WAS 7.0 is installed. SAP introduced a control framework with Netweaver 2004s and SAP WAS 7.0 that enables the management of SAP instances from third party products using a defined application interface. This control framework is implemented using a daemon process that is started either by:
the operating system during boot time, or
the SAP startup script when the instance is started.
During the installation of a SAP Netweaver 2004s Web AS, SAPinst edits the file /etc/sysconfig/sapr3 and adds a line for each sapstartsrv instance. During boot time an init script is executed, located in:
/etc/init.d/sapinit referenced by /etc/rc3.d/S<###>sapinit
The sapinit script reads the content of file /etc/sysconfig/sapr3 and starts a sapstartsrv for each instance during OS boot time. This can lead to error output in the boot messages logfile, as the file systems on exclusive shared storage are not available at that time. If the use of the SAP control framework is not required, then remove the reference link to the sapinit script. Furthermore, any running sapstartsrv processes can be killed from the process list. For example:
# rm /etc/rc3.d/S<###>sapinit
# ps -ef | grep sapstartsrv
# kill <PID of sapstartsrv>
To make use of the sapstartsrv service, configure the SAPSTARTSRV_START and SAPSTARTSRV_STOP values. The sapstartsrv daemon is started and stopped accordingly. For example:
SAPSTARTSRV_START=1
SAPSTARTSRV_STOP=1

Optional Step for sap.config: OS785
The variables JMSSERV_BASE and MSSERV_BASE can be used to set the base Message Server port. The default value for the ABAP base Message Server port is 3600 and for the JAVA base Message Server port is

Optional Step for sap.config: OS790
The variable EXEDIR can be changed to point to the SAP runU directory if a unicode installation is being used.
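Putting Section 1 together, the following is a condensed sketch of how the component entries for a two-package C11 system (db and ci packages named dbc11 and cic11, as in the earlier examples) could look in ${SGCONF}/C11/sap.config; the relocatable addresses are placeholders and all values are illustrative assumptions rather than defaults:

# SECTION 1 - packaged SAP components of C11 (illustrative sketch)
NFSRELOC=<relocnfs>        # relocatable IP of the HA NFS package
DB=ORACLE                  # database vendor, see OS610
DBRELOC=<relocdb>          # relocatable IP of the database instance
dbc11_sgesap_comp="db"     # maps package dbc11 to component type DB

CINAME=DVEBMGS             # instance name carrying Enqueue and Message Service
CINR=00                    # instance ID of the Central Instance
CIRELOC=<relocci>          # relocatable IP of the Central Instance
cic11_sgesap_comp="ci"     # maps package cic11 to component type CI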

Database Configuration

This section deals with additional database specific installation steps and contains the following:
Additional Steps for Oracle
Additional Steps for MaxDB

Additional Steps for Oracle

The Oracle RDBMS includes a two-phase instance and crash recovery mechanism that enables a faster and predictable recovery time after a crash. The instance and crash recovery is initiated automatically and consists of two phases:
Roll-forward phase: Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The following parameters can be used to tune the roll-forward phase:
The parameter RECOVERY_PARALLELISM controls the number of concurrent recovery processes.
The parameter FAST_START_IO_TARGET controls the time a crash / instance recovery may take. Use this parameter to make crash / instance recovery predictable.
Roll-back phase: Oracle applies information in the rollback segments to undo changes made by uncommitted transactions to the data blocks. The following parameters can be used to tune the roll-back phase:
Fast-Start On-Demand Rollback: with this feature Oracle automatically allows new transactions to begin immediately after the roll-forward phase of recovery completes. This means that the database will be available again right after the completion of phase one (roll-forward), and that there will be no long waits until long running transactions are rolled back.
Fast-Start Parallel Rollback: configure the FAST_START_PARALLEL_ROLLBACK parameter to roll back a set of transactions in parallel. This parameter is similar to the RECOVERY_PARALLELISM parameter for the roll-forward phase.
All these parameters can be used to tune the duration of instance / crash recovery. More details on these useful High-Availability features can be found both in the Oracle8i and Oracle9i documentation: Oracle8i Designing and Tuning for Performance (Chapter 24: Tuning Instance Recovery Performance) and Oracle9i Database Performance Tuning Guide and Reference Release 2 (9.2).
The following steps have to be performed in order to adjust the Oracle RDBMS settings to the High Availability configuration. Logon as root to the primary host of the database where the package is running in debug mode.

Oracle Database Step: OR850
Perform the following step as <sid>adm. To ensure that a database that crashed during an online backup starts correctly after the crash, all data files that were in begin backup state need to be altered with an end backup statement. Therefore, /sapmnt/<SID>/exe/startdb needs to be adjusted accordingly. Insert / change the following code within the /sapmnt/<SID>/exe/startdb file. The sample code can be found in the file ${SAPSTAGE}/SID/startdb.sql.

#
# Startup the database without changing the ARCHIVELOG state
#
echo "connect internal;" > $SRVMGRDBA_CMD_FILE
echo "startup;" >> $SRVMGRDBA_CMD_FILE
echo "exit;" >> $SRVMGRDBA_CMD_FILE
eval $SRVMGRDBA command=@$SRVMGRDBA_CMD_FILE >> $LOG 2>&1

#
# Startup the database without changing the ARCHIVELOG state
# alter datafile 'end backup' when instance crashed during

# backup
#
echo "connect internal;" > $SRVMGRDBA_CMD_FILE
echo "startup mount;" >> $SRVMGRDBA_CMD_FILE
echo "spool endbackup.log" >> $SRVMGRDBA_CMD_FILE
echo "select 'alter database datafile ''' || f.name || ''' end backup;'" >> $SRVMGRDBA_CMD_FILE
echo "from v\$datafile f, v\$backup b" >> $SRVMGRDBA_CMD_FILE
echo "where b.file# = f.file# and b.status = 'ACTIVE'" >> $SRVMGRDBA_CMD_FILE
echo "/" >> $SRVMGRDBA_CMD_FILE
echo "spool off" >> $SRVMGRDBA_CMD_FILE
echo "!grep '^alter' endbackup.log >endbackup.sql" >> $SRVMGRDBA_CMD_FILE
echo "@endbackup.sql" >> $SRVMGRDBA_CMD_FILE
echo "!rm endbackup.*" >> $SRVMGRDBA_CMD_FILE
echo "alter database open;" >> $SRVMGRDBA_CMD_FILE
echo "exit;" >> $SRVMGRDBA_CMD_FILE
eval $SRVMGRDBA command=@$SRVMGRDBA_CMD_FILE >> $LOG 2>&1

Oracle Database Step: OR860
Configure the listener to listen on the relocatable name of the database package. Perform the following steps as ora<sid>. To do this, change all references from <local> to the relocatable name <relocdb> in the files on the shared volume group. Be careful if these files were customized after the SAP installation. For example:
$ORACLE_HOME/network/admin/listener.ora
$ORACLE_HOME/network/admin/tnsnames.ora
When using Oracle 10g:
/sapmnt/<SID>/profile/oracle/listener.ora
/sapmnt/<SID>/profile/oracle/tnsnames.ora

Oracle Database Step: OR870
Copy $ORACLE_HOME/network/admin/tnsnames.ora to all additional application server hosts. Be careful if these files were customized after the SAP installation.

Oracle Database Step: OR880
Be sure to configure and install the required Oracle NLS files and client libraries as mentioned in section Oracle Storage Considerations included in chapter Planning the Storage Layout. Also refer to the corresponding SAP OSS Note for more details.

Optional Step: OR890
If you use more than one SAP system inside of your cluster: It is possible that more than one database is running on the same node. Even though one listener is capable of serving many database instances, problems can occur in switchover environments because needed file systems may not be available at the startup time of the listener. Use a dedicated listener process for each database. You can use the standard file that is created during the installation of one SAP system <SID1> as a template. Double its contents, for example:
cat listener.ora listener.ora >listener.ora.new
mv listener.ora.new listener.ora
The file consists of two identical parts.

Table 3-3 Working with the two parts of the file
First part: Replace each occurrence of the word LISTENER by a new listener name. You can choose what suits your needs, but it is recommended to use the syntax LISTENER<SID1>. In the line ( host = <relocdb_1> ): change nothing.
Second part: Replace each occurrence of the word LISTENER by a new listener name different from the one chosen above. For example, use LISTENER<SID2> if <SID2> is the SID of the second SAP system. Replace any other occurrence of <SID1> by <SID2>. The host line should be modified to contain the appropriate relocatable address belonging to the database package (db or dbci) of the second system, for example: ( host = <relocdb_2> ). In the line ( port = 1527 ) a new, previously unused port should be placed, for example: ( port = 1528 ).

Adapt the (host=...) and (port=...) lines corresponding to the values you have chosen in the listener.ora file. Test your setup by starting the listeners as ora<sid1/2>:
lsnrctl start LISTENER<SID1/2>
Create a /etc/services entry for the new port you specified above. Use tlisrv<sid2> as service name. The name is not needed anyhow. This entry has to be made on all hosts that run an instance that belongs to the system. This includes all external application server hosts outside of the cluster.

Optional Step: OR900
If you use multiple packages for the database and SAP components: Set the optional parameter SQLNET.EXPIRE_TIME in sqlnet.ora to a reasonable value in order to take advantage of the Dead Connection Detection feature of Oracle. The parameter file sqlnet.ora resides either in /usr/sap/trans or in $ORACLE_HOME/network/admin. The value of SQLNET.EXPIRE_TIME determines how often (in seconds) SQL*Net sends a probe to verify that a client-server connection is still active. If the Central Instance switches, the application servers may crash, thereby leaving shadow processes running on the database host. While the Central Instance package cleans up the application server hosts, it does not touch the ORACLE shadow processes running on the database host. Remove them, because their number increases with every Central Instance package switch. After an application server crash, a connection to the database shadow process may be left open indefinitely. If the SQLNET.EXPIRE_TIME parameter is specified, SQL*Net sends a probe periodically to determine whether there is an invalid connection that should be terminated. It finds the dead connections and returns an error, causing the server process to exit.

Oracle Step: OR910
Additional steps for Oracle 8.1.x RDBMS: The sapbackup directory needs to be in $ORACLE_HOME/sapbackup. If you are using an Oracle 8.1.x DB, the $ORACLE_HOME directory is set to /oracle/<SID>/8.1.x whereas the sapbackup directory is still placed in /oracle/<SID>/sapbackup. It is best practice to create the symbolic link $ORACLE_HOME/sapbackup -> /oracle/<SID>/sapbackup.

Oracle Step: OR920
Additional steps for Application Servers and Oracle 8.1.x RDBMS: If you run an Oracle DB >= 8.1.x and install additional SAP Application Servers, follow the procedure described in the corresponding OSS note to install the ORACLE client libraries.
If you configure multiple Application Servers to be started with the parallel startup option in sap.config, make sure the tcp_conn_request_max ndd parameter on the DB nodes (primary and backup) is configured with an appropriate value:
Example: tcp_conn_request_max = 1024

If this parameter is set too low, incoming TCP connections from starting SAP Application Servers that want to connect to the DB via the Oracle listener may halt. This will hang the starting process of the SAP Application Server.

Oracle Step: OR940
Additional steps for Oracle 9i RDBMS: With newer revisions of Oracle 9i the Oracle Installer creates symbolic links in the client library directory /oracle/client/92x_64/lib that reference SID-specific libraries residing in the $ORACLE_HOME/lib of that database instance. This causes trouble when the database package is not running locally - these symbolic links can not be resolved. Therefore these links are required to be resolved, i.e. copied locally into the client library directory. But only copying the client libraries locally does not solve the issue completely: certain client libraries are linked during installation so that the 'shared dynamic path search' contains the following entries (chatr output):
LD_LIBRARY_PATH  enabled  first
SHLIB_PATH       enabled  second
embedded path    enabled  third   /oracle/<SID>/920_64/lib
This means that if the libraries are not found in LD_LIBRARY_PATH or SHLIB_PATH, the dynamic loader again searches in the instance specific path $ORACLE_HOME/lib. To resolve this issue, the needed libraries have to be provided either in LD_LIBRARY_PATH or SHLIB_PATH as local copies or symbolic links. Creating symbolic links to these libraries in the central executable directory /sapmnt/<SID>/exe, for example, is a possible solution.

Oracle Step: OR941
Additional steps for Oracle 10g RDBMS: Even though no cluster services are needed when installing an Oracle 10g Single Instance, the cluster services daemon ($ORACLE_HOME/bin/ocssd.bin) is installed. It will remain running, even when the database is shut down. This keeps the file system busy during package shutdown. Therefore it is mandatory that the cluster services daemon is disabled. Comment out or remove the following entry in /etc/inittab:
# h1:3:respawn:/sbin/init.d/init.cssd run >/dev/null 2>&1 </dev/null
Additionally there is an init script, /etc/init.d/init.cssd, that is linked to by
/etc/rc3.d/S<###>init.cssd and /etc/rc3.d/K<###>init.cssd
Remove the referenced links to prevent the Oracle Cluster Services Daemon from being started during boot time.
# rm /etc/rc3.d/S<###>init.cssd
# rm /etc/rc3.d/K<###>init.cssd

Additional Steps for MaxDB

Logon as root to the primary host of the database where the (db) or (dbci) package is running in debug mode.

MaxDB Database Step: MD930
Create additional links in /sapdb/<DBSID> on the primary node. For example:
su - sqd<sid>
ln -s .dbenv_<primary>.csh .dbenv_<secondary>.csh
ln -s .dbenv_<primary>.sh .dbenv_<secondary>.sh
exit

MaxDB Database Step: MD940

Make sure that /usr/spool exists as a symbolic link to /var/spool on all cluster nodes on which the database can run.

MaxDB Database Step: MD950
Configure the XUSER file in the <sid>adm user home directory. The XUSER file in the home directory of the SAP Administrator keeps the connection information and grant information for a client connecting to the MaxDB database. The XUSER content needs to be adapted to the relocatable IP the MaxDB RDBMS is running on. To list all mappings in the XUSER file, run the following command as <sid>adm:
# dbmcli -ux SAPR3,SAP -ul
For MaxDB 7.6:
su - <lcsid>adm
dbmcli -ux SAP<SID>,<password> -ul
This command produces a list of MaxDB user keys that may be mapped to the MaxDB database schema via a local hostname. SAP created keys commonly include c, w and DEFAULT. If a mapping was created without specifying a key, entries of the form <num><DBSID><hostname>, e.g. 1SDBrhpcc072, exist. These will only work on one of the cluster hosts. To find out if a given user key mapping <user_key> works throughout the cluster, the relocatable address should be added to the primary host using cmmodnet -a. For all previously listed user keys run the following command as the <sid>adm administrator:
# dbmcli -U <user_key>
quit exits the upcoming prompt:
# dbmcli on <hostname> : <DBSID>> quit
<hostname> should be relocatable. If it is not, the XUSER mapping has to be recreated. For example:
# dbmcli -d <DBSID> -n <reloclc_s> -us <user>,<passwd> -ux SAP<SID>,SAP -uk <user_key>
Entries of the form <num><DBSID><reloclc_s> could be created by omitting the -uk option.
NOTE: Refer to the SAP documentation to learn more about the dbmcli syntax.
After recreation always check the connection:
# dbmcli -U <user_key>
# dbmcli on <reloclc_s> : <DBSID>> quit

MaxDB Database Step: MD960
Distribute the XUSER mappings by distributing the file /home/<sid>adm/.XUSER.<version> to all database package nodes.

SAP WAS System Configuration

This section describes some required configuration steps that are necessary for SAP to become compatible with a cluster environment.
NOTE: The following configuration steps do not need to be performed if the SAP System was installed using virtual IPs. The steps are only required to make a non-clustered installation usable for clustering.
SAP ABAP Engine specific configuration steps
SAP J2EE Engine specific configuration steps

SAP ABAP Engine specific configuration steps

Logon as <sid>adm on the primary node on which the Central Instance has been installed. The appropriate Serviceguard package should still run on this host in Serviceguard debug mode.
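The "debug mode" referred to here can be established as follows; a short sketch assuming the (ci) package cic11 from the earlier examples, with the primary node name as a placeholder:

# Pause SGeSAP/LX actions and monitoring for SID C11 before editing profiles
touch ${SGCONF}/C11/debug

# Start the (ci) package on the primary node and check its state
cmrunpkg -n <primary_node> cic11
cmviewcl -v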

Installation Step: IS1100
For the (ci) component, change into the profile directory by typing the alias: cdpro
In the DEFAULT.PFL change the following entries and replace the hostname with the relocatable name if you cluster a (ci) component. For example:
SAPDBHOST = <relocdb>
rdisp/mshost = <relocci>
rdisp/sna_gateway = <relocci>
rdisp/vbname = <relocci>_<SID>_<INSTNR>
rdisp/btcname = <relocci>_<SID>_<INSTNR>
rslg/collect_daemon/host = <relocci>
If you don't have Replicated Enqueue:
rdisp/enqname = <relocci>_<SID>_<INSTNR>
The following parameters are only necessary if an application server is installed on the adoptive node. For example:
rslg/send_daemon/listen_port
rslg/collect_daemon/listen_port
rslg/collect_daemon/talk_port

Optional Step: IS1110
If you want to use the DB-Reconnect functionality for ABAP Instances: Add entries to the Instance Profiles of all application servers that use DB-Reconnect. When configuring the DB-Reconnect feature, follow the appropriate OSS notes. For example:
rsdb/reco_trials = 15
rsdb/reco_sleep_time = 60
rsdb/reco_sosw_for_db = off (based on OSS #109036)
rsdb/reco_sync_all_server = on

Installation Step: IS1120
In the instance profile of each clustered ABAP instance, two values have to be added to specify the virtual addressing. For example in <SID>_DVEBMGS<INSTNR>:
SAPLOCALHOST = <relocci>
SAPLOCALHOSTFULL = <relocci>.<domain>
The parameter SAPLOCALHOSTFULL must be set even if you do not use DNS. In this case you should set it to the name without the domain name:
SAPLOCALHOSTFULL=<relocci>
The instance profile name is often extended by the hostname. You do not need to change this filename to include the relocatable hostname. Serviceguard Extension for SAP on Linux also supports the full instance virtualization of SAP WAS 6.40 and beyond. The startsap mechanism can then be called by specifying the instance's virtual IP address. For this case, it is required to use the virtual IP addressing in the filenames of the instance profile and instance start profile. All references to the instance profiles that occur in the start profile need to be changed to include the virtual IP address instance profile filename. SAPLOCALHOST is set to the hostname per default at startup time and is used to build the SAP application server name:
<SAPLOCALHOST>_<SID>_<INSTNR>

SAPLOCALHOST is set to the hostname by default at startup time and is used to build the SAP application server name:
<SAPLOCALHOST>_<SID>_<INSTNR>
This parameter represents the communication path inside an SAP system and between different SAP systems. SAPLOCALHOSTFULL is used for RFC connections. Set it to the fully qualified relocatable hostname. The application server name appears in the server list held by the Message Server, which contains all instances, hosts and services of the instances. The application server name or the hostname is also stored in some system tables on the database.

Installation Step: IS1130

If you cluster a single-instance database, you need to maintain the transport management parameters. In the transport directory parameter file(s), e.g. /usr/sap/trans/bin/tpparam, modify the dbhost entry as follows:
<SID>/dbhost = <relocdb>

Optional Step: IS1140

If you have already received your licenses from SAP, install them on all the nodes on which the Central Instance can start. Refer to the Serviceguard Extension for SAP Release Notes for further information on how to do this. The package also comes up without a license, but certain restrictions then apply to the SAP application.
SAP will now be ready to run on all cluster nodes. Test the manual startup on all adoptive nodes.

Installation Step: IS1150

Connect with a SAPGUI and import the changed SAP profiles within SAP using transaction RZ10. After importing the profiles, check with report rsparam in SE38 whether the parameters SAPLOCALHOST and SAPLOCALHOSTFULL are correct. If you do not import the profiles, the profiles within SAP can still be edited by the SAP Administrator, but the values listed from the SAP system will be wrong and, when saved, will overwrite the values that were edited on the operating system level.

Installation Step: IS1160

The destination for print formatting, which is done by a Spool Work process, uses the application server name. If the application server name stays the same because SAPLOCALHOST has been set to the relocatable name, no changes need to be done after a switchover; printing works consistently. A print job in process at the time of the failure is canceled and needs to be reissued manually after the failover. To make a spooler highly available on the Central Instance server, set the destination of the printer to e.g. <relocci>_<SID>_<INSTNR> using transaction SPAD.

Installation Step: IS1170

Batch jobs can be scheduled to run on a particular instance. You select a particular instance by its hostname at the time of job scheduling. The application server name and the hostname retrieved from the Message Server are stored in the batch control tables TBTCO, TBTCS, and others. When the batch job is ready to run, the application server name is used to start it on the appropriate instance. When using <relocci> to build the application server name for the SAP instance, you do not need to change batch jobs that are tied to the locality after a switchover, even if the hostname that is also stored in the above tables differs.

Installation Step: IS1180

Within the SAP Computing Center Management System (CCMS) you can define operation modes for SAP instances. An operation mode defines a resource configuration for the instances in your SAP system. It can be used to determine which instances are started and stopped, and how the individual services are allocated for each instance in the configuration. An instance definition for a particular operation mode consists of the number and types of work processes as well as start and instance profiles (starting with version 3.0 the CCMS allows profile maintenance from within the SAP system).
When defining an instance for an operation mode, enter the hostname and the system number of the application server. By using <relocci> to fill in the hostname field, the instance keeps working under control of the CCMS after a failover without any change.

If an instance is running on the standby node in normal operation (and is stopped when switching over), the control console shows this instance to be down (for example, you will get a red node on a graphical display) after the switchover.

Installation Step: IS1190

A SAP Internet Communication Manager (ICM) may run as part of any Application Server. It is started as a separate multi-threaded process and can be restarted independently from the Application Server. For example, BW web reporting and Business Server Pages (BSP) rely on ICM to work properly. In scenarios using external web communication, one or more SAP Web Dispatchers often accompany the ICMs. The SAP Web Dispatcher can be laid out redundantly with hardware load balancing. SAP also allows Serviceguard Extension for SAP on Linux packaging for a single Web Dispatcher.
To find out whether ICM is configured as part of an Application Server, check for the following SAP profile parameter:
rdisp/start_icman = true
ICM gets switched over with the Instance package. In order to make this work, ICM has to be registered with the Message Server using a virtual IP address. This mechanism is different from SAPLOCALHOST and SAPLOCALHOSTFULL since ICM allows HTTP server aliasing via icm/server_port_<nn> settings. During startup, icman reads its configuration and propagates it to the SAP Message Server or the SAP Web Dispatcher server. These servers then act as the physical point of access for HTTP requests. They classify requests and send HTTP redirects to the client in order to connect it to the required ICM instance. This only works properly if the bound ICM instances propagate virtual IP addresses. Therefore the parameter
icm/host_name_full = <relocatable_ip>
needs to be set within the Instance Profile of the Application Server. Double-check the settings with report RSBB_URL_PREFIX_GET or review the parameters within the SMMS transaction.

Installation Step: IS1200

If you cluster (ci), configure the frontend PCs to attach to <relocci>. Most of the time, this can be achieved by distributing a new saplogon.ini file to the Windows directory.

SAP J2EE Engine specific installation steps

This section is applicable for SAP J2EE Engine 6.40 based installations that were performed prior to the introduction of SAPINST installations on virtual IPs. There is a special SAP OSS Note that explains how to treat a hostname change and thus enable the SAP adaptive computing benefits for a SAP J2EE Engine 6.40. Depending on the SAP application component that is clustered, there might be additional steps documented as part of a SAP whitepaper that covers clustering of the component.

Installation Step: NW04J1210

If the installed SAP component also depends on and utilizes a SAP J2EE Engine, some configuration tasks have to be performed in order for the J2EE instance to work in an environment with virtual IPs. The following lines give some hints about the settings necessary for the switchover of the J2EE Engine 6.40, but always refer to the corresponding SAP OSS Note because that note is kept up-to-date.
For virtualizing the J2EE DB connection:
Log on to the J2EE Configuration Tool.
Choose secure store.
Change the following values:
admin/host/<SID> = <relocdb>
For Oracle databases replace the hostname in the connection string:
jdbc/pool/<SID>/url = jdbc:oracle:thin:@<relocdb>:1527:<SID>

Installation Step: NW04J1220

These settings have to be adjusted for the switchover of the J2EE part of the SAP Web AS. The following configuration has to be performed in the Offline Configuration Editor. Log on to the Offline Configuration Editor and change the values listed in Table 3-4.
Table 3-4 Offline Configuration Editor Settings
Choose: cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet LockingManager
Change the following values:
enqu.host = <relocci>
enq.profile.filename = /sapmnt/<SID>/profile/<SID>_<component><INR>_<relocci>
Choose: cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet ClusterManager
Change the following values:
ms.host = <relocci>
Choose: cluster_data -> server -> cfg -> kernel -> Propertysheet LockingManager
Change the following values:
enqu.host = <relocci>
enq.profile.filename = /sapmnt/<SID>/profile/<SID>_<component><INR>_<relocci>
Choose: cluster_data -> server -> cfg -> kernel -> Propertysheet ClusterManager
Change the following values:
ms.host = <relocci>
Choose: cluster_data -> Propertysheet instance.properties.IDxxxxxx (in the file /usr/sap/<SID>/<component><INR>/j2ee/additionalsystemproperties)
Change the following values:
instance.ms.host from localhost to <relocci>
com.sap.instanceid = <SID>_<relocci>_<INR>
Check for the latest version of the corresponding SAP OSS Note.
The installation is now complete. Run comprehensive switchover testing covering all possible failure scenarios. It is important that all relevant SAP application functions are tested on the switchover nodes. Several documents provided by HP or SAP can guide you through this process.

IPv6 Specific Installation

SGeSAP/LX now supports IPv6 network configurations in addition to the traditional IPv4 configurations. The cluster nodes run as a dual network stack configuration: the network interfaces have IPv6 and IPv4 enabled at the same time.
NOTE: Currently both Oracle and MaxDB only support IPv4 addressing; SAP NetWeaver supports both IPv4 and IPv6.
The perl Socket6 module provides an IPv6 function set for the perl language and is required by the SGeSAP/LX scripts to work properly in an IPv6 environment.
In the following examples, IPv6 address 2001::101:146 is used for cluster node sap146 and IPv6 address 2001::101:147 is used for node sap147. The SAP SID SX1 is used in this configuration.

Installation Step: IP1200

Check if the perl-Socket6 module is available on the system by running the command:
rpm -qa | grep -i perl-socket6
If the module name is not returned, install it from the original distribution media if available. If it is not available there, the perl-Socket6 module can also be downloaded from the web.
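If the module is missing, installation from the distribution media usually looks like the following. This is a sketch only; the exact package names and repository commands differ between RHEL and SLES releases:
# RHEL (assuming the RPM is available on the installation media):
rpm -ihv perl-Socket6-<version>.rpm
# SLES (if the package is provided in a configured repository):
zypper install perl-Socket6
# verify afterwards:
rpm -qa | grep -i socket6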

Installation Step: IP1210

Add the IPv6 addresses to /etc/hosts on all cluster nodes. The following virtual and physical addresses are used for this IPv6 configuration:
vi /etc/hosts
2001::101:51 sap51 # vip ascsSX1 - ABAP SCS
2001::101:53 sap53 # vip D00SX1 - CI
2001::101:54 sap54 # vip D01SX1 - DIALOG
2001::101:146 sap146 # phys node1 - CLUSTER
2001::101:147 sap147 # phys node2 - CLUSTER
The corresponding IPv4 entries remain in place for the dual-stack setup (IPv4 addresses not shown here):
<IPv4 address> sap51 # vip ascsSX1
<IPv4 address> sap53 # vip D00SX1
<IPv4 address> sap54 # vip D01SX1
<IPv4 address> sap146 # phys node1 - CLUSTER
<IPv4 address> sap147 # phys node2 - CLUSTER

Installation Step: IP1220

The following steps show how to enable IPv6 for a RHEL node. In this example IPv6 address 2001::101:147/24 is used for this node.
First edit /etc/sysconfig/network and set NETWORKING_IPV6=yes:
vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
IPV6FORWARDING=no
IPV6_AUTOCONF=no
IPV6_AUTOTUNNEL=no
HOSTNAME=sap147
GATEWAY=<IPv4 gateway>
Then edit the interface configuration file and set IPV6INIT=yes and IPV6ADDR=2001::101:147/24:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
BROADCAST=<IPv4 broadcast>
HWADDR=00:12:79:94:D4:C8
IPADDR=<IPv4 address>
NETMASK=<IPv4 netmask>
NETWORK=<IPv4 network>
ONBOOT=yes
IPV6INIT=yes
IPV6ADDR=2001::101:147/24
IPV6_MTU=1280
Restart the network interface with the following command:
service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]

Bringing up interface eth0.01: [ OK ]
The IPv6 address and network interface can be checked with the following command:
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:12:79:94:D4:C8
     inet addr:<IPv4 address> Bcast:<IPv4 broadcast> Mask:<IPv4 netmask>
     inet6 addr: 2001::101:147/24 Scope:Global
     inet6 addr: fe80::212:79ff:fe94:d4c8/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Repeat these steps for all RHEL nodes in the cluster.

Installation Step: IP1230

The following steps show how to enable IPv6 for a SLES node. In this example IPv6 address 2001::101:146/24 is used for this node.
First determine the configuration file by running the ifconfig command and matching the MAC address in the output with the filename, e.g. ifcfg-eth-id-00:13:21:20:52:2e.
Edit the file and set IPADDR1=2001::101:146/24:
vi /etc/sysconfig/network/ifcfg-eth-id-00:13:21:20:52:2e
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='<IPv4 address>'
MTU=''
NAME='Compaq NC7782 Gigabit Server Adapter (PCI-X, 10,100,1000-T)'
NETMASK='<IPv4 netmask>'
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
UNIQUE='rBUF.RfYI5sfd_88'
USERCONTROL='no'
_nm_name='bus-pci-0000:03:06.0'
PREFIXLEN=''
IPADDR1=2001::101:146/24
To enable the new network configuration, restart the network subsystem with the command:
rcnetwork restart
Shutting down network interfaces:
eth0 device: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
eth0 configuration: eth-id-00:13:21:20:52:2e
Check the settings with the ifconfig command:
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:13:21:20:52:2E
     inet addr:<IPv4 address> Bcast:<IPv4 broadcast> Mask:<IPv4 netmask>
     inet6 addr: 2001::101:146/24 Scope:Global
     inet6 addr: fe80::213:21ff:fe20:522e/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Repeat these steps for all SLES nodes in the cluster.
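Before continuing with the cluster configuration, it can be useful to verify that the nodes reach each other over IPv6. A quick check, using the example addresses above, might look like this:
# from sap146, check the local IPv6 address and reach the peer node:
ip -6 addr show dev eth0
ping6 -c 3 2001::101:147
# and in the opposite direction from sap147:
ping6 -c 3 2001::101:146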

Installation Step: IP1240

Edit the Serviceguard cluster configuration file and add the IPv6 STATIONARY_IP addresses for the cluster nodes:
vi $SGCONF/cmclconfig.ascii
NODE_NAME node146
  NETWORK_INTERFACE eth0
  HEARTBEAT_IP <IPv4 heartbeat address>
  STATIONARY_IP 2001::101:146
NODE_NAME node147
  NETWORK_INTERFACE eth0
  HEARTBEAT_IP <IPv4 heartbeat address>
  STATIONARY_IP 2001::101:147
Update the Serviceguard cluster configuration database with the following command:
cmapplyconf -C cmclconfig.ascii
...
Modify the cluster configuration ([y]/n)? y
Completed the cluster creation

Installation Step: IP1250

Edit the SGeSAP sap.config file and add the IPv6 addresses for the SAP instances:
vi ${SGCONF}/SX1/sap.config
CIRELOC=2001::101:51
CINAME=ASCS
CINR=02
DRELOC[0]=2001::101:53
DNAME[0]=DVEBMGS
DNR[0]=00
DRELOC[1]=2001::101:54
DNAME[1]=D
DNR[1]=01
Copy the sap.config file to all cluster nodes.

Installation Step: IP1260

Edit the Serviceguard package control.script files and add the IPv6 addresses for the SAP instances:
vi ${SGCONF}/SX1/d01SX1.control.script
IP[0]=2001::101:54
SUBNET[0]=2001::/24
vi ${SGCONF}/SX1/d00SX1.control.script
IP[0]=2001::101:53
SUBNET[0]=2001::/24
vi ${SGCONF}/SX1/ascsSX1.control.script
IP[0]=2001::101:51
SUBNET[0]=2001::/24
Copy all the modified control.script files to all cluster nodes.

Installation Step: IP1270

Edit all the instance START profiles and add the parameter NI_USEIPv6 = true to enable IPv6:

vi /sapmnt/SX1/profile/START_ASCS02_sap51
vi /sapmnt/SX1/profile/START_D01_sap54
vi /sapmnt/SX1/profile/START_DVEBMGS00_sap53
NI_USEIPv6 = true
Add the same parameter to the SAP administrator's startup environment:
vi ~/.cshrc
setenv NI_USEIPv6 true

Installation Step: IP1280

Start the Serviceguard packages and test IPv6 connectivity:
cmrunpkg ascsSX1
cmrunpkg d01SX1
cmrunpkg d00SX1

Installation Step: IPV6xxx

Test IPv6 connectivity with the SAP niping and lgtst commands:
niping -6 -v -H sap51
Hostname/Nodeaddr verification:
===============================
Hostname of local computer: sap147 (NiMyHostName)
Lookup of hostname: sap147 (NiHostToAddr) --> IP-Addr.: 2001::101:147
Lookup of IP-Addr.: 2001::101:147 (NiAddrToHost) --> Hostname: sap147
Lookup of hostname: sap51 (NiHostToAddr) --> IP-Addr.: 2001::101:51
Lookup of IP-Addr.: 2001::101:51 (NiAddrToHost)
Start niping as a server (option -s) on host sap146. Then start niping as a client (option -c) on local host sap146 or on host sap147:
niping -s
connect from host 'localhost6.localdomain6', client hdl 1 o.k.
client hdl 1 disconnected...
niping -c -H sap51
connect to server o.k.
send and receive 10 messages (len 1000)
(timing statistics shortened)
lgtst -H sap51 -S 3602
list of reachable application servers

[saptux3_sx1_00] [sap53] [2001::101:53] [tick-port] [3200] [DIA UPD BTC SPO UP2 ICM ]
[saptux4_sx1_01] [sap54] [2001::101:54] [cpq-tasksmart] [3201] [DIA BTC ICM ]
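As a final check it helps to confirm that each package actually carries its relocatable IPv6 address on the node where it runs. A minimal sketch, using the example addresses of this section:
cmviewcl -v                            # packages should be up on their primary nodes
# on the node running the ascsSX1 package:
ip -6 addr show | grep 2001::101:51
# the virtual hostname should also resolve and answer:
ping6 -c 3 sap51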

4 SAP Supply Chain Management

Within SAP Supply Chain Management (SCM) scenarios, two main technical components have to be distinguished: the APO System and the livecache.
The first technical component, the APO System, is based on SAP WAS technology. Therefore, (ci), (db), (dbci), (d) and (sapnfs) packages may be implemented for APO. These APO packages are set up the same way as the NetWeaver packages. The only difference is that the APO system needs access to the livecache client libraries and database binaries (e.g. dbmcli). These files are usually searched via the path /sapdb/programs.
The second technical SCM component, called livecache, is an in-memory database. livecache is based on MaxDB database technology. SAP has defined a list of policies and requirements in order to allow support for livecache High Availability. Therefore a livecache Serviceguard Extension for SAP on Linux package does not use the generic (ci), (db), (dbci), (d) and (sapnfs) package templates. Instead, a dedicated Serviceguard Extension for SAP on Linux template called (lc) is introduced.
This chapter describes how to configure and set up SAP Supply Chain Management using a failover livecache. Topics are as follows:
Planning the Volume Manager Setup
Linux Setup
Cluster Node Synchronization
Cluster Node Configuration
Serviceguard Extension for SAP on Linux Package Configuration
Package Creation
Service Monitoring
APO Setup Changes
General Serviceguard Setup Changes
The tasks are presented as a sequence of steps. Each mandatory installation step is accompanied by a unique number of the format XXnnn, where nnn are incrementing values and XX indicates the step relationship, as follows:
LCnnn: livecache (lc) Package Installation Steps
GSnnn: General Installation Steps
Whenever appropriate, Linux sample commands are given to guide through the installation process in as detailed a manner as possible. It is assumed that the hardware as well as the operating system and Serviceguard are already installed properly on all cluster hosts. Sometimes a condition is specified with an installation step. Follow the information presented only if the condition is true for your situation.
NOTE: For installation steps in this chapter that require the adjustment of SAP-specific parameters in order to run the SAP application in a switchover environment, example values are usually given. These values are for reference ONLY, and it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible, the SAP OSS note number is given.

Planning the Volume Manager Setup

The following sections describe the livecache (lc) package of Serviceguard Extension for SAP on Linux. The (lc) package was developed according to the SAP recommendations and fulfills all SAP requirements for livecache failover solutions.
livecache distinguishes an instance dependent path (/sapdb/<LCSID>) and two instance independent paths, IndepData (/sapdb/data) and IndepPrograms (/sapdb/programs).
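A quick way to see which of these paths already exist on the primary node before planning the volume layout is shown below. This is only a sketch; the exact directories and the location of the client binaries depend on the installed livecache/MaxDB release:
ls -ld /sapdb/<LCSID> /sapdb/data /sapdb/programs
ls -l /sapdb/programs/bin/dbmcli      # client binaries the APO system also needs
cat /etc/opt/sdb                      # global path names of the livecache installation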

NOTE: <LCSID> is the system ID of the particular livecache implementation. <LCSID> denotes the three-letter database name of the livecache instance in uppercase; <lcsid> is the same name in lowercase.
A note on the terms livecache, MaxDB and SAPDB: SAPDB is the original name of the database from SAP. The name SAPDB was later replaced by the name MaxDB. livecache is a memory-based variant of the MaxDB database.
There are different configuration options for the file systems, caused by a tradeoff between simplicity and flexibility. The options are described below, ordered by increasing complexity. The cluster layout constraints that need to be fulfilled to allow the simplifications of a given option are stated in a bullet list.

LVM Option 1: Simple Clusters with Separate Packages

Cluster Layout Constraints:
The livecache package does not share a failover node with the APO Central Instance package.
There is no MaxDB or additional livecache running on cluster nodes.
There is no intention to install additional APO Application Servers within the cluster.
Table 4-1 File System Layout for a livecache Package running separate from APO (Option 1). Volume group, logical volume and device number depend on the individual installation.
Shared Exclusive (package lc<lcsid>): /sapdb
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>/datan
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>/logn
Shared Exclusive (package lc<lcsid>): /var/spool/sql
Shared Exclusive (package lc<lcsid>, see note 1): /sapdb/programs
Local: /etc/opt/sdb
(1) This denotes the package for the database of the APO Central Instance. The logical volume should be added to the logical volumes that already belong to this package.
In the above layout all relevant files get shared via standard procedures. The setup causes no additional administration overhead for synchronizing local files. SAP default paths are used.

LVM Option 2: Full Flexibility - No Constraints

If multiple MaxDB based database components are either planned or already installed, care must be taken how the storage volumes are configured. For configuring storage in a livecache environment the following directories are affected:
/sapdb/programs: The central directory with all livecache/MaxDB executables. The directory is shared between all livecache/MaxDB instances that reside on the same host. It is possible to share the directory via NFS between hosts. It is also possible to install different MaxDB versions (e.g. version 7.3 and version 7.4) on the same host. In any case, only one single directory for the livecache/MaxDB binaries can be configured. This implies that if two different MaxDB versions are used on the same system, the installed binaries in /sapdb/programs have to be from the newer version.
/sapdb/data/config: This directory can be shared between livecache/MaxDB instances; the directories and files below this path have instance-specific names (e.g. /sapdb/data/config/<LCSID>.*). According to SAP this path setting is static.
/sapdb/data/wrk: The working directory of the main livecache/MaxDB processes is also a subdirectory of the IndepData path for non-HA setups. If a livecache restarts after a crash, it copies important files from

this directory to a backup location. This information is then used to determine the reason for the crash. In HA scenarios, for livecache versions lower than 7.6, this directory should move with the package. Therefore, SAP provides a way to redefine this path for each livecache/MaxDB instance individually. Serviceguard Extension for SAP on Linux expects the work directory to be part of the lc package. The mount point moves from /sapdb/data/wrk to /sapdb/<LCSID>/wrk. This directory should not be mixed up with the directory /sapdb/data/<LCSID>/db/wrk that might also exist. Core files of the kernel processes are written into the working directory. These core files can have sizes of several gigabytes, so sufficient free space needs to be configured for the shared logical volume to allow core dumps.
NOTE: For livecache starting with version 7.6 these limitations do not exist any more. The working directory is utilized by all instances (IndepData/wrk) and can be globally shared.
/var/spool/sql: This directory hosts local runtime data of all locally running livecache/MaxDB instances. Most of the data in this directory becomes meaningless in the context of a different host after a failover. The only critical portion that still has to be accessible after failover is the initialization data in /var/spool/sql/ini. This directory is almost always very small (< 1 MB).
Table 4-2 General File System Layout for livecache (Option 3). Volume group, logical volume and device number depend on the individual installation.
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>/wrk *
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>/datan
Shared Exclusive (package lc<lcsid>): /sapdb/<LCSID>/saplogn
Shared NFS (package sapnfs): /export/sapdb/programs
Shared NFS (package sapnfs): /export/sapdb/data
Shared NFS (package sapnfs): /export/var/spool/sql/ini
Local: /etc/opt/sdb
* only valid for livecache versions lower than 7.6.

MaxDB Storage Considerations

Serviceguard Extension for SAP on Linux supports current MaxDB, livecache and older SAPDB releases. The considerations below apply similarly to MaxDB, livecache and SAPDB clusters unless otherwise noted.
MaxDB distinguishes an instance dependent path /sapdb/<DBSID> and two instance independent paths, called IndepData and IndepPrograms. By default all three point to a directory below /sapdb. The paths can be configured in a configuration file called SAP_DBTech.ini. Depending on the version of the MaxDB database, this file contains different sections and settings.
A sample SAP_DBTech.ini for a host with a MaxDB/SAPDB 7.4 (LC1) and an APO 3.1 using a MaxDB/SAPDB 7.3 database instance (AP1):
/var/spool/sql/ini/SAP_DBTech.ini
[Globals]
IndepData=/sapdb/data
IndepPrograms=/sapdb/programs
[Installations]
/sapdb/LC1/db=<version>,/sapdb/LC1/db
/sapdb/AP1/db=<version>,/sapdb/AP1/db
[Databases]

.SAPDBLC=/sapdb/LC1/db
LC1=/sapdb/LC1/db
_SAPDBAP=/sapdb/AP1/db
AP1=/sapdb/AP1/db
[Runtime]
/sapdb/programs/runtime/7240=<version>,
/sapdb/programs/runtime/7250=<version>,
/sapdb/programs/runtime/7300=<version>,
/sapdb/programs/runtime/7301=<version>,
/sapdb/programs/runtime/7401=<version>,
/sapdb/programs/runtime/7402=<version>,
For MaxDB and livecache version 7.5 (or higher) the SAP_DBTech.ini file does not contain the sections [Installations], [Databases] and [Runtime]. These sections are stored in the separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config.
A sample SAP_DBTech.ini, Installations.ini, Databases.ini and Runtimes.ini for a host with a livecache 7.5 (LC2) and an APO 4.1 using a MaxDB 7.5 (AP2):
/var/spool/sql/ini/SAP_DBTech.ini:
[Globals]
IndepData=/sapdb/data
IndepPrograms=/sapdb/programs
/sapdb/data/config/Installations.ini:
[Installations]
/sapdb/LC2/db=<version>,/sapdb/LC2/db
/sapdb/AP2/db=<version>,/sapdb/AP2/db
/sapdb/data/config/Databases.ini:
[Databases]
.M750015=/sapdb/LC2/db
LC2=/sapdb/LC2/db
.M750021=/sapdb/AP2/db
AP2=/sapdb/AP2/db
/sapdb/data/config/Runtimes.ini:
[Runtime]
/sapdb/programs/runtime/7500=<version>

livecache installation Step: LC010

If you decided to use option three, and the livecache version is lower than 7.6:
1. Log on as <lcsid>adm on the machine on which livecache was installed. Make sure you have mounted a sharable logical volume on /sapdb/<lcsid>/wrk as discussed above.
2. Change the path of the runtime directory of the livecache and move the files to the new logical volume accordingly:
cd /sapdb/data/wrk/<lcsid>
find . -depth -print | cpio -pd /sapdb/<lcsid>/wrk
cd ..
rmdir /sapdb/data/wrk/<lcsid>
dbmcli -d <LCSID> -u control,control
dbmcli on <LCSID>> param_directget RUNDIRECTORY
OK
RUNDIRECTORY /sapdb/data/wrk/<lcsid>

---
dbmcli on <LCSID>> param_directput RUNDIRECTORY /sapdb/<lcsid>/wrk
OK
---
dbmcli on <LCSID>>

Linux Setup

This section describes how to set up Serviceguard Extension for SAP with Linux.

Clustered Node Synchronization

1. Repeat the steps in this section for each node of the cluster that is different from the primary.
2. Log on as root to the primary host and prepare a logon for each of its backup hosts.

livecache installation Step: LC030

Synchronize the /etc/group and /etc/passwd files. The livecache installation has created a <lcsid>adm user belonging to the sapsys group. Make sure the user and group exist on all nodes and that the UID and GID are consistent across the nodes.

livecache installation Step: LC040

Synchronize the sql<nn> entries of the /etc/services file between the nodes to match the entries on the primary node. Usually you will find service entries for sql6 and sql30.

livecache installation Step: LC050

Change the Linux kernel parameters on the backup nodes to meet the SAP livecache requirements as specified in the SAP livecache installation documents.

livecache installation Step: LC060

Do the following to continue:
1. Copy the content of the <lcsid>adm home directory to the backup node. This is a local directory on each node.
2. Rename the environment scripts on the secondary nodes. Some of the environment scripts may not exist. For example:
su - <lcsid>adm
mv .dbenv_<primary>.csh .dbenv_<secondary>.csh
mv .dbenv_<primary>.sh .dbenv_<secondary>.sh
For livecache 7.6:
su - <lcsid>adm
mv .lcenv_<primary>.csh .lcenv_<secondary>.csh
mv .lcenv_<primary>.sh .lcenv_<secondary>.sh
NOTE: Never use the relocatable address in these file names.

livecache installation Step: LC061

Copy the file /etc/opt/sdb to the second cluster node. This file contains global path names for the livecache instance.

livecache installation Step: LC062

Verify that the symbolic links listed below exist in the directory /var/spool/sql on both cluster nodes:
dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ini
ipc -> /sapdb/data/ipc

pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid

livecache installation Step: LC070

Make sure /var/spool/sql exists as a directory on the backup node. /usr/spool must be a symbolic link to /var/spool.

livecache installation Step: LC080

On the backup node, create a directory as future mountpoint for all relevant directories from the table of the section that refers to the layout option you chose.
Option 1:
mkdir /sapdb
Option 2:
mkdir -p /sapdb/data
mkdir /sapdb/<lcsid>
Option 3:
mkdir -p /sapdb/<lcsid>

Cluster Node Configuration

livecache installation Step: LC100

Repeat the steps in this section for each node of the cluster.

livecache installation Step: LC110

Add all relocatable IP address information to /etc/hosts. Remember to include the heartbeat IP addresses.

livecache installation Step: LC120

If you use DNS, configure /etc/nsswitch.conf to avoid problems. For example:
hosts: files dns

Serviceguard Extension for SAP on Linux Package Configuration

This section describes how to configure the Serviceguard Extension for SAP on Linux packages.

livecache installation Step: LC150

As mentioned in an earlier section of this document, references to the variable ${SGCONF} can be replaced by the definition of the variable that is found in the file /etc/cmcluster.config. The default values are:
For SUSE Linux: SGCONF=/opt/cmcluster/conf
For Red Hat Linux: SGCONF=/usr/local/cmcluster/conf
The Serviceguard NFS toolkit also uses an installation directory. This directory is different for SUSE Linux and Red Hat Linux, so any reference to the variable ${SGROOT} can be replaced by the following definition:
For SUSE Linux: SGROOT=/opt/cmcluster
For Red Hat Linux: SGROOT=/usr/local/cmcluster
SGeSAP/LX uses a staging directory during the installation. As the staging directory is different for SUSE Linux and Red Hat Linux, references to the variable ${SAPSTAGE} can be replaced by the following definition:
For SUSE Linux: SAPSTAGE=/opt/cmcluster/sap
For Red Hat Linux: SAPSTAGE=/usr/local/cmcluster/sap
Export the variables so they are available to the user environment:
export SGCONF
export SGROOT

export SAPSTAGE
The variables ${SGCONF} and ${SGROOT} can also be set by sourcing the following file (note the leading dot):
. /etc/cmcluster.conf
The command:
rpm -Uhv <path>/sgesap-toolkit-<version>.product.<linux-os>.<platform>.rpm
copies all relevant SGeSAP/LX files into the ${SAPSTAGE} staging area. The following files need to be copied from the ${SAPSTAGE} staging area to the SGeSAP/LX package directory.

livecache installation Step: LC160

Copy the relevant Serviceguard Extension for SAP on Linux files for livecache to the cluster directory. <LCSID> is the SAP instance ID of the livecache:
mkdir ${SGCONF}/<LCSID>
chown root:sapsys /etc/cmcluster/<lcsid>
chown root:sapsys ${SGCONF}/<LCSID>
chmod 775 ${SGCONF}/<LCSID>
cp ${SAPSTAGE}/LC/saplc.mon ${SGCONF}/<LCSID>
cp ${SAPSTAGE}/LC/saplc.sh ${SGCONF}/<LCSID>
cp ${SAPSTAGE}/LC/customer.functions ${SGCONF}/<LCSID>
cp ${SAPSTAGE}/LC/sap.config ${SGCONF}/<LCSID>
If this has not already been done, also copy the script files that contain the SGeSAP/LX functions:
cp ${SAPSTAGE}/*.functions ${SGCONF}

livecache installation Step: LC170

Create the Serviceguard package control script and configuration files. Note the package naming convention lc<lcsid> as described in previous sections.
cd ${SGCONF}/<LCSID>
cmmakepkg -s lc<lcsid>.control.script
cmmakepkg -p lc<lcsid>.config
Both files need to be adapted to reflect the file system layout and the individual network setup chosen. For general information on how to customize the package control and configuration files, refer to the Managing Serviceguard manual, which is available from HP.
For the lc<lcsid>.config template use the following settings. If a Serviceguard monitoring service will be used, also set the SERVICE_NAME variable:
PACKAGE_NAME lc<lcsid>
PACKAGE_TYPE FAILOVER
RUN_SCRIPT <SGCONF>/<LCSID>/lc<LCSID>.control.script
HALT_SCRIPT <SGCONF>/<LCSID>/lc<LCSID>.control.script
SERVICE_NAME SGeSAPlc<LCSID>
For lc<lcsid>.control.script the following settings are recommended if the Serviceguard monitoring service will be used:
SERVICE_NAME[0]="SGeSAPlc<LCSID>"
SERVICE_CMD[0]="/sapdb/<LCSID>/db/sap/lccluster monitor"
SERVICE_RESTART[0]="-r 3"
All other parameters should be chosen as appropriate for the individual setup.
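Before distributing and applying the package, the edited configuration can be validated. This is only a sketch; cmcheckconf reports syntax and consistency problems without changing the cluster:
cd ${SGCONF}/<LCSID>
cmcheckconf -v -P lc<LCSID>.config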

livecache installation Step: LC180

In the file ${SGCONF}/<LCSID>/lc<LCSID>.control.script add the following lines to the customer defined functions customer_defined_run_cmds and customer_defined_halt_cmds. These commands call the SGeSAP/LX functions that start and stop the livecache instance, so the customer defined functions are the glue that connects the generic Serviceguard components with the SGeSAP/LX components.
function customer_defined_run_cmds
{
### Add the following line
${SGCONF}/<LCSID>/saplc.sh startdb <LCSID>
test_return 51
}
function customer_defined_halt_cmds
{
### Add the following line
${SGCONF}/<LCSID>/saplc.sh stopdb <LCSID>
test_return 52
}

livecache installation Step: LC190

The ${SGCONF}/<LCSID>/sap.config configuration file of a livecache package is similar to the sap.config configuration file of the generic SGeSAP/LX packages. The following standard parameters in sap.config have to be set for a livecache package:
DB=LIVECACHE
DBRELOC=<reloclc_s>
TRANSRELOC=<relocsapnfs_s>
CLEANUP_POLICY=[ strict | normal | lazy ]
The strict option can be set in order to guarantee that the livecache package gets all resources required on the adoptive node. Remember that this option can crash a system running on the adoptive node.
The following livecache specific parameters in sap.config have to be set:
LCSTARTMODE specifies the mode of the livecache database up to which it should be started automatically. Possible values for version 7.4 of livecache are:
OFFLINE: only the vserver will be started
COLD: livecache started in cold mode
SLOW: livecache started in cold -slow mode
WARM: livecache started in warm mode
In livecache version 7.5 some of these values have changed (<from> -> <to>):
OFFLINE -> OFFLINE
COLD -> ADMIN
SLOW -> SLOW
WARM -> ONLINE
It is recommended to use the new values for livecache version 7.5. For compatibility reasons the original values will still work.
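Put together, a minimal sap.config fragment for an lc package could look like the following. All values are examples only and have to match your own relocatable names and monitoring requirements:
DB=LIVECACHE
DBRELOC=<reloclc_s>
TRANSRELOC=<relocsapnfs_s>
CLEANUP_POLICY=normal
LCSTARTMODE=ONLINE       # livecache 7.5 value; use WARM for livecache 7.4
LCMONITORINTERVAL=60     # example polling interval of the monitor in seconds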

115 NOTE: After a failover, the livecache instance will only be recovered to the state referenced in LCSTARTMODE. The behavior of the livecache service monitoring script is independent from the setting of LCSTARTMODE. The monitoring script will always monitor the vserver process. It will start to monitor if livecache is in WARM state, as soon as it reaches WARM state for the first time. The LCMONITORINTERVAL variable specifies how often the monitoring polling occurs (in sec.) livecache installation Step: LC200 You can optionally handle SAP ABAP Application Servers with a livecache package. Refer to Chapter 3 - Serviceguard Extension for SAP on Linux Configuration for more information on the usage of the parameters ASSID, ASHOST, ASNAME, ASNR, ASTREAT and ASPLATFORM livecache installation Step: LC205 Distribute the content of${sgconf}/<lcsid> to all cluster nodes on which the livecache should be able to run. Service Monitoring SAP recommends the use of service monitoring in order to evaluate the availability of livecache processes. The Serviceguard service monitor, provided with Serviceguard Extension for SAP on Linux, periodically checks the availability and responsiveness of the livecache system. The sanity of the monitor will be ensured by standard Serviceguard functionality. The livecache monitoring program is shipped with Serviceguard Extension for SAP on Linux as the saplc.mon file. The monitor runs as a service attached to the lc<lcsid> Serviceguard package. If the monitor recognizes the livecache to be unavailable, it will try to restart livecache on the node it is currently running. If this does not succeed, the runtime operating system resources of livecache are cleaned up and another local restart attempt is made. If the livecache instance still cannot be started, Serviceguard will switch the package and try to restart on another cluster node. Monitoring begins with package startup. At this point, the monitor will make sure, that livecache is working only up to the point that is specified in LCSTARTMODE. For example, if LCSTARTMODE=OFFLINE is set in file sap.config, only the vserver processes will be part of the monitoring. Still, the monitor detects any manual state change of livecache. By subsequent manual operations, livecache will be entering cold state and finally warm state, which the monitor automatically detects. As soon as the livecache reaches the warm state once, the monitoring increases its internal monitoring level to the same state. As with any Serviceguard Extension for SAP on Linux package, the (lc) package will skip any SAP specific startup steps when during startup a file called debug is found in either ${SGCONF}/<LCSID>/debug or ${SGCONF}/debug. When the debug file is found only the livecache specific volumes will be mounted and a virtual IP address is set. This is useful for debugging purposes to allow access to the logfiles placed on shared logical volumes if a package does not start up to its full extent. The lc monitor is started regardless of the availability of a debug file. The monitor will detect the existence of the file and will enter a pause mode until it is removed. This is valid not only during package startup, but also at runtime. As a consequence, the monitoring of a fully started package will be paused by the global debug file. NOTE: Activation of pause mode, state changes of livecache and livecache restart attempts get permanently logged into the standard package logfile ${SGCONF}/<LCSID>/lc<LCSID>.control.script.log. 
The monitor can also be paused by standard administrative tasks that use the administrative tools delivered by SAP. Stopping the livecache using the SAP lcinit shell command or the APO LC10 transaction will send the monitoring into pause mode. This prevents unwanted package failovers during livecache administration. Restarting the livecache in the same way will also trigger reactivation of the monitoring. The MaxDB Database Manager GUI is not yet cluster-aware and must not be used to stop livecache in combination with Serviceguard. livecache installation Step: LC210 Log in as <lcsid>adm on the primary node that has the shared logical volumes mounted. Serviceguard Extension for SAP on Linux Package Configuration 115

Create a symbolic link that acts as a hook to inform the SAP software where to find the livecache monitoring software, to allow the prescribed interaction with it:
ln -s ${SGCONF}/<LCSID>/saplc.mon /sapdb/<lcsid>/db/sap/lccluster

livecache installation Step: LC215

For the following steps the SAPGUI is required. Log on to the APO central instance as user SAP*. Start transaction /nlc10 and enter LCA for the logical connection:
/nlc10 -> Logical connection -> LCA -> livecache -> Create/Change/Delete Connection
Change the livecache server name to the virtual IP name of the livecache (<reloclc_s>) and save it. Change the livecache instance name to <LCSID>. Redo the above steps for LDA.

APO Setup Changes

Running livecache within a Serviceguard cluster package means that the livecache instance is now configured for the relocatable IP of the package. This configuration needs to be adapted in the APO system that connects to this livecache.

livecache installation Step: GS220

Run SAP transaction LC10 and configure the logical livecache names LCA and LDA to listen to the relocatable IP of the livecache package.

livecache installation Step: GS230

Do the following:
1. Configure the XUSER file in the APO user home and the livecache user home directory. If a .XUSER file does not already exist, you must create it.
2. The XUSER file in the home directory of the APO administrator and of the livecache administrator keeps the connection information and grant information for a client connecting to livecache. The XUSER content needs to be changed to the relocatable IP the livecache is running on.
To list all mappings in the XUSER file, run the following command as <lcsid>adm (and correspondingly as the APO administrator):
su - <lcsid>adm
dbmcli -ux SAPR3,SAP -ul
For livecache 7.6 or higher:
su - <lcsid>adm
dbmcli -ux SAP<LCSID>,<password> -ul
This command produces a list of MaxDB user keys that may be mapped to the livecache database schema via a local hostname. Keys commonly created by SAP include c, w and DEFAULT. If a mapping was created without specifying a key, entries of the form <num><LCSID><hostname>, e.g. 1LC1node1, exist. These will only work on one of the cluster hosts.
To find out whether a given user key mapping <user_key> works throughout the cluster, the relocatable address should be added to the primary host using cmmodnet -a. Run the following command as the APO administrator:
dbmcli -U <user_key>
Type quit to exit the prompt that comes up:
dbmcli on <hostname> : <LCSID>> quit
<hostname> should be the relocatable name. If it is not, the XUSER mapping has to be recreated. Example:
mv .XUSER.62 .XUSER.62.org
The following commands require ksh in order to work.
DEFAULT key:
dbmcli -n <reloclc_s> -d <LCSID> -u SAP<LCSID>,SAP -uk DEFAULT -us \
SAP<LCSID>,SAP -up SQLMODE=SAPR3; TIMEOUT=0; ISOLATION=0;

COLD key:
dbmcli -n <reloclc_s> -d <LCSID> -u control,control -uk c -us control,control
WARM key:
dbmcli -n <reloclc_s> -d <LCSID> -u superdba,admin -uk w -us superdba,admin
LCA key:
dbmcli -n <reloclc_s> -d <LCSID> -u control,control -uk 1LCA<reloclc_s> -us \
control,control
NOTE: Refer to the SAP documentation for more information about the dbmcli syntax.
After recreating the .XUSER.62 file with the above commands, always check the connection:
# dbmcli -U <user_key>
dbmcli on <reloclc_s> : <LCSID>> quit

livecache installation Step: GS240

Distribute the XUSER mappings by distributing the file /home/<aposid>adm/.XUSER.<version> to all APO nodes that are members of the SGeSAP/LX cluster.

General Serviceguard Setup Changes

Depending on the storage option chosen, the globally available directories need to be added to different existing Serviceguard packages. The following installation steps require that the system has already been configured to use the automounter feature. If this is not the case, refer to installation steps IS730 to IS770 in Chapter 3 of this manual for setting up the automounter.

livecache installation Step: GS250

For Storage Option 1:

livecache installation Step: GS260

Distribute all configuration files from the sapnfs (or dbci<aposid>, db<aposid>) package directory to the cluster nodes on which the package is able to run.

livecache installation Step: GS270

Use cmapplyconf to add the newly configured package(s) to the cluster. cmapplyconf -P verifies the Serviceguard package configuration, updates the binary cmclconfig file and distributes the binary file to all cluster nodes:
cmapplyconf -v -P ${SGCONF}/<LCSID>/lc<LCSID>.config
Use the cmrunpkg command to run the Serviceguard package:
cmrunpkg lc<LCSID>
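After the package is running, a short verification round helps to confirm that the monitor and the livecache came up as expected. This is only a sketch; the log file name corresponds to the control script described above, and <user_key> is one of the XUSER keys configured earlier:
cmviewcl -v | grep -A 10 lc<LCSID>                    # package and service status
tail ${SGCONF}/<LCSID>/lc<LCSID>.control.script.log   # SGeSAP/LX package log
su - <lcsid>adm -c "dbmcli -U <user_key> db_state"    # current livecache state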


119 5 Serviceguard Extension for SAP on Linux Cluster Administration A SAP application within a Serviceguard Extension for SAP on Linux cluster is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are nodes of the Serviceguard cluster. The Serviceguard packages provide a SAP adoptive computing layer that keeps the application independent of specific server hardware. The SAP adoptive computing is transparent in many aspects, but in some areas special considerations may apply. This affects the way a system gets administered. Administration topics presented in this chapter include: Change Management Mixed Clusters Switching Serviceguard Extension for SAP on Linux On and Off Change Management Serviceguard has to store information about the cluster configuration. It especially needs to know the relocatable IP addresses and its subnets, storage volume groups, the Logical Volumes and their mountpoints. System Level Changes Serviceguard Extension for SAP on Linux provides some flexibility for hardware change management. For example if you have to maintain the cluster node on which an SAP SCS or SAP ASCS instance is running, this instance can temporarily be moved to the cluster node that runs its Replicated Enqueue without interrupting ongoing work. Some users might experience a short delay in the response time for their ongoing transaction. No downtime is required for the maintenance action. If you add new hardware and SAP software needs access to it to work properly, make sure to allow this access from any host of the cluster by appropriately planning the hardware connectivity. E.g. it is possible to increase database disk space by adding a new shared LUN from a SAN device as physical volume to the shared volume groups on the primary host on which a database runs. The changed volume group configuration has to be redistributed to all cluster nodes afterwards via vgexport(1m) and vgimport(1m). It is a good practice to keep a list of all directories that were identified in Chapter Two to be common directories that are kept local on each node. As a rule of thumb, files that get changed in these directories need to be manually copied to all other cluster nodes afterwards. There might be exceptions. E.g. /home/<sid>adm does not need to be the same on all of the hosts. In clusters that do not use CFS, it is possible to locally install additional Dialog Instances on hosts of the cluster, although it will not be part of any package. SAP startup scripts in the home directory are only needed on this dedicated host. You do not need to distribute them to other hosts. If remote shell access is used, never delete the.rhosts entries of the root user and <sid>adm on any of the nodes. Or in the case of a secure shell setup is being used instead of remote shell access do not delete this setup either. Entries in /etc/hosts, /etc/services, /etc/passwd or /etc/group should be kept unified across all cluster nodes. If you use an ORACLE database, be aware that the listener configuration file of SQL*Net V2 is kept as a local copy in /etc/listener.ora by default. Files in the following directories and all subdirectories are typically shared: /usr/sap/<sid>/dvebmgs<inr> /export/usr/sap/trans (except for stand-alone J2EE) /export/sapmnt/<sid> Chapter Two can be referenced for a full list of cluster shared directories. 
These directories are only available on a cluster node if the package they belong to is running on it. They are empty on all other nodes. The Serviceguard package and the directories for this package fail over to another cluster node together. All directories below /export have an equivalent directory whose fully qualified path comes without this prefix. These directories are managed by the automounter; they are NFS file systems and get mounted automatically as needed.

Servers outside of the cluster that have External Dialog Instances installed are set up in a similar way. Refer to /etc/auto.direct for a full list of automounter file systems of Serviceguard Extension for SAP on Linux.
It enhances the security of the installation if the directories below /export are exported without root permissions. The effect is that the root user cannot modify these directories or their contents. With standard permissions set, the root user cannot even see the files. If changes need to be done as the root user, the equivalent directory below /export on the host the package runs on can be used as the access point.
If the Serviceguard Extension for SAP on Linux package (sapnfs) of the NFS server host is not available, it is possible that a logon attempted as the <sid>adm user will hang. The reason for this is that /usr/sap/<sid>/sys/exe is part of the path of the <sid>adm user. This directory is linked to /sapmnt/<sid>, which in fact is handled by the automounter. If the (sapnfs) package is down, the automounter cannot contact the virtual host of that Serviceguard package (the virtual IP address is disabled when the package is halted). This is the reason why a local copy of the SAP executables is kept on each cluster node.
NOTE: If the database package with HA NFS services is halted, you cannot log in as <sid>adm unless you keep a local copy of the SAP binaries using sapcpe.
To allow proper troubleshooting, the level of information logged can be adjusted by changing the parameter DEBUG_LEVEL in the sap.config file. If problems with a Serviceguard package startup remain, a more general debug mode can be activated by creating a "debug" file:
touch ${SGCONF}/debug
All Serviceguard Extension for SAP on Linux packages on that cluster node will now be in debug mode. If the file is created in the package directory ${SGCONF}/<SID>, it only turns on the debug mode for packages in that directory. The debug mode allows a SGeSAP/LX package to start up (mount the appropriate storage volumes and enable virtual IP addresses) without executing any of the SAP specific startup commands from within that package. Service monitors will be started, but they will not report failures as long as the debug mode is turned on. With the debug mode enabled it is then possible to attempt manual startups of the database and/or SAP software. All rules of manual troubleshooting of SAP instances now apply. As the storage volumes for this package have now been mounted, it is possible to access the SAP application work directories to check trace files. After the error has been diagnosed, the "debug" file has to be removed to resume normal operation:
rm ${SGCONF}/debug
If the package still runs, all monitors will begin to work immediately and the package failover mechanism is restored.

SAP Software Changes

During the configuration of SAP on Serviceguard Extension for SAP on Linux, the SAP profiles (with kernel <7.0) are modified to contain only virtual IP addresses instead of the initial physical IP addresses. This can be checked using SAP transaction RZ10.
In DEFAULT.PFL these entries are altered:
SAPDBHOST = <relocatable_db_name>
rdisp/mshost = <relocatable_ci_name>
rdisp/vbname = <relocatable_ci_name>_<SID>_<INR>
rdisp/enqname = <relocatable_ci_name>_<SID>_<INR>
rdisp/btcname = <relocatable_ci_name>_<SID>_<INR>
rslg/collect_daemon/host = <relocatable_ci_name>
There are also two additional profile parameters, SAPLOCALHOST and SAPLOCALHOSTFULL, included as part of the Instance Profile of the Central Instance. Anywhere SAP uses the local hostname internally, this name is replaced by the relocatable values <relocatable_ci_name> or <relocatable_ci_name>.domain.organization of these parameters. Make sure that they are always defined and set to the correct value. This is vital for the system to function correctly.
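A simple way to verify this on the operating system level is to search the profile directory for leftover physical hostnames. This is only a sketch; <physhost> stands for the physical hostname of the node the system was originally installed on:
grep -l <physhost> /sapmnt/<SID>/profile/*
# no clustered profile should be listed; any file that does appear still
# references the physical hostname and needs to be corrected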

Relocatable IP addresses can be used as of more recent SAP kernel releases. Older releases use local hostnames in profile names and startup script names; renamed copies of the files or symbolic links had to be created to overcome this issue.
The SAP Spool Work process uses the SAP Application Server name as the destination for print formatting. Use the relocatable name if you plan to use Spool Work processes with your Central Instance. In the case of a failover the SAP print system will then continue to work.
NOTE: Any print job running at the time of the failure will be canceled and needs to be reissued manually after the failover.
To make the SAP print spooler highly available on the Central Instance, set the destination of the printer to <relocatable_ci_name>_<SID>_<NR> using the SAP transaction SPAD. Print all time-critical documents via the highly available spool server of the Central Instance. Print requests to other spool servers stay in the system after a failure until the host is available again and the spool server has been restarted. These requests can be moved manually to other spool servers if the failed server is unavailable for a longer period of time.
Batch jobs can be scheduled to run on a particular instance. Generally speaking, it is better not to specify a destination host at all. Sticking to this rule, the batch scheduler chooses a batch server that is available at the start time of the batch job. However, if you want to specify a destination host, specify the batch server running on the highly available Central Instance. The application server name and the hostname (which is retrieved from the Message Server) are stored in the batch control tables TBTCO, TBTCS, and others. When the batch job is ready to run, the application server name is used to start it. Therefore, when using the relocatable name to create the Application Server name for the instance, you do not need to change batch jobs after a switchover. This is true even if the hostname, which is also stored in the above tables, differs.
Plan to use saplogon with application server groups instead of saptemu/sapgui connections to individual application servers. When logging on to an application server group with two or more application servers, the SAP user does not need a different login procedure if one of the application servers of the group fails. Also, using login groups provides workload balancing between application servers.
Within the CCMS you can define operation modes for SAP instances. An operation mode defines a resource configuration. It can be used to determine which instances are started and stopped and how the individual services are allocated for each instance in the configuration. An instance definition for a particular operation mode consists of the number and types of Work processes as well as Start and Instance Profiles. When defining an instance for an operation mode you need to enter the hostname and the system number of the application server. By using relocatable names to fill in the hostname field, the instance will continue to work after a failover, under control of CCMS, without any change.
NOTE: If an instance is running on the standby node in normal operation and is stopped during the switchover, the CCMS control console shows this instance as down after the switchover.
Only configure the update service on a node for SAP application services running on the same node. The result is that the remaining servers, running on different nodes, are not affected by an outage of the update server.
However, if the update server is configured to be responsible for application servers running on different nodes, any failure of the update server leads to subsequent outages at these nodes. Therefore configure the update server on a clustered instance. The use of local update servers should only be considered, if performance issues require it. Serviceguard Extension for SAP on Linux Administration Aspects Serviceguard Extension for SAP on Linux needs some additional information about the configuration of an SAP environment. This information is stored in the file ${SGCONF}/<SID>/sap.config. For more complicated cases additional configuration files ${SGCONF}/<SID>/sap<pkg_name>.config can be created. A sap<pkg_name>.config file is sometimes used to create a package specific SECTION TWO of the sap.config file in order to achieve proper SAP application server handling. In principle this file serves the same purpose and has the same syntax as the generic sap.configfile. It is very important that the information in this file is consistent with the way your system is configured. This file must be available on all nodes of the cluster. Under normal operation there is no need to modify this file. Comments are provided within the file and should be self-explanatory. The following administration activities are possible but they must be accompanied by an adaptation of the sap.config file on all cluster nodes: Serviceguard Extension for SAP on Linux Administration Aspects 121

The following administration activities are possible, but they must be accompanied by an adaptation of the sap.config file on all cluster nodes:

- changing the SAP System ID
- changing the name of the SAP System Administrator
- migrating to another database vendor
- adding or deleting Dialog Instances
- changing an Instance Name
- changing an Instance Number
- changing the network name belonging to a relocatable address
- changing the name of a Serviceguard Extension for SAP on Linux package
- changing hostnames of hosts inside the cluster
- changing hostnames or IP addresses of hosts that run additional application servers
- changing the location of any SAP-specific directory in the file system
- changing the name of the ORACLE listener

After performing any of the above activities, Serviceguard Extension for SAP on Linux failover has to be tested again to confirm proper operation.

Figure 5-1 Serviceguard Extension for SAP on Linux cluster displayed in the HP Serviceguard Manager GUI

Upgrading SAP Software

Upgrading the SAP applications does not normally require changes to Serviceguard Extension for SAP on Linux. Usually, Serviceguard Extension for SAP on Linux automatically detects the release of the packaged application and treats it appropriately. Make sure that you have a valid Serviceguard Extension for SAP on Linux version in place by issuing the command:

    rpm -qa | grep -i sgesap

Review the release notes of the Serviceguard Extension for SAP on Linux version reported by this command and make sure that the target SAP application release is supported with the installed Serviceguard Extension for SAP on Linux version. Should this not be the case, first update Serviceguard Extension for SAP on Linux. For a clean Serviceguard Extension for SAP on Linux upgrade, Serviceguard Extension for SAP on Linux can be switched off as described in the section below. Perform failover tests for all potential failure scenarios before putting the system into production state.
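Once the package name has been identified, its exact installed version can be queried for comparison against the release notes. This is a hedged sketch; <sgesap_package_name> is a placeholder for whatever package name the previous command reports on your system.

    # Print name-version-release of the installed SGeSAP/LX package
    rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}\n' <sgesap_package_name>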
