Managing Serviceguard Extension for SAP Version A for Linux


Managing Serviceguard Extension for SAP Version A for Linux

HP Part Number: T
Published: December 2012

Legal Notices

Copyright 2012 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. A valid license from HP is required for possession, use, or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

SAP, SAP NetWeaver, Hana, MaxDB, ABAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Intel and Itanium are registered trademarks of Intel Corporation or its subsidiaries in the United States or other countries. Oracle and Java are registered trademarks of Oracle Corporation. IBM and DB2 are registered trademarks of IBM in the United States and other countries. Sybase is a registered trademark of Sybase, Inc., an SAP company. NFS is a registered trademark of Sun Microsystems, Inc. NIS is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of The Open Group. Linux is a U.S. registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat Software, Inc. SUSE is a registered trademark of SUSE AG, a Novell Business.

Contents

1 Overview...5
    About this Manual...5
    Related documentation...
2 SAP cluster concepts...7
    SAP-specific cluster modules...7
    Configuration restrictions...9
    Example 1: Robust failover using the one package concept...9
    Example 2: A mutual failover scenario...11
    Example 3: Follow-and-push with automated enqueue replication...12
    Example 4: Realization of the SAP-recommended SPOF isolation...15
    Example 5: Dedicated failover host...15
    Dedicated NFS packages...16
    Virtualized dialog instances for adaptive enterprises...17
    Handling of redundant dialog instances...
3 SAP cluster administration...19
    Performing cluster administration in the System Management Homepage (SMH)...19
    Performing SAP administration with SAP's startup framework...21
    Change management...25
    System level changes...25
    SAP software changes...26
    Ongoing verification of package failover capabilities...27
    Upgrading SAP software...28
    Package conversion...
4 SAP cluster storage layout planning...30
    SAP Instance Storage Considerations...30
    Option 1: SGeSAP NFS cluster...31
    Common directories that are kept local...31
    Directories that Reside on Shared Disks...32
    Option 2: SGeSAP NFS idle standby cluster...34
    Common directories that are kept local...34
    Directories that reside on shared disks...35
    Database instance storage considerations...35
    Oracle single instance RDBMS...36
    Oracle databases in SGeSAP NFS and NFS Idle standby clusters...36
    MaxDB/liveCache storage considerations...37
    Sybase ASE storage considerations...40
    Special livecache storage considerations...40
    Option 1: Simple cluster with separated packages...40
    Option 2: Non-MaxDB environments...
5 Clustering SAP using SGeSAP packages...42
    Overview...42
    Three phase approach...42
    SGeSAP modules and services...43
    SGeSAP modules...43
    SGeSAP services...43
    Installation options...44
    Serviceguard Manager GUI and Serviceguard CLI...45
    SGeSAP easy deployment...46
    Infrastructure setup, pre-installation preparation (Phase 1)...47

    Prerequisites...47
    Node preparation and synchronization...47
    Intermediate synchronization and verification of virtual hosts...48
    Intermediate synchronization and verification of mount points...48
    Infrastructure setup for NFS toolkit (Phase 1a)...48
    Creating NFS Toolkit package using Serviceguard Manager...48
    Creating NFS toolkit package using Serviceguard CLI...51
    Automount setup...51
    Solutionmanager diagnostic agent file system preparations related to NFS toolkit...52
    Intermediate node sync and verification...52
    Infrastructure Setup - SAP base package setup (Phase 1b)...53
    Intermediate synchronization and verification of mount points...53
    SAP base package with Serviceguard and SGeSAP modules...53
    Creating the package with the Serviceguard Manager...53
    Creating the package configuration file with the CLI...55
    SAP base package with Serviceguard modules only...56
    Creating the package with Serviceguard Manager...56
    Creating the package configuration file with the CLI...57
    Verification steps...58
    SAP installation (Phase 2)...58
    Prerequisite...58
    Installation of SAP instances...58
    Post SAP installation tasks and final node synchronization (Phase 3a)...59
    SAP post installation modifications and checks...59
    User synchronization...61
    Network services synchronization...62
    NFS and automount synchronization...63
    SAP hostagent installation on secondary cluster nodes...63
    Other local file systems and synchronization...63
    Completing SGeSAP package creation (Phase 3b)...64
    Creating SGeSAP package with guided configuration using Serviceguard Manager...65
    Creating SGeSAP package with CLI interface...65
    Module sgesap/sap_global SAP common instance settings...65
    Module sgesap/sapinstance SAP instances...67
    Module sgesap/dbinstance SAP databases...67
    Module sgesap/mdminstance SAP MDM repositories...68
    Module sg/services SGeSAP monitors...69
    Module sg/generic_resource SGeSAP enqor resource...70
    Module sg/dependency SGeSAP enqor MNP dependency...70
    Module sgesap/enqor SGeSAP enqor MNP template...70
    Configuring sgesap/sapextinstance, sgesap/sapinfra and sgesap/livecache...71
    Remote access between cluster nodes and to external application servers...71
    Configuring external instances (sgesap/sapextinstance)...72
    Configuring SAP infrastructure components (sgesap/sapinfra)...74
    Module sgesap/livecache SAP livecache instance...76
    Cluster conversion for existing instances...78
    Converting an existing SAP instance...78
    Converting an existing database...

1 Overview

About this Manual

This document describes how to plan, configure and administer highly available SAP Netweaver systems on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) systems using HP Serviceguard high availability cluster technology in combination with the HP Serviceguard Extension for SAP (HP SGeSAP). To understand this document, knowledge of Serviceguard concepts and commands and familiarity with Linux operating system administration and SAP basis administration are required.

This manual consists of five chapters:

Chapter 1 Overview
Chapter 2 SAP cluster concepts: This chapter gives an introduction to the high-level design of a High Availability SAP server environment.
Chapter 3 SAP cluster administration: This chapter covers both SGeSAP cluster administration for IT basis administrators and clustered SAP administration for SAP basis administrators.
Chapter 4 SAP cluster storage layout planning: This chapter describes the recommended file system and shared storage layout for clustered SAP landscape and database systems.
Chapter 5 SAP cluster configuration: This chapter provides guidelines and configuration details for SGeSAP clusters.

Table 1 Abbreviations

<SID>, <sid>: System ID of the SAP system, RDBMS or other components in uppercase/lowercase
<INSTNAME>: SAP instance, for example, DVEBMGS, D, J, ASCS, SCS, ERS
[A]SCS: Refers to either an SCS or an ASCS instance
<INSTNR>, <INR>: Instance number of the SAP system
<primary>, <secondary>, <local>: Names mapped to local IP addresses of the server LAN
<relocdb_s>, <relocci_s>, <relocdbci_s>: Names mapped to relocatable IP addresses of Serviceguard packages in the server LAN
<primary_s>, <secondary_s>, <local_s>: Names mapped to local IP addresses of the server LAN
<relocdb_s>, <relocci_s>, <relocdbci_s>: Names mapped to relocatable IP addresses of Serviceguard packages in the server LAN
<...>: Other abbreviations are self-explanatory and can be derived from the surrounding context

Related documentation

The following documents contain additional related information:

Serviceguard Extension for SAP Version A on Linux Release Notes
Managing HP Serviceguard A for Linux

HP Serviceguard A for Linux Release Notes
HP Serviceguard Toolkit for NFS version A on Linux User Guide

2 SAP cluster concepts

This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP for Linux (HP SGeSAP/LX) and explains several naming conventions. The following sections provide recommendations and examples for typical cluster layouts that can be implemented for SAP environments.

SAP-specific cluster modules

HP SGeSAP extends HP Serviceguard's failover cluster capabilities to SAP application environments. It is intended to be used in conjunction with the HP Serviceguard Linux product and the HP Serviceguard Toolkit for NFS on Linux. Serviceguard packages can be distinguished into legacy packages and module-based packages. SGeSAP focuses on extending the module-based packaging by providing SAP-specific modules, service monitors, cluster resources, cluster deployment and cluster verification tools, as well as a shared library that makes SAP's startup framework cluster-aware.

There are four major Serviceguard modules delivered with SGeSAP. They allow quick configuration of instance-failover clustering for all mandatory SAP Netweaver software services, that is, the services that constitute the software Single Points of Failure (SPOFs). Most SAP applications rely on two central software services that define the major software Single Points of Failure for SAP environments: the SAP Enqueue Service and the SAP Message Service. These services are traditionally combined and run as part of a unique SAP Instance that is referred to as JAVA System Central Service Instance (SCS) for SAP JAVA applications or ABAP System Central Service Instance (ASCS) for SAP ABAP applications. If an SAP application has both JAVA and ABAP components, it is possible to have both an SCS and an ASCS instance for one SAP application. In this case, both instances are SPOFs that require clustering.

In pure ABAP environments, the term Central Instance (CI) is still in use for a software entity that combines further SAP application services with these SPOFs in a single instance. As any other SAP instance, a Central Instance has an Instance Name. Traditionally it is called DVEBMGS. Each letter represents a service that is delivered by the instance. The "E" and the "M" stand for the Enqueue and Message Service that were identified as SPOFs in the system. Other SAP services can potentially be installed redundantly within additional Application Server instances, sometimes called Dialog Instances. As the instance name DVEBMGS suggests, more services are available within the Central Instance than just those that cause the SPOFs. An undesirable result is that a Central Instance is complex software with a high resource demand. Shutdown and startup of Central Instances is slower and more error-prone.

Starting with SAP Application Server 6.40, the SPOFs of the Central Instance were isolated in the ABAP System Central Service Instance (ASCS). The SAP Application Server installer allows the ASCS to be installed automatically. This installation procedure also creates a standard Dialog Instance called DVEBMGS for compatibility reasons. This DVEBMGS instance provides no Enqueue Service and no Message Service, and is not a Central Instance anymore.

A package that uses the sgesap/sapinstance module can be set up to cluster the SCS and/or ASCS (or Central Instance) of a single SAP application. All instance types and use cases for SAP Netweaver web application server software are covered by module sgesap/sapinstance.
This module allows adding a set of SAP instances that belong to the same Netweaver system to a module-based Serviceguard package. The package can encapsulate the failover entity for a combination of ABAP-stack, JAVA-stack or double-stack instances. NOTE: Split-stack installations require separate packages for each stack. In this case, a package same_node dependency can be defined which ensures that split-stack packages can be handled as a single entity.
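For illustration, a package configuration for a double-stack system that uses this module might contain entries similar to the excerpt below. The SID C11, the instance names, the virtual hostnames and the exact attribute spellings are assumptions for this sketch only; the authoritative attribute names are contained in the template that cmmakepkg generates and are described in chapter 5.

    sgesap/sap_global/sap_system             C11
    sgesap/sapinstance/sap_instance          ASCS40
    sgesap/sapinstance/sap_virtual_hostname  ascsreloc
    sgesap/sapinstance/sap_instance          SCS41
    sgesap/sapinstance/sap_virtual_hostname  scsreloc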

Instance-type specific handling is provided by the module for SAP ABAP Central Service Instances, SAP JAVA Central Service Instances, SAP ABAP Application Server Instances, SAP JAVA Application Server Instances, SAP Central Instances, SAP Enqueue Replication Server Instances, SAP Gateway Instances and SAP Webdispatcher Instances. The module sgesap/mdminstance extends the coverage to the SAP Master Data Management Instance types. The module to cluster SAP livecache instances is called sgesap/livecache.

SGeSAP also covers single-instance database failover with built-in routines. This ensures seamless integration and a uniform look-and-feel of the solution across different database vendor technologies. For Sybase ASE, SAP MaxDB or Oracle-based SAP database services, the module sgesap/dbinstance can be used.

It is possible to combine all the clustered components of a single SAP software stack into one failover package for simplicity and convenience. There is also full flexibility to split components up into several packages to avoid unwanted dependencies and to lower potential failover times. Multiple SAP applications of different types and release versions can be consolidated in different packages of a single cluster.

SGeSAP enables SAP instance virtualization. It is possible to use SGeSAP to manually move SAP Application Server Instances between hosts and quickly adapt to changing resource demands or maintenance needs. All SGeSAP components support Serviceguard Live Application Detach, Serviceguard Package Maintenance Mode, and the SAP Online Kernel Switch technologies to minimize planned downtime. Optional software monitors that check the responsiveness of the major interaction interfaces of the clustered software components are available.

On top of these four major SGeSAP modules, there are three additional modules that enable easy clustering of smaller SAP infrastructure software tools (sgesap/sapinfra), allow manipulating the behavior of non-clustered SAP instances (sgesap/sapextinstance) and handle SAP in-memory replication failover policies (sgesap/enqor). The infrastructure tools covered by sgesap/sapinfra include the SAP sapccmsr, saposcol, rfcadapter, webdispatcher and saprouter binaries. Depending on the installation type, SAP Web Dispatcher can be configured with either sgesap/sapinstance or sgesap/sapinfra.

sgesap/enqor is a special case in that it is a multi-node package (MNP) that must run on all the nodes of the cluster, if ERS instances are installed. It is particularly useful for clusters with more than two nodes. It is an optional package that complements the [A]SCS/ERS package pairs and emulates a Follow-and-Push failover policy between them. Exactly one enqor MNP can exist per cluster. It automatically covers all [A]SCS/ERS package pairs without further configuration.

The module sgesap/all provides a combination of all the above mentioned package functionality, with the exception of the enqor MNP. Module sgesap/all is available for convenience reasons to simplify configuration steps for standard use cases. Only a subset of the SGeSAP modules can be configured in a package that was created based on sgesap/all.

In many cases, SAP clusters require shared access to parts of the storage. This is usually provided via Network Filesystem Services (NFS). The separately available tkit/nfs/nfs module can be included in any SGeSAP package.
It can also run separately as part of a SGeSAP cluster. SGeSAP delivers additional submodules that become included automatically when creating packages with the above mentioned modules. Explicitly using sub-modules during package creation is not allowed.

NOTE: HP recommends that any SGeSAP configuration use only modular-based packages. For more information on modular package creation, see the following sections in chapter 5, Clustering SAP using SGeSAP packages (page 42):

Creating SGeSAP package with easy deployment
Creating SGeSAP package with guided configuration using Serviceguard Manager
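As a sketch of the typical command-line workflow for modular packages (the package and file names are examples, and the module set depends on the chosen cluster layout), a combined package based on sgesap/all could be created as follows:

    cmmakepkg -m sgesap/all sapC11.conf   # generate a package template containing the combined SGeSAP modules
    # edit sapC11.conf: SID, instances, volume groups, file systems, relocatable IP addresses
    cmcheckconf -P sapC11.conf            # validate the package configuration
    cmapplyconf -P sapC11.conf            # distribute the configuration to the cluster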

Configuration restrictions

It is not allowed to specify a single SGeSAP package with two database instances in it.

It is not allowed to specify a single SGeSAP package with a Central Service Instance [A]SCS and its Replication Instance ERS.

Diagnostic Agent instances are not mandatory for SAP line-of-business processes, but they are installed on the relocatable IP address of the corresponding instance and must move with the relocatable IP address. They can be specified with module sgesap/sapextinstance. An issue with the Diagnostic Agent during failover is thus not considered critical for the overall success of the failover operation. sgesap/sapinstance must not be used for Diagnostic Agents.

Combining SCS and ASCS in a single package is not a requirement, but it can help to reduce the complexity of a cluster setup. Under these circumstances, it needs to be considered that the failure of one of the two instances will also cause a failover of the other instance. This might be tolerable in those cases in which SAP replication instances are configured (see below).

sgesap/sapinstance packages can identify the state of a corresponding sgesap/dbinstance package in the same cluster without the requirement of explicitly configuring Serviceguard package dependencies. This information is, for example, used to delay SAP instance package startups while the database is starting in a separate package but is not yet ready to accept connections.

Example 1: Robust failover using the one package concept

In a one-package configuration, the database, NFS, and SAP SPOFs run on the same node at all times and are configured in one SGeSAP package. Other nodes in the Serviceguard cluster function as failover nodes for the primary node on which the system runs during normal operation.

Figure 1 One-package failover cluster concept

Maintaining an expensive idle standby is not required. SGeSAP allows utilizing the secondary node(s) with different instances during normal operation. A common setup installs one or more non-mission critical SAP Systems on the failover nodes, typically SAP Consolidation, Quality Assurance, or Development Systems. They can gracefully be shut down by SGeSAP during failover to free up the computing resources for the critical production system. For modular packages, the sgesap/sapextinstance module can be added to the package to allow specifying this kind of behavior. Development environments tend to be less stable than production systems. This must be taken into consideration before mixing these use-cases in a single cluster. A feasible alternative is to install Dialog Instances of the production system on the failover node.

Figure 2 Visualization of a one-package cluster concept in Serviceguard Manager

If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without any manual intervention needed. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can stay up or can be restarted (triggered by a failover). A sample configuration in Figure 2 (page 11) shows node1 with a failure, which causes the package containing the database and central instance to fail over to node2. A Quality Assurance System and additional Dialog Instances get shut down before the database and Central Instance are restarted.

Example 2: A mutual failover scenario

If you are planning a simple three-tier SAP layout in a two node cluster, use the SGeSAP mutual failover model. This approach distinguishes two SGeSAP packages: one for the database SPOF and the other for the SAP SPOFs as defined above. In small and medium size environments, the database package gets combined with NFS toolkit server functionality to provide all filesystems that are required by the software in both packages. During normal operation, the two packages are running on different nodes of the cluster. The major advantage of this approach is that a failed SAP package will never cause a costly failover of the underlying database, because the database is separated in a different package. The process of failover results in downtime that typically lasts a few minutes, depending on the work in progress when the failover takes place. A main portion of downtime is needed for the recovery of a database. The total recovery time of a failed database cannot be predicted reliably. By tuning the Serviceguard heartbeat on a dedicated heartbeat LAN, it is possible to achieve failover times in the range of about a minute or two for a ci package that contains a lightweight [A]SCS instance without a database.

A cluster can be configured in a way that two nodes back up each other. The basic layout is depicted in Figure 3 (page 12).

Figure 3 Two-package failover with mutual backup scenario

It is a best practice to base the package naming on the SAP instance naming conventions whenever possible. Each package name must also include the SAP System Identifier (SID) of the system to which the package belongs. If similar packages of the same type get added later, they have a distinct namespace because they have a different SID. For example, a simple mutual failover scenario for an ABAP application defines two packages, called dbsid and ascssid (or cisid for old SAP releases).

Example 3: Follow-and-push with automated enqueue replication

In case an environment has very high demands regarding guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism, it is possible to fail over ABAP and/or JAVA System Central Service Instances without impacting ongoing transactions on Dialog Instances.

NOTE: It only makes sense to activate Enqueue Replication for systems that have Dialog Instances that run on nodes different from the primary node of the System Central Service package.

Each SAP Enqueue Service maintains a table of exclusive locks that can temporarily be granted exclusively to an ongoing transaction. This mechanism avoids inconsistencies that could be caused by parallel transactions that access the same data in the database simultaneously.

In case of a failure of the Enqueue Service, the table with all locks that have been granted gets lost. After package failover and restart of the Enqueue Service, all Dialog Instances need to get notified that the lock table content got lost. As a reaction they cancel ongoing transactions that still hold granted locks. These transactions need to be restarted. Enqueue Replication provides a concept that prevents the impact of a failure of the Enqueue Service on the Dialog Instances. Transactions no longer need to be restarted. The Enqueue Server has the ability to create a copy of its memory content to a Replicated Enqueue Service that needs to be running as part of an Enqueue Replication Service Instance (ERS) on a remote host. This is a realtime copy mechanism and ensures that the replicated memory accurately reflects the status of the Enqueue Server at all times. There might be two ERS instances for a single SAP system, replicating SCS and ASCS locks separately.

Figure 4 Follow-and-push failover concept for ABAP and JAVA instances

Enqueue Services also come as an integral part of each ABAP DVEBMGS Central Instance. This integrated version of the Enqueue Service is not able to utilize replication features. The DVEBMGS Instance needs to be split up into a standard Dialog Instance and an ABAP System Central Service Instance (ASCS). The SGeSAP packaging of the ERS Instance provides startup and shutdown routines, failure detection, split-brain prevention and quorum services to the mechanism. SGeSAP also delivers a service monitor, sapenqor.mon, that is meant to be configured as the sole part of an enqor multi-node package. This MNP maintains a generic resource of type EnqOR (Enqueue Operation Resource) for each Replicated Enqueue.

An EnqOR resource is referred to by the system as sgesap.enqor_<sid>_ers<instnr>. Setting up the enqor MNP implements a protected follow-and-push behavior for the two packages that include enqueue and its replication. As a result, an automatism makes sure that Enqueue and its Enqueue Replication Server are never started on the same node initially. Enqueue will not invalidate the replication accidentally by starting on a non-replication node while replication is active elsewhere. It is possible to move the package with the replication server to any free node in a multi-node cluster without a requirement to reconfigure the enqueue package failover policy.

Figure 5 Visualization of a follow-and-push cluster in Serviceguard Manager

During failover of Enqueue, its replication is located dynamically and the Enqueue restarts on the currently active replication node. Enqueue synchronizes with the local replication server. As a next step, the package with the replication service shuts down automatically and restarts on a healthy node, if available. In case of a failover in a multi-node environment, this implements a self-healing capability for the replication function. Enqueue will fail over to any node from the list of statically configured hosts if no replication package is running.

Two replication instances are required if Enqueue Replication Services are to be used for both the JAVA stack and the ABAP stack. Several configuration options derive from this approach. In most cases, it is the best practice to create separate packages for ASCS, SCS, and the two ERS instances. It is also supported to combine the two replication instances within one SGeSAP package. Combining ASCS and SCS in one package is supported, but only if the two ERS instances are combined in another package. To combine ASCS and SCS in one package and keep the two ERS instances in two separate packages is not supported; otherwise, situations can arise in which a failover of the combined ASCS/SCS package is not possible. Finally, ASCS cannot be combined with its ERS instance (AREP) in the same package. For the same reason, SCS cannot be combined with its ERS instance (REP).
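A minimal command sketch for creating the optional enqor MNP described above follows. The package file name and package name are examples; apart from generic package settings, the template typically needs no SAP-specific editing, because the MNP covers all [A]SCS/ERS package pairs automatically.

    cmmakepkg -m sgesap/enqor enqor.conf   # generate the multi-node package template
    cmcheckconf -P enqor.conf
    cmapplyconf -P enqor.conf
    cmrunpkg enqor                         # the MNP then runs on all configured cluster nodes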

1. SAP self-controlled, using High Availability polling with Replication Instances on each cluster node (active/passive).
2. Completely High Availability failover solution controlled, with one virtualized Replication Instance per Enqueue.

SGeSAP implements the second concept and avoids costly polling and complex data exchange between SAP and the High Availability cluster software. There are several SAP profile parameters that are related to the self-controlled approach. Most of these parameters have names that start with the string enque/enrep/hafunc_. They will not have any effect in SGeSAP clusters.

Example 4: Realization of the SAP-recommended SPOF isolation

SAP offers a formal HA cluster API certification for Netweaver 7.3x and above. SGeSAP implements the certification requirements as specified by SAP. The approach requires isolating the SAP Central Service software SPOFs in packages separate from non-SAP software packages, such as the NFS package and the database package. The obvious advantage of this approach is that a failing software component never causes a failover of still correctly running software that could be configured in the same package.

Figure 6 Visualization of the Serviceguard cluster layout for SAP certification

Correctly set up clusters of this type are capable of providing the highest level of availability that is technically possible. SGeSAP provides tools to master the complexity of SAP-recommended cluster configurations.

NOTE: SGeSAP provides an easy deployment functionality (see deploysappkgs(1)) that generates all required package configuration files for a SAP System. The tool has an option that allows single-step creation of configuration files that are compliant with the SAP certification requirements. This minimizes the likelihood of running misconfigured systems.
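A sketch of the easy deployment workflow follows. The option that selects the SAP-certified multi-package layout is documented in the deploysappkgs(1) manpage and is shown here only as a placeholder; the SID C11 is an example.

    deploysappkgs <option> C11                  # generates the package configuration files for SAP system C11
    cmcheckconf -P <generated configuration file>
    cmapplyconf -P <generated configuration file>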

Example 5: Dedicated failover host

More complicated clusters that consolidate a couple of SAP applications often have a dedicated failover server. While each SAP application has its own set of primary nodes, there is no need to also provide a failover node for each of these servers. Instead, there is one commonly shared secondary node that is capable of replacing any single failed primary node. Often, some or all of the primary nodes are partitions of a large server.

Figure 7 Dedicated failover server

Figure 7 (page 16) shows an example configuration. The dedicated failover host can serve many purposes during normal operation. With the introduction of Replicated Enqueue Servers, it is a good practice to consolidate a number of Replicated Enqueues on the dedicated failover host. These replication units can be halted at any time without disrupting ongoing transactions for the systems they belong to. They are sacrificed in emergency conditions when a failing database and/or Central Service Instance needs the spare resources.

Dedicated NFS packages

Small clusters with only a few SGeSAP packages usually provide NFS by combining the NFS toolkit package functionality with the SGeSAP packages that contain a database component. The NFS toolkit is a separate product with a set of configuration and control files that must be customized for the SGeSAP environment. It needs to be obtained separately. NFS is delivered in a distributed fashion with each database package serving its own file systems. By consolidating this into one package, all NFS serving capabilities can be removed from the database packages. In complex, consolidated environments with several SGeSAP packages, it is of significant help to use one dedicated NFS package instead of blending this functionality into existing packages.

A dedicated SAPNFS package is specialized to provide access to shared filesystems that are needed by more than one mySAP component. Typical filesystems served by SAPNFS would be the common SAP transport directory or the global MaxDB executable directory of MaxDB 7.7. The MaxDB client libraries are part of the global MaxDB executable directory, and access to these files is needed by APO and liveCache at the same time. Beginning with MaxDB 7.8 isolated installations, each database installation keeps a separate client. SGeSAP setups are designed to avoid NFS shared filesystems with heavy traffic if possible. For many implementations, this gives the option to use one SAPNFS package for all NFS needs in the SAP consolidation cluster without the risk of creating a serious performance bottleneck. NFS might still be required in configurations that use Cluster File Systems in order to provide access to the SAP transport directories for SAP instances that run on hosts outside of the cluster.

Virtualized dialog instances for adaptive enterprises

Databases and Central Instances are Single Points of Failure. ABAP and JAVA Dialog Instances can be installed in a redundant fashion, so that, in theory, additional SPOFs in Dialog Instances are avoided. It is nevertheless possible to configure systems in a way that includes SPOFs on Dialog Instances. A simple example for the need of a SAP Application Server package is to protect dedicated batch servers against hardware failures. Any number of SAP Application Server instances can be added to a package that uses the module sgesap/sapinstance.

Dialog Instance packages provide a simple approach to achieving abstraction from the hardware layer. It is possible to shift Dialog Instance packages around between servers at any given time. This might be desirable if the CPU resource consumption becomes poorly balanced due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this. A Dialog Instance can also be moved to a standby host to allow planned hardware maintenance for the node it was running on. To simulate this flexibility, you can install Dialog Instances on every host and activate them if required. This might be a feasible approach for many purposes and saves the need to maintain virtual IP addresses for each Dialog Instance. But there are ways that SAP users unintentionally create additional short-term SPOFs during operation if they reference a specific instance via its hostname. This can happen, for example, during batch scheduling. With Dialog Instance packages, the system becomes invulnerable to this type of user error. Dialog Instance virtualization packages provide high availability and flexibility at the same time. The system becomes more robust using Dialog Instance packages. The virtualization allows moving the instances manually between the cluster hosts on demand.

Handling of redundant dialog instances

Non-critical SAP Application Servers can be run on HP-UX, SLES or RHEL Linux application server hosts. These hosts do not need to be part of the Serviceguard cluster. Even if the additional SAP services are run on nodes in the Serviceguard cluster, they are not necessarily protected by Serviceguard packages. All non-packaged ABAP instances are subsequently called Additional Dialog Instances or, sometimes synonymously, Additional SAP Application Servers to distinguish them from mission-critical Dialog Instances. An additional Dialog Instance that runs on a cluster node is called an Internal Dialog Instance.
External Dialog Instances run on HP-UX or Linux hosts that are not part of the cluster. Even if Dialog Instances are external to the cluster, they may be affected by package startup and shutdown. For convenience, Additional Dialog Instances can be started, stopped or restarted with any SGeSAP package that secures critical components. Some SAP applications require the whole set of Dialog Instances to be restarted during failover of the Central Service package. This can be triggered with SGeSAP.

It helps to better understand the concept if one considers that all of these operations for non-clustered instances are inherently non-critical. If they fail, this failure won't have any impact on the ongoing package operation. A best-effort attempt is made, but there are no guarantees that the operation succeeds. If such operations need to succeed, package dependencies in combination with SGeSAP Dialog Instance packages need to be used. Dialog Instances can be marked to be of minor importance. They will then be shut down if a critical component fails over to the host they run on, to free up resources for the non-redundant packaged components. Additional Dialog Instances never get reflected in package names. The described functionality can be achieved by adding the module sgesap/sapextinstance to the package (see the configuration sketch after the following list).

NOTE: Declaring non-critical Dialog Instances in a package configuration does not add them to the components that are secured by the package. The package won't react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages that were explained in the previous section.

If Additional Dialog Instances are used, then follow these rules:

Use saplogon with Application Server logon groups. When logging on to an application server group with two or more Dialog Instances, you don't need a different login procedure even if one of the Application Servers of the group fails. Also, using login groups provides workload balancing between Application Servers.

Avoid specifying a destination host when defining a batch job. This allows the batch scheduler to choose a batch server that is available at the start time of the batch job. If you must specify a destination host, specify the batch server running on the Central Instance or a packaged Application Server Instance.

Print requests stay in the system until a node is available again and the Spool Server has been restarted. These requests could be moved manually to other spool servers if one spool server is unavailable for a long period of time. An alternative is to print all time-critical documents through the highly available spool server of the central instance.

Configuring the Update Service as part of the packaged Central Instance is recommended. Consider using local update servers only if performance issues require it. In this case, configure Update Services for application services running on the same node. This ensures that the remaining SAP Instances on different nodes are not affected if an outage occurs on the Update Server. Otherwise, a failure of the Update Service will lead to subsequent outages at different Dialog Instance nodes.
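As a sketch of how non-critical instances might be declared with sgesap/sapextinstance, a package configuration could contain entries like the ones below. The attribute names and values are assumptions for illustration only; the authoritative names are in the template generated by cmmakepkg and in the section Configuring external instances (sgesap/sapextinstance) in chapter 5.

    sgesap/sapextinstance/sap_ext_instance  D03
    sgesap/sapextinstance/sap_ext_system    QAS
    sgesap/sapextinstance/sap_ext_host      node2
    # further sap_ext_* attributes control whether the instance is started, stopped or
    # restarted during package operations (see chapter 5)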

3 SAP cluster administration

In SGeSAP environments, SAP application instances are no longer considered to run on dedicated (physical) servers. They are wrapped up inside one or more Serviceguard packages, and packages can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard packages provide a server virtualization layer. The virtualization is transparent in most aspects, but in some areas special considerations apply. This affects the way a system gets administered. This chapter discusses the following topics:

Performing cluster administration in the System Management Homepage (SMH)
Performing SAP administration with SAP's startup framework
Change management activities

Performing cluster administration in the System Management Homepage (SMH)

SGeSAP packages can be administered using HP SMH. After login to SMH, click the Serviceguard icon to access the Serviceguard Manager pages. Choose a cluster that has SGeSAP installed. In the Map and Large Scale Grid views, from the View window on the right side of a page, move the cursor over a package icon to display the package information pop-up window. Each SGeSAP package is identified as such in the package pop-up information under the Toolkits heading. In the Table view, the toolkit is listed in the Type column of the Package Status table.

Figure 8 Pop-up information for SGeSAP toolkit package

To run, halt, move or enable maintenance on a SGeSAP toolkit package: From the View window on the right side of the Serviceguard Manager Main page, right click on a package icon to bring up the Operations menu, then click Run Package, Halt Package, or Enable Package Maintenance to bring up the screen(s) that allow you to perform each of these operations. You can also perform administrative tasks by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen. Select the package on which you want to perform administrative tasks, and then click Administration in the menu bar to display a drop-down menu of administrative tasks. Click the task you want to perform to bring up the screen(s) associated with performing that task.
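The same run, halt and status operations are also available through the standard Serviceguard CLI. A brief sketch (the package name dbciC11 is an example):

    cmrunpkg dbciC11        # start the package on an eligible node
    cmhaltpkg dbciC11       # halt the package
    cmviewcl -v -p dbciC11  # display detailed status information for the package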

NOTE: Enabling package maintenance allows you to temporarily disable the cluster functionality for the SAP instances of any SGeSAP package. The configured SGeSAP monitoring services tolerate any internal SAP instance service state while maintenance mode is activated. SAP support personnel might request or perform maintenance mode activation as part of reactive support actions. Similarly, you can use the Serviceguard Live Application Detach (LAD) mechanism to temporarily disable the cluster for the whole node.

Figure 9 Package administration tasks

To view SGeSAP toolkit configuration settings: From the View window on the right hand side of the Serviceguard Manager Main page, left click on the package name below a package icon to bring up the Package Properties screen for that package. The Package Properties screen contains detailed package configuration information. The Package Properties screen can also be accessed by clicking the Packages tab on the Main page to bring up the Packages screen, then clicking on a package name in the Package Name column. To return to the Main page, click Cluster Properties in the bottom right-hand corner of the Package Properties screen.

Figure 10 sgesap/sapinstance module configuration overview for a replication instance

To monitor a SGeSAP toolkit package: Check the badges next to the SGeSAP package icons in the main view. Badges are tiny icons that are displayed to the right of the package icon. Any Serviceguard Failover Package can have Status, Alert, and HA Alert badges associated with it. In addition to the standard Serviceguard alerts, SGeSAP packages report SAP application-specific information via this mechanism. The additional data provides a more complete picture of the current and expected future availability level of the SAP application. The Alert and HA Alert badges are clickable icons; they are linked to the corresponding component's alert page.

Figure 11 Package alerts

To update (edit) a SGeSAP toolkit package configuration: From the View window on the right hand side of the Serviceguard Manager Main page, right click on a package icon to bring up the Operations Menu, then click Edit a Package to bring up the first in a series of screens where you can make edits to package properties. A package can also be edited by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen, then clicking on the package you want to update in the Package Name column to bring up the Package Properties screen. Click Edit This Package in the upper left-hand corner of the Package Properties screen to bring up the first in a series of screens where you can make edits to package properties.

Performing SAP administration with SAP's startup framework

This section describes in which ways a clustered SAP system behaves differently from a non-clustered SAP system and how it responds to SAP Netweaver 7.x standard administration commands. SAP Netweaver 7.x standard administration includes sapcontrol operations triggered by SAP system administrators, that is, <sid>adm users that are logged in to the Linux operating system, and it includes remote SAP basis administration access via the SAP Management Console (SAP MC) or SAP's plugin for Microsoft Management Console (SAP MMC).

The SAP Netweaver 7.x startup framework is made up of a host control agent (hostctrl) software process that runs on each node of the cluster and a sapstart service agent (sapstartsrv) software per SAP instance. SGeSAP does not interfere with the host control agents, but interoperates with the sapstart service agents during instance start, stop and monitoring operations. SGeSAP cannot tolerate the administrative usage of the SAP 4.x and 6.x startsap and stopsap scripts. They must be avoided. SGeSAP can use these scripts internally for startup and shutdown in case the startup framework is not available, but its monitors would not be able to judge whether an instance is down because of a failure or because of a stopsap operation. SGeSAP triggers an instance restart or an instance failover operation in reaction to a stopsap call. The startsap/stopsap scripts are not recommended by SAP with Netweaver 7.x and must not be used anymore. It is recommended to configure startup framework agents for older SAP releases, too.

NOTE: startsap and stopsap operations must not be used in clusters that have SAP software monitors configured. sapcontrol operations can be used instead. For more information on how to use sapcontrol with old SAP releases, see the corresponding SAP note.

Without a cluster, each sapstart service agent is statically registered in the SAP configuration of the host on which its SAP instance was installed. In a cluster, such registrations become dynamic. The cluster package start operations perform registration of the required agents, and the cluster package shutdown operations include deregistration routines. After cluster package start, all required startup agents are registered and running. After cluster package halt, these startup agents are halted and not registered. As a consequence, the attempt to start a SAP startup agent after bringing down the instance package that it belongs to must fail, because the agent is not registered.

NOTE: sapcontrol -nr <instnr> -function StartService <SID> operations are usually not required in SGeSAP environments. They fail if the package of the instance is down.

A clustered SAP instance might be accompanied by one or more SGeSAP service monitors that regularly check whether the instance is up and running and whether it is responsive to service requests. For this activity, the sapstart service agents are utilized. For the monitoring to operate it is thus mandatory that the sapstart services remain running. sapcontrol -nr <instnr> -function StopService operations degrade the cluster monitoring capabilities. SGeSAP has fallback mechanisms to monitoring routines that don't require a running startup agent, but the monitoring becomes less reliable without the agent. To reestablish reliable monitoring capabilities and to reenable remote administration console access, the cluster might choose to restart manually halted startup service agents immediately. sapcontrol -nr <instnr> -function StopService operations for the software single points of failure therefore have the same effect as sapcontrol -nr <instnr> -function RestartService operations.
The cluster awareness of the sapstart service agent itself becomes activated by specifying the SGeSAP cluster library in the profile of the corresponding SAP instance:

For SLES: service/halib=/opt/cmcluster/lib/saphpsghalib.so
For RHEL: service/halib=/usr/local/cmcluster/lib/saphpsghalib.so

With this parameter being active, the sapstart service agent notifies the cluster software of any triggered instance halt. Planned instance downtime does not require any preparation of the cluster. A running sapstart service agent needs to be restarted in order for the parameter to become effective.
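As an illustration of these activation steps (the instance number 40 and the SLES path are examples; the profile line is the one listed above), the library can be enabled and the agent restarted as follows:

    # entry in the SAP instance profile on a SLES node:
    service/halib = /opt/cmcluster/lib/saphpsghalib.so

    # restart the sapstart service agent so that the parameter takes effect:
    sapcontrol -nr 40 -function RestartService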

During startup of the instance startup framework, a SAP instance with the SGeSAP HA library configured prints the following messages in the sapstartsrv.log file located in the instance work directory:

SAP HA Trace: HP SGeSAP<versioninfo> (SG) <versioninfo> cluster-awareness
SAP HA Trace: Cluster <clustername> is up and stable
SAP HA Trace: Node <hostname> is up and running
SAP HA Trace: SAP_HA_Init returns: SAP_HA_OK
...

During startup of the instance startup framework, a SAP instance without the SGeSAP HA library configured prints the following message in the sapstartsrv.log file located in the instance work directory:

No halib defined => HA support disabled

NOTE: Within a single Serviceguard package it is possible to mix instances having the HA library configured with instances not having the HA library configured.

A subsequent startup or shutdown of an instance triggers the startup framework to dynamically discover a package that has the instance configured. A corresponding sapstartsrv.log entry is as follows:

SAP HA Trace: Reported package name is <packagename>

CAUTION: It might not be safe to stop an instance that has HA support disabled. Cluster software monitors will cause a failover of the halted instance and all other software instances configured in the same cluster package to the secondary node. You can stop the instance if software monitoring is not used or if package maintenance mode is activated. Ask the cluster administrator for details on a specific configuration.

While the cluster package is running, <sid>adm can issue the following command for a SAP instance with the HA library configured:

sapcontrol -nr <instnr> -function Stop

Usually, the SAP instance shuts down as if there were no cluster configuration. The cluster package continues to run and the filesystems remain accessible. The sapstartsrv.log file reports:

trusted unix domain socket user is stopping SAP System
SAP HA Trace: Reported package name is ERS41SYA
SAP HA Trace: Reported resource name is SYA_ERS41
SAP HA Trace: SAP_HA_FindSAPInstance returns: SAP_HA_OK
SAP HA Trace: SAP_HA_StopCluster returns: SAP_HA_STOP_IN_PROGRESS

Depending on the service monitors that are configured for the instance, one or more operations are logged to the package log file /var/adm/cmcluster/log/<packagename>.log in the subsequent monitoring intervals:

<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Manual start operation detected for DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Manual stop in effect for DVEBMGS41

Other methods provided by SAP's sapcontrol command for instance shutdown work in a similar way. HP Serviceguard Manager displays a package alert (see Figure 11 (page 21)) that lists the manually halted instances of a package. The SGeSAP software service monitoring for a halted instance is automatically suspended until you restart the instance.

The cluster package configuration also allows blocking any administrator-driven instance stop attempt via the SAP startup framework. In this case, if a stop operation is triggered anyway, the sapstartsrv.log file contains the following entries:

trusted unix domain socket user is stopping SAP System
SAP HA Trace: Reported package name is ERS41SYA
SAP HA Trace: Reported resource name is SYA_ERS41
SAP HA Trace: SAP_HA_FindSAPInstance returns: SAP_HA_OK
SAP HA Trace: sap_stop_blocked=yes is set in package config
SAP HA Trace: The stop request is blocked by the cluster

NOTE: If the SGeSAP HA library is configured in the SAP instance profile, SAP system administrators can stop and restart clustered Netweaver instances without interacting with the cluster software explicitly. Instance status is visualized in the Serviceguard Manager GUI, which continues to provide cluster administrators with a full picture of the components that are up. The SGeSAP monitoring suspends operation while the instance is manually stopped. Packages that have several Netweaver instances configured continue to monitor all the instances that are not manually halted. If any actively monitored instance fails, it results in a failover and restart of the whole package.

One of the methods to restart a manually halted instance is to issue the following command:

sapcontrol -nr <instnr> -function Start

Any other startup method provided by SAP's sapcontrol command works in a similar way. Example of messages added to the package log:

<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Manual start operation detected for DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Resume monitored operation of DVEBMGS41

If the instance fails to start, the service monitor enters the yellow state. The yellow state is printed as a warning to the package log and displayed as a package alert in the HP Serviceguard Manager:

<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Resume monitored operation of DVEBMGS41
<date> root@<node> sapdisp.mon[xxx]: (sapdisp.mon,dispmon_monitors): WARNING: Dispatcher of DVEBMGS41 - monitor state:yellow,2

The service monitor remains in the yellow state for up to five monitoring intervals. Then, it changes to the red state and fails the package with the next monitoring interval. If another instance halt operation is issued while the monitor is in the yellow or red state, the monitoring is suspended again and the package failover is prevented. This occurs regardless of whether the manual halt succeeds or not. It is an effective way to prevent undesirable failovers.
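A sketch of a typical manual stop/restart cycle for a clustered instance under a running package follows; the instance number 41 matches the DVEBMGS41 examples above, and the package name is a placeholder.

    sapcontrol -nr 41 -function Stop            # stop the instance; the package keeps running
    sapcontrol -nr 41 -function GetProcessList  # verify that the instance processes are down
    cmviewcl -v -p <packagename>                # the package is still reported as up
    sapcontrol -nr 41 -function Start           # restart the instance; SGeSAP monitoring resumes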

Triggering a package halt is possible whether instances of the package are currently halted or not. The operation causes the cluster to lose information about all the instances that were manually halted during the package run.

NOTE: Activating package maintenance mode is a way to pause all SGeSAP service monitors of a package immediately, but it can only be triggered with Serviceguard commands directly. While package maintenance mode is active, failover of the package is disabled. Maintenance mode also works for instances without the HA library configured.

Change management

Serviceguard manages the cluster configuration. Among the vital configuration data are the relocatable IP addresses and their subnets, the volume groups, the logical volumes and their mountpoints. If you change this configuration for the SAP system, you have to change and reapply the cluster configuration accordingly.

System level changes

Do not delete the secure shell setup. Do not delete mutual .rhosts entries of <sid>adm on any of the nodes, if they still exist. Entries in /etc/hosts, /etc/services, /etc/passwd or /etc/group must be kept unified across all nodes.

Directories below /export have an equivalent directory whose fully qualified path comes without this prefix. These directories are managed by the automounter. NFS file systems get mounted automatically as needed. Servers outside of the cluster that have External Dialog Instances installed are set up in a similar way. See /etc/auto.direct for a full list of automounter file systems of SGeSAP. It enhances the security of the installation if the directories below /export are exported without root permissions. The root user cannot modify these directories or their contents. With standard permissions set, the root user cannot even see the files. If changes need to be done as root, the equivalent directory below /export on the host the package runs on can be used as access point.

If the system is badly configured, it might be possible that the system hangs if a logon is attempted as the <sid>adm user. The reason for this is that /usr/sap/<sid>/sys/exe is part of the path of <sid>adm. Without local binaries, this directory links to /sapmnt/<sid>, which is handled by the automounter. The automounter cannot contact the host belonging to the relocatable address that is configured because the package is down, so the system hangs. To avoid this, always keep local copies of the executables.

NOTE: If the database package with NFS services is halted, you may not be able to log in as <sid>adm unless you keep a local copy of the SAP binaries using sapcpe.

To allow proper troubleshooting, there is a verbose package startup log in the Serviceguard log directory. It must be checked first in case of trouble. The level of information can be adjusted by changing the log_level package parameter. If problems with package startup remain, a general debug mode can be activated by creating an SGeSAP debug flag file (touch debug_<packagename>) in the Serviceguard run directory location, which is:

/usr/local/cmcluster/run on RHEL
/opt/cmcluster/run on SLES
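For example, on a RHEL node the debug mode for a package could be toggled as follows (the package name is a placeholder):

    touch /usr/local/cmcluster/run/debug_<packagename>   # activate SGeSAP debug mode for the package
    # ... perform manual startup attempts and troubleshooting ...
    rm /usr/local/cmcluster/run/debug_<packagename>      # remove the flag to restore monitoring and failover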

The debug mode allows package startup up to the level of SAP-specific steps. All instance startup attempts are skipped. Service monitors are started, but they do not report failures as long as the debug mode is turned on. In this mode it is possible to attempt manual startups of the database and/or SAP software. All rules of manual troubleshooting of SAP instances now apply. For example, it is possible to access the application work directories of the SAP instance to have a look at the trace files.
CAUTION: Make sure that all debug flag files are removed before a system is handed back to production use.
NOTE: Partial package startup can likewise be used as an SAP startup troubleshooting method.
The debug behavior is different from package maintenance mode: the debug file does not disable package failover, and it allows a partial startup of the package in addition to a package in running state. Startup with debug mode starts all the SGeSAP service monitors, but not the monitored application software. The monitors suspend execution until the debug file is removed. It is not required to halt the package before package operations can be tested. If a package halt operation is issued while the debug file exists, all SAP-specific routines in the package shutdown logic are executed. Clustered SAP software components that were absent during package startup in debug mode, but were manually started during subsequent debugging operations, are stopped with the standard package halt routines. Make sure to remove the debug file at the end of the debugging operations. If the package still runs, all monitors begin to work immediately and the package failover mechanism is restored.
SAP software changes
During installation of the SGeSAP Integration for SAP releases with kernel < 7.0, SAP profiles are changed to contain only relocatable IP addresses for the database as well as the Central Instance. You can check these using transaction RZ10. In file DEFAULT.PFL these entries are altered:
SAPDBHOST = <relocatable_db_name>
rdisp/mshost = <relocatable_ci_name>
rdisp/vbname = <relocatable_ci_name>_<sid>_<inr>
rdisp/enqname = <relocatable_ci_name>_<sid>_<inr>
rdisp/btcname = <relocatable_ci_name>_<sid>_<inr>
rslg/collect_daemon/host = <relocatable_ci_name>
The additional profile parameters SAPLOCALHOST and SAPLOCALHOSTFULL are included as part of the Instance Profile of the Central Instance. Anywhere SAP uses the local hostname internally, this name is replaced by the relocatable values <relocatable_ci_name> or <relocatable_ci_name>.domain.organization of these parameters. Make sure that they are always defined and set to the correct value. This is vital for the system to function correctly.
Relocatable IP addresses can be used consistently with recent SAP kernel releases; older releases use local hostnames in profile names and startup script names. Renamed copies of the files or symbolic links exist to overcome this issue.
The destination for print formatting, which is done by a Spool Work process, uses the Application Server name. Use the relocatable name if you plan to use Spool Work processes with your Central Instance. In this case, no changes need to be made after a failover; printing continues to work.
NOTE: Any print job in process at the time of the failure is canceled and must be reissued manually after the failover.
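To illustrate the profile entries listed earlier in this section, a DEFAULT.PFL fragment and Instance Profile additions for a hypothetical system C11 with Central Instance number 12 and relocatable names relocci/relocdb might look as follows; all names are examples, not defaults.
# DEFAULT.PFL excerpt - relocatable hostnames instead of physical ones
SAPDBHOST = relocdb
rdisp/mshost = relocci
rdisp/vbname = relocci_C11_12
rdisp/enqname = relocci_C11_12
rdisp/btcname = relocci_C11_12
rslg/collect_daemon/host = relocci
# Instance Profile of the Central Instance
SAPLOCALHOST = relocci
SAPLOCALHOSTFULL = relocci.example.com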
To make a print spooler highly available in the Central Instance, set the destination of the printer to <relocatable_ci_name>_<sid>_<nr> using the transaction SPAD. Print all time-critical documents via the highly available spool server of the Central Instance.

27 Print requests to other spool servers stay in the system after failure until the host is available again and the spool server has been restarted. These requests can be moved manually to other spool servers if the failed server is unavailable for a longer period of time. Batch jobs can be scheduled to run on a particular instance. Generally speaking, it is better not to specify a destination host at all. Sticking to this rule, the batch scheduler chooses a batch server that is available at the start time of the batch job. However, if you want to specify a destination host, specify the batch server running on the highly available Central Instance. The application server name and the hostname (retrieved from the Message Server) are stored in the batch control tables TBTCO,TBTCS,... In case a batch job is ready to run, the application server name will be used to start it. Therefore, when using the relocatable name to build the Application Server name for the instance, you do not need to change batch jobs that are tied to it after a switchover. This is true even if the hostname, that is also stored in the above tables, differs. Plan to use saplogon to application server groups instead of saptemu/sapgui to individual application servers. When logging on to an application server group with two or more application servers, the SAP user does not need a different login procedure if one of the application servers of the group fails. Also, using login groups, provides workload balancing between application servers, too. Within the CCMS you can define operation modes for SAP instances. An operation mode defines a resource configuration. It can be used to determine which instances are started and stopped and how the individual services are allocated for each instance in the configuration. An instance definition for a particular operation mode consists of the number and types of Work processes as well as Start and Instance Profiles. When defining an instance for an operation mode, you need to enter the hostname and the system number of the application server. By using relocatable names to fill in the hostname field, the instance will be working under control of the CCMS after a failover without a change. NOTE: If an instance is running on the standby node in normal operation and is stopped during the switchover, only configure the update service on a node for Application Services running on the same node. As a result, the remaining servers, running on different nodes, are not affected by the outage of the update server. However, if the update server is configured to be responsible for application servers running on different nodes, any failure of the update server leads to subsequent outages at these nodes. Configure the update server on a clustered instance. Using local update servers must only be considered, if performance issues require it. Ongoing verification of package failover capabilities The SGeSAP functionality includes SAP specific verifications that test the node-local operating environment configurations. These checks detect the incorrect local settings that might prevent a successful SAP failover. The routines are executed as part of cmcheckconf(1) and cmapplyconf(1) commands run on SGeSAP package configurations. The cmcheckconf -P <pkg_config_file> command can be scheduled at regular intervals to verify the failover capabilities of already running SGeSAP packages. These tests are executed on the current node as well as on all reachable, configured failover nodes of the SGeSAP package. 
The resulting logs are merged. A cmcheckconf(1) run performs a complete test only if the SGeSAP package is up and running. In this case all file systems are accessible, which allows a complete verification. If the SGeSAP package is halted, only a subset of the checks can be performed.
NOTE: Successful execution of the cmcheckconf(1) command does not guarantee a successful failover. The currently available functionality cannot replace regular failover tests. These checks complement existing tests and are useful to detect issues early.
If required, the execution of SGeSAP cluster verification as part of the cmcheckconf(1) and cmapplyconf(1) command routines can be deactivated. The existence of a file called ${SGRUN}/debug_sapverify skips SGeSAP cluster verification for all packages on that cluster node. The existence of a file ${SGRUN}/debug_sapverify_<packagename> skips verification only for a single package on that cluster node.
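As a sketch of the periodic verification described above, a root crontab entry could run the check nightly; the package configuration path and log file below are examples, and the full path to cmcheckconf may be required if it is not in cron's PATH.
# verify the failover capability of one SGeSAP package every night at 01:30
30 1 * * * cmcheckconf -P /etc/cmcluster/dbciC11/dbciC11.config >> /var/log/sgesap_verify.log 2>&1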

Generic and SGeSAP clustering-specific check routines that are not related to SAP requirements on the local operating environment configuration are not deactivated and are executed as part of both the cmcheckconf(1) and cmapplyconf(1) commands.
The deploysappkgs(1) command is used during initial cluster creation. It is also called after system-level or SAP application configuration changes to verify whether any of the performed changes must be reflected in the cluster package configuration. The deploysappkgs(1) command is aware of the existing package configurations and compares them to the settings of the SAP configuration and the operating system.
Upgrading SAP software
SAP rolling kernel switches can be performed in a running SAP cluster exactly as described in the SAP Netweaver 7.x documentation and support notes. Upgrading the clustered SAP application to another supported version rarely requires changes to the cluster configuration. Usually SGeSAP detects the release of the packaged application automatically and treats it as appropriate. A list of supported application versions can be taken from the SGeSAP release note document. The list of currently installed Serviceguard Solution product versions can be created with the command:
rpm -qa | grep -i serviceguard
For a safe upgrade of SAP with modular-style packages, put all impacted SGeSAP packages in package maintenance mode and perform a partial package start that stops before the first SGeSAP-specific module is executed. You can then manually handle the SAP startup and shutdown operations, and the upgrade happens without interference from the cluster software. deploysappkgs(1) and cmcheckconf(1) issued on the existing packages after the upgrade give hints on whether cluster configuration changes are required. Perform failover tests for all potential failure scenarios before putting the system back in production.
Table 2 Summary of methods that allow SAP instance stop operations during package uptime
Method: SAP stop block deactivation
Granularity: SAP Instance
How achieved: Ensure that the package parameter setting sap_stop_blocked=no is applied and stop the instance as <sid>adm with standard SAP methods, for example by calling sapcontrol function Stop.
Effect: SAP instance service monitoring of the package is temporarily suspended for stopped instances; stopped instances cause alerts in Serviceguard Manager.
Use case example: SAP rolling kernel switch
Method: Package maintenance mode
Granularity: Serviceguard Package
How achieved: cmmodpkg -m on <pkgname>
Effect: All package service monitoring is suspended; the package cannot fail or switch nodes while in maintenance mode.
Use case example: SAP software version upgrade

Table 2 Summary of methods that allow SAP instance stop operations during package uptime (continued)
Method: SGeSAP debug flag
Granularity: SGeSAP Package
How achieved: Create the debug flag file with touch debug_<packagename> in the Serviceguard run directory location, which is /usr/local/cmcluster/run on Red Hat and /opt/cmcluster/run on SUSE.
Effect: All SGeSAP package service monitoring is temporarily suspended; SGeSAP modules are skipped during package start.
Use case example: Non-production SGeSAP cluster troubleshooting
Method: Live Application Detach
Granularity: Serviceguard Node
How achieved: cmhaltnode -d
Effect: Package can fail, but cannot yet fail over.
Use case example: Serviceguard patch installation
Package conversion
The deploysappkgs(1) command of SGeSAP/LX A.06.xx can still be used to create modular-based cluster configuration files for NFS, database, SAP central, and SAP replication services of existing clusters with SGeSAP/LX legacy configurations. Thus, for the majority of existing clusters, no additional migration tool is required to move from legacy to modular. For other cases, like livecache, SAP external instance, and SAP infrastructure tool clusters, the conversion of SGeSAP/LX 3.xx legacy configurations to SGeSAP/LX A.06.xx module configurations requires manual steps. Preparatory effort lies in the range of 1 hour per package.
The cmmigratepkg(1) command can be applied to SGeSAP legacy packages. The output file will lack the SAP-specific package configuration of the sap*.config file, but the resulting configuration file can be used to simplify the creation of a modular SGeSAP package:
cmmakepkg -i <cmmigratepkg_output_file> \
   -m sgesap/all \
   <modular_sap_pkg.config>
The SGeSAP-specific parameters can then be added to modular_sap_pkg.config manually. For more information on configuring a modular package, see Clustering SAP using SGeSAP packages (page 42).
Caution is needed in clusters that alter any unofficial SGeSAP parameter that is not described in the official product documentation and in clusters that use customized code as part of the SGeSAP customer.functions hooks. In these cases, an individual analysis of the required steps is needed.
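Putting these conversion steps together, a legacy-to-modular migration might follow the sketch below; the package name ciC11 and file names are placeholders, and the exact cmmigratepkg options should be verified against its man page.
# 1. Export the legacy package configuration (option names as per cmmigratepkg(1))
cmmigratepkg -p ciC11 -o /tmp/ciC11_migrated.config
# 2. Merge it with the SGeSAP modules into a new modular configuration file
cmmakepkg -i /tmp/ciC11_migrated.config -m sgesap/all /etc/cmcluster/ciC11/ciC11_modular.config
# 3. Add the SGeSAP-specific parameters manually, then validate and apply
cmcheckconf -P /etc/cmcluster/ciC11/ciC11_modular.config
cmapplyconf -P /etc/cmcluster/ciC11/ciC11_modular.config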

4 SAP cluster storage layout planning
Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. The standard volume manager is the Logical Volume Manager (LVM). The following sections describe two standard setups for the LVM volume manager. Chapter 5 details the implementation steps for the concepts discussed in this chapter.
Database storage layouts for usage with parallel databases are only briefly described for Oracle Real Application Clusters. Detailed configuration steps for parallel database technologies are not covered in this manual. Additional information about SGeSAP and parallel databases is released as whitepapers from HP. Refer to the Additional Reading section of the relevant SGeSAP release notes to verify the availability of whitepapers in this area.
This chapter discusses the disk layout for clustered SAP components and database components of several vendors at a conceptual level. It is divided into two main sections:
SAP Instance Storage Considerations
Database Instance Storage Considerations
SAP Instance Storage Considerations
In general, it is important to stay as close as possible to the original layout intended by SAP. But certain cluster-specific considerations might suggest a slightly different approach. SGeSAP supports various combinations of providing shared access to file systems in the cluster.
Table 3 Option descriptions
Option 1. SGeSAP NFS Cluster: Optimized to provide maximum flexibility. Following the recommendations given below allows for expansion of existing clusters without limitations caused by the cluster. Another important design goal of option 1 is that a redesign of the storage layout is not imperative when adding additional SAP components later on. Effective change management is an important aspect for production environments. The disk layout needs to be as flexible as possible to allow growth by just adding storage for newly added components. If the design is planned carefully at the beginning, making changes to already existing file systems is not required.
Option 2. SGeSAP NFS Idle Standby Cluster: Optimized to provide maximum simplicity. This option is only feasible for very simple clusters. It needs to be foreseeable that the layout and configuration won't change over time. It comes with the disadvantage of being locked into restricted configurations with a single SAP System and idle standby nodes.
HP recommends option 1 in case of uncertainty about potential future layout changes.
Each file system added to a system by the SAP installation routines must be classified, and a decision has to be made:
Whether the file system needs to be kept as a local copy on internal disks of each node of the cluster (local).
Whether the file system needs to be shared on a SAN storage device to allow failover and exclusive activation (shared exclusive).
Whether the file system needs to allow shared access from more than one node of the cluster at the same time (shared NFS).
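To make the storage-group concept above concrete, the following sketch prepares a shared-exclusive volume group and file system for a single ASCS instance with LVM; the device /dev/sdc, the volume group name vgascsC11, the file system type, and the sizes are examples only.
# create physical volume, volume group, and logical volume for an ASCS instance
pvcreate /dev/sdc
vgcreate vgascsC11 /dev/sdc
lvcreate -L 10G -n lvascs vgascsC11
mkfs -t ext3 /dev/vgascsC11/lvascs
# the mount point must exist on every cluster node; the file system itself
# is activated and mounted by the Serviceguard package, not by /etc/fstab
mkdir -p /usr/sap/C11/ASCS11
vgchange -a n vgascsC11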

NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools. Depending on what SAP changed, this might introduce additional dependencies on shared libraries that weren't required before the patch. Depending on the shared library path settings (LD_LIBRARY_PATH) of the root user, it may not be possible for SGeSAP to execute the SAP tools after applying the patch, because the newly introduced libraries are not found. Creating local copies of the complete central executable directory prevents this issue.
The following sections detail the different storage options.
Option 1: SGeSAP NFS cluster
With this storage setup, SGeSAP makes extensive use of exclusive volume group activation. Concurrent shared access is provided via NFS services. Automounter and cross-mounting concepts are used in order to allow each node of the cluster to switch roles between serving and using NFS shares. It is possible to access the NFS file systems from servers outside of the cluster, which is an intrinsic part of many SAP configurations.
Common directories that are kept local
The following common directories and their files are kept local on each node of the cluster:
Table 4 List of common directories
/home/<sid>adm - Home directory of the SAP system administrator with node-specific startup log files
/home/<dasid>adm - Home directory of the SAP diagnostic agent administrator
/usr/sap/<sid>/sys/exe/run - Directory holding local copies of SAP instance executables, libraries, and tools (optional for kernel 7.x and higher)
/usr/sap/tmp - Directory where the SAP operating system collector keeps monitoring data of the local operating system
/usr/sap/hostctrl - Directory where SAP control services for the local host are kept (kernel 7.x and higher)
/usr/sap/ccms - CCMS agent work directory (6.40 and 7.00 only)
/usr/sap/sapservices - List of startup services started by sapinit (boot)
Depending on database vendor and version, it might be required to keep local database client software. Details can be found in the database sections below. All files belonging to the cluster software and runtime environment are kept local.
Part of the content of the local group of directories must be synchronized manually between all the nodes of the cluster. Serviceguard provides the tool cmcp(1) that allows easy replication of a file to all the cluster nodes.
SAP instance (startup) profile names contain either local hostnames or virtual hostnames. SGeSAP prefers profiles with virtual hostnames and uses those with local hostnames only for fallback and backwards compatibility.
In clustered SAP environments prior to 7.x releases, local executables must be installed. Local executables help to prevent several causes of package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of executables delivered with packaged SAP components is mandatory for proper package operation. It is a good practice to create local copies for all files in the central executable directory. This includes shared libraries delivered by SAP.

To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally.
Directories that Reside on Shared Disks
Volume groups on SAN shared storage are configured as part of the SGeSAP packages. The volume groups can be one of the following:
Instance specific
System specific
Environment specific
Instance-specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. System-specific volume groups get accessed from all instances that belong to a particular SAP System. Environment-specific volume groups get accessed from all instances that belong to all SAP Systems installed in the whole SAP environment.
System and environment-specific volume groups are set up using NFS to provide access for all instances. They must not be part of a package that is dedicated to only a single SAP instance if there are several of them. If this package is down, then other instances would also be impacted. As a rule of thumb, it is a good default to put all these volume groups into a package that holds the database of the system. These file systems often provide tools for database handling that don't require the SAP instance at all.
In consolidated environments with more than one SAP application component, the recommendation is to separate the environment-specific volume groups into a dedicated NFS package. This package is referred to as the sapnfs package. It must remain running all the time, since it is of central importance for the whole setup. Because sapnfs is serving networked file systems, there rarely is a need to stop this package at any time. If environment-specific volume groups become part of a database package, there is a potential dependency between packages of different SAP Systems. Stopping one SAP System by halting all related Serviceguard packages would lead to a lack of necessary NFS resources for unrelated SAP Systems. The sapnfs package avoids this unpleasant dependency. It is an option to also move the system-specific volume groups to the sapnfs package. This can be done to keep NFS mechanisms completely separate.
A useful naming convention for most of these shared volume groups is vg<instname> or vg<instname><sid> (for example, vgascsc11). Table 5 (page 33) provides an overview of SAP shared storage and maps it to the component and package type for which it occurs. Usually, instance-specific volume groups can be put into dedicated packages or combined with packages containing the database. Exceptions are ERS instances, because they need to fail over separately, and Gateway (G) or WebDispatcher (W) instances, because there is no database configured with these. Modular SGeSAP package names do not have to follow a certain naming convention, but it is recommended to include instance names (or at least instance types) and the SAP SID in the name. A package containing a database should indicate this in its name (for example, with "DB").

33 Table 5 Instance specific volume groups for exclusive activation with a package Mount point /usr/sap/<sid>/scs<inr> For example, /usr/sap/c11/scs10 /usr/sap/<sid>/ascs<inr> For example, /usr/sap/c11/ascs11 /usr/sap/<sid>/dvebmgs<inr> For example, /usr/sap/c11/dvebmgs12 /usr/sap/<sid>/d<inr> /usr/sap/<sid>/j<inr> For example, /usr/sap/c11/d13 /usr/sap/<sid>/ers/<inr> For example, /usr/sap/c11/ers20 /usr/sap/<sid>/g<inr> For example, /usr/sap/gw1/g50 /usr/sap/<sid>/w<inr> For example, /usr/sap/wdp/w60 /usr/sap/<dasid>/smda<inr> For example, /usr/sap/daa/smda97 /usr/sap/<sid>/mds<inr> /usr/sap/<sid>/mdis<inr> /usr/sap/<sid>/mdss<inr> For example, /usr/sap/sp8/mds30 For example, /usr/sap/sp8/mdis31 For example, /usr/sap/sp8/mdss32 /export/sapmnt/<sid> For example, /export/sapmnt/c11 /usr/sap/trans /export/sapmnt/<dasid> /usr/sap/put Access point Shared disk Shared disk or NFS toolkit Shared disk Recommended packages setups SAP instance specific Combined SAP instances Database plus SAP instances SAP instance specific Combined SAP instances Database and SAP instances NOTE: Combining a DB with SAP instances is not a recommended package set up. SAP ERS instance specific Combined ERS instances SAP gateway instance specific Combined gateway instances, if configured more than one to the SID SAP WebDispatcher instance specific Combined WebDispatcher instances, if configured more than one to the SID Solutionmanager Diagnostic Agent instance associated with the a clustered dialog instance SAP MDM instance specific Combined SAP MDM instances Database plus SAP MDM instances Package containing DB or dedicated NFS package (sapnfs) None /usr/sap/<sid> must not be added to a package, because using this as a dynamic mount point prohibits access to the instance directories of additional SAP application servers that are locally installed. The /usr/sap/<sid> mount point will also be used to store local SAP executables. This prevents problems with busy mount points during database package shutdown. Due to the size of the directory content, it must not be part of the local root file system. The /usr/sap/tmp might or might not be part of the root file system. This is the working directory of the operating system collector process saposcol. The size of this directory will rarely be beyond a few megabytes. SAP Instance Storage Considerations 33

34 If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory must not be added to any package. This ensures that they are independent from any SAP SAP Netweaver system and you can mount them on any host by hand if needed. All files ystems mounted below /export are part of NFS cross-mounting via the automount programm. The automount program uses virtual IP addresses to access the NFS directories via the path that comes without the /export prefix. Three components must be configured for NFS toolkit: The NFS server consisting of a virtual hostname, storage volumes and mount points in the /export directory The NFS server export table consisting of a list of NFS exported file systems, export options and NFS client access control. Note: The specification of the fsid export option is key as it ensures that the device minor number is retained during the failover to an adoptive node. The automount configuration on each adoptive node, consisting of a list of NFS client mount points This ensures that the directories are quickly available after a switchover. The cross-mounting allows coexistence of NFS server and NFS client processes on nodes within the cluster. Special attention needs to be given to the diagnostic agent (DA) instance directory if a related dialog instance (both using the same virtual hostname) is planned to be clustered. Such DA instances need to move with the related dialog instances. Therefore, their instance directory has to be part of the package. It is recommended that the DA instance filesystem resides on the volume group of the dialog instance (not on a volume group common to all DA instances). However, such a setup does not care for the SYS directory of the DA SID. The DA installation does not create links underneath SYS as the standard Netweaver installation does, but just creates this directory into the local filesystem. To keep a central and consistent copy of this SYS directory within the cluster, it is recommend to create a similar setup manually (SYS containing links into /sapmnt and /sapmnt itself mounted to an exported directory) for the diagnostic agents. For more details on preparation steps, see Chapter 5, Clustering SAP using SGeSAP packages (page 42). If each cluster node has local dialog instances installed and therefore the DA SYS directory on each node, that /sapmnt approach won t be necessary. Option 2: SGeSAP NFS idle standby cluster This option has a simple setup, but is limited in flexibility. It is recommended to follow option 1 for most of the cases. A cluster can be configured using option 2, if it fulfills all of the following prerequisites: Only one SGeSAP package is configured in the cluster. Underlying database technology is a single-instance Oracle RDBMS. The package combines failover services for the database and all required NFS services and SAP central components (ABAP CI, SCS, ASCS). Application Server Instances are not installed on cluster nodes. Replicated Enqueue is not used. Additional SAP software is not installed on the cluster nodes. The use of a NFS toolkit service can be configured to export file systems to external Application Servers that manually mount them. A dedicated NFS package is not possible. Dedicated NFS requires option 1. Common directories that are kept local For information on common directories that are kept local, see Table 4 (page 31) Local database client software needs to be stored locally on each node. Details can be found in the database sections below. 
Part of the content of the local group of directories must be synchronized manually between all the nodes in the cluster. 34 SAP cluster storage layout planning

In clustered SAP environments prior to 7.x releases, install local executables. Local executables help to prevent several causes for package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is a good practice to create local copies for all files in the central executable directory. This includes shared libraries delivered by SAP. To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally.
Directories that reside on shared disks
Volume groups on SAN shared storage get configured as part of the SGeSAP package. Instance-specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. In this configuration option, the instance-specific volume groups are included in the package. System-specific volume groups get accessed from all instances that belong to a particular SAP System. Environment-specific volume groups get accessed from all instances that belong to any SAP System installed in the whole SAP scenario. System and environment-specific volume groups must be set up using NFS toolkit to provide access capabilities to SAP instances on nodes outside of the cluster. The cross-mounting concept of option 1 is not required. A useful naming convention for most of these shared volume groups is vg<instname><sid> or vg<instname><inr><sid>. Table 6 (page 35) provides an overview of SAP shared storage for this special setup and maps it to the component and package type for which it occurs.
Table 6 File systems for the SGeSAP package in NFS idle standby clusters (Mount Point - Access Point - Remarks)
/sapmnt/<sid> - Shared disk and NFS toolkit - Required
/usr/sap/<sid> - Shared disk - Required
/usr/sap/trans - Shared disk and NFS toolkit - Optional
If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory must not be added to any package. This ensures that it is independent from any SAP WAS system and you can mount it on any host by hand if needed.
Database instance storage considerations
SGeSAP internally supports clustering of database technologies of different vendors. The vendors have implemented individual database architectures. The storage layout for the SGeSAP cluster environments needs to be discussed individually for each of the following. Due to its similarity to MaxDB, this section also contains livecache storage considerations, although livecache cannot be considered a database instance that is part of a Netweaver installation. All supported platforms are Intel x86_64 only:
Oracle Single Instance RDBMS Storage Considerations
MaxDB/liveCache Storage Considerations
Sybase ASE Storage Considerations
Special livecache Storage Considerations

36 Table 7 Availability of SGeSAP storage layout options for different Database RDBMS DB Technology Oracle Single-Instance SAP MaxDB Sybase ASE Oracle Single Instance SGeSAP Storage Layout Options SGeSAP NFS clusters Idle standby Cluster Software Bundles 1. Serviceguard 2. SGeSAP 3. Serviceguard NFS toolkit 1. Serviceguard 2. SGeSAP 3. Serviceguard NFS toolkit (Optional) Oracle single instance RDBMS Single Instance Oracle databases can be used with both SGeSAP storage layout options. The setup for NFS and NFS Idle Standby Clusters are identical. Oracle databases in SGeSAP NFS and NFS Idle standby clusters Oracle server directories reside below /oracle/<dbsid> (for example, /oracle/c11). These directories get shared via the database package. ORACLE_HOME usually points to /oracle/<dbsid>/<version>_64 (for example, for Oracle 10.2: /oracle/c11/102_64) In addition, the SAP Application Servers will need access to the Oracle client libraries, including the Oracle National Language Support files (NLS) shown in Table 8 (page 36). The default location where the client NLS files get installed differs with the SAP kernel release used. See the table below: Table 8 NLS files - default location Kernel version 6.x, 7.x Client NLS location /oracle/client/<rdbms_version>/ocommon/nls/admin/data /oracle/client/<rdbms_version>/instantclient For systems using Oracle instant client (/oracle/client/<major-version>x_64/instantclient) no client side NLS directory exists. A second type of NLS directory, called the "server" NLS directory, always exists. This directory is created during database or SAP Central System installations. The location of the server NLS files is identical for all supported SAP kernel versions: $ORACLE_HOME/nls/data The setting of the ORA_NLS10 variable in the environments of <sid>adm and ora<sid> determines whether the client or the server path to NLS is used. The variable gets defined in the dbenv_<hostname>.[c]sh files in the home directories of these users. However, newer installations don t define that variable anymore and it is even forbidden to set it for user <sid>adm (SAP Note ) Sometimes a single host may have an installation of both a Central Instance and an additional Application Server of the same SAP System. These instances need to share the same environment settings. SAP recommends using the server path to NLS files for both instances in this case. This does not work with SGeSAP since switching the database would leave the application server without NLS file access. The Oracle database server and SAP server might need different types of NLS files. The server NLS files are part of the database Serviceguard package. The client NLS files are installed locally on all hosts. Do not mix the access paths for ORACLE server and client processes. 36 SAP cluster storage layout planning
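One way to verify which NLS path a user environment would pick up, as discussed above, is to inspect the environment of <sid>adm; the SID c11 and hostname node1 in this sketch are placeholders.
# ORA_NLS10 must not be set for <sid>adm on newer installations
su - c11adm -c 'env | grep -E "ORA_NLS|ORACLE_HOME"'
# inspect the environment files where the variable would be defined
grep -i ORA_NLS10 /home/c11adm/dbenv_node1.sh /home/c11adm/dbenv_node1.csh 2>/dev/null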

37 The discussion of NLS files has no impact on the treatment of other parts of the ORACLE client files. The following directories need to exist locally on all hosts where an Application Server might run. The directories cannot be relocated to different paths. The content needs to be identical to the content of the corresponding directories that are shared as part of the database SGeSAP package. The setup for these directories follows the "on top" mount approach, that is, the directories might become hidden beneath identical copies that are part of the package: $ORACLE_HOME/rdbms/mesg $ORACLE_HOME/oracore/zoneinfo $ORACLE_HOME/network/admin Table 9 File system layout for NFS-based Oracle clusters Mount Point Access Point Potential Owning Packages VG Type $ORACLE_HOME /oracle/<sid>/saparch Shared disk Database only or combined DB plus CI package DB instance specific /oracle/<sid>/sapreorg /oracle/<sid>/sapdata1... /oracle/<sid>/sapdatan /oracle/<sid>/origloga /oracle/<sid>/origlogb /oracle/<sid>/mirrloga /oracle/<sid>/mirrlogb /oracle/client Local None Environment specific Some local Oracle client files reside in /oracle/<sid> as part of the root filesystem Local None DB instance specific MaxDB/liveCache storage considerations This section describes the recommended storage considerations for MaxDB and livecache. NOTE: SGeSAP/LX does not support hoststandby livecache(hss), hence the following description also applies to a livecache setup, unless noted otherwise. MaxDB can be substituted by livecache, <DBSID> by <LCSID>. The main difference between MaxDB and livecache is how it is used by the clients. Therefore, depending on the situation alternative setups are possible for livecache. For more information on only livecache storage considerations, see Special livecache storage considerations (page 40) section. A High Availabilily (HA) setup for livecache must ensure that the livecache client (usually the SCM installation) has the client libraries. SGeSAP supports failover of MaxDB databases as part of SGeSAP NFS cluster option. MaxDB distinguishes an instance dependant path /sapdb/<dbsid> and two instance independent paths, called IndepDataPath and IndepProgramsPath (IndepData and IndepProgram in ini-file). By default, all three point to a directory below /sapdb. The paths are configured in a file called /etc/opt/sdb. For compatibility with older release there must be a file called /var/spool/sql/ini/sap_dbtech.ini. Depending on the version of the MaxDB database, this file contains different sections and settings. The sections [Installations], [Databases], and [Runtime] are stored in separate files Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path /sapdb/data/config. MaxDB/liveCache storage considerations 37

38 MaxDB 7.8 does not create SAP_DBTech.ini anymore. The [Globals] section is defined in /etc/opt/sdb. With the concept of isolated installations, a DB installation contains its own set of (version specific) executables (/sapdb/<dbsid>/db/bin), its own data directory (/sapdb/<dbsid>/data), and a specific client directory (/sapdb/clients/<dbsid>). At runtime, there will be a database specific set of x_server related processes. NOTE: IndepDataPath and IndepProgramsPath are now referred as GlobalDataPath and GlobalProgramPath respectively. The following directories are of special interest: /sapdb/programs This can be seen as a central directory with all MaxDB executables. The directory is shared between all MaxDB instances that reside on the same host. It is also possible to share the directory across hosts. But, it is not possible to use different executable directories for two MaxDB instances on the same host. Furthermore, different SAPDB versions could get installed on the same host. The files in /sapdb/programs must be the latest version in any MaxDB on the cluster node. Files in /sapdb/programs are downwards compatible. /sapdb/data/config This directory is also shared between instances, though you can find lots of files that are Instance specific in here; for example, /sapdb/data/config/<dbsid>.* According to SAP this path setting is static. /sapdb/data/wrk The working directory of the main MaxDB processes is also a subdirectory of the IndepData path for non-ha setups. If a SAPDB restarts after a crash, it copies important files from this directory to a backup location. This information is then used to determine the reason of the crash. In HA scenarios, for SAPDB/MaxDB lower than version 7.6, this directory must move with the package. Therefore, SAP provided a way to redefine this path for each SAPBDB/MaxDB individually. SGeSAP expects the work directory to be part of the database package. The mount point moves from /sapdb/data/wrk to /sapdb/data/<dbsid>/wrk for the clustered setup. This directorymust not be mixed up with the directory /sapdb/data/<dbsid>/db/wrk that might also exist. Core files of the kernel processes are written into the working directory. These core files have file sizes of several Gigabytes. Sufficient free space needs to be configured for the shared logical volume to allow core dumps. For MaxDB version 7.8 or later, this directory is replaced by /sapdb/<dbsid>/data (private data path).. NOTE: For MaxDB RDBMS starting with version 7.6, these limitations do not exist. The working directory is utilized by all the instances (IndepData/wrk) and can be shared globally. /etc/opt/sdb : Only exists when using MaxDB or livecache 7.5 or later. Needs to be local on each node together with entries in /etc/passwd and /etc/group. /var/spool/sql: For MaxDB version 7.5 or later, /var/spool/sql is created only for compatibility with older versions. Depending on the versions installed it may not exist anymore This directory hosts local runtime data of all locally running MaxDB instances. Most of the data in this directory would become meaningless in the context of a different host after failover. The only critical portion that still has to be accessible after failover is the initialization data in /var/spool/sql/ini. This directory is usually very small (< 1 Megabyte). 
With MaxDB and livecache 7.5 or higher, the only local files are contained in /var/spool/sql/ini, other paths are links to local runtime data in IndepData path: dbspeed -> /sapdb/data/dbspeed diag -> /sapdb/data/diag fifo -> /sapdb/data/fifo ipc -> /sapdb/data/ipc pid -> /sapdb/data/pid 38 SAP cluster storage layout planning

39 pipe -> /sapdb/data/pipe ppid -> /sapdb/data/ppid The links need to exist on every possible failover node in the MaxDB for the livecache instance to run. /sapdb/clients (MaxDB 7.8): Contains the client files in <DBSID> subdirectories for each database installation. /var/lib/sql: Certain patch level of MaxDB 7.6 and 7.7 (see SAP Note ) use this directory for shared memory files. Needs to be local on each node. NOTE: In HA scenarios, valid for SAPDB/MaxDB versions up to 7.6, the runtime directory /sapdb/data/wrk is configured to be located at /sapdb/<dbsid>/wrk to support consolidated failover environments with several MaxDB instances. The local directory /sapdb/data/wrk is referred to by the VSERVER processes (vserver, niserver), that means VSERVER core dump and log files are located there. Table 10 File system layout for SAPDB clusters Mount Point Access Point Potential Owning Packages VG Type /sapdb/<dbsid> /sapdb/<dbsid>/wrk * Shared disk DB only or combined DB and CI package DB specific /sapdb/<dbsid>/sapdata<nr> /sapdb/<dbsid>/saplog<nr> /sapdb/<dbsid>/data ** /sapdb/data<dbsid>/data<n> log<n> *** /export/sapdb/programs /export/sapdb/data /export/sapdb/clients ** Shared disk and NFS toolkit DB only or combined DB+CI SAPNFS Environment specific /export/var/spool/sql/ini /etc/opt/sdb Local None /var/lib/sql Local None Environment specific *Only valid for versions lower than 7.6. **Only valid for versions 7.8 or higher. *** Only valid for older versions. NOTE: When using tar or cpio to copy or move directories, it must be ensured that the file or ownership permissions transported are retained, especially for files having the s-bit set: /sapdb/<sid>/db/pgm/lserver and /sapdb/<sid>/db/pgm/dbmsrv. These files are important for the vserver process ownership and they have an impact on starting the SAPDB via <sid>adm. Database and SAP instances depend on the availability of /sapdb/programs. To minimize dependencies between otherwise unrelated systems, using a dedicated SAPNFS package is strongly recommended especially when the cluster has additional SAP application servers installed, more than one SAPDB is installed, or the database is configured in a separate DB package. Keeping local copies is possible, though not recommended because there are no administration tools that keep track of the consistency between the local copies of these files on all the systems. Using NFS toolkit filesystems underneath or export Table 10 (page 39) is required when multiple MaxDB based components (including livecache) are either planned or already installed. These MaxDB/liveCache storage considerations 39

40 directories are shared between the instances and must be part of an instance package. Otherwise the halt of one instance would prevent the other one to be started or run. Sybase ASE storage considerations SGeSAP supports failover of Sybase ASE databases as part of SGeSAP NFS cluster option 1. It is possible to consolidate SAP instances in SGeSAP ASE environments. Table 11 File system layout for Sybase ASE clusters Mount Point Access Point Owning Package VG Type /sybase/<dbsid> sybase/<dbsid>/sapdiag Shared disk DB only or combined DB and CI package DB specific sybase/<dbsid>/sybsystem sybase/<dbsid>/sybtemp sybase/<dbsid>/sapdata<n> sybase/<dbsid>/saplog<n> Special livecache storage considerations Depending on the setup of the related Netweaver installation (usually a SCM or CRM application) there are two additional options to setup livecache that can be used instead of the approach described in the MaxDB storage consideration. Option 1: Simple cluster with separated packages Cluster layout constraints: The livecache package does not share a failover node with the SCM central instance package. There is no MaxDB or additional livecache running on cluster nodes. There is no intention to install additional SCM Application Servers within the cluster. Table 12 File System Layout for livecache package running separate from SCM (Option 1) Mount point /sapdb/data /sapdb/programs /sapdb/clients /sapdb/<lcsid>/sapdata<n> /sapdb/<lcsid>/saplog<n> /var/spool/sql Storage type Shared disk Owning packages Dedicated livecache package (lc<lcsid>) In the above layout all relevant files get shared via standard procedures. The setup causes no administrative overhead for synchronizing local files. SAP default paths are used. Option 2: Non-MaxDB environments Cluster layout constraints: There is no MaxDB or additional livecache running on cluster nodes. Especially the SCM System RDBMS is either based on ORACLE or Sybase, but not on MaxDB. Often SCM does not rely on MaxDB as underlying database technology. But independent from that, all Instances of the SCM System still need access to the livecache client libraries. The best 40 SAP cluster storage layout planning

41 way to deal with this is to make the client libraries available throughout the cluster via AUTOFS cross-mounts from a dedicated NFS package. Table 13 File system layout for livecache in a non-maxdb environment (Option 2) Mount point /sapdb/data /sapdb/<lcsid>/sapdata<n> /sapdb/<lcsid>/saplog<n> /var/spool/sql /sapdb/programs /sapdb/clients Storage type Shared disk Autofs shared Owning packages Dedicated livecache package (lc<lcsid>) sapnfs 1 1 This can be any standard, standalone NFS package. The SAP global transport directory must already be configured in a similar package. This explains why this package is often referred to as "the trans package" in related literature. A package serving SAP trans directory can optionally be extended to also serve the global livecache file shares. Special livecache storage considerations 41

5 Clustering SAP using SGeSAP packages
Overview
This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). Each task is described with examples. A prerequisite for clustering SAP using SGeSAP is that the Serviceguard cluster software installation is complete and the cluster is set up and running. The minimum software requirements are as follows:
Serviceguard for providing High Availability (HA)
SGeSAP for clustering SAP in an HA environment
NFS toolkit (optional for certain types of installations) for the NFS toolkit setup
Serviceguard Manager for the GUI-based setup and configuration of Serviceguard clusters
Three phase approach
A three phase approach is used for clustering SAP.
1. Set up the infrastructure for the SAP installation (SAP pre-installation).
a. Set up one (or more) sapnfs packages for providing NFS services to all the cluster nodes.
b. Set up a base package (also called a "tentative package") with some selected Serviceguard and SGeSAP modules. The base package is used for the initial SAP instance and database installations. Technically the base package is not required, but it makes it easy to troubleshoot configuration issues in the cluster.
2. Install SAP instances and databases (SAP installation).
3. Complete the package setup (SAP post-installation).
a. Synchronize configuration changes on the primary node with the secondary nodes in the cluster.
b. Add SGeSAP modules and/or update attributes of the base packages introduced in step 1b of the first phase.
The steps in Phase 2 of this approach are normally performed by a certified SAP installer, whereas the steps in Phases 1 and 3 are performed by a customer service engineer trained in Serviceguard and SGeSAP.
It is important to categorize the following:
The volume groups and logical volumes of the SAP or database instances
The virtual hostnames designated for these instances
How these are mapped into Serviceguard packages
Before starting with phase 1, it is important to determine the volume groups and logical volumes that belong to the package. It is also important to remember that resources like IP addresses (derived from the virtual hostnames) and volume groups can belong to only one package. This implies that two SAP instances sharing these resources must be part of the same package.
Finally, before clustering SAP using the three phase approach, it is important to decide on the following:
The file systems to be used as local copies.
The file systems to be used as shared exclusive file systems.
The file systems to be used as shared NFS file systems.
For more information on file system configurations, see chapter 4 SAP cluster storage layout planning (page 30).

There can also be the requirement to convert an existing SAP instance or database for usage in a Serviceguard cluster environment. For more information on how to convert an existing SAP instance or database, see Converting an existing SAP instance (page 78).
SGeSAP modules and services
The following components are important for the configuration of a Serviceguard package with SGeSAP:
Modules and the scripts that are used by these modules.
Service monitors.
SGeSAP modules
Various Serviceguard and SGeSAP modules are available for clustering SAP instances and database instances. These modules contain attribute definitions that describe the SAP instances or database instances needed to configure them into a Serviceguard package. The following table gives an overview of the top-level SGeSAP modules:
Table 14 SGeSAP modules
sgesap/sapinstance - For clustering of one or more SAP instances such as SAP Central Instances, System Central Services, Replication Instances, ABAP Application Servers, JAVA Application Servers, Web Dispatcher, Gateway, and MDM instances of a single SAP system.
sgesap/dbinstance - For Oracle, MaxDB, and Sybase ASE RDBMS databases.
sgesap/sapextinstance - For handling external instances (SAP software running in a non-clustered environment).
sgesap/sapinfra - For clustering SAP infrastructure software.
sgesap/livecache - For clustering SAP livecache instances.
sgesap/mdminstance - For handling SAP MDM repositories.
NOTE: These modules can also include other common SGeSAP and Serviceguard modules that are required to set up a complete package.
SGeSAP services
SGeSAP provides monitors that can be used with the sg/service module and that monitor the health of the SAP instances and their databases. Some monitors also offer local restart functionality. For example, in the case of an instance failure, the monitors attempt to restart the instance on the same node before initiating a package failover. The following monitors are provided with SGeSAP:
Table 15 SGeSAP monitors
sapms.mon - To monitor a message service that comes as part of a Central Instance or System Central Service Instance for ABAP/JAVA usage.
sapenq.mon - To monitor an enqueue service that comes as part of a System Central Service Instance for ABAP/JAVA usage.
sapenqr.mon - To monitor an enqueue replication service that comes as part of an Enqueue Replication Instance.
sapdisp.mon - To monitor a SAP dispatcher that comes as part of a Central Instance or an ABAP Application Server Instance.

Table 15 SGeSAP monitors (continued)
sapwebdisp.mon - To monitor a SAP Web Dispatcher that is included either as part of a (W-type) instance installation into a dedicated SID or by unpacking and bootstrapping into an existing SAP Netweaver SID.
sapgw.mon - To monitor a SAP Gateway (G-type instance).
sapdatab.mon - To monitor MaxDB, Oracle, and Sybase ASE database instances. Additionally, it monitors the xserver processes for MaxDB and the listener processes for Oracle.
saplc.mon - To monitor SAP livecache instances.
sapmdm.mon - To monitor SAP MDM servers.
sapenqor.mon - To coordinate package startup in follow-and-push SCS/ERS scenarios. Used internally by SGeSAP in the enqor MNP (multi node package).
These monitors are located in the directory $SGCONF/monitors/sgesap. Each monitor automatically performs regular checks on the availability and responsiveness of a specific software component within all the SAP instances that provide this service in the package.
NOTE: Sourcing the Serviceguard cluster configuration file with ". /etc/cmcluster.conf" sets the above $SGCONF environment variable, as well as other cluster environment variables.
For Oracle databases, issues with the Oracle listener process and the database are detected, and local restarts of the listener are triggered by the monitor, if required. For MaxDB databases, issues with the xserver processes and the database are detected, and local restarts are triggered by the monitor, if required. The SAP central service monitors detect issues with the SAP startup agent of their instances and attempt local restarts of the agent software.
The SAP message service monitor sapms.mon can work in environments that use the Restart_Program_... setting in the SAP instance (start) profiles of the [A]SCS instances to achieve local restarts of failing message services without triggering unnecessary instance failovers. It is recommended to use the SAP restart mechanism only for the message server.
The SAP enqueue replication service monitor sapenqr.mon has built-in software restart functionality. It locally restarts the replication instance in case the software fails. A related failover is only triggered if the instance fails to remain up for more than ten minutes three times in a row. Momentary instability of the software is reported as an alert message in the Serviceguard Manager.
The SAP enqueue service monitor sapenq.mon and the SAP dispatcher monitor do not provide built-in software restart, and the native SAP instance restart must not be configured either. Configuring local restarts may lead to serious malfunctions for these software components.
Service monitors that are enabled to react to shutdown notifications from SAP's startup framework include sapdisp.mon, sapgw.mon, sapms.mon, sapenq.mon, sapenqr.mon, sapwebdisp.mon, and sapmdm.mon. For more information on SAP's startup framework, see the Performing SAP administration with SAP's startup framework (page 21) section.
Installation options
Serviceguard and SGeSAP provide three different methods for installing and configuring packages in an SAP environment.
1. SGeSAP Easy Deployment using the deploysappkgs script: This is applicable for some selected SAP installation types. For example, SAP Central Instance installations. It provides

45 an easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID. 2. A guided installation using the Serviceguard Manager GUI: A web based graphical interface, with plugins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances. 3. The classical Command Line Interface (CLI): The commands cmmakepkg, cmcheckconf, and cmapplyconf are used for creating a package configuration, checking the configuration, and registering the package with the cluster. Table 16 (page 45) table provides a quick summary of the pros and cons of the methods and the suggestions on when to use. Table 16 Installing and configuring packages in SAP environment Method Description Pros Cons SGeSAP Easy Deployment using the deploysappkgs script This only works for some selected SAP installations types. For example, SAP Central Instance installations. It provides an easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID. NOTE: This method is useful only for: a fully automatic package configuration creation with no manual intervention a SAP Central Instance and database It assumes the SAP installation is complete and therefore is only available in phase 3 approach. For more information on deploysappkgs, see manpages. fully automatic Serviceguard package configuration file generation, no manual intervention required. Can update existing packages with the attributes necessary to protect a SAP instance or DB with the package. limited to certain configurations auto-discovery code requires all package relevant file systems to be mounted no GUI can only be used in phase 3 approach A guided installation using the Serviceguard Manager GUI A web based graphical interface, with plugins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances. NOTE: This method is useful for a guided and GUI based package creation and registration of any SAP configuration In phase 1 approach, this method can be used to setup a base package configuration before installing SAP as well as a package configuration after the SAP installation was completed in phase 3 approach. user guided, GUI based setup of packages easy to reconfigure the packages, if required a basic validation of entered data is provided - pre-filling plugin requires all package relevant file systems to be mounted for auto-discovery The classical Command Line Interface (CLI) The commands cmmakepkg, cmcheckconf, and cmapplyconf are used for creating a package configuration, checking the configuration, and registering the package with the cluster. NOTE: This method is useful for package setup where every detail is required. every package attribute can be edited the package configuration file contains extensive documentation manual edits of package attributes can be cumbersome and error prone Serviceguard Manager GUI and Serviceguard CLI For more information about the installation option 2: package creation using the Serviceguard Manager GUI, see the respective online help available in the Serviceguard Manager GUI. For more information about installation option 3: package creation using the Serviceguard Command Line Interface (CLI), see the Managing HP Serviceguard A for Linux manual at linux-serviceguard-docs. Overview 45
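As a hedged illustration of the CLI route (option 3), creating and applying a modular SGeSAP package for a single SAP instance might look like the sketch below; the module selection, directory, and package name scsC11 are examples.
# generate a package configuration template that includes the SGeSAP SAP instance module
cmmakepkg -m sgesap/sapinstance /etc/cmcluster/scsC11/scsC11.config
# edit the file to set the package name, nodes, volume groups, file systems,
# relocatable IP addresses, and the SGeSAP instance attributes, then:
cmcheckconf -P /etc/cmcluster/scsC11/scsC11.config
cmapplyconf -P /etc/cmcluster/scsC11/scsC11.config
cmrunpkg scsC11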

SGeSAP easy deployment

This section describes the installation and configuration of packages using easy deployment (via the deploysappkgs command, which is part of the SGeSAP product). This script allows easy deployment of the packages that are necessary to protect the critical SAP components. The components that can be deployed into one or more packages are:

- System Central Services (SCS or ASCS)
- Central Instance (DVEBMGS) (if no ASCS is configured)
- Enqueue Replication Services (both ERS for Java and ABAP)
- Database Instance

The NFS exports that are necessary to operate this environment are not part of this version of easy deployment, and must be configured separately (for example, in the phase 1 approach).

SGeSAP easy deployment is invoked via the command line using

deploysappkgs packaging-option SAPSID

with packaging-option being either multi (multiple packages) or combi (combined packages), and SAPSID being the SAP SID for which the packages must be created.

The multiple packages option is the default and recommended option. It allows the distribution of non-redundant SAP components into multiple, separate packages. It is a very flexible option, as the individual components can fail over independently, and unlike combined packages, failure of one component does not bring down the other components. Thus failover dependencies can be avoided.

The combined packages option allows combining the non-redundant SAP components of a SAP system into as few packages as possible. This keeps the setup simple and can save resources. With this option, a package initiates a failover even if only one of the SAP components configured in the package fails.

Easy deployment generates (or updates) one or more package configurations. These must be reviewed before they are applied. The screen output reports the filenames of the generated configuration files as well as a detailed log of each step performed while generating the package configuration.

Table 17 SGeSAP use cases for easy package deployment

Use case: Create packages from scratch
Scenario: SAP instances and DB are installed. Instances and DB can be running or can be halted. They are not configured into a SGeSAP package yet. Easy deployment will create new packages.

Use case: Extend "base packages"
Scenario: Before setting up SAP, minimal package(s) with the required volume groups and IP addresses have been created and started. No SGeSAP configuration has been added yet. After setting up SAP, this/these package(s) must be extended with SGeSAP-related attributes including the installed instances and/or DB.

Use case: Add a new SAP instance to an already configured package
Scenario: For example, a newly installed ASCS must be part of the existing SCS package. Easy deployment will add such an ASCS into the existing package, if it is configured to the same virtual host as the SCS or if the option "combined" is selected.

Use case: Update existing package with additionally required resources
Scenario: For example, a new volume group related to a SAP instance must be added, or IPv6 is enabled and the virtual hostname now also has an IPv6 address which must be added. Easy deployment discovers these new attributes and adds them to the appropriate existing SGeSAP packages.

Infrastructure setup, pre-installation preparation (Phase 1)

This section describes the infrastructure that is provided with the setup of a NFS toolkit package and a base package for the upcoming SAP Netweaver installation. It also describes the prerequisites and some selected verification steps.

Prerequisites

- There is a one-to-one or one-to-many relationship between a Serviceguard package and SAP instances, and a one-to-one relationship between a Serviceguard package and a SAP database. A package can only serve a maximum of one SAP SID and one DB SID at a time, but the SAP SID and DB SID are not required to be identical. Common resources on which SAP instances depend must go into a single package and (unless Serviceguard package dependencies are going to be used) instances depending on these resources must be configured into the same package later.
- Volume groups, logical volumes, and file systems (with their appropriate sizes) must be set up according to the storage layout described in Chapter 4, SAP cluster storage layout planning (page 30). Volume groups must be accessible from all cluster nodes.
- The file system mount points must be created on all the cluster nodes.
- Virtual hostnames (and their IP addresses) required for the SAP installation must exist and must resolve on all the cluster nodes. Virtual hostnames are mandatory for the NFS toolkit package setup.

If no NFS toolkit setup is used, continue to Infrastructure Setup - SAP base package setup (Phase 1b). After completing the steps in this section, everything is ready for starting the SAP installation. This infrastructure consists of:

- A running sapnfs package exporting the relevant file systems (this depends on the setup chosen, as NFS may also be part of a SGeSAP package instead of a separate NFS toolkit package).
- A working automount configuration on all the hosts that will run the SAP instance.
- One or more SGeSAP base packages providing the environment for the subsequent SAP installation.

Node preparation and synchronization

Node preparation needs to be performed on every cluster node only once. If a node is added to the cluster after the SGeSAP package setup, node preparation must be performed before the packages are enabled on that node.

NOTE: It is critical for any of the following configuration and installation setup steps of phase 1 that the prerequisites (setup of volume groups, logical volumes, file systems, mount points, and virtual hostnames) are implemented and synchronized on all the nodes before continuing the configuration or installation.

Generally speaking, synchronization means that the secondary nodes in the cluster must be coordinated with the configuration changes from the primary node. For example, configuration file changes in the cluster are copied from one source location (primary) to one or more target locations (secondary). Synchronization in phase 1 is intermediate, as the goal here is to identify and isolate configuration issues at an early stage. Phase 3 contains the final synchronization steps. For more information on final synchronization, see the Post SAP installation tasks and final node synchronization (Phase 3a) (page 59) section.

Intermediate synchronization and verification of virtual hosts

To synchronize virtual hosts:

1. Ensure that all the virtual hosts that are used later in the SAP installation and the NFS toolkit package setup are added to /etc/hosts. If a name resolver is used instead of /etc/hosts, ensure that all the virtual hosts resolve correctly.
2. Verify the order and entries for the host name lookups in /etc/nsswitch.conf. For example:
   hosts: files dns

Verification step(s): To verify whether the name is resolved with the same IP address, ping the virtual host on all the nodes.

ping nfsreloc
PING nfsreloc ( ) 56(84) bytes of data.
64 bytes from saplx-0-31 ( ): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from saplx-0-31 ( ): icmp_seq=2 ttl=64 time=0.070 ms

Intermediate synchronization and verification of mount points

The procedure for synchronizing mount points is as follows:

1. Ensure that all the file system mount points for this package are created on all the cluster nodes as specified in the prerequisites.

Verification step(s): For example, run the cd /sapmnt/c11 command on all the nodes, and test for availability.

Infrastructure setup for NFS toolkit (Phase 1a)

If a dedicated NFS toolkit package sapnfs for SAP is planned for the installation, it must be set up at a very early stage. You can create the package using either the Serviceguard Manager or the CLI interface.

NOTE: If a common sapnfs package already exists, it can be extended by the new volume groups, file systems, and exports.

Mount points for the directories that are used by the NFS toolkit package and the automount subsystem must exist as part of the prerequisites. If the mount points do not exist, you must create them as required. For example:

mkdir -p /export/sapmnt/c11

Creating NFS Toolkit package using Serviceguard Manager

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the GUI steps; the CLI steps are described in the Creating NFS toolkit package using Serviceguard CLI (page 51) section.

The Serviceguard Manager GUI can be used to set up, verify, and apply SAP sapnfs packages. To create an NFS toolkit package:

1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, then select Create a Modular Package from the drop down menu.
2. If toolkits are installed, a Toolkits Selection screen for selecting toolkits appears. Click yes following the question Do you want to use a toolkit?

3. Select NFS toolkit and click Next >>. The Package Selection screen appears.

Figure 12 Toolkit selection page

4. In the Package Name box, enter a package name that is unique for the cluster.
   NOTE: The name can contain a maximum of 39 alphanumeric characters, dots, dashes, or underscores.
   The Failover package type is pre-selected and Multi-Node is disabled. NFS does not support Multi-Node.
5. Click Next >>. The Modules Selection screen appears. The modules in the Required Modules table are selected by default and cannot be changed. In the Select Modules table, you can select additional modules (or clear the default recommended selections) by selecting the check box next to each module that you want to add (or remove) from the package.
   NOTE: Click Reset at the bottom of the screen to return to the default selections.
6. Click Next >>. The first of several consecutive sg/failover modules configuration screens appears with the following message at the top of the screen: Step 1 of X: Configure Failover module attributes (sg/failover). X will vary, depending on how many modules you selected. There are two tables in this screen, Select Nodes and Specify Parameters. By default, nodes and node order are pre-selected in the Select Nodes table. You can clear the selection or change the node order to accommodate your configuration requirements. Alternatively, you can select Enable package to run on any node configured in the cluster (node order defined by Serviceguard) and allow Serviceguard to define the node order. To help in decision making, you can move the cursor over the configurable parameters and view the tool tips that provide more information about the parameter.
7. Click Next >>. The second of several consecutive sg/failover modules configuration screens appears. Fill in the required fields, and accept or edit the default settings.
8. Click Next >> at the bottom of the screen to open the next screen in the series.
9. See the configuration summary below for an example of the NFS file system mount points, the directories to export, as well as the NFS export options.
10. After you complete all of the configuration screens, the Verify and submit configuration change screen is displayed. Use the Check Configuration and Apply Configuration buttons at the bottom of the screen to confirm and apply your changes.

Figure 13 Configuration summary page - sapnfs package

Figure 14 Configuration summary page - sapnfs package (continued)

Creating NFS toolkit package using Serviceguard CLI

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the Creating NFS Toolkit package using Serviceguard Manager (page 48) section.

1. Run the cmmakepkg -n sapnfs -m tkit/nfs/nfs sapnfs.config command to create the NFS server package configuration file using the CLI.
2. Edit the sapnfs.config configuration file. The following is an example for a package configuration with volume group vgnfs and file system lvnfsc11 to be exported and mounted from the virtual host nfsreloc.
   Add the relevant attributes for the NFS server: virtual hostname, volume groups, file systems, and the mount point of the /export directory. A package_ip address specifies the virtual address through which the NFS clients must mount the exports.

   vg vgnfs
   fs_name /dev/vgnfs/lvnfsc11
   fs_server ""
   fs_directory /export/sapmnt/c11
   fs_type ext4
   fs_mount_opt ""
   fs_umount_opt ""
   fs_fsck_opt ""
   ip_subnet <subnet>
   ip_address <virtual IP address>

   Add the list of exported file systems for the NFS clients. The fsid needs to be unique for each exported file system:

   tkit/nfs/nfs/xfs "-o rw,no_root_squash,fsid=102 *:/export/sapmnt/c11"

   NOTE: Change the service_name attribute, if it is not unique within the cluster.
3. Run the cmapplyconf -P sapnfs.config command to apply the package.
4. Run the cmrunpkg sapnfs command to start the package.

Automount setup

Add the following to /etc/auto.direct on each NFS client that will mount /sapmnt/c11 from the NFS server. This is also valid if the NFS server and an NFS client are on the same cluster node.

/sapmnt/c11 -fstype=nfs,nfsvers=3,udp,nosymlink <NFS package IP>:/export/sapmnt/C11

NOTE: You can specify the virtual host name nfsreloc instead of the IP address.

Reload the autofs changes with:

/etc/init.d/autofs reload

For more information about how to specify options and the NFS export string, see the HP Serviceguard Toolkit for NFS version A for Linux User Guide at linux-serviceguard-docs.

NOTE: If a common sapnfs package already exists, it can be extended by the new volume groups, file systems, and exports instead.
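Before applying the sapnfs package in step 3 above, the edited configuration can be validated; a short sketch using the standard Serviceguard commands (the hostname nfsreloc is the example virtual host from this section):

    cmcheckconf -P sapnfs.config
    cmapplyconf -P sapnfs.config
    cmrunpkg sapnfs
    showmount -e nfsreloc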

Solution Manager diagnostic agent file system preparations related to NFS toolkit

If a dialog instance with a virtual hostname is installed initially and clustering the instance is done later, then some steps related to the file system layout must be performed before the SAP installation starts. These steps are optional if:

- It is planned to keep all the diagnostic agent installations on the local file system, or
- The agent is not configured to move with the related dialog instance.

The SAP installation installs a separate diagnostic agent instance for each host of a dialog instance installation (physical and virtual). Therefore, diagnostic agent and dialog instances are linked via the virtual hostname and share the same IP address. As a result of this link, an agent instance must move with the related (clustered) dialog instances if the dialog instances fail over. As described in Chapter 4, SAP cluster storage layout planning (page 30), the logical volume of the diagnostic agent also has to fail over.

There is also a SYS directory underneath /usr/sap/<DASID>. Compared to other SAP installations, this does not contain links to /sapmnt. To have the same diagnostic agent SYS available on all the cluster nodes, these links must be created and subsequently /sapmnt/<DASID> must be mapped to a NFS-exported directory.

The steps to set up the file system layout are as follows:

1. Create the directory /sapmnt/<DASID>.
2. Create a link from /usr/sap/<DASID>/SYS to /sapmnt/<DASID>.
3. Create a logical volume and file system for the files on /sapmnt/<DASID>.
4. Mount that file system to /export/sapmnt/<DASID> (create this directory if it doesn't exist yet) and export it via NFS.
5. Mount the exported file system to /sapmnt/<DASID>.

To make this exported file system highly available, the same mechanism as for other SAP SIDs can be used:

1. Add the exported file system together with its volume group, logical volume, and file system mount points to a NFS toolkit package.
2. Add /sapmnt/<DASID> to the automount configuration.
3. The mount point /export/sapmnt/<DASID> must be available on all the cluster nodes where the NFS toolkit package runs. /sapmnt/<DASID> must be available on all the nodes where the dialog instances run.
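The file system preparation above can be sketched as the following command sequence (DASID, volume group, and logical volume names are placeholders and must be adapted; the NFS export and automount steps follow the same pattern as for the sapnfs package):

    mkdir -p /sapmnt/DASID
    ln -s /sapmnt/DASID /usr/sap/DASID/SYS
    # create a logical volume and file system for /sapmnt/DASID (names are examples)
    lvcreate -L 2G -n lvsapmntdas vgdas
    mkfs.ext4 /dev/vgdas/lvsapmntdas
    mkdir -p /export/sapmnt/DASID
    mount /dev/vgdas/lvsapmntdas /export/sapmnt/DASID
    # export /export/sapmnt/DASID via the NFS toolkit package and add /sapmnt/DASID
    # to the automount configuration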

Intermediate node sync and verification

For more information about synchronization of the other cluster nodes with the automount configuration, see the Post SAP installation tasks and final node synchronization (Phase 3a) (page 59) section. It is possible to perform intermediate synchronization to test the NFS configuration. For more information on synchronization with the other cluster nodes, see the NFS and automount synchronization (page 63) section.

Verification step(s):

1. Check if the package starts up on each cluster node where it is configured.
2. Run showmount -e <sapnfs package ip_address> and verify that name resolution works.
3. Run showmount -e <virtual NFS hostname> on an external system (or a cluster node currently not running the sapnfs package) and check that the exported file systems are shown.

On each NFS client in the cluster, check the following:

- Run the cd /usr/sap/trans command to check read access to the NFS server directories.
- Run the touch /sapmnt/c11/abc; rm /sapmnt/c11/abc command to check write access.

NOTE: For more information on synchronization with the other cluster nodes, see the NFS and automount synchronization (page 63) section. For information on the final synchronization of the other cluster nodes with the automount configuration, see the Post SAP installation tasks and final node synchronization (Phase 3a) (page 59) section.

Infrastructure Setup - SAP base package setup (Phase 1b)

This step finally makes the basic infrastructure available to start the SAP installation afterwards. This includes the instance and database file systems as well as the IP addresses of the virtual hostnames used for the installation. While there are other, manual ways to provide that basic infrastructure, setting up a Serviceguard package is the recommended way.

There are two ways to set up the initial SAP base package:

1. Set up the package with both Serviceguard and SGeSAP modules.
2. Set up the package with only Serviceguard modules.

Intermediate synchronization and verification of mount points

The procedure for synchronizing mount points is as follows:

Ensure that all the file system mount points for this package are created on all the cluster nodes as specified in the prerequisites. For example:

mkdir /usr/sap/c11/scs40

Verification step(s): Invoke cd /usr/sap/c11/scs40 on all the nodes and test for availability.

SAP base package with Serviceguard and SGeSAP modules

At this stage any SGeSAP modules relevant to the SAP instance can be included into the base package configuration.

Creating the package with the Serviceguard Manager

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the GUI steps; the CLI steps are described in the Creating the package configuration file with the CLI (page 55) section.

1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, and then select Create a Modular Package from the drop down menu. If Metrocluster is installed, a Create a Modular Package screen for selecting Metrocluster appears. If you do not want to create a Metrocluster package, click no (default is yes). Click Next >> and another Create a Modular Package screen appears.
2. If toolkits are installed, a Create a Modular Package screen for selecting toolkits appears.
3. Click yes following the question Do you want to use a toolkit?
4. Select the SGeSAP toolkit.
5. In the Select the SAP Components in the Package table, select SAP Instances. This component is incompatible with the SAP NetWeaver Operation Resource (enqor). Optionally, for a database package, select SAP Database Instance. For a combined package, select both SAP Instances and SAP Database Instance. Other combinations are also possible.

Figure 15 Toolkit selection screen

6. Click Next >> and in the Select package type window, enter a package name. The Failover package type is pre-selected and Multi-Node is disabled. A SGeSAP package with SAP instances does not support Multi-Node.
7. Click Next >> at the bottom of the screen and another Create a Modular Package screen appears with the following messages at the top of the screen: The recommended modules have been preselected. Choose additional modules for extra Serviceguard capabilities.
8. The modules in the Required Modules window are set by default and cannot be changed. In the Select Modules window, you can select additional modules (or clear the default recommended selections) by selecting the check box next to each module that you want to add (or remove) from the package. Click Reset to return to the default selections.

9. Click Next >> and another Create a Modular Package screen appears with the following message: Step 1 of X: Configure Failover module attributes (sg/failover), where X will vary depending on how many modules you selected. There are two windows in this screen, Select Nodes and Specify Parameters. By default, nodes and node order are pre-selected in the Select Nodes window. You can deselect nodes, or change the node order, to accommodate your configuration requirements. Alternatively, you can select Enable package to run on any node configured in the cluster (node order defined by Serviceguard) and allow Serviceguard to define the node order. To help in decision making, you can move the cursor over the configurable parameters and view the tool tips that provide information about the parameter.
10. Click Next >> and another Create a Modular Package screen appears: Step 2 of X: Configure SGeSAP parameters global to all clustered SAP software (sgesap/sap_global). Fill in the required fields and accept or edit the default settings. Click Next >>. Another Create a Modular Package screen appears with the following message at the top of the screen: Step 3 of X: Configure SGeSAP SAP instance parameters (sgesap/sapinstance). Fill in the required fields, and accept or edit the default settings. Click Next >> until the mandatory <SID> can be entered.

Figure 16 Configuration screen: SAP System ID

11. After you are done with all the Create a Modular Package configuration screens, the Verify and submit configuration change screen appears. Use the Check Configuration and Apply Configuration buttons to confirm and apply your changes.

Creating the package configuration file with the CLI

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the Creating the package with the Serviceguard Manager (page 53) section.

Invoke the cmmakepkg -n <pkg> -m sgesap/sapinstance [-m ...] <pkg>.config command.
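For example, to create a combined configuration for a SAP instance and database package of SAP system C11 (the package and file names are examples):

    cmmakepkg -n c11cidb -m sgesap/sapinstance -m sgesap/dbinstance c11cidb.config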

Initially no SGeSAP attributes are enabled, except for the mandatory attribute sgesap/sap_global/sap_system, which must be set to the SAP SID designated for the installation. All other SGeSAP-related attributes must be left unspecified at this point.

For a database package, specify the module sgesap/dbinstance. An sgesap/dbinstance does not have any mandatory attributes. For a combined package, both the sgesap/sapinstance and sgesap/dbinstance modules must be specified. Other combinations are also possible.

NOTE: Specifying the SGeSAP modules automatically adds the necessary Serviceguard modules, such as volume_group, filesystem, or package_ip, required for a base package.

SAP base package with Serviceguard modules only

It is possible to create a package configuration specifying only the Serviceguard modules. SGeSAP modules can be added later. Such a package configuration requires at least the following Serviceguard modules:

- volume_group
- filesystem
- package_ip

NOTE: Include the service module at this stage to use SGeSAP service monitors (for both SAP instance and DB) at a later stage. Add the generic_resource module if the package is used for a SGeSAP SCS/ERS follow-and-push configuration.

Creating the package with Serviceguard Manager

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the GUI steps; the CLI steps are described in the Creating the package configuration file with the CLI (page 57) section.

Figure 17 Module selection page

Click Reset at the bottom of the screen to return to the default selections.

6. After you are done with all the Create a Modular Package configuration screens, the Verify and submit configuration change screen appears. Use the Check Configuration and Apply Configuration buttons to confirm and apply your changes.

Creating the package configuration file with the CLI

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the Creating the package with Serviceguard Manager (page 56) section.

1. Run one of the following commands:

   cmmakepkg -n <pkg> -m sg/volume_group -m sg/filesystem -m sg/package_ip <pkg>.config

   or

   cmmakepkg -n <pkg> -m sg/volume_group -m sg/filesystem -m sg/package_ip -m sg/service -m sg/generic_resource <pkg>.config

   or

   cmmakepkg -n <pkg> -m sg/all <pkg>.config

   Add the required attributes for the SAP instance and database installation to the resulting package configuration. The required attributes are vg, fs_name, fs_directory, fs_type, ip_subnet, and ip_address. For examples of these attributes, see the Creating NFS toolkit package using Serviceguard CLI (page 51) section.
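As an illustration, the attribute section of a base package prepared for an SCS instance might contain entries like the following sketch (the volume group, logical volume, subnet, and address values are placeholders and must be replaced with site-specific values):

    vg vgc11scs
    fs_name /dev/vgc11scs/lvscs40
    fs_directory /usr/sap/c11/scs40
    fs_type ext4
    ip_subnet 192.168.10.0
    ip_address 192.168.10.52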

2. Verify the package using the cmcheckconf -P <pkg>.config command, and if there are no errors, run the cmapplyconf -P <pkg>.config command to apply the configuration.

Verification steps

A simple verification of the newly created base package is to test whether the package startup succeeds on each cluster node where it is configured.

SAP installation (Phase 2)

This section provides information for installing SAP into a Serviceguard cluster environment. For more information, see the SAP installation guides.

Prerequisites

- SAP instances are normally installed on one dedicated node (referred to as <primary>). In Phase 3, the changes from the SAP installation on the <primary> node must be distributed to the other nodes in the cluster (referred to as <secondary> nodes) intended to run the SAP instance. Once the SAP instance is clustered, there is no concept of <primary> and <secondary> node anymore and all the nodes provide an identical environment.
- The sapnfs package must be running on one of the nodes in the cluster and exporting the relevant NFS file systems. The automount subsystem must be running and the NFS client file systems must be available.
- The SGeSAP base package(s) for the SAP instance installation must be running on the <primary> node, that is, the node where the sapinst tool (now known as Software Provisioning Manager) is executed to start the installation.
- Determine the SAP instance number(s) to be used for the installation and verify that they are unique in the cluster.

Installation of SAP instances

The SAP instances and database are installed with the SAP sapinst tool. Use either of the following methods to specify the virtual hostname to which the instances are to be attached:

- Run export SAPINST_USE_HOSTNAME=<virtual host> before running sapinst, or
- Add the virtual hostname to the command line: sapinst SAPINST_USE_HOSTNAME=<virtual host>.

The sapinst tool offers various installation types. The type of installation determines the way instances and the database are installed and the flexibility of the setup. The three installation types offered are:

- Central system
- Distributed system
- High-Availability system

NOTE: While all three installation types can be clustered, the recommended installation type is the High-Availability system. The HA option is available for all the Netweaver 7.x versions.

After completing the installation, the SAP system must be up and running on the local (primary) node. For more information on the installation types, see the SAP installation guides.
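For example, to run the installation attached to the virtual hostname vhostscs (a hostname used as an example elsewhere in this chapter):

    export SAPINST_USE_HOSTNAME=vhostscs
    ./sapinst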

Post SAP installation tasks and final node synchronization (Phase 3a)

After the SAP installation has completed in Phase 2, some SAP configuration values may have to be changed for running the instance in the cluster. Additionally, each cluster node (except the primary where the SAP installation ran) must be updated to reflect the configuration changes from the primary. Complete the following tasks for Phase 3 before the SGeSAP package is finalized:

- Configuration settings of the SAP installation:
  - Program start entries in the SAP profiles
  - MaxDB xserver autostart
  - Oracle listener names
  - Hostname references in DB configuration files
  - Review SAP parameters that can conflict with SGeSAP
- At the Operating System (OS) level, synchronize the SAP installation changes on the primary node with all the secondary nodes in the cluster:
  - User and group related information for the SAP SID and DB administrators
  - Update login scripts containing virtual hostnames
  - Duplicate file systems identified as local to the secondary nodes. For more information on local file systems, see Chapter 4, SAP cluster storage layout planning (page 30).
  - DB related synchronization for MaxDB and Oracle
  - Adjust system-wide configuration parameters to meet SAP requirements

NOTE: Before starting with any Phase 3 configuration steps, ensure that all the base packages are up and running. For example, ensure that all the relevant file systems are mounted. This is important for the Serviceguard Manager auto-discovery tool to work properly and provide pre-filled fields of the currently installed SAP configuration.

SAP post installation modifications and checks

HP recommends that you modify some of the settings generated by the SAP installation to avoid conflicting behavior when run together with SGeSAP.

Disable SCS enqueue restarts, if SGeSAP ERS is also configured

Disable SCS enqueue restarts if SCS is installed with the Restart_Program parameter enabled for the enqueue process. This configuration automatically restarts the enqueue on the current node, destroying the replicated enqueue lock table when the ERS reconnects to the restarted SCS instance. The desired behavior is that the SCS package fails over to the node where the ERS package with the replicated lock table is running and recovers the replicated enqueue locks from there.

In the [A]SCS profile, the line with Restart (the number might vary)

Restart_Program_01 = local $(_EN)

has to be changed to Start:

Start_Program_01 = local $(_EN)

Avoid database startup as part of dialog instance startup

A dialog instance installation contains a Start_Program_00 = immediate $(_DB) entry in its profile. This entry is generated by the SAP installation to start the DB before the dialog instance is started. It is recommended to disable this entry to avoid possible conflicts with the DB startup managed by the SGeSAP database package.

MaxDB/liveCache: Disable autostart of instance specific xservers

With an isolated installation, each MaxDB/liveCache 7.8 database has its own installation-specific xservers. The global xserver exists for older databases and for forwarding requests to the xserver of a 7.8 database. The startup of the global xserver also starts the installation-specific xserver out of the DB-specific bin directory.

NOTE: Stopping the global xserver does not stop the specific ones.

When running more than one 7.8 database in a clustered environment, this startup behavior can lead to error messages (because the file system with the specific bin directory of the other DB is not mounted) or even busy file system error messages belonging to the other database. Therefore, it is recommended to switch the xserver autostart feature off when running more than one MaxDB database and/or liveCache in the cluster. However, SAP's startdb currently relies on autostart being switched on and therefore does not explicitly start the DB-specific xserver.

Switch off the autostart by editing the [Params-/sapdb/<DBSID>/db] section in /sapdb/data/config/installations.ini: set XserverAutostart to no (default=yes).

A DB or liveCache SGeSAP package with the sgesap/dbinstance module or the sgesap/livecache module configured controls the starting and stopping of the instance-specific xserver together with the package.
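A sketch of the corresponding entry in /sapdb/data/config/installations.ini (the exact key/value syntax should be verified against the installed MaxDB version; <DBSID> is a placeholder):

    [Params-/sapdb/<DBSID>/db]
    XserverAutostart = no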

Oracle: Set SID specific listener names

If more than one Oracle database is configured to run on a cluster node, it is recommended to use <SID>-specific listener names to avoid conflicts. Duplicate the contents of the file listener.ora. For the first section, replace LISTENER with LISTENER<SID1>. For the duplicated section, replace LISTENER with LISTENER<SID2>. Update the HOST and PORT lines for each listener definition with the respective new values.

NOTE: If an existing package has the LISTENER name configured, then it must also be updated with the new name.

By adding tlistsrv<SID2> entries to /etc/services, the use of the port can be documented. The ports must reflect the PORT used in listener.ora.

Check if database configuration files use the DB virtual hostname

The database installation by SAP configures the virtual host names, but it is recommended to verify whether the files listed in Table 18 (page 60) are properly configured. You must update the files if they are not properly configured.

Table 18 DB configuration files

Oracle, tnsnames.ora (paths: $ORACLE_HOME/network/admin, /oracle/client/<vers>/network/admin, /sapmnt/<SID>/profile/oracle): check the (HOST = hostname) entries.
Oracle, listener.ora (path: $ORACLE_HOME/network/admin): check the (HOST = hostname) entries.
MaxDB, .XUSER.62 (path: /home/<sid>adm): check the node name in the xuser list output. If necessary, recreate the user keys with xuser -n <vhost>.
Sybase, interfaces (path: $SYBASE): check the fourth column of the master and query entries for each server.

User synchronization

The synchronization of the user environment consists of the following three sub-tasks after completing the SAP installation:

- Synchronize the user and group ids.
- Copy the home directories to all the secondary nodes.
- In the home directories, adapt the filenames containing hostnames.

NOTE: The user environment necessary to run the SAP instances and database instances must be identical on each cluster node. To be independent of services external to the cluster like DNS or LDAP, local authorization (for example, /etc/passwd, /etc/shadow, and /etc/group) is recommended for user and group information.

The SAP and database administrators of the various SAP and DB instances require the entries listed in Table 19 (page 61). The database-specific users and groups exist only if SAP is installed with the corresponding database.

Table 19 Password file users

sapadm: SAP system administrator; home directory /home/sapadm
<sid>adm: SAP SID administrator; home directory /home/<sid>adm
<dasid>adm: SAP Diagnostic Agent administrator; home directory /home/<dasid>adm
ora<dbsid>: Oracle database administrator; home directory /home/ora<dbsid> or /oracle/<dbsid> (shared)
sqd<dbsid>: MaxDB database administrator; home directory /home/sqd<dbsid>
<lcsid>adm: liveCache database administrator (1); home directory /home/<lcsid>adm
sdb: MaxDB file owner
syb<dbsid>: Sybase database administrator; home directory /sybase/<dbsid> (shared)

(1) Does not follow the sqd<dbsid> MaxDB convention.

Table 20 Groupfile file groups

sapsys: Primary group for all SAP SID users and DB users
sapinst: SAP installer group, secondary for SAP SID and DB users
sdba: MaxDB file owner
oper: Oracle database operators (limited privileges)
dba: Oracle database administrators
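A quick way to verify that the user and group IDs are consistent is to compare the numeric IDs on every cluster node; for example (the user names assume SAP SID C11 with an Oracle database):

    id c11adm
    id orac11
    getent group sapsys sapinst dba oper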

NOTE: For more information on the terms local, shared exclusive, and shared NFS file systems used in this section, see Chapter 4, SAP cluster storage layout planning (page 30).

Along with synchronizing user and group information, the HOME directories of the administrators must be created on the local file system on each secondary node (unless the directory does not reside on a local disk, as is the case for some DB users). This duplication of the user's HOME to the secondary nodes is done by running the tar command on the HOME directory on the primary and unpacking that archive on the secondary. Use the tar -p flag to preserve permissions, user, and group ids.

Some SAP login scripts in the <sid>adm and database admin HOME directories contain versions for execution on the local node, that is, they contain hostnames in their filename. These login scripts also have versions for bash (.sh) and csh (.csh). Some filenames that are often found are .sapenv, .dbenv, .lcenv, .sapsrc, and .dbsrc (not all of them necessarily exist). As these files are copied from the primary's home directories, they will have the primary node name in the filename. The primary node name must be replaced with the secondary node name.

For example, .dbenv_sgxsap50.csh on sgxsap51 was duplicated from sgxsap50. On the secondary node sgxsap51, execute:

mv .dbenv_sgxsap50.csh .dbenv_sgxsap51.csh

In older installations, startsap and stopsap scripts exist with the primary node name in the filename. These must be replaced accordingly.

In the case of the Sybase database administrator, the home directory resides on a shared exclusive disk. No duplication is required, as the file system is only mounted on the node running the package. However, the node-specific login scripts exist and therefore must be created for all the secondary nodes. Run the copy command cp .dbenv_sgxsap50.csh .dbenv_sgxsap51.csh on the primary (instead of the mv command).

Verification: Log in to each secondary cluster node and su to each user listed in Table 19 (page 61). The command must not produce errors. If the home directory does not reside on a local file system but on a shared file system instead, start the package containing the corresponding volume group on that node first.
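The home directory duplication and login script renaming described above can be sketched as follows (the hostnames sgxsap50/sgxsap51 and the user c11adm are examples; tar -p preserves permissions and ownership):

    # on the primary node sgxsap50
    cd /home && tar -cpf /tmp/c11adm_home.tar c11adm
    scp /tmp/c11adm_home.tar sgxsap51:/tmp/
    # on the secondary node sgxsap51
    cd /home && tar -xpf /tmp/c11adm_home.tar
    mv /home/c11adm/.dbenv_sgxsap50.csh /home/c11adm/.dbenv_sgxsap51.csh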

Network services synchronization

During the SAP installation, the file /etc/services is updated with the SAP definitions. These updates must be synchronized with /etc/services on the secondary nodes. The very first SAP installation on the primary node creates all the entries for the first four types of entries in Table 21 (page 62).

Table 21 Services on the primary node

sapdp<INR>: Dispatcher ports
sapdp<INR>s: Dispatcher ports (secure)
sapgw<INR>: Gateway ports
sapgw<INR>s: Gateway ports (secure)
sapms<SID>: Port for the (ABAP) message server of installation <SID>
saphostctrl: SAP hostctrl
saphostctrls: SAP hostctrl (secure)
tlistsrv: Oracle listener port
sql6: MaxDB
sapdbni72: MaxDB

NOTE: <INR> = instance number. There are no services related to the Sybase ASE database in /etc/services.

NFS and automount synchronization

1. Synchronize the automount configuration on all the secondary nodes, if this was not done in Phase 1.
2. Create the mount points for the directories that are used by the NFS package and the automount subsystem. For example: mkdir -p /sapmnt/c11
3. Synchronize /etc/auto.direct from the primary node to all secondary nodes.
4. Reload the autofs with /etc/init.d/autofs reload.

SAP hostagent installation on secondary cluster nodes

It is recommended to have a SAP hostagent installation on each cluster node, even though it is not a requirement for SGeSAP. Such an installation might already exist on these hosts through a previous installation. If not, it must be installed according to the instructions in the corresponding SAP note. The sapinst used for the instance installation may offer an option to install the hostagent. This step can be executed within or directly after phase 2. Make sure that both the uid of sapadm and the gid of sapsys are identical on all the cluster nodes.

Other local file systems and synchronization

There are other directories and files created during the SAP installation that reside on local file systems on the primary node. These must be copied to the secondary node(s).

SAP: Recreate the directory structure /usr/sap/<SID>/SYS on all secondary nodes.

MaxDB files: Copy the local /etc/opt/sdb to all the secondary nodes. This is required only after the first MaxDB or liveCache installation. If /var/spool/sql was created by the installation (usually only for older versions), recreate the directory structure on all the secondary nodes.

SAP's Oracle instant client files: Depending on the Oracle client version, the SAP Oracle instant client files are installed in either the /oracle/client/11x_64 or /oracle/client/10x_64 directory. Synchronize these with the secondary nodes.

For more information about the Oracle instant client installation and configuration in SAP environments, see the corresponding SAP note.

Oracle client installation for MDM

If an MDM configuration is set up as a distributed Oracle configuration, for example, database and MDM server run in separate packages, then the full Oracle client installation is required.

1. After the installation, update tnsnames.ora with the virtual hostnames as described in Check if database configuration files use the DB virtual hostname.
2. Synchronize /oracle/client with all the secondary nodes.
3. Set the environment for the mdm<sid> administrator as follows:
   export LD_LIBRARY_PATH=/oracle/client/112_64/lib
   export ORACLE_HOME=/oracle/client/112_64
4. Synchronize these with the secondary nodes.

Verification: After synchronization, it is possible to manually start up the SAP instances and database on all the cluster nodes where their base packages are configured. Follow the procedure below to test the manual start and stop before SGeSAP clustering:

1. Stop the SAP instances and/or database intended for this package (by using stopsap or other SAP commands like sapcontrol).
2. Stop that package on the local node.
3. Start the package on another cluster node.
4. Start the SAP instances on the new cluster node.

This is the preliminary test for SGeSAP clustering. If this test fails, clustering the SAP instance later on with SGeSAP will also fail.

Completing SGeSAP package creation (Phase 3b)

The three options for creating the final SGeSAP package are as follows:

- Easy deployment with the deploysappkgs command.
- Guided configuration using Serviceguard Manager.
- Package creation with the CLI interface using the cmmakepkg and cmapplyconf commands.

Creating SGeSAP package with easy deployment

To create the packages with the deploysappkgs command, run:

deploysappkgs combi C11

or

deploysappkgs multi C11

This command attempts to create either a minimal (combi = combined packages) or a maximum (multi = multiple packages) number of packages. If suitable base packages were already created in phase 1, it extends those packages with the necessary attributes found for the installed C11 instances. If necessary, the configuration file for the enqor multi-node package is also created.

You must review the resulting configuration files before applying them. Depending on the attributes changed or added, the cmapplyconf command might fail and running packages might have to be stopped.

NOTE: To get complete package configurations, it is recommended that the SAP database and instances are running on the node where deploysappkgs is invoked. Otherwise, attributes (especially regarding the filesystem and volume_group modules) might be missing. deploysappkgs can also be invoked at the end of Phase 2 on the primary node. However, the created package configurations cannot be applied yet.

Creating SGeSAP package with guided configuration using Serviceguard Manager

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the GUI steps; the CLI steps are described in the Creating SGeSAP package with CLI interface (page 65) section.

1. Start the Serviceguard Manager and add the SGeSAP modules to the existing base packages, if they were not added in phase 1.
2. Update the SGeSAP attributes with the current values, if the SGeSAP modules were already added in Phase 1.

Creating SGeSAP package with CLI interface

NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the Creating SGeSAP package with guided configuration using Serviceguard Manager (page 65) section.

The SGeSAP configuration must be added to the Serviceguard base packages created earlier, if it was not added in phase 1. An example for adding or updating the package using the command line is as follows:

1. Run the mv <pkg>.config <pkg>.config.save command to add the sapinstance module to an existing package.
2. Run the cmmakepkg -m sgesap/sapinstance -i <pkg>.config.save <pkg>.config command.
3. Run the mv <pkg>.config <pkg>.config.save command to add the dbinstance module to an existing package.
4. Run the cmmakepkg -m sgesap/dbinstance -i <pkg>.config.save <pkg>.config command.
5. Edit the package configuration file to update the relevant SGeSAP attributes.
   NOTE: Non-SGeSAP attributes such as service or generic_resource must also be updated.
6. Run the cmapplyconf -P <pkg>.config command to apply the configuration.
7. Run the cmrunpkg <pkg> command to start the package.

Module sgesap/sap_global SAP common instance settings

This module contains the common SAP instance settings that are included by the following SGeSAP modules:

- sapinstance
- mdminstance
- sapextinstance
- sapinfra

The following table describes the SGeSAP sap_global parameters and their respective values:

sgesap/sap_global/sap_system (possible value: C11): Defines the unique SAP System Identifier (SAP SID).

sgesap/sap_global/rem_comm (possible values: ssh, rsh): Defines the command for remote executions. The default is ssh.

sgesap/sap_global/parallel_startup (possible values: yes, no): Allows the parallel startup of SAP application server instances. If set to no, the instances start sequentially.

sgesap/sap_global/cleanup_policy (possible values: normal, lazy, strict): Before the instance startups, the package attempts to free up unused system resources (temporary files, IPC resources, and so on) in order to make the startups more likely to succeed. A database package only frees up database-related resources. SAP instance packages only remove IPCs belonging to SAP administrators. If this parameter is set to normal, only instance shared memory is cleaned up. If this parameter is set to lazy, cleanup is deactivated. If this parameter is set to strict, all shared memory is cleaned up, regardless of whether a process is attached. NOTE: Using the strict setting can crash running instances of different SAP systems on the failover host.

sgesap/sap_global/retry_count (possible value: 5): Specifies the number of retries for several cluster operations that might not succeed immediately due to racing conditions with other parts of the system. The default is 5.

sgesap/sap_global/sapcontrol_usage (possible values: preferred, exclusive, disabled): Specifies whether the SAP sapcontrol interface and the SAP startup agent framework are required for startup, shutdown, and monitoring of SAP software components. Setting the value to preferred ensures that all the available SAP-provided legacy monitoring tools are used in addition to the agent framework monitors. When the value is exclusive, only sapcontrol is used to start, stop, and monitor SAP instances. When the value is disabled, the sapcontrol method is not used to start, stop, and monitor SAP instances. The default is preferred.
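A sketch of how these settings might look for SAP system C11 (the values are examples; the attribute names follow the parameter table above, and the exact form in the file generated by cmmakepkg may differ):

    sgesap/sap_global/sap_system C11
    sgesap/sap_global/rem_comm ssh
    sgesap/sap_global/parallel_startup yes
    sgesap/sap_global/cleanup_policy normal
    sgesap/sap_global/retry_count 5
    sgesap/sap_global/sapcontrol_usage preferred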

Module sgesap/sapinstance SAP instances

This module contains the common attributes for any SAP Netweaver instance. The following table describes the SAP instance parameters:

sgesap/stack/sap_instance (possible values: SCS40, ERS50): Defines any SAP Netweaver instance, such as DVEBMGS, SCS, ERS, D, J, ASCS, MDS, MDIS, MDSS, W, G.

sgesap/stack/sap_virtual_hostname (possible values: vhostscs, vhosters): Corresponds to the virtual hostname that is specified during the SAP installation.

sgesap/stack/sap_replicated_instance (possible value: SCS40): For each SAP ERS instance that is part of the package, the corresponding replicated Central Service instance (SCS/ASCS) needs to be specified.

sgesap/stack/sap_stop_blocked (possible value: no): Blocks manually triggered instance stop commands.

Figure 18 Configuring SAP instance screen
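For instance, a package protecting the ERS50 instance from the table above might carry these values (a sketch; the names follow the parameter table, the values are the examples given there):

    sgesap/stack/sap_instance ERS50
    sgesap/stack/sap_virtual_hostname vhosters
    sgesap/stack/sap_replicated_instance SCS40
    sgesap/stack/sap_stop_blocked no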

Module sgesap/dbinstance SAP databases

This module defines the common attributes of the underlying database.

sgesap/db_global/db_vendor (possible value: oracle): Defines the underlying RDBMS database: Oracle, MaxDB, or Sybase.

sgesap/db_global/db_system (possible value: C11): Determines the name of the database (schema) for SAP.

For db_vendor = oracle:

sgesap/oracledb_spec/listener_name (possible value: LISTENER): Oracle listener name. Specify it if the name was changed to a SID-specific name.

sgesap/oracledb_spec/listener_password: Specify the Oracle listener password, if set.

For db_vendor = maxdb:

sgesap/maxdb_spec/maxdb_userkey (possible value: c): User key of the control user.

For db_vendor = sybase:

sgesap/sybasedb_spec/aseuser (possible value: sapsa): Sybase system administration or monitoring user.

sgesap/sybasedb_spec/asepasswd: Password for the specified aseuser attribute.

For Sybase, the attributes aseuser and asepasswd are optional. When specified and the user has system administration rights, it is used for native fallback in database shutdown situations. If the user does not have the rights, it is used for monitoring purposes only.

Figure 19 Configuring SAP instances

Module sgesap/mdminstance SAP MDM repositories

The sgesap/mdminstance module is based on sgesap/sapinstance with additional attributes for MDM repositories, MDM access strings, and MDM credentials. Many configurations combine the MDM instances like MDS, MDIS, and MDSS (and possibly a DB instance) into one SGeSAP package. This is called a MDM Central or MDM Central System installation. Each instance can also be configured into separate packages, called a distributed MDM installation. All MDM repositories defined in the package configuration are automatically mounted and loaded from the database after the MDM server processes start successfully.

The following table contains some selected SGeSAP parameters relevant to a MDM Central System instance. For more information, see the package configuration file.

sgesap/sap_global/sap_system (value: MO7): Defines the unique SAP System Identifier (SAP SID).

sgesap/stack/sap_instance (value: MDS01): Example for defining an MDM MDS instance with instance number 01.

sgesap/stack/sap_instance (value: MDIS02): Example for defining an MDM MDIS instance with instance number 02 in the same package.

sgesap/stack/sap_instance (value: MDSS03): Example for defining an MDM MDSS instance with instance number 03 in the same package.

sgesap/stack/sap_virtual_hostname (value: mdsreloc): Defines the virtual IP hostname that is enabled with the start of this package.

sgesap/db_global/db_system (value: MO7): Determines the name of the database (schema) for SAP.

sgesap/db_global/db_vendor (value: oracle): Defines the underlying RDBMS database that is to be used with this instance.

sgesap/mdm_spec/mdm_mdshostspec_host (value: mdsreloc): The MDS server is accessible under this virtual IP address/hostname.

sgesap/mdm_spec/mdm_credentialspec_user (value: Admin): User credential for executing MDM CLIX commands.

sgesap/mdm_spec/mdm_credentialspec_password: Password credential for executing MDM CLIX commands.

The following contains some selected SGeSAP parameters relevant to the MDM repository configuration. For more information, see the package configuration file.

sgesap/mdm_spec/mdm_repositoryspec_repname (value: PRODUCT_HA_REP): MDM repository name.

sgesap/mdm_spec/mdm_repositoryspec_dbsid (value: MO7): DBMS instance name.

sgesap/mdm_spec/mdm_repositoryspec_dbtype (value: o): DBMS instance type; "o" stands for Oracle.

sgesap/mdm_spec/mdm_repositoryspec_dbuser (value: mo7adm): DBMS user name.

sgesap/mdm_spec/mdm_repositoryspec_dbpasswd (value: abcxyz): DBMS password.

Module sg/services SGeSAP monitors

Depending on the instance type configured in the package, SGeSAP monitors can be configured with this module to check the health of the instance. A monitor for the database used can also be configured.

Table 22 Module sg/services SGeSAP monitor parameters

service_name (value: CM2CIdisp): Unique name. A combination of package name and monitor type is recommended.

service_cmd (value: $SGCONF/monitors/sgesap/sapdisp.mon): Path to the monitor script.

service_restart (value: 0): Usually, no restarts must be configured for a SGeSAP monitor, so that an immediate failover occurs if the instance fails.

service_fast_fail (value: No): No fast fail configured.

service_halt_timeout (value: 5): A value > 0 gives the monitor some time to clean up after it receives the TERM signal.

Serviceguard Manager guided package setup pre-populates the services screen with the monitors appropriate for the instance if the service module has been selected to be included in the package.
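Expressed as a service stanza in a package configuration file, the values from Table 22 correspond to a sketch like the following (the service name is an example; the attribute names follow the table above and may differ slightly in the template generated by cmmakepkg):

    service_name CM2CIdisp
    service_cmd $SGCONF/monitors/sgesap/sapdisp.mon
    service_restart 0
    service_fast_fail no
    service_halt_timeout 5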

Configure a database monitor with:

service_name <pkg>datab
service_cmd $SGCONF/monitors/sgesap/sapdatab.mon

All other values are set up as described in Table 22 (page 69).

Module sg/generic_resource SGeSAP enqor resource

A generic resource has to be set up for the SCS and ERS packages if the SGeSAP enqueue follow-and-push mechanism is used. There is a common resource for each SCS/ERS pair. The naming of the resource follows this convention:

sgesap.enqor_<SID>_<ERS>

For example, SAP system C11 has SCS40 and ERS41 configured, and ERS41 replicates SCS40. Both the package containing SCS40 and the package of ERS41 must have the generic resource sgesap.enqor_c11_ers41 set up with the generic_resource module. The resource must be of evaluation_type before_package_start. The up_criteria for the SCS package is !=1, for the ERS package !=2.

For example, for the SCS package:

generic_resource_name sgesap.enqor_c11_ers41
generic_resource_evaluation_type before_package_start
generic_resource_up_criteria !=1

For the ERS package:

generic_resource_name sgesap.enqor_c11_ers41
generic_resource_evaluation_type before_package_start
generic_resource_up_criteria !=2

NOTE: In order to have any effect, these enqor resources require the enqor MNP to be up. The Serviceguard Manager guided configuration offers the correct values preselected for the generic_resource screen only if a SGeSAP enqor MNP is already set up. The deploysappkgs script supports the generic_resource module for enqor.

Module sg/dependency SGeSAP enqor MNP dependency

SCS and ERS packages taking part in the SGeSAP follow-and-push mechanism must have a same-node/up dependency on the enqor MNP. The attributes have to be set as follows:

dependency_name enqor_dep
dependency_location same_node
dependency_condition enqor = UP

The Serviceguard Manager guided configuration offers the correct values preselected for the dependency screen only if a SGeSAP enqor MNP is already set up. The deploysappkgs script supports the dependency module for enqor.

Module sgesap/enqor SGeSAP enqor MNP template

This module is used to set up the SGeSAP enqor MNP. It has no attributes to be configured. A SGeSAP enqor MNP is only mandatory in the SCS/ERS follow-and-push context. The sgesap/enqor module must not be combined with any other SGeSAP module. A configured enqor MNP is a prerequisite for the correct function of the sg/dependency and sg/generic_resource attributes configured into a sapinstance package as described above.

On the command line, an enqor MNP can be created with:

cmmakepkg -n enqor -m sgesap/enqor enqor.config

The resulting enqor.config can be applied without editing. The Serviceguard Manager offers the SAP Netweaver Operations Resource option in the Select the SAP Components in the Package screen for configuring the enqor MNP. deploysappkgs creates the enqor.config file when the follow-and-push mechanism is the recommended way of operation for the created SCS/ERS packages (and no enqor MNP is configured yet). In such a situation, deploysappkgs also extends the existing SCS/ERS packages with the required generic_resource and dependency modules and their attributes.

Verification of Phase 3:

Start and stop the packages on each configured node. When testing the SGeSAP follow-and-push mechanism, the enqor MNP package must be up. This restricts the possible nodes for SCS and ERS package startup. Make sure client applications (dialog instances) can connect.

Configuring sgesap/sapextinstance, sgesap/sapinfra and sgesap/livecache

This section describes configuring the SGeSAP toolkit with sgesap/sapextinstance, sgesap/sapinfra and sgesap/livecache parameters.

Remote access between cluster nodes and to external application servers

For external application servers configured in a package, remote access between the cluster nodes and to the external hosts needs to be enabled. Root access between cluster hosts must be enabled, and the users <sid>adm and root from the cluster (a cluster host can also assume the role of an external appserver) must be allowed to run commands as <sid>adm on the external application servers. It is recommended to use ssh(1); usage of rsh is discouraged. To accomplish this, the following steps are necessary:

Create ssh keys for root and <sid>adm.
Distribute those keys to allow access.

To generate the keys, run the command ssh-keygen -t rsa as user root and as <sid>adm on each host. This creates files for the private key (id_rsa) and the public key (id_rsa.pub) in the user's .ssh directory. The public key then needs to be distributed to the other hosts. This can be accomplished by running the command ssh-copy-id -i id_rsa.pub user@host, which adds the user's public key to authorized_keys (not authorized_keys2) on the target host. On each cluster node this has to be executed as the root user, with host being each of the other cluster nodes in turn. On each cluster node and for each external application server appserver, invoke the ssh-copy-id command twice, replacing the user@host string with <sid>adm@appserver and root@appserver. It is also recommended to pre-populate the known hosts file (/etc/ssh/ssh_known_hosts) on each cluster node by executing

ssh-keyscan list-of-remote-hosts >> /etc/ssh/ssh_known_hosts

This avoids the first login to a remote host hanging at the host key fingerprint prompt. After finishing this section, passwordless login must be possible between the root users on all cluster nodes, and as root and <sid>adm to the external appservers.
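The following shell sketch summarizes the key distribution described above. The host names (node1, node2, extapp1, extapp2) and the c11adm user are placeholders for this example and must be replaced with the actual cluster nodes, external application servers, and <sid>adm user:

# Run once as root and once as <sid>adm (here: c11adm) on every cluster node.
ssh-keygen -t rsa                 # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub

# As root on every cluster node: allow root login to the other cluster nodes.
for node in node1 node2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$node
done

# As root and as c11adm on every cluster node: distribute the key to the
# external application servers for the <sid>adm and root target users.
for appserver in extapp1 extapp2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub c11adm@$appserver
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$appserver
done

# Pre-populate the known hosts file on each cluster node.
ssh-keyscan node1 node2 extapp1 extapp2 >> /etc/ssh/ssh_known_hosts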

Configuring external instances (sgesap/sapextinstance)

External dialog instances (D- and J-type) can be configured into an SGeSAP package using the sgesap/sapextinstance module. These instances can either belong to the SID configured in the package (the values of sap_ext_system and sap_system are identical), or to a foreign SID (the values of sap_ext_system and sap_system are different). They can be started, stopped and restarted with the package, but also stopped when the package fails over to the node where the instance is running. A restriction for instances with a foreign SID is that they can only be stopped when the package fails over to the node on which they are running. Any instances configured with the sapextinstance module are handled on a best-effort basis. Failing to start or stop an external instance does not cause the whole package to fail. Such instances are also not monitored by an SGeSAP service monitor.

If the sapcontrol usage attribute is enabled (the default on SGeSAP/LX), SGeSAP tries to use sapcontrol commands to start and stop instances. For instances on remote hosts, sapcontrol uses the host option to control the remote instance. Note that this requires that the remote instance's sapstartsrv is already running and the required web services (for starting and stopping the instance) are open for remote access from the local host (for more information, see the related SAP Notes). If remote access via sapcontrol fails and fallback access via remote shell is enabled, the remote shell is used instead.

The sapextinstance module also uses the attributes configured in the sgesap/sap_global module. The attribute sgesap/sap_global/sap_system is used as the default SID, and sgesap/sap_global/rem_comm as the communication mechanism from the cluster node to the application server. Serviceguard Manager or the CLI interface can be used to configure the module.

NOTE: deploysappkgs cannot configure this module.

Attributes to define a single external instance are:

Module Attribute   GUI Label                          Description
sap_ext_instance   External SAP Instance              Instance type and number (like D01). Only D, J and SMDA types are allowed.
sap_ext_system     SAP System ID                      SID of the external instance. If unspecified, sap_system (the SID of the package) is assumed.
sap_ext_host       Hostname                           Host where the external instance resides. Virtual hostnames are allowed.
sap_ext_treat      Values represented as checkboxes   Actions on the external instance (see Table 23 (page 72) for more information). Contains a y for each action to be executed and an n if the action must be skipped. List of five y/n values.

Table 23 Overview of reasonable treat values

Value (. = y/n)     Meaning                   Description
y.... (position 1)  Start with package        Application server is started with the package (own SID)
.y... (position 2)  Stop with package         Application server is stopped with the package (own SID)
..y.. (position 3)  Restart during failover   Application server is restarted when the package performs a failover (own SID). Restart occurs on package start.

Table 23 Overview of reasonable treat values (continued)

Value (. = y/n)     Meaning                   Description
...y. (position 4)  Stop if package local     Application server is stopped when the package fails over to the local node, that is, to the node where the application server is currently running (own & foreign SID)
....y (position 5)  Reserved for future use

Figure 20 Configuring sapextinstance screen

Supported operating systems for running external instances are Linux, HP-UX and Microsoft Windows Server. For Windows, the example functions start_windows_app and stop_windows_app must be adapted to the remote communication mechanism used on the Windows server. In this case, there must be a customer-specific version of these functions in customer_functions.sh.

The sgesap/sapextinstance module can also be used to configure diagnostic instances that fail over with clustered dialog instances (they start and stop together with the dialog instance). Although technically they belong to a different SID, they can be started and stopped with the package. The hostname to be configured is the same as the virtual hostname of the instance configured in the package (which usually is also part of the diagnostic instance profile name). If an SMDA instance is configured, it is displayed in the Serviceguard Manager guided package configuration.

Example 1:

The package is associated with SAP system SG1. The primary node is also running a non-clustered ABAP dialog instance with instance ID 01. It must be stopped and started with manual package operations. In case of a failover, a restart attempt must be made on the primary node (if the primary node is reachable from the secondary). There is a second instance D02 on a server outside of the cluster that must similarly be started, stopped and restarted.

sap_ext_instance   D01
sap_ext_host       node1
sap_ext_treat      yyynn

sap_ext_instance   D02
sap_ext_host       hostname1
sap_ext_treat      yyynn

Example 2:

The failover node is running a central, non-clustered test system QAS and a dialog instance D03 of the clustered SG1. All of these must be stopped in case of a failover to that node, in order to free up resources.

sap_ext_instance   DVEBMGS10
sap_ext_system     QAS
sap_ext_host       node2
sap_ext_treat      nnnyn

sap_ext_instance   D03
sap_ext_host       node2
sap_ext_treat      yyyyn

Example 3:

The package contains one or more dialog instances configured for vhost1, for which a diagnostic agent is also configured. It must be stopped before the instances are stopped and started after the instances are started.

sap_ext_instance   SMDA97
sap_ext_system     DAA
sap_ext_host       vhost1
sap_ext_treat      yynnn

Configuring SAP infrastructure components (sgesap/sapinfra)

The SAP infrastructure software defines software components that support a specific SAP Netweaver Application Server, but are independent of the server start or stop sequence.

NOTE: SAP Netweaver Application Server instances cannot be specified here.

Legal values for sgesap/sap_infra_sw_type are described in Table 24 (page 74).

Table 24 Legal values for sgesap/sap_infra_sw_type

Value        Description
saposcol     SAP operating system monitor collector
sapccmsr     SAP additional monitor collector
rfcadapter   SAP XI/PI/EAI remote function call adapter
sapwebdisp   SAP webdispatcher (not installed as an SAP instance, but unpacked and bootstrapped to /usr/sap/<sid>/sapwebdisp)
saprouter    SAP software network routing tool

The values saprouter and biamaster can be specified more than once. The attribute sap_infra_sw_treat defines whether the component is only started/notified with the package startup, or whether it is also stopped as part of a package shutdown (the default). Possible values are startonly and startnstop. sap_infra_sw_params defines additional command line parameters to be passed to the component. A sap_infra_sw_host value can be added to specify the hostname on which to start a BIA master instance. This parameter is ignored for other infrastructure components, which always get started/stopped locally.

Examples:

sap_infra_sw_type saposcol

sap_infra_sw_type saprouter
sap_infra_sw_treat startnstop
sap_infra_sw_params "-H virtual_ip -W 20000\
 -R /sapmnt/c11/profile/saprouttab\
 -T /sapmnt/c11/profile/dev_rout1"

sap_infra_sw_type sapccmsr
sap_infra_sw_params /sapmnt/c11/profile/ccmsr_profilename

sap_infra_sw_type sapwebdisp
sap_infra_sw_treat startnstop
sap_infra_sw_params -shm_attachmode 6

When using the Serviceguard Manager to configure this module, the following can be configured:

Figure 21 Configuring SAP infrastructure software components screen

To add an SAP infrastructure software component to the Configured SAP Infrastructure Software Components list:
1. Enter information in the Type, Start/Stop, and Parameters boxes.
2. Click <Add to move this information to the Configured SAP Infrastructure Software Components list.

To remove an SAP infrastructure software component from this list:
Click the option adjacent to the SAP infrastructure software component that you want to remove, then click Remove.

To edit a configured SAP infrastructure software component:
Click the option adjacent to the SAP infrastructure software component that you want to edit, then click Edit>>. The SAP infrastructure software component information moves to the Type, Start/Stop, and Parameters boxes, where you can make changes. Click Update, and the edited SAP infrastructure software component information is returned to the Configured SAP Infrastructure Software Components list.
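Apart from the GUI, the module can also be included when a package configuration template is generated on the command line. The following is only a sketch; the module combination depends on the package, and sgesap/sapinstance is assumed here as the instance module of the supported SAP system:

cmmakepkg -m sgesap/sapinstance -m sgesap/sapinfra ci.config

Then edit the generated ci.config, fill in the sap_infra_sw_* attributes as in the examples above, and apply the file.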

Module sgesap/livecache: SAP livecache instance

The livecache setup is very similar to a sgesap/dbinstance setup with MaxDB. However, there are a few minor differences:

The livecache installation does not create an XUSER file (.XUSER.62).
The livecache clients are the work processes of the SCM system, which belong to a different SID.

Additional steps for livecache are:

Create the XUSER file with the c key.
Make sure SAP transaction LC10 in the SCM system has the virtual hostname of the livecache configured.
Disable livecache xserver autostart (optional).
Create the livecache monitoring hook.

Create XUSER file

The SGeSAP livecache module requires that a user key with the control user has been set up for the <lcsid>adm user. Normally, the key c is used for this, but other keys can also be used. If the c key does not exist, log in as user <lcsid>adm and execute

xuser -U c -u control,<password> -d <LCSID> -n <virtual-host> set

to create the XUSER file and the c key. Other keys are only necessary if the SCM/liveCache integration uses decentralized authorization instead of centralized authorization. The latter is preselected in transaction LC10 and is the recommended way of authorization. Verify the user key setup by running a connect test using

dbmcli -U c db_state

This command must return online if the livecache is up. The XUSER file must be distributed (along with the user itself, if not done yet) to all nodes in the cluster that are planned to run the livecache.

Verify transaction LC10

Make sure that the SAP LC10 transaction (Maintain liveCache integration) of the SCM system uses the virtual hostname for the LCA, LDA and LEA database connections (field liveCache Server in the livecache connection information). This is usually the case if the livecache client was selected during the SCM central instance installation and the values used during the livecache installation were provided in that step.

Disable xserver autostart (optional)

If the livecache version is >=7.8, the xserver structure is the same as that of the MaxDB of the corresponding version. If it is planned to run more than one livecache or MaxDB on the system, it is advisable to decouple the xserver startup (sdbgloballistener and DB-specific). For more information, see the MaxDB section describing the decoupling of the startup.

Setup of monitoring hook

Create a symbolic link that acts as a hook informing the SAP software where to find the livecache monitoring software, to allow the prescribed interaction with it. Optionally, you can change the ownership of the link to sdb:sdba. For this step the shared file system /sapdb/<LCSID> must be mounted and the environment variable $SGCONF must be defined.

ln -s $SGCONF/sgesap/monitors/saplc.mon /sapdb/<LCSID>/db/sap/lccluster
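A short shell sketch of these preparation steps, assuming the example liveCache LC1 with relocatable hostname reloc1 that is used later in this section (replace the password and names with the actual values):

# as <lcsid>adm (here: lc1adm): create the XUSER entry for the control user under key c
xuser -U c -u control,<password> -d LC1 -n reloc1 set

# connect test; must report online while the liveCache is running
dbmcli -U c db_state

# as root, with /sapdb/LC1 mounted and $SGCONF set: create the monitoring hook
ln -s $SGCONF/sgesap/monitors/saplc.mon /sapdb/LC1/db/sap/lccluster
chown -h sdb:sdba /sapdb/LC1/db/sap/lccluster   # optional: change ownership of the link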

Setting up the livecache package

Attributes offered by the livecache module are:

Parameter            Example   Description
lc_system            LC1       Name of the liveCache instance (LCSID)
lc_virtual_hostname  reloc1    The virtual hostname onto which the livecache has been installed
lc_start_mode        online    Defines into which state the livecache must be started. Possible values are offline (only the vserver is started), admin (start in admin mode), slow (start in cold-slow mode) and online (start in online mode)
lc_user_key          c         Key to access the livecache, as described in the previous step. Default is c

The Serviceguard Manager guided package setup offers discovered values if a livecache is installed. Check SAP livecache instance in the initial SGeSAP module selection screen (Select a Toolkit -> SGeSAP -> SAP livecache instance). The configuration dialog brings up the screen for configuring the livecache module.

Figure 22 Configuring sapextinstance screen

From the command line, use:

cmmakepkg -m sgesap/livecache lclc1.config

to create the package configuration file. Then edit and apply the configuration file.

NOTE: An SGeSAP livecache package should not be configured with other SGeSAP modules, even though it is technically possible. The SGeSAP easy deployment (deploysappkgs) script does not support livecache.

Verification:

Start up the package on each configured node.
Make sure the livecache can be accessed on each client node by executing dbmcli -U c on these nodes.
Make sure the SCM LC10 integration can connect every time.
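A hedged sketch of a typical apply-and-verify sequence follows. The package name lclc1 and node name node1 are assumptions derived from the configuration file name above and must be adjusted to the actual setup:

cmapplyconf -P lclc1.config      # apply the edited package configuration
cmrunpkg -n node1 lclc1          # start the livecache package on the first configured node
cmviewcl -v                      # verify that the package and its services are up
dbmcli -U c db_state             # on each client node: the liveCache must report online
cmhaltpkg lclc1                  # halt the package before repeating the test on the next node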
